Learning-oriented and technology-enhanced assessment

My post on ‘Adjectives of assessment’ omitted ‘learning-oriented’, and to be honest it wasn’t until this afternoon’s reading that I realised what a powerful concept learning-oriented assessment might be. I was reading Keppell, M. and Carless, D. (2006) Learning-oriented assessment: a technology-based case study, Assessment in Education, 13(2), 179-191. Keppell and Carless deliberately use the term ‘learning-oriented’ instead of the more common ‘formative’ or ‘assessment for learning’, and they make the point that both formative and summative assessment can be learning-oriented (I’d add that both can be anti-learning-oriented too). It is also noteworthy that Keppell and Carless’s work was done in Hong Kong, where assessment is generally characterised as exam-oriented.

I’d also overlooked the full impact of the phrase ‘technology-enhanced assessment’. That little word ‘enhanced’ presumably means that the assessment is better because of the use of technology. So if the technology doesn’t make it better, perhaps we shouldn’t be using it. Nice.


Peerwise

Also at the ‘More effective assessment and feedback’ meeting on Wednesday, Simon Bates spoke about the use of ‘Peerwise’ at the University of Edinburgh. Peerwise (see http://peerwise.cs.auckland.ac.nz/) enables students to write their own assessment questions, and to share and discuss them with their colleagues. Physics students at Edinburgh have written some excellent and imaginative questions. Simon rightly described the work as exciting and, typically of the Edinburgh Physics Education Research Group, they are trying to evaluate its impact. So far they have found that students who engage with Peerwise are likely to do better than those who don’t. This is hardly surprising – better-motivated students are likely both to engage with Peerwise and to do better. More surprising is the fact that students at all levels seem to benefit – it’s not just the best or the weakest. Most students also seem to like Peerwise.

I was excited and intrigued by Simon’s talk and look forward to hearing more. I can see that writing your own e-assessment questions will be a fun and motivating experience for students. But how much do they really learn in doing this? Is it really assessment? I’m not sure.


Is it worth the effort?

I’m taking a short break from reporting findings from my analysis of student engagement with short-answer free-text questions to reflect on a couple of things following the HEA UK Physical Sciences Centre workshop on ‘More effective assessment and feedback’ at the University of Leicester on Wednesday. It was an interesting meeting – initially people sat very quietly listening to presentations, but by the afternoon there was lots of discussion. I spoke twice – in the morning I wittered on about the problem of students not answering the question you (the academic) thought you had asked; in the afternoon I was on the more familiar ground of writing short-answer free-text e-assessment questions, with feedback.

Steve Swithenby ran two discussion sessions and at the end he got us classifying various ‘methods of providing feedback’ as high/medium/low in terms of ‘importance and value to student’, ‘level of resources needed’ and ‘level of teacher expertise required’. Obviously, in the current economic climate we’re looking for something that is high, low, low. I agree with Steve that e-assessment, done properly, is high, high, high.

Right at the end, someone asked me ‘Is it worth the effort?’ It’s a fair question. On one level, in my own context at the UK Open University, I know that all the considerable effort I have put into writing good e-assessment questions has been worthwhile, on financial as well as pedagogical grounds – simply because we have so many students and can re-use low-stakes questions from year to year. I’m quite used to explaining that this is not necessarily the case in other places, with smaller student numbers. However, the question went deeper than that: do we over-assess? Is the effort that we put into assessment per se worth it, in terms of improving learning? It’s a truism that assessment drives learning, and I have certainly seen learning take place as students are assessed, in a way that I doubt would happen otherwise. But is this generally true? What would be the effect of reducing the amount of assessment on our modules and programmes? I don’t know.


Helpful and unhelpful feedback: a story of sandstone

One of the general findings emerging from my evaluation of student responses to multi-try e-assessment questions relates to that wonderful thing I’ll call the ‘Law of unintended consequences’. I used to think that ‘students don’t read assessment questions’, but actually they do. It’s just that they don’t always interpret questions in the way that you (the author) intended; sometimes students read things into the questions that you didn’t intend to say. The same is true of feedback: sometimes you are giving a message that you didn’t intend to give. All of this is true for assessment of all types, but I’ll talk you through some of the issues I have discovered in the context of one of our short-answer free-text questions.


Computers as social actors

Some of the findings I’ve been blogging about recently (and some still to come) are contradictory. On the one hand, students seem to be very aware that their answers have been marked by a computer rather than a human marker; on the other, in some ways they behave as if they were being marked by a human.

Back in 1996, Reeves and Nass introduced the ‘Computers as social actors’ paradigm, arguing that people’s interactions with computers, television and other new media are fundamentally social and natural, just like interactions in real life. More recent work has shown that people are polite to computers (even though they know this is stupid) and that, when they feel a computer has ‘cheated’ them, they behave spitefully towards it.

Lipnevich and Smith’s work on assessment feedback that students believed to be generated by a human or a computer provided some support for the ‘computers as social actors’ hypothesis, but also some evidence that runs counter to it. When all was going well, feedback from a computer was received as if it came from a person; when recipients didn’t want to believe the feedback, then the computer was…a stupid computer! I think I’m beginning to find similar evidence in my own work. Some students type their answers as carefully crafted sentences, and in many situations they seem to react to feedback from a computer as if it came from a human marker. However, when an answer is marked as incorrect (whether or not it actually is), and especially in the absence of targeted feedback that explains the error, students seem to think that they are right and the computer is wrong. As I’ve said before, this is something of a problem for e-assessment.


Decease or decrease?

Back in 2007, we were observing students attempting our short-answer free-text e-assessment questions in a Usability Laboratory. One student repeatedly typed ‘decease’ instead of ‘decrease’, and he didn’t realise he was doing it. At the time, the answer matching was linguistically based, so, because ‘decrease’ and ‘decease’ have different meanings, the student’s answers were all marked as incorrect, even when they weren’t. He was not amused. As a short-term fix we added ‘decease’ as a synonym for ‘decrease’. Paradoxically, our current, very much simpler answer matching copes fine in its default setting (which allows one missing letter in the middle of a word, as well as one additional letter and/or one transposition), enabling appropriate marking and tailored feedback for responses that contain malapropisms of this sort.
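
To make that default rule concrete, here is a minimal sketch in Python of word matching that tolerates one missing interior letter, one extra letter, or one transposition of adjacent letters. It is only an illustration of the kind of leniency described above – the function name and details are mine, not the answer-matching code we actually use.

# A sketch only: illustrative lenient word matching, not our actual answer-matching code.
def lenient_match(response_word, target_word):
    """Return True if response_word equals target_word, or differs from it by one
    missing interior letter, one extra letter, or one swap of adjacent letters."""
    r, t = response_word.lower(), target_word.lower()
    if r == t:
        return True
    # One missing letter in the middle of the word: deleting a non-terminal
    # letter of the target gives the response ('decease' for 'decrease').
    if len(r) == len(t) - 1:
        return any(t[:i] + t[i + 1:] == r for i in range(1, len(t) - 1))
    # One additional letter: deleting a letter of the response gives the target.
    if len(r) == len(t) + 1:
        return any(r[:i] + r[i + 1:] == t for i in range(len(r)))
    # One transposition of adjacent letters ('decraese' for 'decrease').
    if len(r) == len(t):
        return any(r[:i] + r[i + 1] + r[i] + r[i + 2:] == t for i in range(len(r) - 1))
    return False

# 'decease', 'decrrease' and 'decraese' all match 'decrease'; 'increase' does not.
for word in ['decease', 'decrrease', 'decraese', 'increase']:
    print(word, lenient_match(word, 'decrease'))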

There are some interesting lessons to be learnt.


Spelling mistakes in student responses to short-answer free-text e-assessment questions

I get asked a lot about how the answer matching copes with poorly spelt responses to our short-answer free-text questions, and this is certainly something that used to worry me. Fortunately, all the evidence is that our answer matching has coped remarkably well with poor spelling – and this is true both for the IAT software that we used to use and for the PMatch software that we use now. We’ve dealt with poor spelling in a number of different ways – for a long time we relied on fixes within the answer-matching software itself, but we now also use a pre-marking spell-checker, which warns students if they use a word that has not been recognised and suggests alternatives, as shown in the screen-shot below.
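
By way of illustration, here is a minimal sketch in Python of the kind of pre-marking check involved: words that are not on a recognised word list are flagged, and close alternatives are suggested, before the response goes to the answer matching. It is not the code behind our spell-checker, and the word list and function name are mine.

# A sketch only, not our pre-marking spell-checker: flag unrecognised words in a
# response and suggest close alternatives before the response is marked.
import difflib

def spell_check_response(response, recognised_words):
    """Return a dict mapping each unrecognised word in the response to a list of
    suggested alternatives drawn from recognised_words."""
    suggestions = {}
    for word in response.lower().split():
        cleaned = word.strip('.,;:!?')
        if cleaned and cleaned not in recognised_words:
            suggestions[cleaned] = difflib.get_close_matches(cleaned, recognised_words, n=3)
    return suggestions

# Hypothetical usage with a toy word list:
recognised = ['the', 'temperature', 'decreases', 'as', 'altitude', 'increases']
print(spell_check_response('The temperture decreases as altitude increases', recognised))
# prints {'temperture': ['temperature']}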

For obvious reasons, our analysis of spelling mistakes in student responses has used responses gathered before the introduction of the pre-marking spell-checker.

So how many of these responses contained spelling mistakes?


Challenging my own practice

The JISC ‘From challenge to change’ workshop yesterday (see previous post) started with an invitation to record aspects of assessment and feedback provision in our own contexts that we felt to be strengths or that remained challenges.

I was there as the case study speaker, outlining the challenges of my situation (huge student numbers, distance, openness) and then going on to describe what we have done to address those challenges. It’s all positive stuff – we have provided thousands of students with instantaneous feedback, we have helped them to pace their study, we have freed up staff time to do better things, and we have marked consistently and reliably. Students and tutors like our questions, and we have even saved the University money.

So why is it that, when asked to identify the strengths and the challenges, I still see more challenges than strengths? We are scaffolding learning, but are we constraining it too much? We are using short-answer free-text questions because we want to ‘go beyond’ multiple-choice computer-marked questions, but are we really causing students to think any more than they would when confronted by a good multiple-choice question? We are using ‘low-stakes summative’ iCMA questions to encourage students to engage more deeply with the process (and we know that, at a certain level, this works), but are they really learning? I have similar anxieties about our tutor-marked assignments – are we giving just too much feedback? Are we overwhelming our students? Are we smothering them? Most fundamentally, do we have a shared understanding with our students (and our part-time tutors) about what assessment is for and what it is about? If there’s one thing I’d like this blog to achieve, it would be to challenge all of us to reflect more and to evaluate more. Then perhaps we’ll get some answers.


Challenging received wisdom

Our case study ‘Designing interactive assessments to promote independent learning’ from the JISC guide Effective Assessment in a Digital Age featured at the JISC Birmingham Assessment Workshop ‘From challenge to change’ yesterday, so I was speaking at the workshop. These workshops are thoughtful and thought-provoking, and there are some wonderful resources at http://jiscdesignstudio.pbworks.com/w/page/33596916/Effective-Assessment-in-a-Digital-Age-Workshops.

However, whenever I start thinking too deeply I end up worrying that we are not ‘getting it right’. That goes for my own work as much as anyone else’s (see next post), but I do wonder whether all the noise being made in the ‘well-tramped’ field of assessment is really making things better. In particular, we have principles of good assessment design, conditions under which assessment supports learning, guides to good assessment for learning and so on, from every expert you can think of. I quote them regularly! But are these lists of sound underpinning principles really enabling us to deliver assessment that is more useful to our students and ourselves? I know that there is some inspirational work going on and that there have been some improvements over the years, but can we link improvement in learning to the underpinning principles? Where’s the evidence? If you have some, please do add a comment.


More on length of student answers

So what else affects the length of student responses to short-answer free-text questions?
