Archive for October, 2012

Plagiarism-proof blog?

Wednesday, October 31st, 2012

I just had reason to check the spam filter for this blog and found a comment with a link to a site which would produce a ‘plagiarism proof’ version of my postings. Given that the blog is about assessment, that’s funny. But it’s not funny that I am being encouraged to copy the work of others and then get someone else to reword it so that I can pass it off as my own without plagiarism-detection software detecting it. It’s really sad.

Note: this blog does sometimes copy other people’s words, but I hope I always give credit where it is due!

Getting feedback right

Tuesday, October 30th, 2012

This is not new, but it is something we seem slow to understand, so I’m going to say it again. Just giving feedback is not the same as students acting on or learning from that feedback. And perhaps the point that we find the most difficult to understand: just giving more feedback or feedback that we think is better will not necessarily help.

We are also a bit too ready to blame our students – if they can’t be bothered even to collect their marked work, then it’s their fault, isn’t it? Maybe not. Perhaps they think (or know) that it won’t be useful. When students read the feedback but don’t act on it, perhaps they don’t understand it or perhaps they can’t see its relevance to future assignments.

So we need to find out what is going on. We need to talk to our students and monitor their behaviour – and act on what we discover – rather than assuming that we know best. Then we need to work with our students to help them to learn from assessment and feedback and to help us to do better in the future.

This rant (which is really a plea for honest evaluation, and for acting on its findings) is directed at myself as much as at anyone else. We (the teachers) need to collect real feedback as much as anyone else; this will include monitoring what our students do as well as what they say. Then we need to learn from it.

Distractors for multiple-choice questions

Friday, October 26th, 2012

I’ve just been asked a question (well, actually three questions) about the summative use of multiple-choice questions. I don’t know the answer. Can anyone help?

If we want 3 correct answers, what’s the recommended number of distractors?

If we want 4 correct answers, what’s the recommended number of distractors?

If we want 5 correct answers, what’s the recommended number of distractors?

I have [31st October] now found out a little more: the answer has to be completely correct and no partial credit is allowed. I have also realised that the answer lies in our own previous work on random guess scores. We’ve repeated the sums for these particular examples (see multiple-choice-distractors) and my recommendation to the module team is that they use 8 options for each question (so, if they require 3 correct answers there are 5 distractors; if they require 4 correct answers there are 4 distractors, etc.). The probability of getting the question completely right by chance will then always be less than 2%, and the probability of getting multiple questions right in this way is vanishingly small.
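For anyone who wants to check the sums, here is a minimal sketch (in Python; the function name, and the assumption that a guesser picks exactly the required number of options uniformly at random, are mine rather than part of the linked working) of the random guess score when the whole selection must be correct:

```python
from math import comb  # Python 3.8+

def random_guess_probability(options: int, correct: int) -> float:
    """Chance of scoring by blind guessing when exactly `correct`
    options must be selected out of `options` in total and only a
    completely correct selection earns marks (no partial credit).
    Assumes the guesser picks a subset of the right size at random."""
    return 1 / comb(options, correct)

# The three cases asked about, each with 8 options in total:
for k in (3, 4, 5):
    print(f"{k} correct, {8 - k} distractors: "
          f"1/{comb(8, k)} = {random_guess_probability(8, k):.2%}")
```

The three cases come out at 1/56 (about 1.8%), 1/70 (about 1.4%) and 1/56 respectively, which is why the figure stays below 2% whichever split the module team chooses.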

Assessing investigative science

Friday, October 19th, 2012

In my last post, just over a month ago (sorry folks, I’ve been a bit busy), I was ambivalent about the news that GCSEs in England are to be replaced by an ‘English Baccalaureate’ and about the more general trend towards increasing use of examinations and decreasing use of other methods of summative assessment.

My views have now firmed up a bit – I don’t like it! Yes, I had a very conventional education with lots of exams (O-levels, A-levels and a degree in the 1970s) and I thrived on it. But many didn’t. And does the fact that I was good at exams mean that I was good at everything those exams were supposedly assessing? I rather doubt it.

This brings me to the issue of (lack of) authenticity. There are many skills that are very important in real life but rather difficult to assess by exam. One attraction of exams is that they make plagiarism hard, but of course there are other ways (though perhaps not quite so watertight) to reduce plagiarism – ask questions that expect students to use the internet, referencing their sources; ask questions that expect students to work together, reflecting on the process.

I’d like to concentrate on the difficulty of assessing investigative science by exam. At a meeting yesterday, Brian Whalley highlighted the difficulty in assessing field work in this way. It is possible to assess some skills in practical science by exam – back in the 1970s I did physics practical exams alongside written A-level papers, but these generally just expected you to demonstrate a particular technique that you had practised and practised and practised. There are far more appropriate methods for assessing the sort of investigative science that real scientists do. Please, UK government, think carefully before imposing exams in places where they do not belong.