Plagiarism-proof blog?

I just had reason to check the spam filter for this blog and found a comment with a link to a site which would produce a ‘plagiarism-proof’ version of my postings. Given that the blog is about assessment, that’s funny. But it’s not funny that I am being encouraged to copy the work of others and then get someone else to reword it so that I can pass it off as my own without plagiarism-detection software detecting it. It’s really sad.

Note: this blog does sometimes copy other people’s words, but I hope I always give credit where it is due!

Posted in plagiarism | Tagged | Leave a comment

Getting feedback right

This is not new, but it is something we seem slow to understand, so I’m going to say it again. Just giving feedback is not the same as students acting on or learning from that feedback. And perhaps the point that we find most difficult to accept: just giving more feedback, or feedback that we think is better, will not necessarily help.

We are also a bit too ready to blame our students – if they can’t be bothered even to collect their marked work, then it’s their fault, isn’t it? Maybe not. Perhaps they think (or know) that it won’t be useful. When students read the feedback but don’t act on it, perhaps they don’t understand it, or perhaps they can’t see its relevance to future assignments.

So we need to find out what is going on. We need to talk to our students and monitor their behaviour – and act on what we discover – rather than assuming that we know best. Then we need to work with our students to help them to learn from assessment and feedback and to help us to do better in the future.

This rant (which is really a plea for honest evaluation, and for acting on its findings) is directed at myself as much as at anyone else. We (the teachers) need to collect real feedback too; this will include monitoring what our students do as well as what they say. Then we need to learn from it.

Posted in feedback | Tagged | Leave a comment

Distractors for multiple-choice questions

I’ve just been asked a question (well, actually three questions) about the summative use of multiple-choice questions. I don’t know the answer. Can anyone help?

If we want 3 correct answers, what’s the recommended number of distractors?

If we want 4 correct answers, what’s the recommended number of distractors?

If we want 5 correct answers, what’s the recommended number of distractors?

I have [31st October] now found out a little more: the answer has to be completely correct and no partial credit is allowed. I have also realised that the answer lies in our own previous work on random guess scores. We’ve repeated the sums for these particular examples (see multiple-choice-distractors) and my recommendation to the module team is that they use 8 options for each question (so, if they require 3 correct answers there are 5 distractors; if they require 4 correct answers there are 4 distractors, etc.). The probability of getting the question completely right by chance will then always be less than 2%, and the probability of getting multiple questions right in this way is vanishingly small.
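For anyone who wants to check the arithmetic, here is a minimal sketch of the random-guess sums (my own illustration, not the module team’s calculation, and assuming that the student must select exactly the required number of options and that only a completely correct selection scores):

```python
from math import comb

# With n options of which exactly k are correct, and no partial credit,
# the chance of a completely correct response by blind guessing is 1 / C(n, k).
n = 8
for k in (3, 4, 5):
    print(f"{k} correct from {n} options: 1/C({n},{k}) = 1/{comb(n, k)} = {1 / comb(n, k):.1%}")

# 3 correct from 8 options: 1/C(8,3) = 1/56 = 1.8%
# 4 correct from 8 options: 1/C(8,4) = 1/70 = 1.4%
# 5 correct from 8 options: 1/C(8,5) = 1/56 = 1.8%
```

Whichever of the three cases applies, 8 options keep the chance of a lucky completely correct answer below 2%, which is where the figure in the recommendation comes from.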

Posted in multiple-choice questions, statistics | Tagged | Leave a comment

Assessing investigative science

In my last post, just over a month ago (sorry folks, I’ve been a bit busy), I was ambivalent about the news that GCSEs in England are to be replaced by an ‘English Baccalaureate’, and about the more general trend towards increasing use of examinations and decreasing use of other methods of summative assessment.

My views have now firmed up a bit – I don’t like it! Yes, I had a very conventional education with lots of exams (O-levels, A-levels and a degree in the 1970s) and I thrived on it. But many didn’t. And does the fact that I was good at exams mean that I was good at everything those exams were supposedly assessing? I rather doubt it.

This brings me to the issue of (lack of) authenticity. There are many skills that are very important in real life but rather difficult to assess by exam. And of course there are other ways (though perhaps not quite so watertight as exams) to reduce plagiarism – ask questions that expect students to use the internet, referencing their sources; ask questions that expect students to work together, reflecting on the process.

I’d like to concentrate on the difficulty of assessing investigative science by exam. At a meeting yesterday, Brian Whalley highlighted the difficulty of assessing field work in this way. It is possible to assess some skills in practical science by exam – back in the 1970s I did physics practical exams alongside written A-level papers – but these generally just expected you to demonstrate a particular technique that you had practised and practised and practised. There are far more appropriate methods for assessing the sort of investigative science that real scientists do. Please, UK government, think carefully before imposing exams in places where they do not belong.

Posted in exams, field work, investigative science | Tagged | Leave a comment

The GCSE debacle

I can’t quite decide what I think of today’s news that, in England at least, GCSEs are to be replaced by an English Baccalaureate. I can see some good points in what is proposed, but I do wish that the Government (indeed, governments of all political hues) would stop messing around with education.

I do know what I think about the hiatus following the apparent tightening of the marking of GCSE papers in the summer. I feel very sorry for those who had expected a Grade C in English, only to be awarded a D. And, again, I wish they would stop messing around (it really doesn’t seem fair that our public examinations – at least in some sense – got easier and are now getting more difficult again). However, the awarding of grades on the basis of assessment is not, and never has been, an exact science. Markers are not consistent; different students react in different ways to different exam papers; we all have bad days and good days. So are exams ever fair? No, really and truly, they’re not.

That in itself is not the real problem – we accept that whether or not one passes one’s driving test on a particular day is largely in the lap of the gods. The cause of the difficulty is the perception that academic exams are reliable and consistent; that students who get a grade C in one set of circumstances would also get a grade C on a different day, with a different paper and a different person marking it. That’s just not true.

Posted in exams, GCSEs | Tagged | Leave a comment

Using games and simulations in e-assessment

At CAA 2012 there were several very welcome papers that addressed ways in which technology (e.g. forums, wikis, blogs) can be used to make assessment more authentic. At first sight (to me, an oldie who has never really ‘understood’ computer games) the final symposium, ‘Games and simulations in e-assessment’, seemed to be encouraging a step in the opposite direction. Games don’t seem very authentic! However, after listening to the presentations and discussion (and a very interesting presentation from Jean Phua: What do secondary school students think about multimedia science computer assisted assessment?) I think I may have been wrong.

I think the point here is that if students can ‘have a go’ at a simulation of an experiment or whatever at the same time as doing an assignment (and then recheck when they receive feedback), learning will be enhanced. So this is a bit like us at the OU requiring students to use a computer-based activity as part of the assessment of a module (incidentally, I have long been arguing that we should have direct links – in both directions – between our assignments and our activities). And if we can make the activities as authentic as possible (e.g. by using ‘interactive screen experiments’, based on photographs of a real experiment in progress, rather than simulations), so much the better.

Posted in games, simulations | Tagged | Leave a comment

Science started here

Sadly, the final presentation of S154 Science starts here has now ended. It was a 10-credit module, so it didn’t fit well with the 30-credit study intensity that is necessary for English students to get funding. But it was a lovely little module – popular with students and tutors alike, and highly effective in preparing students to study the longer S104 Exploring science.

S154’s assessment strategy was written to be complementary to that of S104, so it was at first something of a mystery when a ‘different’ student behaviour was observed on S154. The figures below show the way in which three individual (but typical) students interacted with a lightly weighted S154 iCMA. The red dot indicates the date on which the student first engaged with the iCMA and the blue dots indicate subsequent interactions.

‘Student 1’ is typical of many students, on S154 and all other modules – the last-minute merchants! (spot the iCMA’s cut-off date…). ‘Student 2’ is also typical of many students on many modules – the student starts the iCMA and completes the questions that they can; then over the next week or so, they use the feedback to improve on their answers to the other questions.

‘Student 3’ is typical for S154 but not for other modules. ‘How odd’, I thought! But it turns out that there is a very simple explanation. S154 was an introductory module, with careful scaffolding. When students had completed Chapter 2 they were advised to attempt the first 4 questions on the iCMA; when they had completed Chapter 3 they were advised to attempt the next 2 questions; when they had completed Chapter 4 they were advised to attempt the final 4 questions. And that was exactly what they did! It surprises some people, but I have found on many occasions that students actually do what they think they are expected to do.
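As an aside, spotting these patterns does not need anything sophisticated once each student’s record is treated as a list of interaction dates. The sketch below is entirely hypothetical (my own invention, with made-up dates – it is not how the figures above were produced): it simply counts well-separated bursts of activity, which is enough to pick out the chapter-by-chapter pattern of ‘Student 3’ from the single cluster of interactions produced by the other two behaviours.

```python
from datetime import date

def count_bursts(interaction_dates, gap_days=7):
    """Count groups of interactions separated by more than gap_days days."""
    dates = sorted(interaction_dates)
    bursts = 1
    for earlier, later in zip(dates, dates[1:]):
        if (later - earlier).days > gap_days:
            bursts += 1
    return bursts

# Invented dates illustrating a 'Student 3'-like, scaffolded pattern:
student3 = [date(2012, 2, 10), date(2012, 2, 11),   # after Chapter 2
            date(2012, 3, 2),                        # after Chapter 3
            date(2012, 3, 20), date(2012, 3, 21)]    # after Chapter 4
print(count_bursts(student3))  # 3 separate bursts of activity
```

The seven-day gap is an arbitrary choice for the illustration; in practice you would tune it to the module’s study calendar.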

Posted in student engagement | Tagged | Leave a comment

Multiple choice questions in Peerwise

Yesterday morning I participated in a wonderful webinar on Peerwise (http://peerwise.cs.auckland.ac.nz/), led by Paul Denny from the University of Auckland. The more I see of it, the more I am impressed by Peerwise – yesterday I attempted to write questions myself for the first time, and also reviewed other people’s questions. It was tremendous fun and we all agreed that students would be likely to learn by authoring questions and by attempting and reviewing other students’ questions.

Someone asked Paul if he has plans to add question types other than multiple choice (the answer is yes, but not many). However, this led to an interesting point – Paul explained that, for student authoring of questions, multiple choice is good because the authors have to think about the distractors. He could be right!

Posted in multiple-choice questions, Peerwise | Tagged | 1 Comment

Using pattern matching software

PMatch is a new Moodle question type (based on OpenMark’s pattern matching question type that is currently in use at the Open University for the short-answer free text questions that I have written). There is more information here.

Follow the links and you’ll see that it is pretty simple and, despite the fact that I have no computer programming experience, I found it easy to use. If you wanted to, you could use PMatch simply to look for keywords – however, for most of the questions that I have written that would not be sufficient. Word order and negation can be very important, especially when students really do give answers that are ‘opposite’ to a correct one, as in the examples that follow. So, in one question, you need to be able to mark ‘kinetic energy is converted to gravitational potential energy’ (and synonyms) as right but ‘gravitational potential energy is converted to kinetic energy’ (and synonyms) as wrong. In another question, you need to be able to mark ‘the forces are balanced’ and ‘There are no unbalanced forces’ as right (note that students very often use double negatives in their answers) whilst marking ‘The forces are not balanced’ and ‘The forces are unbalanced’ as wrong.
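PMatch has its own response-matching syntax (follow the links above for the details), so the snippet below is deliberately not PMatch – it is a toy Python sketch of my own, just to illustrate why keyword spotting cannot separate the two energy answers, whereas a rule that respects word order can.

```python
import re

right = "kinetic energy is converted to gravitational potential energy"
wrong = "gravitational potential energy is converted to kinetic energy"

def keyword_match(answer):
    # Naive approach: accept if all the keywords are present, in any order.
    keywords = ["kinetic", "gravitational", "potential", "energy", "converted"]
    return all(k in answer.lower() for k in keywords)

def ordered_match(answer):
    # Order-aware approach: 'kinetic energy ... converted ... gravitational
    # potential energy' must appear in that sequence to be accepted.
    return re.search(r"kinetic energy.*convert\w*.*gravitational potential energy",
                     answer.lower()) is not None

print(keyword_match(right), keyword_match(wrong))   # True True  -> cannot tell them apart
print(ordered_match(right), ordered_match(wrong))   # True False -> word order matters
```

Negation needs similar care: the matcher has to accept ‘there are no unbalanced forces’ while rejecting ‘the forces are not balanced’, so spotting ‘balanced’ (or even ‘not’) on its own is not enough – which is exactly why something more expressive than keyword matching is needed.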


Posted in short-answer free text questions | Tagged | 1 Comment

Why don’t more people use short-answer free-text questions?

At CAA 2012 I gave a paper with the title ‘Short-answer e-assessment questions: five years on’, in which I discussed OU work in this area. There was a lot of interest in what I said, especially concerning evaluation findings. However, I wanted to get a discussion going on the reasons why more people don’t use assessment items of this type, and this didn’t really happen. So I’m trying again here. (My CAA 2012 paper is at Open Research Online if you want more background information.)


Posted in short-answer free text questions | Tagged | 5 Comments