Problems with trigonometry or rounding?

It is not a mistake that I start this post with a screenshot of the same variant of the same question that I was talking about last time.

I said that 8.2% of responses got the trigonometry or the algebra wrong. The problem is that 6.2% of responses are wrong simply because the students concerned have not rounded correctly.

To five significant figures, the correct answer is 39.495 metres. As I’ve said before, people seem to be very poor at rounding this sort of number. The correct answer, to the requested two significant figures, is 39 metres, but those 6.2% of responses give the answer as 40 metres – presumably by rounding in two stages: 39.495 to 39.5, and then 39.5 up to 40.
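
In case it is useful, here is a minimal sketch of single-step rounding to a given number of significant figures. Python is used purely for illustration, and the helper `round_sig` is my own invention rather than part of any assessment system; the final line reproduces the double-rounding error.

```python
import math

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant figures in a single step."""
    if x == 0:
        return 0.0
    # Decimal places needed to keep `sig` significant figures,
    # based on the magnitude of x.
    places = sig - int(math.floor(math.log10(abs(x)))) - 1
    return round(x, places)

print(round_sig(39.495, 5))        # 39.495
print(round_sig(39.495, 2))        # 39.0 -- the correct answer
print(round(round(39.495, 1), 0))  # 40.0 -- the double-rounding error
```

Rounded once, straight to the requested precision, 39.495 can never become 40; it is only rounding in stages that produces the wrong answer.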

There are two lessons here:

1. Improve the teaching on rounding (already done);

2. Don’t assume you know what students will get wrong in e-assessment questions. Look at real responses from real students. I don’t apologise for saying that yet again. If you don’t do this, you will be left, as I have been here, with a question that, for a sizeable fraction of students, is not assessing what the author (me!) intended.

Problems with trigonometry or algebra?

Ask a university teacher of science about their new students’ mathematical difficulties and the chances are you’ll be told that students can’t rearrange equations. They may go on to tell you that this is the fault of poor school-teaching or of the dumbing down of the school curriculum ‘these days’. I used to think that this argument was wrong on two counts. I still think that we should be looking deeper into the causes of our students’ misunderstandings rather than apportioning blame. But what about rearranging equations?

More on checking questions – an unhelpful tool

I’m still thinking about guidelines for checking questions. Except this is a guideline for what not to do…

My husband has been checking some Moodle questions for colleagues today and has mentioned two things to me. First of all, he told me that when he’s checking the questions there’s a ‘helpful’ button he can press to reveal the correct answer. I’m sure it is meant to be helpful, but if we want the best possible questions for our students, I’m afraid I don’t think it is: it encourages checkers to be lazy and just to check that the question ‘seems right’. Maybe I’m being mean, but I’d prefer checkers to be forced to attempt every question as if they were a student.

The second thing Richard told me was about a problem he’d found in a particular question – and I doubt he’d have found it if he’d used the ‘helpful’ button. I suspect that the question’s author did just that. It’s a drag-and-drop question in which you have to fill in the blanks in a sentence. The problem is that, unless you are being incredibly pedantic, some of the draggable terms meant as an option for one place in the sentence are interchangeable with those meant for elsewhere, and if you pick the wrong one (e.g. ‘lower than’ instead of ‘less than’) you are marked wrong, with no targeted feedback. This is the sort of thing that gives interactive computer-marked assessment a bad name.
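
To make the point concrete, here is a minimal sketch of answer matching that treats interchangeable phrases as equivalent. This is not how Moodle’s drag-and-drop questions actually work – the function and the phrase lists below are hypothetical, purely to illustrate the principle of accepting any member of an equivalence class.

```python
# Hypothetical equivalence classes of phrases that a pedantic
# exact-string match would wrongly treat as distinct.
EQUIVALENTS = [
    {"less than", "lower than", "smaller than"},
    {"greater than", "higher than", "larger than"},
]

def matches(given: str, expected: str) -> bool:
    """True if `given` is the expected phrase or an accepted synonym of it."""
    given, expected = given.strip().lower(), expected.strip().lower()
    if given == expected:
        return True
    return any(given in group and expected in group for group in EQUIVALENTS)

assert matches("lower than", "less than")        # synonym: accepted
assert not matches("greater than", "less than")  # genuinely different: rejected
```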

In the hands of our students

I’ve just attended a very interesting JISC webinar in which David Nicol spoke on the subject ‘Assessment and feedback: in the hands of the student’. He was focusing on the cognitive processes surrounding the receiving and giving of feedback, and made the point that the effectiveness of feedback relies as much on what goes on in the student’s mind as on what we (teachers) do.

David spoke a bit about things we can do to make teacher-generated feedback more useful, but then he went on to talk about the role of self review (which is not the same as self assessment). I was particularly taken with his reference to Chi’s studies of self-explaining, in which she asked learners to explain their understanding. The other point that interested me was that David was talking here about self review taking the lead rather than following teacher-generated feedback.

David went on to talk about peer review (again, not peer assessment), making the point that peer review provides feedback that is more akin to that received in real life, where feedback does not come from a single source and where people are producers of feedback as well as consumers of it. Interestingly (and not surprisingly), students report finding the giving of feedback more useful than the receiving of it. This was the subject of my earlier post ‘Peer assessment: is it better to give or to receive?’.

I may have thought about that aspect before, but in general I feel I still have a huge amount to learn. Thank you, David, for making me think (and what is a blog if not self review…).

Multiple-choice vs short-answer questions

I’m indebted to Silvester Draaijer for leading me towards an interesting article:

Funk, S. C. & Dickson, K. L. (2011). Multiple-choice and short-answer exam performance in a college classroom. Teaching of Psychology, 38(4), 273–277.

The authors used exactly the same questions in multiple-choice and short-answer free-text formats – except that (obviously) the short-answer questions did not provide answer choices. Fifty students in an ‘introduction to personality’ psychology class attempted both versions of each question: half completed a 10-question short-answer pre-test before a 50-question multiple-choice exam, and half completed the same 10 short-answer questions as a post-test after the multiple-choice exam. The experiment was run twice (‘Exam 2’ and ‘Exam 3’; students didn’t know what format to expect in Exam 2, but did in Exam 3).

Checking questions

At last week’s Quality Enhancement Seminar, I was asked for guidance on checking interactive computer-marked assignment (iCMA) questions, and on writing questions that are easy to check.

We have detailed, but rather OU- and OpenMark-centric, guidelines for question checkers (if anyone from the OU would like a copy, please email me), but the request has made me think about some more general points.

Firstly, checking iCMA questions is vitally important, and I think it is reasonable for this to take as long as writing the questions in the first place. Checking all assessment material is important, but with tutor-marked assignments you don’t need to check the answer matching, and there are tutors to mediate any unexpected answers. Not that this should be an excuse for not doing everything you can to make the questions as good as they possibly can be before students see them. There are two general points that apply to checking both computer-marked and tutor-marked questions: (1) try the questions for yourself as if you were a student; (2) if possible, get someone else to check your questions too – a new pair of eyes will often spot ambiguities and the like that you have missed.

A consistent approach to assessment design

Last week, I participated in a ‘Quality Enhancement Seminar’ on ‘Effective use of interactive computer-marked assessment’. A list of tips I gave included the following two points:

11. Embed iCMAs carefully within the module’s assessment strategy, considering whether you want them to be formative-only, summative, thresholded, etc.;

12. Work towards a consistent approach at the qualification, programme and Faculty level (at least).

Afterwards I was asked how the above two points can be reconciled, and the questioner had a point. We want each module to have a carefully thought-out assessment strategy, using e-assessment when, but only when, it is appropriate. And yet I have become more and more worried about the confusion we cause by having lots of different models for different modules; I’ve become convinced that we need to take a more joined-up approach. Which is more important: having the detailed assessment strategy that is absolutely right for each module, or having a consistent approach across a qualification? I’m not sure. It is at least good that so many people are giving careful consideration to these matters.

Feedback and anger

My previous two posts have identified two conditions which lead to feedback being less than useful:

1. when the recipient doesn’t understand the feedback;

2. when there is a lack of alignment between what is said and information received from another source.

Both previous posts include an aspect of another condition, namely a strong emotional reaction. How well do you receive feedback when you’re angry? To be honest, I’m not sure. Anger makes you remember the incident, and I have already posted on the effectiveness of feedback when an answer that you were sure was correct turns out to be incorrect. But anger can prevent you from understanding what was really wrong, and it makes it more likely that you will blame the deliverer of the feedback (whether human or computer) rather than actually learning from the feedback received.

Conditions under which feedback is useless

Reflecting on the previous post, where a feedback intervention was not understood by a student, I really wonder how useful much of our feedback is. And some of the theory (especially frequently referenced lists of conditions under which feedback supports learning) may just be a load of twaddle. Who am I to say?

So let’s start from the opposite end, and let me begin a list of conditions under which feedback is useless. One such condition is that the mark awarded (or other outcome) is not consistent with the feedback given. This happens when a student scores poorly but the feedback, in an attempt to be supportive, says ‘this was a good attempt’. Clearly not true.
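
This particular condition is one that could even be checked mechanically when canned feedback is authored. The sketch below is hypothetical – the threshold and the phrase list are my own, and it is not part of any real assessment system – but it shows the idea of flagging praise that contradicts a low mark.

```python
# Hypothetical sanity check: flag canned feedback whose tone
# contradicts the mark actually awarded.
PRAISE_PHRASES = ("good attempt", "well done", "excellent")

def inconsistent(score: float, max_score: float, feedback: str) -> bool:
    """True if low-scoring work is paired with praising feedback."""
    fraction = score / max_score
    praising = any(p in feedback.lower() for p in PRAISE_PHRASES)
    return fraction < 0.4 and praising

print(inconsistent(1, 10, "This was a good attempt."))  # True: flag it
print(inconsistent(9, 10, "This was a good attempt."))  # False: fine
```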

When students don’t understand our feedback

One of the consequences of my ‘day job’ is that I tend to hear more from students who are dissatisfied in some way with what we do than from those who are happy. An associate lecturer on one of the modules that I chair had a rather grotty end to 2011 when a student complained about her grading of an assignment – why had she lost all those marks? Incidentally, the student got 80+%; in my experience it is students who are doing very well, but who want to be perfect, who tend to be most dissatisfied – I’m not sure if their dissatisfaction is more with themselves or with us.

It turns out that the student is not a native speaker of English, and she’s convinced that she was penalised because of her poor written English. Not true! (Though it might have been – this may not be terribly politically correct, but we’re a UK university, and if a student’s written communication skills are not up to par, then they will lose marks.) The tutor had given feedback on what might have been done to produce an even better piece of work, but the student doesn’t appear to have understood this. From the student’s point of view, she clearly had some legitimate cause for complaint – and that matters. However, if she would just look at what her tutor has written, understand it, and learn from it, all would be well. But there is a gap between what has been written and what the student understands. Could we have written it more clearly? I don’t know. Hey ho!

Happy New Year everyone.
