Archive for the ‘Uncategorized’ Category

Learning outcomes – love them or hate them?

Thursday, December 13th, 2012

I went to an excellent meeting yesterday, the next step in bringing ‘joined-up thinking’ to assessment in modules in our Physics and Astronomy and Planetary Sciences pathways. There are issues, not least that some of the modules are also used by other qualifications/pathways – and we don’t own all of them. But, as at the Faculty Assessment Day in October, it was lovely to be able to spend several hours discussing teaching with a room full of colleagues, and indeed the debate continued onto the bus to Milton Keynes Station at the end of the day!

The debate hinged around the use of learning outcomes. (more…)

Multiple choice vs short answer questions

Thursday, January 19th, 2012

I’m indebted to Silvester Draaijer for leading me towards an interesting article:

Funk, S.C. & Dickson, K.L. (2011) Multiple-choice and short-answer exam performance in a college classroom. Teaching of Psychology, 38(4), 273-277.

The authors used exactly the same questions in multiple-choice and short-answer free-text format – except (obviously) that the short-answer questions did not provide answer choices. Fifty students in an ‘introduction to personality’ psychology class attempted both versions of each question: half completed a 10-question short-answer pretest before a 50-question multiple-choice exam, and half completed the 10 short-answer questions as a post-test after the multiple-choice exam. The experiment was run twice (‘Exam 2’ and ‘Exam 3’); students didn’t know what format to expect in Exam 2, but did in Exam 3. (more…)

Use of capital letters and full stops

Wednesday, November 30th, 2011

For the paper described in the previous post, I ended up deleting a section which described an investigation into whether students’ use of capital letters and full stops could serve as a proxy for writing in sentences and paragraphs. We looked at this because classifying student responses as ‘a phrase’, ‘a sentence’, ‘a paragraph’ etc. is a time-consuming and labour-intensive task – but spotting capital letters and full stops is easier (and can be automated!).

I removed this section from the paper because the findings were somewhat inconclusive, but I was nevertheless surprised by how many responses finished with a full stop, and especially by the large number that started with a capital letter. See the table below, for a range of questions in a range of different uses (sometimes summative and sometimes not).

Each cell gives the number of responses, with the percentage of the total in brackets.

Question (module and presentation)   Started with a capital letter   Finished with a full stop
A-good-idea (AYRF)                   1678 (60.9%)                    1118 (40.6%)
A-good-idea (S154 10J)               622 (60.0%)                     433 (41.8%)
Oil-on-water (S154 10J)              500 (53.9%)                     294 (31.7%)
Metamorphic (SXR103 10E)             297 (41.6%)                     166 (23.2%)
Sedimentary (SXR103 10E)             317 (39.9%)                     178 (22.4%)
Sandstone (S104 10B)                 954 (58.2%)                     684 (41.7%)
Electric-force (S104 10B)            673 (56.7%)                     445 (37.5%)
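Spotting these features really is easy to automate. Here is a minimal sketch in Python (the helper names and the toy responses are my own, purely for illustration):

```python
def starts_with_capital(response: str) -> bool:
    """True if the first non-whitespace character is an upper-case letter."""
    stripped = response.lstrip()
    return bool(stripped) and stripped[0].isupper()

def ends_with_full_stop(response: str) -> bool:
    """True if the response, ignoring trailing whitespace, ends with a full stop."""
    return response.rstrip().endswith(".")

def punctuation_percentages(responses):
    """Percentage of responses starting with a capital / ending with a full stop."""
    n = len(responses)
    capitals = sum(starts_with_capital(r) for r in responses)
    stops = sum(ends_with_full_stop(r) for r in responses)
    return 100 * capitals / n, 100 * stops / n

# Toy example (invented responses):
sample = ["The sandstone is porous.", "porous rock", "It lets water through"]
print(punctuation_percentages(sample))  # roughly (66.7, 33.3) for this sample
```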

Answers that were paragraphs were found to be very likely to start with a capital letter and end with a full stop; answers that were written in note form or as phrases were less likely to start with a capital letter and end with a full stop. Answers in the form of sentences were somewhere in between.

The other very interesting thing was that capital letters and full stops were both (sometimes significantly) associated with correct rather than incorrect responses.
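For anyone wanting to test an association of this sort for themselves, a chi-squared test on a 2×2 contingency table is the obvious tool. A minimal sketch (the counts below are invented for illustration; they are not our data):

```python
from scipy.stats import chi2_contingency

# Rows: response started with a capital letter (yes / no).
# Columns: response was correct (yes / no). Counts are invented.
table = [[480, 220],
         [310, 290]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.4g}")
# A small p-value indicates an association between starting with a
# capital letter and answering correctly.
```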

Student misunderstandings

Tuesday, October 25th, 2011

The second external meeting I attended last week, this time at the University of Warwick, was a meeting of the Institute of Physics Higher Education Group entitled ‘Conceptual understanding: beyond diagnostic testing’. The messages that I’ve come home with are that student misunderstandings may not be what we think they are – and that we need to find out more. Derek Raine’s talk ‘Metaphors and misunderstandings: addressing student misconceptions in physics’ started off with a (presumably apocryphal – and I’m sure I won’t do the story justice) tale of a famous actress being shown to a dressing room in a provincial theatre. Her hosts were embarrassed about the poor standard of their facilities and apologised that the dressing room had no door. But, she said, ‘if there’s no door, how do I get in?’ Yes, we really are sometimes that much at cross-purposes with our students.

In physics education research, much attention has been given over the years to the ‘Force Concept Inventory’ (FCI), where a series of questions is used to assess student understanding of the Newtonian concepts of force. At the meeting, Marion Birch described common trends in FCI results at the universities of Manchester, Hull and Edinburgh – two questions seem to cause particular problems wherever they are asked. More startling are the gender differences – women do less well than men, and two questions (different from those that are poorly answered by all) show particularly large differences. What Marion was describing was inarguable (though some of the women at the meeting wanted to argue…), but I want to know what is causing the results! Is the difference at the level of conceptual (mis)understanding, or is it something about these particular questions that is causing women more difficulty than men? This is just far too interesting to let go – we must find out what is going on.

The final presentation of the day was from Paula Heron (by video link from the University of Washington) on ‘Using students’ spontaneous reasoning to guide the design of effective instructional strategies’. I think we do need to start observing our students carefully, and asking about their reasoning, rather than just assuming that they answer multiple-choice questions in the way that they do because of a particular misconception.

BODMAS, BIDMAS, BEDMAS

Tuesday, September 27th, 2011

More on simple arithmetic skills that people don’t always understand as well as they think they do, leading to difficulties at a later stage.

In the OU Science Faculty we use the mnemonic BEDMAS (others use BODMAS or BIDMAS) to remind students of the rules governing the order of precedence for arithmetic operations:

Brackets take precedence over

Exponents. Then

Division and

Multiplication must be done before

Addition and

Subtraction.

When analysing student responses to iCMA questions, lack of understanding of the rules of precedence and related issues, whilst not contributing to as many errors as problems with fractions and/or units, is still a common difficulty. Sometimes the problem can be attributed to poor calculator use, e.g. a lot of students interpret 3 + 6/3 as meaning (3 + 6)/3, perhaps because they don’t stop and think before using their calculator. This misunderstanding (seen in lots of variants of a question in summative use) led to a talk I used to give: ‘Why is the answer always 243?’. But it goes deeper than that! For example, even after teaching students how to multiply out brackets etc., many think that (x + y)² is the same as x² + y². Mistakes of this ilk are completely understandable, but they are nevertheless something to watch out for.
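Python happens to follow the same precedence rules, so both mistakes are easy to demonstrate. A quick sketch (the numbers simply mirror the examples above):

```python
# Division binds more tightly than addition (the D before the A in BEDMAS):
print(3 + 6 / 3)      # 5.0 -- the intended reading
print((3 + 6) / 3)    # 3.0 -- the 'type it in left to right' misreading

# Exponents apply to the whole bracket, and squaring does not distribute
# over addition:
x, y = 2.0, 3.0
print((x + y) ** 2)   # 25.0
print(x**2 + y**2)    # 13.0 -- not the same thing!
```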

Happy birthday blog!

Saturday, July 30th, 2011

It seems hard to believe that I’ve been blogging on assessment, especially e-assessment, and especially the impact of e-assessment on students, for a year now.

Even more amazing is the fact that there is still so much I want to say. Assessment and e-assessment have been growth areas for the past 20 years or so (huge numbers of papers have been written and huge numbers of conference presentations given). In many ways we know so much… and yet we know so little. I’m not an expert, just an amateur pottering around the edges of a large, overpopulated and sometimes contested territory. I find it difficult to get my head around many of the concepts. (more…)

Fair or equal?

Tuesday, May 31st, 2011

This post returns to ideas from Gipps and Murphy’s book ‘A fair test?’. We use words like ‘equality’, ‘equity’ and ‘equal opportunities’ frequently, in the context of assessment and elsewhere. Gipps and Murphy deliberately talk about ‘equity’ rather than ‘equal opportunities’, and the UK Government talks about ‘equality’ (the 2010 Equality Act came fully into force in April 2011) – all in an attempt to make their meaning clearer. I used to think I was really clued up on all of this (as a line manager in the UK Open University, I ask a lot of interview questions relating to equal opportunities – and I was once told that the answer I gave to an interview question of this ilk was the best that the interviewer had ever heard). However, especially in the context of assessment, I’ve come to realise that things aren’t as simple as they might appear… (more…)

How long is short?

Sunday, January 23rd, 2011

I’ve been looking at student responses to our short-answer free-text questions. I’ll start by considering something simple: how long are the responses? (more…)
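As a flavour of the sort of first-pass analysis involved, here is a minimal sketch (word count is only one possible measure of length, and the example responses are invented):

```python
from statistics import mean, median

# Invented example responses to a short-answer free-text question:
responses = [
    "Sandstone is permeable.",
    "It is a porous sedimentary rock, so water soaks through it.",
    "porous",
]

lengths = [len(r.split()) for r in responses]  # length in words
print(f"mean = {mean(lengths):.1f} words, median = {median(lengths)} words")
```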

Feedback, feedforward or feedaround?

Monday, November 22nd, 2010

This is another of those ideas that others probably thought of years ago, but I’ve been a bit slow on the uptake. In summary, findings about the effectiveness or otherwise of feedback probably depend on what the feedback is meant to be used for. The usual OU scenario is a student receiving feedback on a tutor-marked assignment and (supposedly) using this feedback to improve for next time. But in other contexts, the feedback may be intended to enable the student to improve the same piece of work. And the ‘three attempts with increasing feedback’ that we provide on interactive computer-marked assignments perhaps has more in common with the second of these than the first. (more…)

Partial credit for correct at second or third attempt

Monday, August 30th, 2010

One of the features of OpenMark, the OU’s e-assessment system, is the fact that students are allowed several (usually three) attempts at each question, and receive hints which increase in detail after each unsuccessful attempt. This is the case even in summative use, where the marks awarded decrease in line with the amount of feedback that has been given before the question is correctly answered.

The provision of increasing feedback is illustrated in the figure below.

The way in which we give partial credit when a question is only answered following feedback contrasts with other systems, which give partial credit for partially correct answers (we sometimes do that too). Is one of these approaches better than the other? I have always liked our approach of giving increasing feedback, and it has recently been pointed out to me that it is also good if students can be encouraged to reach a completely correct answer for themselves. However, I think it is important that we tell students if their answer is partially correct, rather than letting them think that it is completely wrong – and so sending them off on a wild goose chase!
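To make the contrast concrete, here is a minimal sketch of attempt-based credit (the mark fractions are invented for illustration; they are not OpenMark’s actual weightings):

```python
# Fraction of the available marks awarded, by the attempt on which the
# question is finally answered correctly. Fractions are invented.
CREDIT_BY_ATTEMPT = {1: 1.0, 2: 0.7, 3: 0.4}

def marks_awarded(max_marks, correct_on_attempt=None):
    """Marks for a question answered correctly on the given attempt (1-3),
    or zero if it was never answered correctly (None)."""
    if correct_on_attempt is None:
        return 0.0
    return max_marks * CREDIT_BY_ATTEMPT.get(correct_on_attempt, 0.0)

print(marks_awarded(3, 1))   # 3.0 -- full marks, no hints needed
print(marks_awarded(3, 3))   # ~1.2 -- correct only after two hints
print(marks_awarded(3))      # 0.0 -- never answered correctly
```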

I also think that one of the remaining issues for the use of e-assessment of this type, especially in assessing maths, is the fact that we don’t give credit for ‘working’, which of course is so much a feature of human marking.