Archive for the ‘short-answer free text questions’ Category

Transforming Assessment webinar

Thursday, June 6th, 2013

I gave a webinar yesterday in the ‘Transforming Assessment’ series, on “Short-answer e-assessment questions: six years on”. The participants were lovely and I was especially pleased that there were lots of people I didn’t know. There is a recording at http://bit.ly/TA5J2013 if you’re interested.

The response yesterday was very encouraging, but I remain concerned that more people are not using question types like this. However, I stand by my view that you need to use real student responses in developing answer matching for questions of the type we have written. That’s fine for us at the OU, with large student numbers, but not necessarily for others.

Then, in the feedback from participants, someone suggested that they would value training in writing questions and developing answer matching. I would so much like to run training like this, but simply don’t have the time.

But, thanks to Tim Hunt, we have another suggestion. I have recently used the Moodle Pattern Match question type to write much simpler questions, which require a tightly constrained single-word answer – like the one shown below.

If you are interested in using Pattern Match, writing questions like this would give a simple way in - and you’d probably get away with developing the questions without the need for student responses beforehand (though I would still monitor the responses you do get).
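To give a flavour of what such a tightly constrained check involves, here is a minimal sketch in Python (not PMatch syntax). The topic and the accepted spellings are invented for illustration; they are not taken from the question above.

```python
# A minimal sketch, in Python rather than PMatch syntax, of a tightly
# constrained single-word check. The topic and the accepted spellings are
# hypothetical, not taken from the real question.

ACCEPTED = {"photosynthesis", "fotosynthesis", "photosynthesys"}  # hypothetical accepted spellings

def mark_single_word(response: str) -> bool:
    """Accept only a one-word response whose spelling is on the accepted list."""
    words = response.lower().split()
    if len(words) != 1:
        return False
    return words[0].strip(".,;:!?") in ACCEPTED

print(mark_single_word("Photosynthesis."))        # True
print(mark_single_word("it is photosynthesis"))   # False: more than one word
```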

Automatic marking of essays

Friday, April 5th, 2013

I am grateful to Carol Bailey (see previous post) who, following a discussion over lunch, sent me a link to an extremely interesting paper:

Vojak, C., Kline, S., Cope, B., McCarthey, S. and Kalantzis, M. (2011) New Spaces and Old Places: An Analysis of Writing Assessment Software. Computers and Composition, 28, 97-111. (more…)

Eight maids a-Milking – part 2

Wednesday, January 2nd, 2013

Day 8b. Using PMatch for short-answer free-text questions. As promised, what I’d like to do is to give you an example of the answer matching for a real question, based on real student responses. (more…)

Eight Maids a-Milking – part 1

Tuesday, January 1st, 2013

Day 8a. Free text marking of short answer questions. I’ve already talked about the automatic marking of short answers quite a lot on this blog, so today, the first day of 2013, I’d like to do two slightly different things: (1) set our work in this area in context; (2) give an example of the actual PMatch answer matching that we use for one question. Actually, we’ve been out walking and I wanted to get my other blog written up, so I will only do (1) today, letting the ‘Twelve Days’ slip and ending on 6th January (which some people consider to be the 12th Day of Christmas in any case) rather than 5th. A clever solution, and another reminder that assessment questions need to be unambiguous (not like ‘What date is the 12th Day of Christmas?’) and to mark all correct answers as correct and all incorrect answers as incorrect – that is very important for short-answer free text questions too. (more…)

Using pattern matching software

Tuesday, July 17th, 2012

PMatch is a new Moodle question type (based on OpenMark’s pattern matching question type that is currently in use at the Open University for the short-answer free text questions that I have written). There is more information here.

Follow the links and you’ll see that it is pretty simple and, despite the fact that I have no computer programming experience, I found it easy to use. If you wanted to, you could use PMatch simply to look for keywords – however, for most of the questions that I have written that would not be sufficient. Word order and negation can be very important, especially when students really do give answers that are ‘opposite’ to a correct one, as in the examples that follow. So, in one question, you need to be able to mark ‘kinetic energy is converted to gravitational potential energy’ (and synonyms) as right but ‘gravitational potential energy is converted to kinetic energy’ (and synonyms) as wrong. In another question, you need to be able to mark ‘the forces are balanced’ and ‘There are no unbalanced forces’ as right (note that students very often use double negatives in their answers) whilst marking ‘The forces are not balanced’ and ‘The forces are unbalanced’ as wrong.
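To make the word-order point concrete, here is a rough sketch in Python (not PMatch syntax) for the energy question: keyword spotting alone cannot separate the right response from the wrong one, so the relative order of ‘kinetic’ and ‘gravitational’ also has to be checked.

```python
# A rough Python illustration (not PMatch syntax) of why keyword spotting is
# not enough for the energy question above: the right and wrong responses
# contain exactly the same keywords, so the order of 'kinetic' and
# 'gravitational' also has to be checked.

KEYWORDS = {"kinetic", "gravitational", "energy"}

def keywords_present(response: str) -> bool:
    return KEYWORDS.issubset(response.lower().split())

def kinetic_before_gravitational(response: str) -> bool:
    words = response.lower().split()
    return ("kinetic" in words and "gravitational" in words
            and words.index("kinetic") < words.index("gravitational"))

right = "kinetic energy is converted to gravitational potential energy"
wrong = "gravitational potential energy is converted to kinetic energy"

print(keywords_present(right), keywords_present(wrong))
# True True: keywords alone cannot tell them apart
print(kinetic_before_gravitational(right), kinetic_before_gravitational(wrong))
# True False: word order does
```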

(more…)

Why don’t more people use short-answer free-text questions?

Monday, July 16th, 2012

At CAA 2012 I gave a paper with the title ‘Short-answer e-assessment questions: five years on’, in which I discussed OU work in this area. There was a lot of interest in what I said, especially concerning evaluation findings. However, I wanted to get a discussion going on the reasons why more people don’t use assessment items of this type, and this didn’t really happen. So I’m trying again here. (My CAA 2012 paper is at Open Research Online if you want more background information.)

(more…)

Use of capital letters and full stops

Wednesday, November 30th, 2011

For the paper described in the previous post, I ended up deleting a section which described an investigation into whether student use of capital letters and full stops could be used as a proxy for writing in sentences and paragraphs. We looked at this because it is a time-consuming and labour-intensive task to classify student responses as being ‘a phrase’, ‘a sentence’, ‘a paragraph’ etc. – but spotting capital letters and full stops is easier (and can be automated!).

I removed this section from the paper because the findings were somewhat inconclusive, but I was nevertheless surprised how many responses finished with a full stop and especially by the large number that started with a capital letter. See the table below, for a range of questions in a range of different uses (sometimes summative and sometimes not).

Question (module and presentation) | Responses starting with a capital letter, number (% of total) | Responses ending with a full stop, number (% of total)
A-good-idea (AYRF) | 1678 (60.9%) | 1118 (40.6%)
A-good-idea (S154 10J) | 622 (60.0%) | 433 (41.8%)
Oil-on-water (S154 10J) | 500 (53.9%) | 294 (31.7%)
Metamorphic (SXR103 10E) | 297 (41.6%) | 166 (23.2%)
Sedimentary (SXR103 10E) | 317 (39.9%) | 178 (22.4%)
Sandstone (S104 10B) | 954 (58.2%) | 684 (41.7%)
Electric-force (S104 10B) | 673 (56.7%) | 445 (37.5%)

Answers that were paragraphs were found to be very likely to start with a capital letter and end with a full stop; answers that were written in note form or as phrases were less likely to start with a capital letter and end with a full stop. Answers in the form of sentences were somewhere in between.

The other very interesting thing was that capital letters and full stops were both (sometimes significantly) associated with correct rather than incorrect responses.
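If you wanted to automate the check itself, something along these lines would do it. This is a minimal sketch, and the two example responses are invented.

```python
# A minimal sketch of the automated check described above: does a response
# start with a capital letter and end with a full stop? The example responses
# are invented; in practice this would be run over real student responses.

def starts_with_capital(response: str) -> bool:
    text = response.strip()
    return bool(text) and text[0].isupper()

def ends_with_full_stop(response: str) -> bool:
    return response.strip().endswith(".")

responses = ["The forces are balanced.", "forces balanced"]
capitals = sum(starts_with_capital(r) for r in responses)
full_stops = sum(ends_with_full_stop(r) for r in responses)
print(f"{capitals} of {len(responses)} responses start with a capital letter")
print(f"{full_stops} of {len(responses)} responses end with a full stop")
```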

Student engagement with assessment and feedback: some lessons from short-answer free-text e-assessment questions

Wednesday, November 30th, 2011

Sorry for my long absence from this blog. Those of you who work in or are close to UK Higher Education will probably realise why – changes in the funding of higher education in England mean that the Open University (and no doubt others) are having to do a lot of work to revise our curriculum to fit. I’m sure that it will be great in the end, but the process we are going through at present is not much fun. I always work quite hard, but this is a whole new level – and I’m afraid my blog and my work in e-assessment are likely to suffer for the next few months.

It’s not all doom and gloom. I’ve had a paper published in Computers & Education (reference below), pulling together findings from our observation of students attempting short-answer free-text questions in a usability lab, and our detailed analysis of student responses to free-text questions – some aspects of which I have reported here. It’s a long paper, reflecting a substantial piece of work, so I am very pleased to have it published.

The reference is:

Jordan, S. (2012) Student engagement with assessment and feedback: some lessons from short-answer free-text e-assessment questions. Computers & Education, 58(2), 818-834.

The abstract is: 

Students were observed directly, in a usability laboratory, and indirectly, by means of an extensive evaluation of responses, as they attempted interactive computer-marked assessment questions that required free-text responses of up to 20 words and as they amended their responses after receiving feedback. This provided more general insight into the way in which students actually engage with assessment and feedback, which is not necessarily the same as their self-reported behaviour. Response length and type varied with whether the question was in summative, purely formative, or diagnostic use, with the question itself, and most significantly with students’ interpretation of what the question author was looking for. Feedback was most effective when it was understood by the student, tailored to the mistakes that they had made and when it prompted students rather than giving the answer. On some occasions, students appeared to respond to the computer as if it was a human marker, supporting the ‘computers as social actors’ hypothesis, whilst on other occasions students seemed very aware that they were being marked by a machine.

Do take a look if you’re interested.

Answer matching for short-answer questions: simple but not that simple

Saturday, July 16th, 2011

In describing our use of (simple) PMatch for answer matching for short-answer free-text questions, I may have made it sound too simple. I’ll give two examples of the sorts of things you need to consider:

Firstly, consider the question shown on the left. I’m not going to say whether the answer given is correct or incorrect, but note that the answer ‘Kinetic energy is converted to gravitational (potential) energy’ includes exactly the same words – and responses of both types are commonly received from real students. So word order matters.

The other thing to take care with is negatives. As I’ve said before, it isn’t that students are trying to trick the system. However, responses that would be correct were it not for the fact that they contain the word ‘not’ are surprisingly common. So answer matching needs to be able to deal with negation. (more…)
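To make the negation point concrete, here is a rough sketch (again in plain Python rather than PMatch) using a ‘balanced forces’ question of the kind discussed elsewhere on this blog; the word lists are illustrative only.

```python
# Illustrative only: a crude check for a 'balanced forces' question which
# rejects negated responses but still accepts the common double negative
# 'There are no unbalanced forces'.

NEGATIONS = {"not", "unbalanced", "isn't", "aren't"}

def mark_forces_response(response: str) -> str:
    words = {w.strip(".,;:") for w in response.lower().split()}
    mentions_balance = "balanced" in words or "unbalanced" in words
    negated = bool(NEGATIONS & words)
    if not mentions_balance:
        return "unmatched"
    if not negated:
        return "right"                      # e.g. 'The forces are balanced.'
    if "no" in words and "unbalanced" in words:
        return "right"                      # double negative: 'There are no unbalanced forces.'
    return "wrong"                          # e.g. 'The forces are not balanced.'

for r in ["The forces are balanced.", "There are no unbalanced forces",
          "The forces are not balanced", "The forces are unbalanced"]:
    print(r, "->", mark_forces_response(r))
```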

Short-answer questions: how far can you go?

Sunday, July 3rd, 2011

Finally for today, I’d like to talk about where I believe the limits currently sit in the use of short-answer free-text questions.

I have written questions where the correct response requires three separate concepts. For example, I have written a question which asked how the rock in the photograph shown was formed. (Incidentally, this is granite, photographed near Lands End in Cornwall, but I’d never say that in a question, otherwise students just Google the answer). A correct answer would talk about the rock being formed from magma (first concept), which has cooled and crystallised (second concept) slowly (slow because it has cooled within the Earth rather than on the surface) (third concept). I haven’t managed to write a successful question with a correct answer that includes more than three separate ideas, but that doesn’t mean to say it can’t be done.
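A minimal sketch of what checking for those three concepts might look like (in Python rather than the PMatch rules we actually use); the synonym lists are illustrative only.

```python
# A minimal sketch of checking for the three separate concepts described above
# (formed from magma; cooled and crystallised; slowly). The synonym lists are
# illustrative only; real answer matching would also need to cope with
# misspellings, word order and negation.

CONCEPTS = {
    "magma":  {"magma", "molten"},
    "cooled": {"cooled", "cooling", "crystallised", "crystallized", "solidified"},
    "slowly": {"slowly", "slow"},
}

def contains_all_three_concepts(response: str) -> bool:
    words = {w.strip(".,;:") for w in response.lower().split()}
    return all(words & synonyms for synonyms in CONCEPTS.values())

print(contains_all_three_concepts("Magma cooled and crystallised slowly inside the Earth"))  # True
print(contains_all_three_concepts("Magma cooled quickly at the surface"))                    # False
```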

A second consideration in authoring short-answer questions is the number of discrete correct and incorrect responses. Here I think the limit came in another question based on one of my photographs, with thanks to the colleagues shown. We used this photograph in training people to write answer matching and the question was simply ‘Who is taller?’ That might sound like a very straightforward question (until you get the bright sparks who say ‘The prettier one’), but writing a complete set of answer matching for this question was a time-consuming and non-trivial task. Think about the correct answers: ‘The one on the right’, ‘The one with longer hair’, ‘The one carrying a brochure’, ‘The one wearing glasses’ [not ‘The one holding a glass’] … and so on, and so on.
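To give a flavour of how the list grows, here is a rough sketch (again Python, not the matching we actually wrote) with just a handful of the right and wrong descriptions; the real answer matching had to cover far more alternatives, synonyms and misspellings.

```python
# Illustrative only: a handful of the discrete right and wrong descriptions for
# 'Who is taller?'. Matching the whole phrase 'holding a glass', rather than
# the bare keyword 'glass', avoids tripping over responses about 'glasses'.

RIGHT_PHRASES = ["on the right", "longer hair", "carrying a brochure", "wearing glasses"]
WRONG_PHRASES = ["holding a glass"]

def mark_taller(response: str) -> str:
    text = " ".join(response.lower().split())
    if any(phrase in text for phrase in WRONG_PHRASES):
        return "wrong"
    if any(phrase in text for phrase in RIGHT_PHRASES):
        return "right"
    return "unmatched"

print(mark_taller("The one wearing glasses"))   # right
print(mark_taller("The one holding a glass"))   # wrong
```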

The third limit is the serious one. Developing answer matching for the questions I’ve talked about was time-consuming, but we got there in the end. However, the correct answers are unambiguous – it is clear that the woman on the right-hand side of the photograph is taller than the one on the left. But in some subject areas, what is ‘correct’ is less well defined. I think that’s the real limit for questions of this type.