Archive for November, 2011

Use of capital letters and full stops

Wednesday, November 30th, 2011

For the paper described in the previous post, I ended up deleting a section which described an investigation into whether student use of capital letters and full stops could be used as a proxy for writing in sentences and paragraphs. We looked at this because it is a time-consuming and labour-intensive task to classify student responses as being ‘a phrase’, ‘a sentence’, ‘a paragraph’ etc. – but spotting capital letters and full stops is easier (and can be automated!).
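The check itself is easy to script. Below is a minimal sketch in Python (illustrative only, not the code actually used for this analysis) that flags whether a response starts with a capital letter and ends with a full stop; the sample responses are invented.

    def starts_with_capital(response):
        """True if the first non-space character is an upper-case letter."""
        stripped = response.lstrip()
        return bool(stripped) and stripped[0].isupper()

    def ends_with_full_stop(response):
        """True if the last non-space character is a full stop."""
        return response.rstrip().endswith(".")

    # Invented example responses, not real student data.
    responses = [
        "Sandstone is a sedimentary rock formed from compacted sand grains.",
        "compacted sand grains",
    ]
    for r in responses:
        print(starts_with_capital(r), ends_with_full_stop(r))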

I removed this section from the paper because the findings were somewhat inconclusive, but I was nevertheless surprised by how many responses finished with a full stop, and especially by the large number that started with a capital letter. The table below gives the figures for a range of questions in a range of different uses (sometimes summative and sometimes not).

The table gives, for each question and use, the number of responses (and percentage of the total) that started with a capital letter and the number (and percentage) that finished with a full stop.

Question        | Use        | Started with a capital letter | Finished with a full stop
A-good-idea     | AYRF       | 1678 (60.9%)                  | 1118 (40.6%)
A-good-idea     | S154 10J   | 622 (60.0%)                   | 433 (41.8%)
Oil-on-water    | S154 10J   | 500 (53.9%)                   | 294 (31.7%)
Metamorphic     | SXR103 10E | 297 (41.6%)                   | 166 (23.2%)
Sedimentary     | SXR103 10E | 317 (39.9%)                   | 178 (22.4%)
Sandstone       | S104 10B   | 954 (58.2%)                   | 684 (41.7%)
Electric-force  | S104 10B   | 673 (56.7%)                   | 445 (37.5%)

Answers that were paragraphs were found to be very likely to start with a capital letter and end with a full stop; answers written in note form or as phrases were less likely to do so. Answers in the form of sentences were somewhere in between.

The other very interesting finding was that both capital letters and full stops were associated, sometimes significantly, with correct rather than incorrect responses.
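For anyone curious about how such an association might be tested, here is a hedged sketch: a chi-squared test on a 2 × 2 contingency table of ‘started with a capital letter’ against ‘marked correct’. The counts are invented purely for illustration and are not the data reported above, and this is not necessarily the test used in the original analysis.

    from scipy.stats import chi2_contingency

    # Invented 2 x 2 contingency table:
    #                       correct  incorrect
    observed = [[900, 300],   # started with a capital letter
                [400, 250]]   # did not start with a capital letter

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi-squared = {chi2:.2f}, p = {p:.4f}")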

Student engagement with assessment and feedback: some lessons from short-answer free-text e-assessment questions

Wednesday, November 30th, 2011

Sorry for my long absence from this blog. Those of you who work in or are close to UK Higher Education will probably realise why – changes in the funding of higher education in England mean that the Open University (and no doubt others) are having to do a lot of work to revise our curriculum to fit. I’m sure that it will be great in the end, but the process we are going through at present is not much fun. I always work quite hard, but this is a whole new level – and I’m afraid my blog and my work in e-assessment are likely to suffer for the next few months.

It’s not all doom and gloom. I’ve had a paper published in Computers & Education (reference below), pulling together findings from our observation of students attempting short-answer free-text questions in a usability lab, and our detailed analysis of student responses to free-text questions – some aspects of which I have reported here. It’s a long paper, reflecting a substantial piece of work, so I am very pleased to have it published.

The reference is:

Jordan, S. (2012) Student engagement with assessment and feedback: some lessons from short-answer free-text e-assessment questions. Computers & Education, 58(2), 818-834.

The abstract is: 

Students were observed directly, in a usability laboratory, and indirectly, by means of an extensive evaluation of responses, as they attempted interactive computer-marked assessment questions that required free-text responses of up to 20 words and as they amended their responses after receiving feedback. This provided more general insight into the way in which students actually engage with assessment and feedback, which is not necessarily the same as their self-reported behaviour. Response length and type varied with whether the question was in summative, purely formative, or diagnostic use, with the question itself, and most significantly with students’ interpretation of what the question author was looking for. Feedback was most effective when it was understood by the student, tailored to the mistakes that they had made and when it prompted students rather than giving the answer. On some occasions, students appeared to respond to the computer as if it was a human marker, supporting the ‘computers as social actors’ hypothesis, whilst on other occasions students seemed very aware that they were being marked by a machine.

Do take a look if you’re interested.

More errors in finding the gradient of a graph

Friday, November 4th, 2011

Working yesterday on the chapter on Graphs and Gradient for the new edition of Maths for Science, I remembered the other student error that I have seen in iCMA questions. When asked to find the gradient of a graph like the one shown below, some students essentially ‘count squares’.

In this case, they would give the rise (actually (170-10) km = 160 km) as 16, since that is how many squares there are vertically between the red lines. By counting squares horizontally, they would give the corresponding run as 28 (which is correct apart from the fact that the units – seconds in this case – have been omitted). So the gradient would be given as 0.57 instead of 5.7 km/s.
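Spelled out in a quick sketch using the numbers from this example (and assuming, from the figures above, a vertical scale of 10 km per square and a horizontal scale of 1 s per square), the correct calculation reads the rise and run from the axes, whereas the ‘counting squares’ error uses the number of grid squares and loses both the axis scale and the units:

    # Correct: read values (with units) from the axes.
    rise_km = 170 - 10        # 160 km
    run_s = 28                # 28 s
    print(rise_km / run_s)    # 5.714..., i.e. about 5.7 km/s

    # 'Counting squares': 16 squares vertically, 28 horizontally,
    # ignoring the axis scale and the units.
    print(16 / 28)            # 0.571..., i.e. about 0.57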

A related problem is students measuring the rise and run with a ruler, again rather than reading values from the axes. Perhaps we encourage both these behaviours by making an analogy between the gradient of a line and the gradient of a road. When finding the gradient of a road, we are concerned with actual lengths, not with lengths that represent other quantities, as they do on a graph.