Archive for March, 2011

Feedback for excellent students

Thursday, March 31st, 2011

I’ve heard and read several things recently suggesting that excellent students tend to get less feedback than others. This is perhaps related to the fact that (anecdotally at least) teachers sometimes ignore excellent students – they’ll do OK whatever happens, so more effort is put into helping the others. That seems wrong to me; I feel that education (and feedback) should be about helping individuals to achieve their best, so able students should be stretched. In feedback on written work, able students should be given comments that may not be appropriate for others: perhaps a suggestion for extra reading, a link to related literature, and so on.

But what are the implications for e-assessment? Clearly students who get e-assessment questions right should be told that they have done well (obvious, but not always done, and we don’t want to patronise). But is that enough? Adaptive questions may also have a part to play; there seems little point in expecting a student to work through a tranche of questions which are trivially easy for them. Having said that, some students may like the reinforcement of realising that they can do well on these questions, and the revision of topics which (for them) are straightforward. And can we be sure that these students will find the ‘difficult’ adaptive questions more challenging than the easy ones? Has anyone done any work in this area?

Repeated and blank responses

Wednesday, March 30th, 2011

The figure shown on the left requires a bit of explaining. The three columns represent student responses at 1st, 2nd and 3rd attempt to a short-answer free-text question in formative use. Green represents correct responses; red/orange/yellow represent incorrect responses. The reason I’ve used different colours here is to enable me to indicate repeated responses: where a colour is identical from column to column, an incorrect response given at the first or second attempt was repeated exactly at the second and/or third attempt. The colour grey represents responses that were completely blank. The figure shows that

  • at first attempt, four responses (0.9% of the total of 449) were blank;
  • at second attempt, 43 responses (17.8% of the total of 241) were identical with responses given at the first attempt, with 7 responses (2.9%) blank;
  • at third attempt, 54 responses (27.4% of the total of 197) were identical with responses given at the second attempt, with 3 responses (1.5%) blank.
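For what it’s worth, here is a minimal sketch of how such tallies might be computed. It is purely illustrative – it is not the analysis code actually used – and it assumes each student’s responses are available as a list of plain strings in attempt order:

# Illustrative only: tally blank responses and exact repeats of the previous
# attempt, per attempt number, from per-student response sequences.
from collections import Counter

def tally_responses(attempt_sequences):
    """attempt_sequences: list of per-student response lists, e.g. ['2.5 m', '', '2.5 m']."""
    totals, blanks, repeats = Counter(), Counter(), Counter()
    for responses in attempt_sequences:
        for attempt, response in enumerate(responses, start=1):
            totals[attempt] += 1
            if response.strip() == "":
                blanks[attempt] += 1
            elif attempt > 1 and response.strip() == responses[attempt - 2].strip():
                repeats[attempt] += 1  # identical to this student's previous response
    for attempt in sorted(totals):
        print(f"Attempt {attempt}: {repeats[attempt]} repeated, "
              f"{blanks[attempt]} blank, out of {totals[attempt]}")

The percentages quoted above are simply these counts divided by the number of responses at each attempt.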

Reasons given by students (in interviews) for leaving the response box blank or repeating responses include just wanting to get to the final worked example, not understanding the question, and not understanding the feedback.

Can formative-only assignments tell you about student misunderstandings?

Thursday, March 17th, 2011

When I was doing the original analysis of student responses to Maths for Science assessment questions, I concentrated on questions that had been used summatively and also on questions that required students to input an answer rather than selecting a response. I reckoned that summative free-text questions would give me more reliable information on student misunderstandings than would be the case for formative-only and multiple-choice questions. Was this a valid assumption? I’ll start by considering student responses to a question that is similar to the one I have been discussing in the previous posts, but which is in use in the Maths for Science formative-only ‘Practice assessment’. The question is shown on the left. The Practice assessment is very heavily used and students can (and do) re-attempt questions as often as they like, being given different variants for extra practice.

More on feedback

Wednesday, March 16th, 2011

Picking up on Silvester’s comment on my previous post… I think it is really important that we stop and think before saying that a student’s answer to an e-assessment question is wrong because some detail of it is wrong. As with any type of assessment, it is important to think about what you want to assess.

More about units

Tuesday, March 15th, 2011

OpenMark e-assessment questions were used for the first time in a little 10-credit module called Maths for Science that has been running since 2002. I did some analysis years ago into the mistakes that students make, but I’m about to start writing a second edition of the module, so I’m revisiting this work. One of the things that amazed me when I first did the analysis, and continues to amaze me now, is that students are surprisingly good at rearranging equations. However, they are surprisingly bad at substituting values into equations to give a final result, complete with correct units and appropriate precision.

Thank you Google Translate

Friday, March 11th, 2011

As you’ve probably guessed, I’m English, and aside from a pitifully small amount of French, I am ashamed to admit that I don’t speak any other languages. Therefore I think Google Translate is wonderful, especially since it enables me to read the excellent blogs of Sander Schenk and Silvester Draaijer – both very thoughtful blogs in the same area as mine, but both in Dutch. I commend them to you. I have added them to my ‘links’ (below right).

Unfortunately the OpenCETL (COLMSCT and piCETL) websites are not yet restored to their full glory, which means that many of the other links currently lead to the same place – and lots of information formerly on the COLMSCT and piCETL sites is missing. If it doesn’t reappear soon I will remove these links and start posting the most interesting findings from my COLMSCT and piCETL teaching fellowships here.

Units: little things that make a difference

Thursday, March 10th, 2011

If we start from the premise that we want assessment to encourage and support learning, then one measure of the assessment’s effectiveness is better performance on later summative tasks. Mundeep Gill and Martin Greenhow (Gill, M. and Greenhow, M. (2008) How effective is feedback in computer-aided assessments? Learning, Media and Technology, 33(3), 207-220) report on work where the introduction of computer-aided assessment had a positive impact (by this measure) in all areas but one.

The problem hinged on the presence of correct units with students’ numerical answers: so, for example, we might accept an answer of 10 metres, 10 meters or 10 m, but not 10 M, 10 kg, 10 m s⁻¹ or just 10. Like most physicists, I have extremely strong views on this – I regard the unit as a crucial part of the answer. The problem is that the answer-matching in many e-assessment systems doesn’t allow you to check for correct values and correct units at the same time (let alone whether the answer has been expressed to the correct precision). Fortunately this is something that OpenMark handles well – we can check numbers, units and so on, and give appropriate targeted feedback on any aspect(s) of an answer that are incorrect.
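To make that concrete, here is a minimal sketch of the idea of matching the value and the unit of a response separately, so that feedback can target whichever part is wrong. It is only illustrative – it is not OpenMark’s actual answer matching, and the accepted-unit list, tolerance and feedback wording are all my own assumptions:

# Illustrative only: split a response such as '10 m' into number and unit,
# then comment on each part separately.
import re

ACCEPTED_UNITS = {"m", "metres", "meters"}  # case-sensitive, so '10 M' is rejected

def check_answer(raw, expected_value=10.0, tolerance=0.01):
    match = re.fullmatch(r"\s*([-+]?\d+(?:\.\d+)?)\s*(.*?)\s*", raw)
    if not match:
        return "Your answer should be a number followed by its units."
    value, unit = float(match.group(1)), match.group(2)
    value_ok = abs(value - expected_value) <= tolerance * abs(expected_value)
    unit_ok = unit in ACCEPTED_UNITS
    if value_ok and unit_ok:
        return "Correct."
    if value_ok:
        return "Your number is right, but check the units." if unit else "Don't forget the units."
    if unit_ok:
        return "Your units are right, but check the numerical value."
    return "Check both the numerical value and the units."

The point is simply that the number and the unit are checked independently, so a student who writes ‘10 kg’ can be told about the unit rather than just being marked wrong.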

In the system Mundeep and Martin were using this was not possible, so the units were provided for the students outside the input box; all the students had to input was a number. Unfortunately, over the two-year period of the investigation, students were observed to become more likely to omit units from their written work. This is another one of those unintended consequences – not exactly a positive outcome for e-assessment. Thank you Mundeep and Martin for your honesty in reporting this; others would be advised to take note.

Learning-oriented and technology-enhanced assessment

Wednesday, March 9th, 2011

My post on ‘Adjectives of assessment’ omitted ‘learning-oriented’, and to be honest it wasn’t until I was reading this afternoon that I realised what a powerful concept learning-oriented assessment might be. I was reading Keppell, M. and Carless, D. (2006) Learning-oriented assessment: a technology-based case study, Assessment in Education, 13(2), 179-191. Keppell and Carless deliberately use the term ‘learning-oriented’ instead of the more common ‘formative’ or ‘assessment for learning’, and they make the point that both formative and summative assessment can be learning-oriented (and I’d add that both formative and summative assessment can be anti-learning-oriented too). It is also noteworthy that Keppell and Carless’s work was done in Hong Kong, where assessment is generally characterised as being exam-oriented.

I’d also overlooked the full impact of the phrase ‘technology-enhanced assessment’. That little word ‘enhanced’ presumably means that the assessment is better because of the use of technology. So if the technology doesn’t make it better, perhaps we shouldn’t be using it. Nice.