to two significant figures

I know I keep banging on about the importance of monitoring your questions when they are ‘out there’, being used by students. If what follows appears to be a bit of a trick (and in a sense it is), it’s a trick with firm foundations – the monitoring of many thousands of student responses. Continue reading

Posted in Computers as Social Actors, question analysis, significant figures

iCMA statistics

This work was originally reported on the website of COLMSCT (the Centre for Open Learning of Mathematics, Science, Computing and Technology), and other work was reported on the website of piCETL (the Physics Innovations Centre for Excellence in Teaching and Learning). Unfortunately, the whole of the OpenCETL website had to be taken down. The bare bones are back (and I’m very grateful for this) but the detail isn’t, so I have decided to start re-reporting some of my previous findings here. This has the advantage of enabling me to update the reports as I go.

I’ll start by reporting a project on iCMA statistics which was carried out back in 2009, with funding from COLMSCT, by my daughter Helen Jordan (now doing a PhD in the Department of Statistics at the University of Warwick; at the time she did the work she was an undergraduate student of mathematics at the University of Cambridge). Follow the link for Helen’s project report, but I’ll try to report the headline details here – well, as much as I can understand them! Continue reading

Posted in e-assessment, statistics, student engagement

Feedback after a correct answer

OpenMark is set up to give students increasing feedback after each incorrect attempt at a question. After [usually] three attempts they are given a ‘full answer’. A student who gets the question right receives the same ‘full answer’. The screenshot above shows this feedback for a simple question.
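
If it helps to picture the logic, here is a minimal sketch in Python of how attempt-based feedback of this kind might work. It is purely illustrative – not OpenMark’s actual code – and the function name, messages and hints are invented:

```python
# Illustrative sketch of attempt-based feedback (not actual OpenMark code).
# Feedback grows after each incorrect attempt; once the attempts are used up,
# or as soon as the student answers correctly, the same 'full answer' is shown.

MAX_ATTEMPTS = 3  # usually three attempts per question

def feedback(is_correct: bool, attempt: int, hints: list[str], full_answer: str) -> str:
    """Return the feedback text to display after the given attempt (1-based)."""
    if is_correct or attempt >= MAX_ATTEMPTS:
        # Correct answers and exhausted attempts both receive the full answer.
        return full_answer
    # Otherwise give increasingly detailed hints: brief after the first
    # incorrect attempt, more specific after the second.
    return hints[min(attempt, len(hints)) - 1]

# Invented usage example:
hints = ["Not quite - check your working.",
         "Remember to convert the distance to metres before dividing."]
print(feedback(False, 1, hints, "Full worked answer goes here."))  # first hint
print(feedback(True, 2, hints, "Full worked answer goes here."))   # full answer
```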

Our approach raises a number of questions: Continue reading

Posted in e-assessment, excellent students, feedback, feedback after correct answer

Feedback for excellent students

I’ve heard/read several things recently about the fact that excellent students tend to get less feedback than others. This is perhaps related to the fact that (anecdotally at least) teachers sometimes ignore excellent students – they’ll do OK whatever, so more effort is put into helping the others. That seems wrong to me; I feel that education (and feedback) should be about helping individuals to achieve their best, so able students should be stretched. In feedback on written work, able students should be given comments that may not be appropriate for others, perhaps a suggestion for extra reading or a link to related literature.

But what are the implications for e-assessment? Clearly students who get e-assessment questions right should be told that they have done well (obvious, but not always done, and we don’t want to patronise). But is that enough? Adaptive questions may also have a part to play; there seems little point in expecting a student to work through a tranche of questions which are trivially easy for them. Having said that, some students may like the reinforcement of realising that they can do well on these questions, and the revision of topics which (for them) are straightforward. And can we be sure that these students will find the ‘difficult’ adaptive questions more challenging than the easy ones? Has anyone done any work in this area?

Posted in adaptive questions, excellent students

Repeated and blank responses

The figure shown on the left requires a bit of explaining. The three columns represent student responses at the 1st, 2nd and 3rd attempts at a short-answer free-text question in formative use. Green represents correct responses; red/orange/yellow represent incorrect responses. I have used different colours here so that I can indicate repeated responses: where a colour is identical from column to column, an incorrect response given at the first or second attempt was repeated exactly at the second and/or third attempt. Grey represents responses that were completely blank. The figure shows that

  • at first attempt, four responses (0.9% of the total of 449) were blank;
  • at second attempt, 43 responses (17.8% of the total of 241) were identical with responses given at the first attempt, with 7 responses (2.9%) blank;
  • at third attempt, 54 responses (27.4% of the total of 197) were identical with responses given at the second attempt, with 3 responses (1.5%) blank.
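
For anyone wanting to produce tallies like those above from their own response logs, here is a rough sketch of the counting involved. The log format and field names are my own invention for illustration, not those used in the actual analysis:

```python
# Minimal sketch: tallying blank and exactly-repeated responses by attempt.
# The list-of-dicts log format and field names are invented for illustration.

from collections import defaultdict

log = [
    # one record per response: 'student', 'attempt' (1-3), 'response' text
    {"student": "A", "attempt": 1, "response": "the mass increases"},
    {"student": "A", "attempt": 2, "response": "the mass increases"},  # repeated
    {"student": "B", "attempt": 1, "response": ""},                    # blank
]

previous = {}  # student -> response given at that student's previous attempt
counts = defaultdict(lambda: {"total": 0, "blank": 0, "repeated": 0})

for rec in sorted(log, key=lambda r: (r["student"], r["attempt"])):
    c = counts[rec["attempt"]]
    c["total"] += 1
    if rec["response"].strip() == "":
        c["blank"] += 1
    if rec["attempt"] > 1 and previous.get(rec["student"]) == rec["response"]:
        c["repeated"] += 1  # identical to the response at the previous attempt
    previous[rec["student"]] = rec["response"]

for attempt, c in sorted(counts.items()):
    print(f"attempt {attempt}: {c['repeated']}/{c['total']} repeated, "
          f"{c['blank']}/{c['total']} blank")
```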

Reasons given by students (in interviews) for leaving the response box blank and for repeating responses include just wanting to get to the final worked example, not understanding the question and not understanding the feedback. Continue reading

Posted in e-assessment, student engagement

Can formative-only assignments tell you about student misunderstandings?

When I was doing the original analysis of student responses to Maths for Science assessment questions, I concentrated on questions that had been used summatively and also on questions that required students to input an answer rather than selecting a response. I reckoned that summative free-text questions would give me more reliable information on student misunderstandings than would be the case for formative-only and multiple-choice questions. Was this a valid assumption? I’ll start by considering student responses to a question that is similar to the one I have been discussing in the previous posts, but which is in use on the Maths for Science formative-only ‘Practice assessment’. The question is shown on the left. The Practice assessment is very heavily used and students can (and do) re-attempt questions as often as they would like to, being given different variants for extra practice. Continue reading
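
The ‘different variants’ mechanism can be pictured as random instantiation of a question template, roughly along these lines. The template and value ranges here are invented for illustration; they are not taken from the Practice assessment itself:

```python
# Simplified sketch of serving a different numeric variant on each attempt
# (the question template and value ranges are invented for illustration).
import random

def make_variant():
    """Instantiate the question template with fresh values and the matching answer."""
    distance = random.choice([120, 150, 180, 240])   # metres
    time = random.choice([4, 5, 6, 8])               # seconds
    question = f"A runner covers {distance} m in {time} s. What is her average speed?"
    answer = distance / time                         # expected answer in m/s
    return question, answer

question, answer = make_variant()
print(question)
print(f"Expected answer: {answer} m/s")
```

Because the values change each time the question is instantiated, a student who re-attempts it gets genuinely fresh practice rather than simply recalling the previous answer.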

Posted in mathematical misunderstandings, question analysis, student engagement

More on feedback

Picking up on Silvester’s comment on my previous post… I think it is really important that we stop and think before marking a student’s answer to an e-assessment question as wrong just because some detail of it is wrong. As with any type of assessment, it is important to think about what you want to assess. Continue reading

Posted in e-assessment, feedback, mathematical misunderstandings

More about units

OpenMark e-assessment questions were used for the first time in a little 10-credit module called Maths for Science that has been running since 2002. I did some analysis years ago into the mistakes that students make, but I’m about to start writing a second edition of the module, so I’m revisiting this work. One of the things that amazed me when I first did the analysis, and continues to amaze me now, is that students are surprisingly good at rearranging equations, yet surprisingly bad at substituting values into them to give a final result, complete with correct units and appropriate precision. Continue reading
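
To make the distinction concrete: a typical task of this kind asks students to rearrange something like v = d/t to give t = d/v, then substitute values (say d = 1.5 × 10³ m and v = 3.3 m s⁻¹) and quote the result as t ≈ 4.5 × 10² s – to two significant figures and with the unit. The numbers are invented for illustration; the sketch below mimics the substitution-and-rounding step:

```python
# Illustrative sketch of the 'substitute and quote correctly' step
# (the values are invented for illustration, not taken from the module).
from math import floor, log10

def to_sig_figs(value: float, figures: int) -> float:
    """Round a value to the given number of significant figures."""
    if value == 0:
        return 0.0
    exponent = floor(log10(abs(value)))
    return round(value, figures - 1 - exponent)

d = 1.5e3      # distance in metres
v = 3.3        # speed in metres per second
t = d / v      # from rearranging v = d / t

print(f"t = {to_sig_figs(t, 2):g} s")   # -> t = 450 s (two significant figures)
```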

Posted in mathematical misunderstandings, question analysis, units

Thank you, Google Translate

As you’ve probably guessed, I’m English, and aside from a pitifully small amount of French, I am ashamed to admit that I don’t speak any other languages. Therefore I think Google Translate is wonderful, especially since it enables me to read the excellent blogs of Sander Schenk and Silvester Draaijer, both of them very thoughtful blogs in the same area as mine – but both in Dutch. I commend them to you. I have added them to my ‘links’ (below right).

Unfortunately the OpenCETL (COLMSCT and piCETL) websites have not yet been restored to their full glory, which means that many of the other links currently lead to the same place – and lots of information formerly on the COLMSCT and piCETL sites is missing. If it doesn’t reappear soon I will remove these links and start posting the most interesting findings from my COLMSCT and piCETL teaching fellowships here.

Posted in e-assessment, links

Units: little things that make a difference

If we start from the premise that we want assessment to encourage and support learning, then one measure of the assessment’s effectiveness is better performance on later summative tasks. Mundeep Gill and Martin Greenhow (Gill, M. and Greenhow, M. (2008) How effective is feedback in computer-aided assessments? Learning, Media and Technology, 33(3), 207–220) report on work where the introduction of computer-aided assessment had a positive impact (by this measure) in all areas but one.

The problem hinged on the presence of correct units with students’ numerical answers: we might accept an answer of 10 metres, 10 meters or 10 m, but not 10 M, 10 kg, 10 m s⁻¹ or just 10. Like most physicists, I have extremely strong views on this – I regard the unit as a crucial part of the answer. The problem is that the answer matching in many e-assessment systems doesn’t allow you to check for correct values and correct units at the same time (let alone whether the answer has been expressed to the correct precision). Fortunately this is something that OpenMark handles well – we can check numbers, units and so on, and give appropriate targeted feedback on any aspect(s) of an answer that are incorrect.
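
As a rough illustration of the general idea – this is not OpenMark’s actual answer matching, and the parsing, tolerance and messages are all invented – checking the value and the unit separately is what makes targeted feedback possible:

```python
# Illustrative sketch (not OpenMark's answer matching): check the numeric value
# and the unit of a response separately, so feedback can target whichever
# part is wrong.
import re

ACCEPTED_UNITS = {"m", "metres", "meters"}   # case-sensitive: 'M' is not a metre

def check_answer(response: str, expected_value: float, tolerance: float = 0.01) -> str:
    """Return feedback for a 'number plus unit' response, e.g. '10 m'."""
    match = re.fullmatch(r"\s*([-+0-9.eE]+)\s*(.*?)\s*", response)
    if not match:
        return "Please give a number followed by its unit."
    try:
        value = float(match.group(1))
    except ValueError:
        return "Please give a number followed by its unit."
    unit = match.group(2)
    value_ok = abs(value - expected_value) <= tolerance * abs(expected_value)
    unit_ok = unit in ACCEPTED_UNITS
    if value_ok and unit_ok:
        return "Correct."
    if value_ok:
        return "Your number is right, but check the unit (and its case)."
    if unit_ok:
        return "Your unit is right, but check your arithmetic."
    return "Both the value and the unit need another look."

print(check_answer("10 m", 10))   # Correct.
print(check_answer("10 M", 10))   # targeted feedback on the unit
print(check_answer("12 m", 10))   # targeted feedback on the value
```

A real system needs far more than this (tolerances, powers of ten, compound units, precision), but the separation of the two checks is the point.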

In the system Mundeep and Martin were using this was not possible, so units were provided for the students outside the input box; all the students had to input was a number. Unfortunately, over the two-year period of the investigation, students were observed to become more likely to omit units from their written work. This is another of those unintended consequences – not exactly a positive outcome for e-assessment. Thank you Mundeep and Martin for your honesty in reporting this; others would be advised to take note.

Posted in e-assessment, units