Posted on May 17th, 2013 at 4:39 pm by Sally Jordan
You’ll be getting the idea…
The figures below show, for each question, the number of students who got it right at first attempt (yellow), second attempt (green), third attempt (blue), or not at all (maroon). So the total height of each bar represents the total number of students who completed each question.
You can spot the differences for yourself, and I’m sure you will be able to work out which module is which! However, I thought you’d like to know that questions 24-27 are on basic differential calculus. Obviously there is still some work to do there…
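In case anyone wants to reproduce this sort of stacked bar chart from their own response data, here is a minimal sketch in Python with matplotlib. It is not our actual analysis code, and the counts are invented purely for illustration, but the colour coding follows the description above.

```python
# A minimal sketch (not our actual analysis code) of how a stacked bar chart
# like the ones in this post might be drawn. The counts are invented; with real
# data they would come from the assignment's response records.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_questions = 30
questions = np.arange(1, n_questions + 1)

# Hypothetical numbers of students getting each question right at the first,
# second or third attempt, or not at all.
first = rng.integers(40, 80, n_questions)
second = rng.integers(5, 25, n_questions)
third = rng.integers(0, 10, n_questions)
never = rng.integers(0, 15, n_questions)

fig, ax = plt.subplots(figsize=(10, 4))
bottom = np.zeros(n_questions)
for counts, colour, label in [(first, "gold", "right at first attempt"),
                              (second, "green", "right at second attempt"),
                              (third, "blue", "right at third attempt"),
                              (never, "maroon", "not right at all")]:
    ax.bar(questions, counts, bottom=bottom, color=colour, label=label)
    bottom += counts

# The total height of each bar is the number of students who completed the question.
ax.set_xlabel("Question number")
ax.set_ylabel("Number of students")
ax.legend()
plt.tight_layout()
plt.show()
```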


Posted in question analysis, student engagement | No Comments »
Posted on May 17th, 2013 at 7:48 am by Sally Jordan
This one comes from Carl Wieman, who won the Nobel Prize for Physics in 2001. I’ll start with a quote which gives the broader flavour of the paper:
[pg10] [we should] ‘approach the teaching of science like a science. That means applying to science teaching the practices that are essential components of scientific research and that explain why science has progressed at such a remarkable pace in the modern world.
The most important of these components are:
• Practices and conclusions based on objective data rather than—as is frequently the case in education—anecdote or tradition. This includes using the results of prior research, such as work on how people learn.
• Disseminating results in a scholarly manner and copying and building upon what works. Too often in education, particularly at the postsecondary level, everything is reinvented, often in a highly flawed form, every time a different instructor teaches a course. (I call this problem “reinventing the square wheel.”)
• Fully utilizing modern technology. Just as we are always looking for ways to use technology to advance scientific research, we need to do the same in education.’
[I'm not sure I necessarily agree with the final point – I'd use technology when, and only when, it is beneficial to the student experience.]
Relative to this, the point I want to emphasise sounds timid:
[pg13] ‘Even the most thoughtful, dedicated teachers spend enormously more time worrying about their lectures than they do about their homework assignments, which I think is a mistake.’
But it is oh so true – certainly in my own institution, relative to the time and effort that goes into developing our (excellent) teaching resources, we put so little time and effort into getting assessment right. I think that’s a mistake! Your institution may be different of course, but I doubt that many are.
Wieman, C. (2007). Why not try a scientific approach to science education? Change: The Magazine of Higher Learning, 39(5), 9-15.
Posted in quotes | No Comments »
Posted on May 16th, 2013 at 7:59 pm by Sally Jordan
Following on from my previous post, take a look at the two figures below. They show how students’ overall score on an iCMA varied with the date they submitted. These figures are for the same two assignments as in the previous post (very similar assignments, rather different students).

The top figure (above) is depressingly familiar. The students who submit early all do very well – they probably didn’t need to study the module at all! The rest are rushing to get the assignment done, just before the due date – and lots of them don’t do very well.
I am very pleased with the lower figure. Here students are doing the assignment steadily throughout the time it is available – and with the exception of a small number who were probably prompted to have a go on the due date by a reminder email we sent, they score pretty similarly, irrespective of when they submit. This is how assignments should perform!

I’m aware that my interpretation may seem simplistic, but we have other evidence that the first batch of students are overcommitted – they are also younger and have lower previous qualifications – so it all fits.
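If it helps anyone explore their own records, a plot like these can be put together in a few lines. The sketch below uses Python with pandas and matplotlib, and the submission dates and scores are made up rather than drawn from our real data.

```python
# A sketch with made-up data: each point is one student's overall iCMA score
# plotted against the date they submitted, as in the two figures above.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_students = 300

# Hypothetical assignment window and results.
open_date = pd.Timestamp("2013-02-01")
due_date = pd.Timestamp("2013-03-15")
window_days = (due_date - open_date).days

days_after_open = rng.integers(0, window_days + 1, n_students)
submitted = open_date + pd.to_timedelta(days_after_open, unit="D")
scores = np.clip(rng.normal(70, 15, n_students), 0, 100)

fig, ax = plt.subplots(figsize=(10, 4))
ax.scatter(submitted, scores, alpha=0.5)
ax.set_xlabel("Submission date")
ax.set_ylabel("Overall iCMA score (%)")
fig.autofmt_xdate()
plt.tight_layout()
plt.show()
```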
Finally, following yesterday’s JISC webinar on learning analytics I’m beginning to think that this is how I should be describing the work that I’ve previously categorised as ‘Question analysis’ and ‘Student engagement’. However we describe this sort of analysis, we must do more of it – it’s powerful stuff.
Posted in question analysis, student engagement | No Comments »
Posted on May 12th, 2013 at 4:13 pm by Sally Jordan
I’ve written quite a lot previously about what you can learn about student misunderstandings and student engagement by looking at their use of computer-marked assignments. See my posts under ‘question analysis’ and ‘student engagement’.
Recently, I had cause to take this slightly further. We have two interactive computer-marked assignments (iCMAs) that test the same material and are known to be of very similar difficulty. Some of the questions in the two assignments are exactly the same; most are slightly different. So when we see very different patterns of use, this can be attributed to the fact that the two iCMAs are used on different modules, with very different student populations.
Compare the two figures shown below. These simply show the number of questions started (by all students) on each day that the assignment is open. The first figure shows a situation where most questions are not started until close to the cut-off date. The students are behind, struggling and driven by the due-date (I know some of these things from other evidence).

The second figure shows a situation in which most questions are started as soon as the iCMA opens – the students are ready and waiting! These students do better by various measures – more on this to follow.
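For completeness, here is roughly how the ‘questions started per day’ counts behind figures like these could be produced, again in Python with pandas and matplotlib, and with an invented event log standing in for our real one.

```python
# A sketch of the 'questions started per day' count behind figures like these,
# using an invented event log rather than real assessment data.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Hypothetical log: one timestamp per question started while the iCMA is open.
open_date = pd.Timestamp("2013-02-01")
window_days = 42
n_events = 5000
starts = open_date + pd.to_timedelta(rng.integers(0, window_days, n_events), unit="D")

# Count the question-start events falling on each calendar day.
per_day = pd.Series(starts).dt.floor("D").value_counts().sort_index()

fig, ax = plt.subplots(figsize=(10, 4))
ax.bar(per_day.index, per_day.values, width=0.8)
ax.set_xlabel("Date")
ax.set_ylabel("Questions started")
fig.autofmt_xdate()
plt.tight_layout()
plt.show()
```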

Posted in question analysis, student engagement | 1 Comment »
Posted on May 12th, 2013 at 2:36 pm by Sally Jordan
I’m not sure about the detail of this one, but as a physicist I couldn’t resist the analogy between quantum mechanics and the testing effect!
‘The testing effect represents a conundrum, a small version of the Heisenberg uncertainty principle in psychology: Just as measuring the position of an electron changes that position, so the act of retrieving information from memory changes the mnemonic representation underlying retrieval – and enhances later retention of the tested information.’
Roediger, H. L., III, & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181-210. pg 182.
Posted in quotes, testing effect | No Comments »
Posted on May 3rd, 2013 at 6:09 am by Sally Jordan
‘Assessment is a moral activity. What we choose to assess and how shows quite starkly what we value.’
Knight, P. (ed) (1995) Assessment for learning in Higher Education. Kogan Page in association with SEDA. pg13 (introduction)
Posted in quotes | No Comments »
Posted on April 17th, 2013 at 5:34 am by Sally Jordan
Post now edited to include links…
Oh how much I missed in my last post!
Others, far better informed than me, have also reflected on the general principles of the automated marking of essays. See, for example, Michael Feldstein, Audrey Watters and Justin Reich.
I think the interest was probably sparked by an ‘Automated Student Assessment Prize’ competition last year, funded by the William and Flora Hewlett Foundation. Phase 1 of this was to do with the automatic marking of essays and the results are here.
Things began to get interesting when the New York Times announced that edX were going to use automatic grading of essays. The report may or may not have been accurate, but concerns are being expressed about so-called ‘robo-marking’. See for example Les Perelman and the Professionals against Machine Grading of Essays group.
On Michael Feldstein’s blog ‘e-Literate’, Elijah Mayfield of LightSide Labs has now said more about why the NYT report (and possibly the edX claim) is wrong. The LightSide Labs approach seems very sensible – human-marked essays are used as the basis for machine learning about the good features of essays; this is then used to provide feedback on the good and weak features of essays submitted by students, and hence to support peer review. Grading is still done by humans.
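To be clear, I don’t know what LightSide’s system actually looks like under the hood. But the general idea of learning from human-marked essays and using the result for feedback rather than grading can be illustrated with a toy sketch like the one below (Python with scikit-learn; the essays, scores and model choice are all invented for illustration).

```python
# A toy illustration only: this is not LightSide's actual system.
# It shows the general idea described above - learn from human-marked essays,
# then use the fitted model for feedback rather than for grading.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Hypothetical training data: essay texts with the scores human markers gave them.
essays = [
    "Photosynthesis converts light energy into chemical energy in chloroplasts.",
    "Plants make food from sunlight somehow.",
    "Chlorophyll absorbs light, driving the synthesis of glucose from CO2 and water.",
]
human_scores = [8, 4, 9]

vectoriser = TfidfVectorizer(ngram_range=(1, 2))
X = vectoriser.fit_transform(essays)
model = Ridge().fit(X, human_scores)

# For a newly submitted essay, the model's prediction (and the weights behind it)
# could be used to flag stronger and weaker features as feedback; the grade itself
# would still come from a human, as in the approach described above.
new_essay = ["Light energy is converted to chemical energy during photosynthesis."]
predicted_score = model.predict(vectoriser.transform(new_essay))[0]
print(round(float(predicted_score), 1))
```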
Some questions:
- I don’t know what edX are actually planning to do; in other words, how much can we believe the NYT report?
- I’d love to know more about the technology being used (by edX, LightSide, anyone): is it marking essay content, style, or both?
Posted in essay-marking software, essays | No Comments »
Posted on April 5th, 2013 at 8:22 am by Sally Jordan
I am grateful to Carol Bailey (see previous post) who, following a discussion over lunch, sent me a link to an extremely interesting paper:
Vojak, C., Kline, S., Cope, B., McCarthey, S. and Kalantzis, M. (2011) New Spaces and Old Places: An Analysis of Writing Assessment Software. Computers and Composition, 28, 97-111. Read the rest of this entry »
Posted in essay-marking software, essays, short-answer free text questions | 1 Comment »
Posted on April 5th, 2013 at 6:30 am by Sally Jordan
A couple of weeks ago we had an internal conference at the Open University on ‘Developing good academic practice’. Day 1 was intended for our tutors and their line-managers (staff tutors); Day 2 for module team members and those with the assessment overview in faculties. That means I went both days! Each day, we had a ‘keynote’ presentation, then a series of workshops run by different ‘Faculty Academic Conduct Officers’. Read the rest of this entry »
Posted in plagiarism | 1 Comment »
Posted on March 31st, 2013 at 6:23 pm by Sally Jordan
I was talking at the eSTEeM Conference last week about the fact that, whilst our interactive computer-marked assessment (iCMA) questions are generally well liked by our students, occasional questions can cause problems (usually because we are not giving sufficiently targeted feedback, so students don’t understand why the answer they have given is wrong). Why not, someone said, have an ‘unsound’ button? Well, we used to have such a function for our short-answer free text questions (shown below in use back in 2007) and it wasn’t my choice to stop using it. This post considers the pros and cons. Read the rest of this entry »
Posted in unsound questions | No Comments »