Why learning analytics have to be clever

Posted on March 28th, 2014 at 5:32 pm by Sally Jordan

 

I am surprised that I haven’t posted the figure below previously, but I don’t think I have.

It shows the number of independent users (dark blue) and usages (light blue) on a question-by-question basis, so the light blue bars include repeat attempts at whole questions.

This is for a purely formative interactive computer-marked assignment (iCMA), and the usage drops off in exactly the same way for any purely formative quiz. Before you tell me that this iCMA is too long, I think I’d agree, but note that you get exactly the same attrition – both within and between iCMAs – if you split the questions up into separate iCMAs. And in fact the signposting in this iCMA was so good that sometimes the usage INCREASES from one question to the next. This happens for sections (in this case, chemistry!) that the students find hard.

The point of this post, though, is to highlight the danger of saying that a student clicked on one question and therefore engaged with the iCMA. Just how many questions did that student attempt? How deeply did they engage? The situation becomes even more complicated when you consider that there are around another 100 students who clicked on this iCMA but didn’t complete any questions (and again, this is common for all purely formative quizzes). So be careful: clicks (on a resource of any sort) do not, by themselves, tell you much about engagement.
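By way of illustration, here is a minimal sketch (with an invented log format – not the reporting system actually used for these iCMAs) of how unique users, total usages and ‘deepest question reached’ can tell quite different stories about the same set of clicks.

```python
# Minimal sketch (invented log format, not the actual iCMA reporting system):
# distinguish unique users, total usages and how far through the quiz each
# student actually got, from a log of question attempts.
from collections import defaultdict

# Hypothetical log entries: (student_id, question_number)
attempts = [
    ("s1", 1), ("s1", 1), ("s1", 2),   # s1 repeats question 1, stops after question 2
    ("s2", 1),                          # s2 attempts only question 1
    ("s3", 1), ("s3", 2), ("s3", 3),   # s3 works through to question 3
]

usages = defaultdict(int)            # all attempts, including repeats
unique_users = defaultdict(set)      # distinct students per question
deepest_question = defaultdict(int)  # last question each student reached

for student, question in attempts:
    usages[question] += 1
    unique_users[question].add(student)
    deepest_question[student] = max(deepest_question[student], question)

for q in sorted(usages):
    print(f"Q{q}: {len(unique_users[q])} unique users, {usages[q]} usages")

# 'Clicked on the iCMA' is not the same as 'engaged with it':
print("Deepest question reached:", dict(deepest_question))
```

Even in this tiny made-up example, “three students clicked on the iCMA” hides the fact that only one of them got beyond question 2.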

What gets published and what people read

Posted on March 23rd, 2014 at 6:24 pm by Sally Jordan

I doubt this will be my final ‘rant of the day’ for all time, but it will exhaust the stock of things I’m itching to say at the current time. This one relates not to the use and misuse of statistics but rather to rituals surrounding the publication of papers. I’ll stay clear of the debate surrounding open access; this is more about what gets published and what doesn’t!

I have tried to get papers published in Assessment & Evaluation in Higher Education (AEHE). They SAY they are “an established international peer-reviewed journal which publishes papers and reports on all aspects of assessment and evaluation within higher education. Its purpose is to advance understanding of assessment and evaluation practices and processes, particularly the contribution that these make to student learning and to course, staff and institutional development”, but my data-rich submission (later published in amended form as Jordan (2012) “Student engagement with assessment and feedback: Some lessons from short-answer free-text e-assessment questions”) didn’t even get past the editor. Ah well, whilst I think it was technically in scope, I have to admit that it was quite different from most of what is published in AEHE. I should have been more careful.

My gripe on this is two-fold: firstly, if you happen to research student engagement with e-assessment, as I do, you’re left with precious few places to publish. Computers & Education, the British Journal of Educational Technology and Open Learning have come to my rescue, but I’d prefer to be publishing in an assessment journal, and my research has implications that go way beyond e-assessment (and before anyone mentions it, whilst I read the CAA Conference papers, I’m not convinced that many others do, and the International Journal of e-Assessment (IJEA) seems to have folded). Secondly, whilst I read AEHE regularly, and think that there is some excellent stuff there, I also think there are too many unsubstantiated opinion pieces and (even more irritating) so-called research papers that draw wide-ranging conclusions from, for example, the self-reported behaviour of small numbers of students. OK, sour grapes over.

In drafting my covering paper for my PhD by publication, one of the things I’ve done is look at what is said by the people who cite my publications. Thank you, lovely people, for your kind words. But I have been amused by the number of people who have cited my papers for things that weren’t really the primary focus of the paper in question. In particular, Jordan & Mitchell (2009) was primarily about the introduction of short-answer free-text questions, using Intelligent Assessment Technologies (IAT) answer matching. But lots of people cite this paper for its reference to OpenMark’s use of feedback. Ross, Jordan & Butcher (2006) or Jordan (2011) say a lot more about our use of feedback. Meanwhile, whilst Butcher & Jordan (2010) was primarily about the introduction of pattern matching software for answer matching (instead of software that uses the NLP techniques of information extraction), you’ve guessed it, lots of people cite Butcher & Jordan (2010) rather than Jordan & Mitchell (2009) when talking about the use of NLP and information extraction.

Again, in a sense this is my own fault. In particular, I’ve realised that the abstract of Jordan & Mitchell (2009) says a lot about our use of feedback and I’m guessing that people are drawn by that. They may or may not be reading the paper in great depth before citing it. Fair enough.

However I am learning that publishing papers is a game with unwritten rules that I’m only just beginning to understand. I always knew I was a late developer.

Evaluation, evaluation, evaluation

Posted on February 23rd, 2014 at 4:08 pm by Sally Jordan

Despite my recent ‘rants of the day’, I think it is vitally important that we try our best to evaluate our assessment practice. There is some good, innovative practice out there, but it can still be very tempting to confuse practice that we consider to be “good” with practice that we know to be good, because it has been properly – and honestly – evaluated. And, at the risk of appearing a stick in the mud, innovation does not necessarily lead to improvement.

My quote for today is from the (fictional) Chief Inspector Morse:
“In the pub, with Lewis, he’d felt convinced he could see a cause, a sequence, a structure, to the crime… It was the same old tantalizing challenge to puzzles that had faced him ever since he was a boy. It was the certain knowledge that something had happened in the past – happened in an ordered, logical, very specific way. And the challenge had been, and still was, to gather the disparate elements of the puzzle together and to try to reconstruct that ‘very specific way’.” (from Colin Dexter’s “The remorseful day”, Chapter 22)

Honest evaluation will sometimes ‘prove’ what you expected; but sometimes there will be surprises. Sometimes good ideas don’t work and we need to reconsider. Sometimes a ‘control’ group does better than the experimental group and we need to think why. Sometimes students don’t engage with an assessment task in the way that we expect; sometimes students don’t make the mistakes that we think they will make; sometimes they make mistakes that we don’t expect.

Actually, in the long run, it is often the surprises that provide the real insights. And sometimes they can even save time and money. We would have gone on using linguistically-based software for our short-answer free-text questions had we not discovered that pattern matching software was just as effective.

But whatever, we must find out…Chief Inspector Morse always got it right in the end!

A Darwinian view of feedback effectiveness

Posted on February 8th, 2014 at 9:19 pm by Sally Jordan

Please don’t treat this too seriously – but please do stop and think about what I am trying to say, in the light of the fact that the effectiveness of feedback on assessment tasks is, despite the huge amount that’s been written on the subject, poorly understood.

Many people talk about the issues that arise when the grade awarded for an assignment ‘gets in the way’ of the feedback – and this is something I have seen evidence of myself. Authors also talk in quite damnatory terms about extrinsic motivation and surface learning. However, we have to face the fact that many of our students probably have no aspiration to submit perfect work – they just want to do OK, to pass, not to fall too far behind their peers.

Now sidestep to the theory of natural selection and evolution. Individuals with advantageous characteristics have a greater probability of survival, and therefore of reproducing. Provided that these characteristics are inherited by offspring, individuals possessing the characteristics will become more common in the population. If something like an environmental change (a common example is a decrease in soot in the atmosphere) means that there is a change in what is advantageous (so, in the example, dark coloured moths – which were well camouflaged from their predators when the atmosphere was sooty – become less well camouflaged and so more likely to be eaten), then relatively rapid evolution will be seen (in the example, light coloured moths will become more common). When there is no change in the environment, natural selection will still be taking place, but you won’t see a lot of evolution.

Now, think feedback. If a student only wants to pass and is getting pass grades and feedback that says they are doing OK, then [in their view] is there any need for them to do anything differently? Perhaps there isn’t really a ‘gap’ (Ramaprasad, 1983; Sadler, 1989) to close. Perhaps this is just the natural way of things.

More lies, damned lies and statistics

Posted on February 8th, 2014 at 8:19 pm by Sally Jordan

This second ‘rant of the day’ focuses on practice which, I think, arises from the fact that most people are not as fortunate(?) as me in having data from hundreds and thousands of students on each module each year. It also stems from a very human wish for our data to show what we want them to show.

The first problem that arises is akin to that shown in the photograph (which, I hasten to add, has nothing to do with students, at the Open University or elsewhere – it is just an image I’ve found on XPert, which appears to show the number of people who have viewed a particular photo). Wow yes, there has been a marked increase, of …ah yes…three! (probably the photographer’s mum, sister and wife…) – and look at all those months when the photo was not viewed (I suspect because it had not been uploaded then…). This example may relate to photographs, but I have seen similarly small data sets used to ‘prove’ changes in student behaviour.

The second type of problem is slightly more complicated to explain – but I saw it in a published peer-reviewed paper that I read last week. Basically, you are likely to need a reasonably large data set in order to show a statistically significant difference in behaviour between different groups of students. So if your numbers are on the small side and no significant difference is shown, you can’t conclude that there isn’t a difference, just that you don’t have evidence of one.
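As a toy illustration of the sample-size point (invented numbers, nothing to do with the paper in question), the sketch below applies the same underlying five-mark difference between two groups at two sample sizes; with small groups a t-test will often fail to reach p < 0.05 even though the difference is real.

```python
# Toy illustration (invented numbers, not real student data): the same true
# difference between two groups can come out 'non-significant' simply because
# the sample is small. Absence of significance is not evidence of absence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def compare(n_per_group):
    # Group B genuinely scores 5 marks higher on average (sd = 15 for both).
    group_a = rng.normal(60, 15, n_per_group)
    group_b = rng.normal(65, 15, n_per_group)
    _, p = stats.ttest_ind(group_a, group_b)
    print(f"n = {n_per_group:4d} per group: p = {p:.3f}")

compare(10)    # small groups: often p > 0.05 despite the real difference
compare(500)   # larger groups: the same difference shows up as significant
```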

Victorian clergymen

Posted on January 31st, 2014 at 7:14 am by Sally Jordan

This is more ‘rant of the day’ than ‘quote of the day’ but I’d like to start with a quote from my own ‘Maths for Science’ (though I’m indebted to my co-author Pat Murphy who actually wrote this bit):

” It is extremely important to appreciate that even a statistically significant correlation between two variables does not prove that changes in one variable cause changes in the other variable.

Correlation does not imply causality.

A time-honoured, but probably apocryphal, example often cited to illustrate this point is the statistically significant positive correlation reported for the late 19th Century between the number of clergymen in England and the consumption of alcoholic spirits. Both the increased number of the clergymen and the increased consumption of spirits can presumably be attributed to population growth (which is therefore regarded as a ‘confounding variable’) rather than the increase in the number of clergymen being the cause of the increased consumption of spirits or vice versa.”

Jordan, S., Ross, S. and Murphy, P. (2013) Maths for Science. Oxford: Oxford University Press. p. 302.

Now, my fellow educational researchers, have you understood that point? Correlation does not imply causality. In the past week I have read two papers, both published in respectable peer-reviewed journals and one much cited (including, I’m sad to say, by one publication on which I’m a co-author), which make the mistake of assuming that an intervention has been the cause of an effect.

In particular, if you offer students some sort of non-compulsory practice quiz, those who do the quiz will do better on the module’s summative assessment. We hope that the quiz has helped them, and maybe it has – but we can’t prove it just from the fact that they have done better in a piece of assessed work. What we mustn’t forget is that it is the keen, well-motivated students who do the non-compulsory activities – and these students are more likely to do well in the summative assessment, for all sorts of reasons (they may actually have studied the course materials, for a start…).
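To make the self-selection point concrete, here is a toy simulation (entirely invented, not based on any of the papers or data discussed here) in which ‘motivation’ drives both quiz-taking and exam performance while the quiz itself has zero effect – yet the quiz-takers still come out well ahead.

```python
# Toy simulation (invented data): a hidden confounder ('motivation') makes an
# ineffective practice quiz look effective, because the keen students both opt
# in to the quiz and score higher anyway. Correlation, not causation.
import numpy as np

rng = np.random.default_rng(0)
n_students = 2000

motivation = rng.normal(0, 1, n_students)                    # hidden confounder
took_quiz = (motivation + rng.normal(0, 1, n_students)) > 0  # keen students opt in
quiz_effect = 0.0                                            # the quiz does NOTHING
exam_score = (60 + 10 * motivation
              + quiz_effect * took_quiz
              + rng.normal(0, 5, n_students))

print("Mean exam score, quiz-takers:    ", round(exam_score[took_quiz].mean(), 1))
print("Mean exam score, non-quiz-takers:", round(exam_score[~took_quiz].mean(), 1))
# The quiz-takers score markedly higher even though quiz_effect is zero.
```

Only something like a controlled comparison (or at least a serious attempt to adjust for motivation and prior attainment) could begin to separate the effect of the quiz from the effect of being the sort of student who chooses to do it.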

One of the papers I’ve just mentioned tried to justify the causal claim by saying that the quiz was particularly effective for “weaker” students. The trouble is that a little investigation showed me that this claim made the logical inconsistency even worse! Firstly, it assumed that weaker students are less well motivated. That may be true, but no proof was offered. Secondly, I was puzzled about where the data came from and discovered that the author was using the score on the first quiz that a student did, be that formative or summative, as an indicator of student ability. But students try harder when the mark counts, and their mark on a summative assignment is very likely to be higher for that reason alone. The whole argument is flawed. Oh dear…

I am deliberately not naming the papers I’m referring to, partly because I’m not brave enough and partly because I fear there are many similar cases out there. Please can we all try a little harder not to make claims unless we are sure that we can justify them.

Quotes of the day

Posted on January 17th, 2014 at 10:41 am by Sally Jordan

‘In other words, if students perceive a need to understand the material in order to successfully negotiate the assessment task, they will engage in deep learning but if they perceive the assessment instrument to require rote learning of information, they will be unlikely to engage with the higher level objectives which may well have been intended by the programme of study.’
‘What is considered important to assess will strongly determine what is considered important to learn.’
MacLellan, E. (2001) Assessment for learning: the differing perceptions of tutors and students, Assessment & Evaluation in Higher Education, 26(4), 307-318. [p.307 then p. 309]

In other words:

‘Assessment is a moral activity. What we choose to assess and how shows quite starkly what we value.’

Peter Knight in the introduction to Knight, P. (ed) (1995) Assessment for learning in Higher Education. Kogan Page in association with SEDA. p.13.

Quote of the day

Posted on January 15th, 2014 at 7:19 am by Sally Jordan

…assessment is a form of communication. This communication can be to a variety of sources, to students (feedback on their learning), to the lecturer (feedback on their teaching), to the curriculum designer (feedback on the curriculum), to administrators (feedback on the use of resources) and to employers (quality of job applicants).

McAlpine, M. (2002). Principles of assessment. CAA Centre, University of Luton. p. 4.

Is feedback a waste of time? A personal view

Posted on January 13th, 2014 at 9:47 pm by Sally Jordan

I’ve just found something I wrote nearly eight years ago and most of this post is a copy of it. I might draw some slightly different conclusions now, but my basic argument is unchanged. There is huge confusion about what feedback is and what it’s for, and there are lots of ‘sacred cows’ [things that are assumed to be true and not questioned, even though they should be!].

Is feedback a waste of time? A personal view.
By Sally Jordan, The Open University

I believe that feedback has a crucial role to play in underpinning student learning, but that there is frequently a mismatch between our expectations of the purpose and usefulness of feedback and those of our students. Until we start to really listen to what evaluation tells us about this, there is a danger that our feedback will be of limited value and cost-effectiveness.

I worked for many years as an Open University tutor. I gave excessively detailed feedback to my students on their assignments, explaining exactly where they had gone wrong in their working and giving frequent hints for improvement. I did this for the very best of motives – after all, there were some students whom I never met, so my comments were important in facilitating student learning. In doing this I was told that I was good at my job; my correspondence tuition was exemplary – and my students seemed quite grateful. But I always had a nagging feeling that some students simply filed my comments away ‘for future use’ (when?!), and perhaps some even put the marked assignments straight in the bin. And what of those who attempted to learn from my comments; could they ‘see the wood for the trees’?

Now I line-manage Open University tutors and see many of them spending considerable amounts of time in providing students with just the same sort of detailed feedback. Are they wasting their time? Is the University wasting its money? We provide tutors with ‘model answers’ to send to their students if they wish, in an attempt to ease the load, but these are of limited use unless personalised by additional comments from their tutor. Seeing ‘the right answer’ doesn’t always help a student to be able to produce such an answer for themselves.

I chaired the production of a course which has totally online assessment. It’s clever stuff (not multiple choice). We provide students with relatively detailed feedback on their work and then give them an opportunity to learn from the feedback by having another go. Whenever possible the feedback is targeted; the student is told exactly where they have gone wrong and is given a hint as to how to correct their error. And of course, because the assignment is completed online, feedback is instantaneous. The system has been extensively evaluated and students tell us that ‘the immediate feedback was ace’ and that the end of course assessment (the course’s examinable component) is ‘fun’. Pretty good stuff!

But how much do students really learn from our feedback? When asked, many students say that it is useful, but how many actually use it? Worryingly, when asked about the feedback immediately after they have completed the assignment, some students tell us that they haven’t had any! This could mean that they got almost all of the questions right, so didn’t see the feedback provided, or it could be simply that students don’t mean the same thing as we do by the word ‘feedback’. But they could be referring to the fact that we haven’t told them their mark or whether or not they have passed the course (we can’t do this until various weightings have been applied and the Award Board has met). So is there any point at all to all our lovingly crafted teaching comments for students? Even if the disparity can be explained by a simple misunderstanding of the word ‘feedback’, this has lessons for the way in which we phrase questions on survey instruments.

I suspect that, at the end of the day, different students would benefit from different types and levels of feedback. So in an ideal world we would provide each student with feedback appropriate for that student. I’m not sure that I accept the assertion that we should be providing students with the feedback that they need rather than what they want. People tend to engage better with what they want. But this isn’t an ideal world. We are rightly driven by a desire to improve student learning, but we are also driven by economics and the need for cost-effectiveness.

There are some awkward questions to ask, and no easy answers. But as a starting point, it is time that we started to listen to what our students are really telling us rather than what we want to hear, and to learn from the feedback they are giving to us.

Sally Jordan
23rd February 2006

Quote of the day

Posted on January 8th, 2014 at 12:21 pm by Sally Jordan

I’ve blogged before about Snyder’s (1971) ‘Hidden curriculum’ (click here).

But looking in more detail, on page 120, we have

 ‘so many students in colleges and universities…get their rewards from grades on papers they have written and not from the excitement of working through the idea in the paper…Twenty years later such students are vulnerable, with their narrow range of supports for their self esteem. Their brittleness may persist, since they are dependent on society’s equivalent of grades.’

Ouch!

Snyder, B.R. (1971) The hidden curriculum. New York: Alfred A. Knopf.