There have been two very good webinars on learning analytics recently in the Transforming Assessment series. On 9th September 2015, Cath Ellis from the University of New South Wales and Rachel Forsyth from Manchester Metropolitan University spoke on “What can we do with assessment analytics?”. On 9th December 2015, Gregor Kennedy, Linda Corrin and Paula de Barba from the University of Melbourne spoke on “Providing meaningful learning analytics to teachers: a tool to complete the loop”. I would heartily recommend both recordings to you.
Recommendations aside, talk of learning analytics is suddenly everywhere. If we take Doug Clow’s (2013, p. 683) short definition of learning analytics as “the analysis and representation of data about learners in order to improve learning” then it is beyond argument that this is something we should be doing. And I agree that student engagement in assessment is something that must be included in the analysis, hence “assessment analytics”. It could be argued that I’ve been using assessment analytics since before the term was invented, though my approach has been a bit of a cottage industry; one of the recent changes is that learning analytics now (rightly) tends to be available at the whole-institution level.
There seems to be some dispute about (i) whether the terms learning and assessment analytics apply to analysis at the cohort level as well as at the individual student level and (ii) whether it is legitimate to include analysis done retrospectively to learn more about learning. I would include all of these: each is important if we are to improve the student experience and students’ chances of success.
So far so good. However, learning analytics has become fashionable; that in itself is perhaps a cause for some anxiety. It is all too easy to leap on board the bandwagon without giving the matter sufficient thought. I know that many mainstream academics (i.e. those who probably don’t read blogs like this one) are deeply uneasy about the approach. This is partly because the information given to academics is sometimes blindingly obvious: for example, telling us that students who do not engage at all with module materials are not very likely to pass. Some of us have been banging on about this for years. I am also anxious that the analytics are sometimes simplistic, equating clicking on an activity with engagement with it, something I’ve posted about before.
So, what’s the way forward? I think learning analytics has to cease being the preserve of the few and become something that ordinary lecturers use in their teaching as a matter of routine; if this is to happen, they need to trust and understand the data and to take ownership of it. Furthermore, the emphasis needs to shift from simply providing data to actually using it. When learning analytics (at the individual or the cohort level, and in real time or retrospectively) reveals a problem, let’s do something about it.
Clow, D. (2013). An overview of learning analytics. Teaching in Higher Education, 18(6), 683-695.