I’m surprised I haven’t posted on this before, but it seems I haven’t, and I am reminded to do so now by another New Scientist piece, this time from back in January:
Rutkin, A. (2nd Jan 2016) Robotutor is a class act. New Scientist, 3054, p. 22.
The article talks about an algorithm developed by researchers at Stanford University and Google in California, which analyses students’ performance on past problems, identifies where they tend to go wrong, and forms a picture of their overall knowledge.
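The article doesn’t describe the algorithm itself, so the sketch below is not the Stanford/Google model; it is only a minimal illustration of the general idea of updating an estimate of a student’s knowledge from their observed answers, using the classic Bayesian Knowledge Tracing update (all parameter values are purely illustrative):

```python
# Bayesian Knowledge Tracing (BKT): maintain a running estimate of the
# probability that a student has mastered a skill, updated after each
# observed answer. Parameters below are illustrative, not fitted values.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return the updated P(student knows the skill) after one answer.

    p_slip  - probability of answering wrongly despite knowing the skill
    p_guess - probability of answering correctly without knowing it
    p_learn - probability of acquiring the skill on this practice attempt
    """
    if correct:
        # P(knew | correct): knew and didn't slip, vs. didn't know but guessed
        evidence = p_know * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_know) * p_guess)
    else:
        # P(knew | incorrect): knew but slipped, vs. didn't know and didn't guess
        evidence = p_know * p_slip
        posterior = evidence / (evidence + (1 - p_know) * (1 - p_guess))
    # Allow for learning from the practice opportunity itself
    return posterior + (1 - posterior) * p_learn

# A student starts with low estimated mastery; each answer shifts the estimate
p = 0.1
for answer in [True, False, True, True, True]:
    p = bkt_update(p, answer)
```

The appeal for assessment is that `p` is updated continuously as the student works, rather than being measured once by a separate test, which is exactly the intuition Piech describes.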
Chris Piech from Stanford goes on to say “Our intuition tells us if you pay enough attention to what a student did as they were learning, you wouldn’t need to have them sit down and do a test.”
The first paper I came across suggesting that we might assess students by analysing their engagement with an online learning environment (rather than setting a separate test) was Redecker et al. (2012), and it blew me away.
Redecker, C., Punie, Y., & Ferrari, A. (2012). eAssessment for 21st Century Learning and Skills. In A. Ravenscroft, S. Lindstaedt, C.D. Kloos & D. Hernandez-Leo (Eds.), 21st Century Learning for 21st Century Skills (pp. 292-305). Berlin: Springer.
In reality, of course, and as much discussed on this blog, I would never want to do away with interaction with humans, and there are things (e.g. essays, problem solving) where I think marking should be done by human markers. However, if we could do away with separate tests that are just tests, I’d be delighted.