Some of the findings I’ve been blogging about recently (and some still to come) are contradictory. On the one hand, students seem very aware that their answers have been marked by a computer rather than a human marker; on the other, they sometimes behave as if a human were marking them.
Back in 1996, Reeves and Nass introduced the ‘Computers as social actors’ paradigm, arguing that people’s interactions with computers, television and other new media are fundamentally social and natural, just like interactions in real life. More recent work has shown that people are polite to computers (even though they know this is stupid) and that, when they feel a computer has ‘cheated’ them, they behave spitefully towards it.
Lipnevich and Smith’s work on assessment feedback that students believed to be generated by a human or by a computer provided some support for the computers as social actors hypothesis, but also some evidence that runs counter to it. When all was going well, feedback from a computer was received as if it came from a person; when recipients didn’t want to believe the feedback, then the computer was…a stupid computer!

I think I’m beginning to find similar evidence in my own work. Some students type their answers as carefully crafted sentences, and in many situations they seem to react to feedback from a computer as if it came from a human marker. However, when an answer is marked as incorrect (whether or not it actually is), and especially in the absence of targeted feedback that explains the error, students tend to conclude that they are right and the computer is wrong. As I’ve said before, this is something of a problem for e-assessment.