Feedback from a computer

Back in Feb 2011 – gosh that’s five years ago – I was blogging about some contradictory results on how people respond to feedback from a computer. The “computers as social actors” hypothesis contends that people react to feedback from a computer as if it were from a human. In my own work, I found some evidence of that, though I also found evidence that when people don’t agree with the feedback, or perhaps just when they don’t understand it, they are quick to blame the computer as having “got it wrong”.

The other side to this is that computers are objective, and – in theory at least – there is less emotional baggage in dealing with feedback from a computer than in dealing with feedback from a person; you don’t have to contend with the feeling that “my tutor thinks I’m stupid” or, perhaps even worse in the case of peer feedback, “my peers think I’m stupid”.

I was reminded of this in reading an interesting little piece in this week’s New Scientist. The article is about practising public performance in front of a virtual audience, and describes a system developed by Charles Hughes at the University of Central Florida. The audience members are avatars, deliberately designed to look like cartoon characters. A user who has tried the system says “We all know that’s fake but when you start interacting with it you feel like it’s real” – that’s Computers as Social Actors. However, Charles Hughes goes on to comment: “Even if we give feedback from a computer and it actually came from a human, people buy into it more because they view it as objective”.

Wong, S. (6 February 2016). Virtual confidence. New Scientist, No. 3059, p. 20.


2 Responses to Feedback from a computer

  1. Tim Hunt says:

    It is definitely easier to take certain types of feedback from a computer, and I think those areas often coincide with the areas where it is easier for computers to give feedback.

    E.g. I am happy to have spell- and grammar-checkers run on my assignments, but I want my tutor to tell me if the quality of the argument is any good.

    But the example I really wanted to share was from computer programming, which is my day job. There, you try to get the computer to check as much of the routine stuff as possible, so you can concentrate on the harder stuff. That might be declaring the types of function arguments, so that the compiler can tell you if you call a function wrongly, or it might just be stylistic issues, like https://tracker.moodle.org/browse/MDL-52738?focusedCommentId=390759&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-390759.

    However, the genesis of that particular checking tool for Moodle code started with a motive that I think teachers will empathise with. My colleague Sam and I were tired of pointing out the same trivial style errors in a lot of code we reviewed, so we made an automatic checker. Still, the original point remains: it is much easier to take that kind of feedback from a computer. (And it helps that the computer can return that feedback quickly.)
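
    The type-declaration point above can be sketched in a few lines. This is a minimal illustration (TypeScript chosen for brevity; the function name and signature are hypothetical, not from the Moodle code in question):

    ```typescript
    // Declaring parameter types lets the compiler flag a wrong call
    // before the code ever runs -- routine feedback, delivered instantly.
    function gradeQuiz(score: number, maxScore: number): number {
      return (score / maxScore) * 100;
    }

    const pct = gradeQuiz(18, 20); // a correct call: both arguments are numbers
    console.log(pct);

    // gradeQuiz("18", 20);
    // ^ uncommenting this line produces a compile-time error:
    //   Argument of type 'string' is not assignable to parameter of type 'number'.
    ```

    The compiler's complaint here is exactly the kind of impersonal, immediate feedback the comment describes: nobody's judgement is involved, so nobody's feelings are either.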

  2. Sally Jordan says:

    This is a good point Tim, thanks!
