Automatically generated questions

In describing a presentation by Margit Höfler of the Institute for Information Systems and Computer Media at Graz University of Technology, Austria, the CAA 2011 Conference Chair Denise Whitelock used the words ‘holy grail’, and this is certainly interesting and cutting-edge stuff. The work is described in the paper ‘Investigating automatically and manually generated questions to support self-directed learning’ by Margit and her colleagues at http://caaconference.co.uk/proceedings/

An ‘enhanced automatic question creator’ has been used to create questions from a piece of text, and the content quality of 120 automatically created test items has been compared with 290 items created by students. Both sets of questions were found to be predominantly at the ‘lower end’ of Bloom’s taxonomy (as the paper says, this is not necessarily a bad thing) and the automatic question creator was most successful at creating ‘single choice’ (what I usually call ‘multiple choice’) questions.

There’s obviously still a long way to go, but it’s exciting work. I can’t help but feel that some of the problems described with the automatic generation of free-text questions might be avoidable by more careful specification. All questions of this type were required to be of the form ‘What do you know about X in the context of Y?’ (I’m not sure why this was), and one of the problems was the resultant question ‘What do you know about natural-language processing in the context of natural-language processing?’
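The sort of stricter specification I have in mind could be as simple as a guard in the template-filling step. Here is a minimal sketch: the template wording comes from the paper, but everything else (the function name and the assumption that concept/context term pairs have already been extracted from the text) is my own illustration, not the authors’ system:

```python
# Sketch of template-based free-text question generation with a guard
# against degenerate items. Assumes (concept, context) term pairs have
# already been extracted from the source text by some earlier step.

TEMPLATE = "What do you know about {concept} in the context of {context}?"

def generate_questions(term_pairs):
    """Fill the template, skipping pairs where concept and context coincide."""
    questions = []
    for concept, context in term_pairs:
        if concept.strip().lower() == context.strip().lower():
            continue  # degenerate: asking about X in the context of X
        questions.append(TEMPLATE.format(concept=concept, context=context))
    return questions

pairs = [
    ("natural-language processing", "natural-language processing"),  # rejected
    ("part-of-speech tagging", "natural-language processing"),       # kept
]
print(generate_questions(pairs))
```

A check like this would have filtered out the ‘natural-language processing in the context of natural-language processing’ item before it ever reached a student.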

My other big question is this: if the questions are automatically generated rather than written by students, the learning that students do when writing questions (as they do at the University of Edinburgh using PeerWise, described in a blog posting here) will be lost. Nevertheless I will be looking out for more news from Graz University of Technology on their work in this challenging area.
