More on the automatic marking of essays

Post now edited to include links…

Oh how much I missed in my last post!

Others, far better informed than me, have also reflected on the general principles of the use of the automated marking of essays. See, for example, Michael Feldstein, Audrey Watters and Justin Reich.

I think the interest was probably sparked by an ‘Automated Student Assessment Prize’ competition last year, funded by the William and Flora Hewlett Foundation. Phase 1 of this was to do with the automatic marking of essays and the results are here.

Things began to get interesting when the New York Times announced that edX were going to use automatic grading of essays. The report may or may not have been accurate, but concerns are being expressed about so-called ‘robo-marking’. See, for example, Les Perelman and the Professionals Against Machine Scoring of Student Essays group.

On Michael Feldstein’s blog ‘e-Literate’, Elijah Mayfield of LightSide Labs has now said more about why the NYT report (and possibly the edX claim) is wrong. The LightSide approach seems very sensible: human-marked essays are used as the basis for machine learning about the features of good essays; this is then used to give students feedback on the strong and weak features of the essays they submit, and to support peer review. Grading is still done by humans.
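To make the distinction between feedback and grading concrete, here is a minimal sketch of that idea: learn a feature profile from human-marked essays, then flag where a new essay falls short of it, without assigning a grade. Everything here is invented for illustration (the features, the data, the threshold logic); the real LightSide system is far more sophisticated and is not described in this post.

```python
def features(essay):
    """Extract a few crude, invented surface features from an essay."""
    words = essay.split()
    sentences = [s for s in essay.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "vocab_diversity": len(set(w.lower() for w in words)) / max(len(words), 1),
    }

def feature_profile(graded_essays, threshold):
    """Average each feature over the human-graded essays at or above threshold.

    graded_essays is a list of (essay_text, human_grade) pairs -- the
    human marking comes first, and the machine only learns from it.
    """
    good = [features(e) for e, grade in graded_essays if grade >= threshold]
    return {name: sum(f[name] for f in good) / len(good) for name in good[0]}

def feedback(essay, profile, tolerance=0.5):
    """Flag features that fall well below the profile of good essays.

    Returns per-feature feedback ('ok' or 'weak'), not a grade --
    grading stays with the human markers.
    """
    new = features(essay)
    return {name: "ok" if new[name] >= target * tolerance else "weak"
            for name, target in profile.items()}
```

A student's draft would then get a report like `{"word_count": "weak", "vocab_diversity": "ok", ...}` to guide revision or peer review, while the mark itself still comes from a person.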

Some questions:
– I don’t know what edX are actually planning to do; in other words, how far can we believe the NYT report?
– I’d love to know more about the technology being used (by edX, LightSide, or anyone else), and whether it marks essay content, style, or both.

