Category Archives: human marking
The multiple limitations of assessment criteria
Sadly, I don’t get as much time as I used to in which to think about assessment. So last Wednesday was a particular joy. First thing in the morning I participated in a fantastic webinar that marked the start of … Continue reading
An experiment in the essay-type paper
The title of this post is the title of a paper I have just read. It was written in – wait for it – 1938. It’s a delightful little paper, but its findings are shocking. I came across it whilst … Continue reading
Exam marking errors
I’m pleased to hear that OCR have apologised for errors in adding up marks for GCSE and A-level papers last year. It doesn’t seem right that the whistleblower remains suspended, but I don’t know the details so perhaps I shouldn’t … Continue reading
Short-answer questions: when humans mark more accurately than computers
Hot on the heels of my previous post, I’d like to make it clear that human markers sometimes do better than computers in marking short-answer [fewer than 20 words] free-text questions. I have found this to be the case in two situations … Continue reading
Short-answer questions: when computers mark more accurately than humans
Back to short-answer free-text questions. One of the startling findings of my work in this area was that computerised marking (whether provided by Intelligent Assessment Technologies’ FreeText Author or OpenMark PMatch) was consistently more accurate and reliable than human markers. At the time, … Continue reading
Making the grades
I’ve been lent a copy of Todd Farley’s book ‘Making the grades: my misadventures in the standardized testing industry’ (published by PoliPointPress in 2009). The blurb on the back of the book says ‘Just as American educators, parents and policymakers … Continue reading