Monthly Archives: July 2011
Assessing Open University students – at residential schools and otherwise
I was due to be tutoring at the Open University residential school SXR103 Practising Science at the University of Sussex (shown left; the crane is a reminder of the huge amount of building work that is going on) for two … Continue reading
Posted in Open University, Residential schools
Tagged Open University, residential school
Happy birthday blog!
It seems hard to believe that I’ve been blogging on assessment, especially e-assessment, and especially the impact of e-assessment on students, for a year now. Even more amazing is the fact that there is still so much I want to … Continue reading
Posted in Uncategorized
Answer matching for short-answer questions: simple but not that simple
In describing our use of (simple) PMatch for answer matching for short-answer free-text questions, I may have made it sound too simple. I’ll give two examples of the sorts of things you need to consider: Firstly, consider the question shown … Continue reading
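PMatch has its own pattern language, so the sketch below is not PMatch code; it is a minimal, hypothetical Python illustration of why naive answer matching is "simple but not that simple". A matcher that only checks for required phrases is easily fooled by negation, which is exactly the kind of consideration the post alludes to.

```python
def naive_match(response: str, required: list[str]) -> bool:
    """Naive answer matching: accept the response if every
    required phrase appears somewhere in it."""
    text = response.lower()
    return all(phrase in text for phrase in required)

# The trap: both responses contain the required phrase "less dense",
# but only the first deserves the marks.
print(naive_match("ice is less dense than water", ["less dense"]))      # True
print(naive_match("ice is not less dense than water", ["less dense"]))  # True - wrongly accepted
```

A robust matcher therefore has to handle negation, word order, and spelling variants, not just keyword presence.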
Can online selected response questions really provide useful formative feedback?
The title of this post comes from the title of a thoughtful paper from John Dermo and Liz Carpenter at CAA 2011. In his presentation, John asked whether automated e-feedback can create ‘moments of contingency’ (Black & Wiliam 2009). This is something I’ve … Continue reading
Posted in conferences, feedback
Tagged CAA, CAA 2011, CAA Conference, feedback, John Dermo, student engagement
Are you sure?
For various reasons I’ve been thinking a lot recently about confidence-based marking. (Tony Gardner-Medwin, who does most of the work in this area, also calls it ‘certainty-based marking’.) The principle is that you get most marks for a correct response … Continue reading
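As a concrete illustration of the principle, here is a minimal sketch of a certainty-based mark scheme in Python. The numbers follow the commonly cited Gardner-Medwin scheme (1/2/3 marks for a correct answer at certainty levels 1/2/3, and 0/−2/−6 for an incorrect one); this is an assumption for illustration, not necessarily the exact scheme the post goes on to discuss.

```python
# Certainty level -> (mark if correct, mark if incorrect),
# following the widely cited Gardner-Medwin scheme (an assumption here).
CBM_MARKS = {
    1: (1, 0),   # low certainty: small reward, no penalty
    2: (2, -2),  # medium certainty
    3: (3, -6),  # high certainty: big reward, big penalty
}

def cbm_score(correct: bool, certainty: int) -> int:
    """Return the mark for one response under certainty-based marking."""
    reward, penalty = CBM_MARKS[certainty]
    return reward if correct else penalty

print(cbm_score(True, 3))   # 3
print(cbm_score(False, 3))  # -6
```

The asymmetric penalties are the point: it only pays to declare high certainty when you really are sure, so the scheme rewards honest self-assessment as well as correct answers.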
Let students, not technology, be the driver
Just home from CAA 2011 (the International Computer Assisted Assessment Conference in Southampton). The attendance was quite low (probably a victim of the current economic climate), but the conference was good, with some very thoughtful presentations and extremely useful conversations. … Continue reading
Posted in conferences
Tagged assessment for learning, CAA, CAA 2011, CAA Conference, John Dermo
Short-answer questions: how far can you go?
Finally for today, I’d like to talk about where I believe the limits currently sit in the use of short-answer free-text questions. I have written questions where the correct response requires three separate concepts. For example, I have written a … Continue reading
Short-answer questions: when humans mark more accurately than computers
Hot on the heels of my previous post, I’d like to make it clear that human markers sometimes do better than computers in marking short-answer [less than 20 words] free-text questions. I have found this to be the case in two situations … Continue reading
Short-answer questions: when computers mark more accurately than humans
Back to short-answer free-text questions. One of the startling findings of my work in this area was that computerised marking (whether provided by Intelligent Assessment Technologies’ FreeText Author or OpenMark PMatch) was consistently more accurate and reliable than human markers. At the time, … Continue reading