Monthly Archives: August 2010
Partial credit for correct at second or third attempt
One of the features of OpenMark, the OU’s e-assessment system, is that students are allowed several (usually three) attempts at each question, and receive hints which increase in detail after each unsuccessful attempt. This is the case even …
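The attempt-based partial-credit idea can be sketched in a few lines. This is a minimal illustration, not OpenMark’s actual marking scheme: the per-attempt weights below are hypothetical values chosen only to show the decreasing-credit pattern.

```python
# Hypothetical weights: full marks at the first attempt, less at each
# later attempt. These numbers are illustrative, not OpenMark's.
ATTEMPT_WEIGHTS = {1: 1.0, 2: 0.7, 3: 0.4}

def score(max_marks, correct_on_attempt):
    """Marks awarded, given the attempt on which the student first
    answered correctly (None means never correct within 3 attempts)."""
    if correct_on_attempt is None:
        return 0.0
    return max_marks * ATTEMPT_WEIGHTS.get(correct_on_attempt, 0.0)
```

So a 3-mark question answered correctly at the second attempt would earn 3 × 0.7 = 2.1 marks under these (invented) weights.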
Overall impact of different variants of questions
You may be relieved to hear that this will be my final posting (at least for a while) on our use of different variants of interactive computer-marked assignment (iCMA) questions. We know that, whilst the different variants of many questions are …
Posted in statistics, variants
Tagged CAA Conference, iCMAs, John Dermo, statistics, variants
So are the variants of equivalent difficulty?
One glance at the figure from the previous post (reproduced to the right) makes it clear that whilst the variants of the question shown at the top are equivalent, those for the lower question are not. Reasons why variants may be …
Investigating whether variants of a question are of equivalent difficulty
We have devised a range of tools to determine whether or not the variants of a question are of equivalent difficulty.
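One simple check along these lines is to compare each variant’s facility (the proportion of students answering it correctly) and test whether the variants differ more than chance would allow, e.g. with a chi-square statistic on the correct/incorrect counts. The sketch below is an assumption about the kind of tool meant, not the OU’s actual toolset, and the variant names and counts are invented.

```python
def facility(correct, attempts):
    """Proportion of attempts answered correctly."""
    return correct / attempts

def chi_square(counts):
    """Chi-square statistic for a 2 x k table of (correct, incorrect)
    counts, one row per variant. A large value suggests the variants
    are not of equivalent difficulty."""
    total_correct = sum(c for c, _ in counts.values())
    total = sum(c + i for c, i in counts.values())
    p = total_correct / total  # pooled facility across all variants
    stat = 0.0
    for c, i in counts.values():
        n = c + i
        for observed, expected in ((c, n * p), (i, n * (1 - p))):
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented example: variant A looks noticeably harder than B and C.
counts = {"A": (40, 60), "B": (70, 30), "C": (68, 32)}
```

The resulting statistic would be compared against a chi-square critical value with k − 1 degrees of freedom to decide whether the difference in difficulty is significant.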
Writing different variants of iCMA questions
So how can you make different variants of interactive computer-marked assignment questions? Here are some strategies we’ve used: use different numbers (so ‘Evaluate 3 + 7’ becomes ‘Evaluate 4 + 5’); use different letters (so ‘Rearrange a=bc to make b the …
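The first strategy listed above (substituting different numbers into a fixed template) can be sketched as follows. The template, ranges, and seeding are illustrative assumptions, not OpenMark code.

```python
import random

def make_variant(rng):
    """Return a (question text, correct answer) pair for one variant
    of a simple arithmetic question, drawing the operands at random."""
    a, b = rng.randint(2, 9), rng.randint(2, 9)
    return f"Evaluate {a} + {b}", a + b

# Seeding the generator (e.g. from a student identifier) would make
# each student's variant reproducible across sessions.
question, answer = make_variant(random.Random(42))
```

The ‘different letters’ strategy works the same way, substituting symbol names rather than operand values into the template.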
Using different variants of iCMA questions
At the Open University we use different variants of our iCMA questions. So, to take a very simple example, when one student receives the question ‘Evaluate 3 + 7’, another might receive the question ‘Evaluate 4 + 5’. In summative …
Adjectives of assessment
Writing about the various terms used to describe e-assessment made me realise just how littered with adjectives the whole area of assessment is. We have formative, summative, thresholded and diagnostic assessment. We have peer assessment and self assessment, and when you’re assessing …
Posted in terminology
Tagged authentic, catalytic, convergent, diagnostic, divergent, formative, qualitative, quantitative, summative, sustainable, terminology, thresholded
CAA or CAA?
We use ‘e-assessment’ to mean different things, but we also use a variety of terms to describe e-assessment! We have CAA (computer-aided assessment), or is it CAA (computer-assisted assessment); CMA (computer-marked assessment), or is it CMA (computer-mediated assessment).
Posted in e-assessment, terminology
Tagged CAA, e-assessment, iCMAs, online assessment, terminology
What is e-assessment?
Again, I feel I ought to define my terms before going any further. The broadest definition of e-assessment encompasses the use of computers for any assessment-related activity, thus it might include the electronic submission of tutor-marked assignments, the marking of …
What is formative e-assessment and when does it happen?
I’ve just read a paper by Pachler et al. (Computers & Education, 54 (2010), pp. 715–721) which describes aspects of the JISC-funded project ‘Scoping a vision of formative e-assessment’. The paper starts by considering different perspectives on the ‘nature and value of formative …