Archive for August, 2010

Partial credit for correct at second or third attempt

Monday, August 30th, 2010

One of the features of OpenMark, the OU’s e-assessment system, is that students are allowed several (usually three) attempts at each question and receive hints that increase in detail after each unsuccessful attempt. This applies even in summative use, where the marks awarded decrease in line with the amount of feedback given before the question is correctly answered.

The provision of increasing feedback is illustrated in the figure below.

The way in which we give partial credit when a question is only answered following feedback contrasts with other systems, which give partial credit for partially correct answers (we sometimes do that too). Is one of these approaches better than the other? I have always liked our approach of giving increasing feedback, and it has recently been pointed out to me that it also encourages students to reach a completely correct answer for themselves. However, I think it is important that we tell students when their answer is partially correct, rather than letting them think it is completely wrong and so sending them off on a wild goose chase!
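
To make the contrast concrete, here is a minimal sketch of attempt-based partial credit. The three-attempt structure is as described above; the specific weightings are illustrative assumptions of mine, not OpenMark’s actual mark scheme.

    # Minimal sketch of attempt-based partial credit. The three-attempt
    # structure is from the post; the weightings below are assumed for
    # illustration and are not OpenMark's actual mark scheme.
    ATTEMPT_WEIGHTS = {1: 1.0, 2: 0.7, 3: 0.4}  # hypothetical

    def score_question(max_marks, correct_on_attempt):
        """Marks awarded, given which attempt (if any) was correct."""
        if correct_on_attempt is None:  # never answered correctly
            return 0.0
        return max_marks * ATTEMPT_WEIGHTS[correct_on_attempt]

    print(score_question(5, 1))     # 5.0 - correct first time
    print(score_question(5, 3))     # 2.0 - correct after two hints
    print(score_question(5, None))  # 0.0 - never correct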

I also think that one of the remaining issues for the use of e-assessment of this type, especially in assessing maths, is that we don’t give credit for ‘working’, which of course is so much a feature of human marking.

Overall impact of different variants of questions

Sunday, August 29th, 2010

You may be relieved to hear that this will be my final posting (at least for a while) on our use of different variants of interactive computer-marked assignment (iCMA) questions. We know that, whilst the different variants of many questions are of equivalent difficulty, we can’t claim this for all of our questions. How much does this matter? A typical iCMA might include 10 questions and contribute somewhere between 2 and 5% to a student’s overall score. My Course Team colleagues and I have reassured ourselves that the differing difficulty of some variants of some questions will, when averaged out over a large number of questions, have minimal impact on a student’s overall score. But is this true?
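
One way to test the intuition is a small simulation. The sketch below is mine, with assumed numbers: 10 questions, and variants that can shift a question’s difficulty by up to 10 percentage points either way.

    # Does the luck of the variant draw average out over a 10-question
    # iCMA? Assumed numbers: each variant shifts a question's facility
    # by up to +/-10 percentage points. These figures are illustrative,
    # not OU data.
    import random
    import statistics

    def net_variant_shift(n_questions=10, max_shift=0.10):
        """Net shift in one student's expected iCMA score (as a fraction
        of full marks) from the variants they happened to receive."""
        return statistics.mean(
            random.uniform(-max_shift, max_shift) for _ in range(n_questions)
        )

    shifts = [net_variant_shift() for _ in range(10_000)]
    print(f"sd of net shift: {statistics.pstdev(shifts):.3f}")
    # With these assumptions the sd is about 0.018, i.e. under 2 marks
    # in 100 on the iCMA itself - and the iCMA is only 2-5% of the
    # overall score, so the averaged-out effect is indeed small.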

So are the variants of equivalent difficulty?

Sunday, August 29th, 2010

One glance at the figure from the previous post (reproduced to the right) makes it clear that whilst the variants of the question shown at the top are equivalent, those for the lower question are not.

Reasons why variants may be of differing difficulty include:

  • the variables selected may result in a more difficult mathematical task (e.g. rounding up instead of rounding down, or understanding negative exponents rather than positive ones);
  • a graph to be interpreted may have a more awkward scale to read; or, if different readings are to be taken from the same graph, some students may have to interpolate or extrapolate whilst others are taking readings where the graph crosses the grid-lines;
  • the letters used may appear similar in lower and upper case and so be confused (e.g. k and K are very similar, whilst q is unlikely to be confused with Q);
  • the words used in setting up the question may use language or describe a situation which is unfamiliar to the student.

Investigating whether variants of a question are of equivalent difficulty

Sunday, August 29th, 2010

We have devised a range of tools to determine whether or not the variants of a question are of equivalent difficulty.
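
The post doesn’t go into the tools here, but one standard statistical check (my illustration, not necessarily one of the OU’s actual tools) is a chi-squared test on first-attempt success counts across variants:

    # Sketch: are a question's variants of equivalent difficulty?
    # The counts below are invented. Rows are variants; columns are
    # (correct at first attempt, not correct at first attempt).
    from scipy.stats import chi2_contingency

    observed = [
        [180, 120],  # variant A
        [175, 125],  # variant B
        [110, 190],  # variant C - apparently harder
    ]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
    # A small p-value flags that at least one variant differs in
    # difficulty and deserves a closer look.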

Writing different variants of iCMA questions

Thursday, August 26th, 2010

So how can you make different variants of interactive computer-marked assignment questions?

Here are some strategies we’ve used (sketched in code after the list):

  • Use different numbers (so ‘Evaluate 3 + 7’ becomes ‘Evaluate 4 + 5’);
  • Use different letters (so ‘Rearrange a=bc to make b the subject’ becomes ‘Rearrange b=cd to make c the subject’);
  • Use different words (so ‘Find the area of the floorboard’ becomes ‘Find the area of the carpet’ or ‘Find the area of the runway’).
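
Here is a minimal sketch of those three strategies as a variant generator; the templates and value ranges are my own illustrative choices, not how OpenMark actually authors questions.

    # Minimal sketch of the three strategies above: different numbers,
    # different letters, different context words. The templates are
    # illustrative, not OpenMark's actual authoring format.
    import random

    LETTERS = ["a", "b", "c", "d", "p", "q"]
    CONTEXTS = ["floorboard", "carpet", "runway"]

    def arithmetic_variant():
        x, y = random.randint(2, 9), random.randint(2, 9)
        return f"Evaluate {x} + {y}", x + y

    def rearrange_variant():
        a, b, c = random.sample(LETTERS, 3)
        return f"Rearrange {a}={b}{c} to make {b} the subject", f"{b}={a}/{c}"

    def area_variant():
        return f"Find the area of the {random.choice(CONTEXTS)}"

    print(arithmetic_variant())  # e.g. ('Evaluate 4 + 5', 9)
    print(rearrange_variant())   # e.g. ('Rearrange b=cd to make c the subject', 'c=b/d')
    print(area_variant())        # e.g. 'Find the area of the carpet'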

Using different variants of iCMA questions

Sunday, August 22nd, 2010

At the Open University we use different variants of our iCMA questions. So, to take a very simple example, when one student receives the question ‘Evaluate 3 + 7’, another might receive the question ‘Evaluate 4 + 5’. In summative use, different variants limit the opportunities for plagiarism. In formative-only use, different variants provide students with extra opportunities for practice.

Adjectives of assessment

Sunday, August 22nd, 2010

Writing about the various terms used to describe e-assessment made me realise just how littered with adjectives the whole area of assessment is.

We have formative, summative, thresholded and diagnostic assessment.

We have peer assessment and self assessment, and when you’re assessing yourself against previous performance, the assessment becomes ipsative.

CAA or CAA?

Friday, August 20th, 2010

We use ‘e-assessment’ to mean different things, but we also use a variety of terms to describe e-assessment!

We have CAA (computer-aided assessment), or is it CAA (computer-assisted assessment)? And CMA (computer-marked assessment), or is it CMA (computer-mediated assessment)?

What is e-assessment?

Friday, August 13th, 2010

Again, I feel I ought to define my terms before going any further.

The broadest definition of e-assessment encompasses the use of computers for any assessment-related activity; thus it might include the electronic submission of tutor-marked assignments, the marking of student engagement with a tutor group forum, or the compilation and grading of an e-portfolio. I have interests in all of these aspects. However, much of the work I have done relates to the online delivery and automatic marking of questions, with the provision of immediate feedback, in Open University interactive computer-marked assignments (iCMAs), so this is the area I can speak about with most authority.

What is formative e-assessment and when does it happen?

Wednesday, August 11th, 2010

I’ve just read a paper by Pachler et al. (Computers & Education, 54 (2010), pp. 715–721) which describes aspects of the JISC-funded project ‘Scoping a vision of formative e-assessment’. The paper starts by considering different perspectives on the ‘nature and value of formative e-assessment’. I’m sure it should have occurred to me before (but it hadn’t!) that ‘formative (e-)assessment’ can mean a range of different things. The emphasis might be on:

  • practice for summative assessment
  • the provision of feedback
  • a means of prompting self-reflection

The paper goes on to make the key point that ‘no technology-based assessment is in itself formative, but almost any technology can be used in a formative way – if the right conditions are set in place.’ In other words, the technology isn’t the thing that makes learning happen, it’s student engagement that matters. Amen to that.