
An overview and review of the iCMA initiative

The interactive computer marked assessment (iCMA) initiative was conceived in 2005 on the premise that developing models of eAssessment could serve a much wider role than had hitherto been possible. The initiative followed closely on, and was enabled by, the 2005 upgrade to the university’s OpenMark CAA system. That upgrade was built on modern internet technologies and provided a platform able to harness the full potential of a modern multimedia computer.

The leadership of COLMSCT believed that if OU academics could be freed from the constraints of normal course production they could provide the innovation that would lead the university in developing these new models of eAssessment. The projects reported here have shown what can be achieved in a variety of subject areas. Four years on, COLMSCT takes some pride in reporting that its fellows have been to the fore in directing OU eAssessment developments, and that the faculties that have had COLMSCT fellows are in the vanguard of using eAssessment in OU courses.

COLMSCT offered fellows collaboration with specialists in pedagogy, educational research and educational computing. It also provided the resources to help specify and implement the assessments, and to evaluate both the process of creating the assessments and the outcomes. In all, thirteen projects were supported in Biology, Chemistry, Computing, Earth Science, General Science, Languages, Mathematics and Nursing.


The iCMA initiative is one of the main themes of work within COLMSCT, which is one of the Open University’s Centres for Excellence in Teaching and Learning (CETLs).

The Centre appointed its first fellows in 2005 and work will continue until 2010. Within the eAssessment strand, fellows were able to ask questions that went beyond the bounds that constrain normal course production cycles. Foremost among these was the general question of whether or not eAssessment was capable of assessing higher-order learning in any meaningful way. To help think this through, a workshop was convened in late 2005 with invited experts at the forefront of eAssessment from other universities. One conclusion, arising from the combined educational and computing expertise of the discussants and the ‘what if’ enabling approach of the COLMSCT leadership, was that developing models of eAssessment could serve a much wider role than had hitherto been possible. The challenge to COLMSCT was to establish projects to test the conclusion of the experts.

In March 2006 COLMSCT issued a call for proposals from academic staff to “develop and evaluate innovative e-assessment projects within their own teaching context”. While the remit of COLMSCT covered the Mathematics, Science, Computing and Technology areas, the call was widened to the whole university with appropriate support from the university’s Learning and Teaching Office. The initiative specifically acknowledged the gap between academic aspirations and the types of interaction commonly found within standard computer-based assessment systems, and encouraged proposals that went beyond those boundaries. The iCMA initiative offered collaboration and resources to help specify and implement the assessments and to evaluate both the process of creating them and the outcomes.

In the intervening years the iCMA initiative has grown to include projects that have delivered iCMAs for use in Biology, Chemistry, Computing, Earth Science, General Science, Languages, Mathematics, and Nursing. The projects are required to undertake the full project cycle from proposal, through specification and implementation, to evaluation with students and propagation of the outcomes within and outside the university. 



The interactive questions on this site are run by the OpenMark assessment system, which handles your access to a question as if you were taking a real assessment. However, this site provides you with considerable flexibility in how you access these demonstration questions, and if you use all of this flexibility you may break some of the rules that are applied to more formal assessments.

One such rule produces an 'Access out of sequence' error. If you see this error, you now know why: just click on the link provided to 'Return to the test'.


The major similarity between projects in the iCMA initiative is reflected in the name: all projects attempt to engage students in an ‘interactive’ exchange around one or more learning outcomes, with the computer providing instant feedback and multiple attempts for students who answer incorrectly. The overall project is titled eAssessment for Learning, and all projects include teaching feedback, often with course references, to persuade students to revisit topics where their answers are incorrect before attempting the question again. For an example see Figure 1 below.

Figure 1 An illustration of immediate targeted feedback

While the OU is not unique in using eAssessment in this way, it was perhaps one of the first to realise how automation could be used to support students studying on their own, away from tutors and peers. The university has hosted a variety of projects in this field stretching back to the 1970s. When the OU joined the Moodle community in 2005, the very first thing it added to the Moodle eAssessment tool was the ability to give a much wider variety of feedback. Since Moodle was already the leading open-source VLE before the OU’s involvement, here was some evidence that the OU had given more thought than the 10,000 other institutions using Moodle worldwide to how its eAssessment could support the distant learner.
The importance of feedback for learning has been highlighted by a number of authors, emphasising its role in fostering meaningful interaction between student and instructional materials (Buchanan, 2000), its contribution to student development and retention (Yorke, 2001), but also its time-consuming nature for many academic staff (Gibbs, 2006). In distance education, where students work remotely from both peers and tutors, the practicalities of providing rapid, detailed and regular feedback on performance are vital issues.

Gibbs and Simpson suggest eleven conditions under which assessment supports student learning (Gibbs and Simpson, 2004).

  1. Assessed tasks capture sufficient study time and effort.
  2. These tasks distribute student effort evenly across topics and weeks.
  3. These tasks engage students in productive learning activity.
  4. Assessment communicates clear and high expectations to students.
  5. Sufficient feedback is provided, both often enough and in enough detail.
  6. The feedback is provided quickly enough to be useful to students.
  7. Feedback focuses on learning rather than on marks or students themselves.
  8. Feedback is linked to the purpose of the assignment and to criteria.
  9. Feedback is understandable to students, given their sophistication.
  10. Feedback is received by students and attended to.
  11. Feedback is acted upon by students to improve their work or learning.

Four of these conditions, those concerned with the provision and use of feedback, are particularly apposite with regard to the use of eAssessment within distance education. They are reflected in the design of OpenMark and are amplified in the rationale behind the development of the S151 Maths for Science online assessments (Ross, Jordan and Butcher, 2006), where:

  • the assessment questions provide individualised, targeted feedback, with the aim of helping students to get to the correct answer even if their first attempt is wrong
  • the feedback appears immediately in response to a submitted answer, such that the question and the student's original answer are still visible
  • students are allowed up to three attempts at each question, with an increasing amount of feedback being given after each attempt

Readers might like to try this typical OpenMark question with instant feedback: https://students.open.ac.uk/openmark/omdemo.text.q4. For non-scientists, a response of the form '1s2', which is partially right, should give helpful feedback.
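As a rough sketch of the model described in the bullet points above (up to three attempts, with the feedback becoming more detailed each time), the following Java fragment illustrates the general idea. The class, the question and the feedback wording are invented for this illustration and are not OpenMark code.

import java.util.List;
import java.util.Scanner;

// A minimal, hypothetical sketch (not the OpenMark API) of the three-attempt
// model described above: each incorrect attempt releases progressively more
// detailed feedback, and the worked answer is given after the final attempt.
public class ThreeAttemptQuestion {

    private static final String PROMPT =
            "A car travels 150 km in 2.5 hours. What is its average speed in km/h?";
    private static final double CORRECT = 60.0;

    // Feedback becomes more detailed with each failed attempt.
    private static final List<String> FEEDBACK = List.of(
            "Not quite. Remember that average speed = distance / time.",
            "Still not right. Divide the distance (150 km) by the time (2.5 h).",
            "The answer is 60 km/h, because 150 / 2.5 = 60.");

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        for (int attempt = 0; attempt < 3; attempt++) {
            System.out.println(PROMPT);
            double answer = Double.parseDouble(in.nextLine().trim());
            if (Math.abs(answer - CORRECT) < 0.01) {
                System.out.println("Correct, well done.");
                return;
            }
            // Wrong answer: show the feedback appropriate to this attempt.
            System.out.println(FEEDBACK.get(attempt));
        }
    }
}

In practice the response matching has to cope with far more than a single numeric answer, which is where the authoring effort described later in this overview comes in.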

References

Buchanan, T. (2000) The efficacy of a World-Wide Web mediated formative assessment, Journal of Computer Assisted Learning, 16, 193-200.

Gibbs, G. and Simpson, C. (2004) Conditions under which assessment supports students' learning, Learning and Teaching in Higher Education, 1, 3-31.

Gibbs, G. (2006) Why assessment is changing, in C. Bryan and K. Clegg (eds), Innovative Assessment in Higher Education, Routledge.

Ross, S., Jordan, S. and Butcher, P. (2006) Online instantaneous and targeted feedback for remote learners, in C. Bryan and K. Clegg (eds), Innovative Assessment in Higher Education, Routledge.

Yorke, M. (2001) Formative assessment and its relevance to retention, Higher Education Research and Development, 20(2), 115-126.


All of the iCMA projects wanted students to use and act on the instant feedback there and then, while the problem is still in their mind (conditions 10 and 11 from the Gibbs and Simpson list in the previous section). So the majority of questions were designed such that if the student’s first answer is incorrect, they can have an immediate second, or third, attempt. See Figure 2.

Figure 2 An illustration of three attempts at an interactive question

Readers might like to try this atypical OpenMark question, which allows up to 100 attempts (!): https://students.open.ac.uk/openmark/omdemo.twod.marker


Here is how the IMS Question and Test Interoperability (QTI) specification defines adaptive questions (items):

An adaptive item is an item that adapts its appearance, its scoring (Response Processing) or both in response to each of the candidate's attempts. For example, an adaptive item may start by prompting the candidate with a box for free-text entry but, on receiving an unsatisfactory answer, present a simple choice interaction instead and award fewer marks for subsequently identifying the correct response. Adaptivity allows authors to create items for use in formative situations which both help to guide candidates through a given task while also providing an outcome that takes into consideration their path.

Readers will see that by coupling feedback with multiple attempts we have much of what is described as an adaptive item. But as the excerpts from the following question show, the iCMA initiative embraced all aspects of adaptive questions. We can report that the question features of OpenMark satisfactorily supported all the iCMA projects.

Figure 3a Initially students have to enter their own words into text-entry boxes

Figure 3b The correct response is locked and the student is asked to try again for the remainder

Figure 3c A second attempt

Figure 3d Now two correct responses are locked and the remaining questions become selection lists

Figure 3e On the third attempt the text-entry boxes from attempts 1 and 2 have been replaced by selection lists

Readers might like to try this question for themselves https://students.open.ac.uk/openmark/omdemo.adaptive.q1 
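In outline, the adaptive behaviour defined in the QTI excerpt and illustrated in Figures 3a-3e could be sketched as follows. This Java fragment is only an illustration: the question, the mark scheme and the class name are assumptions made for the example, not OpenMark or QTI code.

import java.util.List;
import java.util.Scanner;

// A hypothetical sketch, not OpenMark or QTI code, of the adaptive behaviour
// described above: the item begins as free-text entry and, after an
// unsatisfactory answer, falls back to a choice interaction worth fewer marks.
public class AdaptiveItemSketch {

    private static final String QUESTION = "Which gas makes up most of the Earth's atmosphere?";
    private static final String ANSWER = "nitrogen";
    private static final List<String> CHOICES = List.of("oxygen", "nitrogen", "carbon dioxide", "argon");

    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);

        // Attempt 1: free-text entry, full marks available.
        System.out.println(QUESTION + " (type your answer)");
        if (in.nextLine().trim().equalsIgnoreCase(ANSWER)) {
            System.out.println("Correct. Score: 2 marks.");
            return;
        }

        // Attempt 2: the interaction adapts to a selection list and the
        // marks available are reduced, as the QTI definition describes.
        System.out.println("Not right. Choose one of: " + CHOICES);
        if (in.nextLine().trim().equalsIgnoreCase(ANSWER)) {
            System.out.println("Correct on the second attempt. Score: 1 mark.");
        } else {
            System.out.println("The answer is " + ANSWER + ". Score: 0 marks.");
        }
    }
}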


It follows that, as well as wishing students to work towards the correct answer to the original question, we should perhaps also provide more opportunities for them to practise. This too has been supported by many of the iCMA projects.

What might look like a single question to the user often has several variations behind the scenes, such that a student may revisit the iCMAs for further practice and receive variations on their original questions; in this respect the iCMAs resemble a patient tutor, correcting initial misunderstandings and providing further examples to reinforce the learning. None of the COLMSCT iCMAs is being used for summative purposes, but if they were, the in-built variability would also counteract plagiarism. Here are two such variations.

Figure 4a There are a variety of eruptions available to this question

Figure 4b Of course the response matching behind the scenes has to cope with the variety of eruptions too

Readers might like to try this example: https://students.open.ac.uk/openmark/omdemo.maths.q2
Answer the question – you will get feedback to help you if you need it. Once you have completed the question, request 'Next question'. Do you see the same question? In fact this question has five variations, so there is a 20% chance that you will see the same variation repeated.
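Behind the scenes, the variation might be sketched as follows, reusing the illustrative speed question from the earlier fragment. The five parameter sets, the tolerance and the class name are invented for this sketch and do not reflect the actual content of the demonstration question.

import java.util.List;
import java.util.Random;

// A hypothetical sketch of behind-the-scenes question variation: one of five
// stored variants is chosen each time the question is generated, and the
// response matching is parameterised to suit the chosen variant.
public class QuestionVariants {

    // Each variant carries its own parameters and therefore its own answer.
    record Variant(double distanceKm, double timeHours) {
        double expectedSpeed() { return distanceKm / timeHours; }
    }

    private static final List<Variant> VARIANTS = List.of(
            new Variant(150, 2.5),
            new Variant(120, 1.5),
            new Variant(90, 0.75),
            new Variant(200, 4.0),
            new Variant(66, 1.1));

    public static void main(String[] args) {
        // Pick one of the five variants; a return visit has a one-in-five
        // chance of presenting the same variant again.
        Variant v = VARIANTS.get(new Random().nextInt(VARIANTS.size()));
        System.out.printf("A car travels %.0f km in %.2f hours. What is its average speed?%n",
                v.distanceKm(), v.timeHours());

        double studentAnswer = 80.0; // stands in for a submitted response
        // The marking follows the chosen variant, not a single fixed answer.
        boolean correct = Math.abs(studentAnswer - v.expectedSpeed()) < 0.5;
        System.out.println(correct ? "Correct." : "Not correct for this variant.");
    }
}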


Across the iCMA initiative we have seen the creation of questions that use the capabilities of modern multimedia computers to display the problem and support interactions with the questions.

For example, within Science there are several examples of ‘virtual microscopes’ being put to imaginative use as teaching tools. It is now possible to reuse the same idea within an iCMA, with each of the views below corresponding to a different level of magnification (Figures 5a-c).

Figure 5a Low resolution

Figure 5b Medium resolution

Figure 5c High resolution

And as this figure shows, not only can the resources be varied, but students can also be asked to interact with them directly to show that they have understood what they are looking at (Figure 6 below).

Figure 6 Identifying a parasite on a microscope slide 

Incorporation of online resources

The Clinical Decision Making Maze also challenges the student to interpret an array of online resources, from audio interviews through data sheets to online databases, thereby making the experience more akin to a real-life nurse/patient consultation. These are configured to open in different tabs of a tabbed browser, leaving the question in the first tab. Try it here: https://students.open.ac.uk/openmark/omdemo.mm.cdm


Compound questions are not unusual (see Figure 7 below). While these are more difficult for the author to analyse and comment on, they do provide students with more substantial tasks.

Figure 7 A compound question with multiple responses to be marked  

Several authors are exploring how advances in computing technologies can be utilised in iCMAs. For example, we know there is variation in how human markers mark written material, and we can ask how a computer might fare if asked to mark the wide range of student responses that such questions elicit. Jordan, Butcher and Brockbank have been exploring the application of both computational linguistics and computational algorithms to the marking of free-form text (Figure 8).

 Figure 8 Automatic marking of free-text responses

We have a demonstration test that contains six questions that require free-text responses. Readers are invited to try it by following this link https://students.open.ac.uk/openmark/omdemo.pm2009.
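To give a flavour of what automatic marking of free text involves, the Java sketch below accepts a short response when every required concept appears in some acceptable word form. This is a deliberately naive illustration, not the computational-linguistic approach used in the project; the concepts and word lists are invented for the example.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// A deliberately simple, hypothetical illustration of marking a short free-text
// response by matching required concepts against acceptable word forms. The
// project described above uses far more sophisticated techniques; this sketch
// shows only the general idea.
public class FreeTextMarkerSketch {

    // Each required concept is represented by a set of acceptable word forms.
    private static final List<Set<String>> REQUIRED_CONCEPTS = List.of(
            Set.of("gravity", "gravitational"),
            Set.of("decreases", "falls", "reduces", "weakens"));

    // A response is accepted if every required concept appears in some form.
    static boolean mark(String response) {
        Set<String> words = new HashSet<>(Arrays.asList(response.toLowerCase().split("\\W+")));
        return REQUIRED_CONCEPTS.stream()
                .allMatch(concept -> concept.stream().anyMatch(words::contains));
    }

    public static void main(String[] args) {
        System.out.println(mark("The gravitational force decreases as the distance increases")); // true
        System.out.println(mark("The force stays the same"));                                     // false
    }
}

Real student responses, of course, include misspellings, paraphrase and negation, which is exactly why simple word matching of this kind is not enough on its own.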

And Thomas has been exploring the automatic marking of diagrams. Students use a linked applet to draw their diagram, which is then automatically marked with feedback given in the normal OpenMark style.


The most striking differences have come about from the initiative’s venture beyond the Mathematics, Science and Technology fields that form the backbone of COLMSCT. In both Health and Social Care and Languages we have seen both different interactive activities devised and different forms of eAssessment created.

New question components

In its simplest form, our work with Languages has resulted in the cost-effective development of a new OpenMark component that allows the selection of multiple words in a paragraph. In a symbiotic relationship the new component builds on existing OpenMark functionality and contributes a new question type to the larger pool. The example in Figure 9 raises the question of whether other subjects might devise their own specialised interactions.

Figure 9 The new OpenMark word selection component developed by COLMSCT

Readers may wish to explore their own knowledge of English grammar with this example: https://students.open.ac.uk/openmark/omdemo.mcq.q6. Please note that this question is aimed at master’s-level students, who are expected to understand the reasons for their own errors, so little teaching feedback is included.
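To illustrate the kind of checking such a word-selection component has to perform, the Java sketch below compares the word positions a student selects with the positions the author expects. The paragraph, the feedback wording and the class name are invented for this example and do not come from the OpenMark component shown in Figure 9.

import java.util.Set;

// A hypothetical sketch of the marking behind a word-selection interaction:
// the student clicks words in a paragraph and the selected positions are
// compared with the author's expected set. This is an illustration only, not
// the implementation of the OpenMark component itself.
public class WordSelectionSketch {

    static final String[] PARAGRAPH =
            "The results was analysed before they was published".split(" ");

    // The author records which word positions contain grammatical errors.
    static final Set<Integer> EXPECTED = Set.of(2, 6); // both occurrences of "was"

    static String mark(Set<Integer> selected) {
        if (selected.isEmpty()) {
            return "Please select at least one word.";
        }
        if (selected.equals(EXPECTED)) {
            return "Correct: you found every error and nothing else.";
        }
        if (EXPECTED.containsAll(selected)) {
            return "Partly right: everything you selected is an error, but you missed at least one.";
        }
        return "At least one of the words you selected is not an error.";
    }

    public static void main(String[] args) {
        System.out.println(String.join(" ", PARAGRAPH));
        System.out.println(mark(Set.of(2, 6))); // exactly right
        System.out.println(mark(Set.of(2)));    // incomplete
        System.out.println(mark(Set.of(2, 5))); // includes a word that is not an error
    }
}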

Novel use of existing question components

But there are also novel uses of existing question interactions. For example, the line interaction was created to enable mathematicians, scientists and technologists to draw tangents or best-fit straight lines (Figure 10).

Figure 10 A user is drawing the line shown here

So it was both a surprise and a delight to see how it might be used to help language learning (Figure 11).

Figure 11 And here is drawing a line to link words

Readers may wish to try a question similar to that shown in Figure 11 by following this link https://students.open.ac.uk/openmark/omdemo.twod.lineconnect

Different forms of eAssessment

While all of our iCMAs have relied on an assessment of knowledge, one, the Clinical Decision Making Maze, has also followed the pathway that the student takes through the activity. With different responses leading to different pathways, this is an example of an adaptive eAssessment. As such it provides a different form of experience from the sequences of unrelated questions found in many applications of eAssessment.

In looking back over the projects we have encountered some difficulties in supporting our fellows’ requests for adaptive testing. While OpenMark’s question features have coped very well with our fellows’ designs, sequencing questions on the basis of student performance has been much harder to support; indeed the iCMA initiative has circumvented this problem rather than solved it.


The iCMA initiative offered collaboration and resources to fellows in three key areas: Pedagogic, Technical and Evaluation support.

Pedagogic support

We started by recognising that writing questions that explore students’ understanding of their subject is a skilled activity. Couple this with the wish to include feedback that enables students who respond incorrectly to correct their misunderstanding and the task grows, but so does the student engagement. While most fellows started with their own idea, the iCMA coordinator and project consultants had many years of experience in creating interactive multimedia materials and could guide the fellows as to what might work and steer them away from what might be impossible to achieve. For example, interactive eAssessment must react sensibly to student inputs, and if the question is too ‘open’ this becomes impossible; setting the question and providing appropriate response matching are therefore key starting points.

We would also include here the application of technology to support pedagogic ends. Our authors did not have to concern themselves with implementation issues, so ‘difficult’ areas of implementation did not cloud their view of what they wished to do.

Technical support

All technical work was undertaken by the project coordinator and two experienced consultants, Spencer Harben and Adam Gawronski, who were familiar with writing interactive computer-based materials. The consultants were able to guide the fellows on what forms of interaction could be supported and how different inputs might lead to a range of responses that would have to be dealt with.

Clearly questions should be functionally correct and the coordinator and consultants ensured that:

  • the questions were appropriately phrased such that the required answer could be readily handled by a computer
  • the question was unambiguous
  • the response was marked accurately
  • the feedback reflected the marking
  • common mistakes received targeted feedback
  • all students were given the correct final answer with reasoning.

Evaluation support

Support was provided to evaluate both the process of creating the assessment and the outcomes of using the assessment with students. For the latter, the fellows drew on a range of methods: analysis of student responses collected by the eAssessment systems, online questionnaires, observation in the Institute of Educational Technology’s data-capture suite, online monitoring with Elluminate, and one-to-one interviews.


All of the COLMSCT iCMAs were implemented in the OpenMark eAssessment system developed at the Open University (http://www.open.ac.uk/openmarkexamples). The university has integrated OpenMark with Moodle, and readers wishing to know more about this project are referred to our OpenLearn website (http://labspace.open.ac.uk/course/view.php?id=3484).

The OpenMark philosophy

“There is already a trend towards a larger proportion of multiple-choice questions in British exams – a tendency often taken to extremes in the United States. This may be less a matter of academic merit than the convenience that these papers can be marked electronically. This is an outcome that should worry all those involved in education. The best test of a test is whether it stretches pupils, not that it is easy to mark” Editorial, The Times, 7th August 2004.

The raison d’être for OpenMark is to enable the Open University to produce interactive assessments that go beyond the restricted range of question formats found in most commercial CAA systems.

The OpenMark philosophy is also:

• To exploit modern interactive computing technology for the purpose of enhancing the student learning experience.
• To provide an ‘open’, extensible, framework capable of supporting question types that go beyond multiple-choice.
• To support assessments that are engaging.
• To provide immediate feedback.

OpenMark is somewhat different from many CAA systems in the methods used for building questions. Questions and assessments to be run in the OpenMark system are constructed in an editor such as Eclipse using a combination of OpenMark XML components and Java. This combination enables OpenMark to be used very flexibly, but it comes at the price of requiring authors to be comfortable reading and writing Java.

Figure 12 illustrates how this works in practice, with most output to the student being written in XML and most input from the student being analysed in Java. Because the XML can be controlled by the Java, it is also possible to introduce variations under computational control. The balance is that each technology is used for what it is good at: XML for specifying the content and laying out the web pages; Java for analysing and making decisions on student responses.
 

Figure 12 OpenMark XML components are combined with small Java code segments which analyse responses and select the feedback that is to be given.

The open source site holding the OpenMark system also includes a variety of examples that show how the XML and Java work together. We would acknowledge that there is a learning curve to using OpenMark, but multiple interactive media developers at the OU have risen up that curve and there are now thousands of OpenMark questions in regular use.
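As a rough illustration of that division of labour, the sketch below puts the presentation in a small fragment of markup and the response analysis in Java. The element names, the question and the processResponse method are simplified assumptions made for this example; they are not the OpenMark XML schema or API, for which readers should consult the examples on the open source site.

// An illustrative sketch of the division of labour described above: markup
// lays out what the student sees, while a small piece of Java analyses the
// submitted response and selects the feedback.
public class XmlPlusJavaSketch {

    // Presentation: the question text and an answer box, expressed as markup.
    static final String QUESTION_XML = """
            <question>
              <text>Express 3/4 as a percentage.</text>
              <editfield id="percent" label="%"/>
            </question>
            """;

    // Logic: Java checks the student's response and chooses the feedback.
    static String processResponse(String percentField) {
        try {
            double value = Double.parseDouble(percentField.trim());
            if (Math.abs(value - 75.0) < 0.01) {
                return "Correct: 3/4 = 0.75 = 75%.";
            }
            return "Not quite. Divide 3 by 4, then multiply by 100.";
        } catch (NumberFormatException e) {
            return "Please enter your answer as a number.";
        }
    }

    public static void main(String[] args) {
        System.out.println(QUESTION_XML);
        System.out.println(processResponse("75"));
        System.out.println(processResponse("0.75")); // a common slip: forgetting the factor of 100
    }
}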

We would stress that COLMSCT set itself the task of pushing the boundaries and the flexibility of the OpenMark system has suited our purposes admirably. 


The iCMA initiative started to deliver its first iCMAs in July 2006. The following graph (Figure 13) shows how the University’s use of iCMAs has increased during the lifetime of COLMSCT. We would not wish to claim responsibility for all of the increase over the period, but we can be clear that COLMSCT fellows, through their fellowship projects and their course team work, have led the drive behind the upward trend shown.


Figure 13 iCMAs served by the OU to July 2010. The figure shows total usage by year and includes diagnostic, formative and summative iCMAs.

COLMSCT has also provided the pedagogic underpinning behind the university’s development of its eAssessment systems. Sally Jordan and Pete Thomas were the academic advisers to the VLE eAssessment project, Ingrid Nix and Ali Wyllie were contributing members of the eAssessment Faculty Liaison Group, and Phil Butcher was the VLE eAssessment project manager. These fellows were able to steer the development of Moodle towards more OpenMark-like features, with Phil Butcher providing many of the designs. The software developments have been undertaken by technical staff in the Strategic Development section of the Learning and Teaching Solutions department and are described in ‘Online assessment at the Open University using open source software’ in the Resources section of this site.


I have worked in computer-based learning since 1974 and during that time have performed most of the roles involved: programmer, author, systems designer, researcher, educational evaluator, manager and acting head of department. I wrote my first interactive CBL program at Leeds University in 1974, my first for the OU in 1975, and moved from Leeds to the OU in 1977. As well as overseeing the COLMSCT iCMA initiative I have continued to maintain my software skills, and on the free-text marking project I have been active both in enhancing the OpenMark response matching algorithm and in using it to analyse students’ responses.

During my long, but varied, career at the OU I have been on numerous course teams that have advanced the use of interactive computers in the OU’s teaching and learning practices, and 30 years on the pace of change doesn’t slacken. That the COLMSCT CETL has chosen to address online assessment as a major theme illustrates that the increasingly sophisticated application of computers in education continues to make major inroads into established practices.

Here are a few highlights that led to my current fellowship in COLMSCT. The Works Metallurgist (with Adrian Demaid for TS251) in 1980 was the first OU CAL program to use interactive graphics, and The Teddy Bear Videodisc (with Keith Williams (Technology) and Martin Wright (BBC) for T252) in 1984 was the first to integrate full-screen video and interactive computing. In 1986 I joined the T102 Living with Technology course team, which introduced personal computing into OU first-level courses in 1989; in 1996 T102 became the first non-ICT undergraduate course to utilise conferencing and e-mail on a large scale. From 1996 I managed the introduction of multimedia computing into the Science Faculty, with S103 Discovering Science becoming the first course to replace paper-based CMAs with interactive questions delivered through a modern multimedia computer; this was the start of the development of the OpenMark interactive assessment system. In 2002 OpenMark was moved online to support S151 Maths for Science, and since 2005 it has been integrated ever more closely with the OU’s VLE developments. OpenMark is the system that has enabled the creation of the diverse COLMSCT iCMA initiative projects shown on this site.

I have an MPhil in Computer-Based Education from the University of Leeds. I guess I thought that this was ‘normal’ until I heard Tim O’Shea (also formerly of Computer-Based Learning at Leeds) describe the impact of Leeds in this area and how, in our different ways, he and I had brought that approach to the OU. Having been made to think about it, I can see he has a point. In my formative CBL years it was made clear to me by Roger Hartley (then Director of the Leeds CBL Unit) that it is not only the student who should work hard at CBL, but also the authors and, at run-time, the computer. In recent years my role as the COLMSCT iCMA initiative coordinator has put me in a prime position to continue this tradition while at the same time helping my COLMSCT colleagues deliver their ideas across the internet.


Contact

Phil Butcher, Coordinator of the iCMA initiative and COLMSCT Teaching Fellow
P.G.Butcher@open.ac.uk
