Thresholded assessment: does it work?

  • Project leader(s): Sally Jordan
  • Theme: Innovative assessment
  • Faculty: STEM
  • Status: Archived
  • Dates: May 2013 to September 2014

Many Science Faculty modules have moved from their previous summative continuous assessment to formative but thresholded continuous assessment. The aim of the project was to evaluate this Faculty-wide change in practice.

Two basic models of formative thresholded assessment are currently in use:

  • Model A. Tutor-marked assignments (TMAs) and interactive computer-marked assignments (iCMAs) are weighted, and students are required to reach a threshold (usually 40%) overall.
  • Model B. Students are required to demonstrate engagement by reaching a threshold (usually 30%) in, say, 5 out of 7 assignments.
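
To make the two thresholding rules concrete, the short Python sketch below checks each rule for a single student's assignment scores. The 40% overall threshold (Model A), the 30% per-assignment threshold and the 5-out-of-7 requirement (Model B) are taken from the descriptions above; the function names, equal weights and example scores are illustrative assumptions only, not part of any module's actual assessment specification.

from typing import Sequence

def passes_model_a(scores: Sequence[float], weights: Sequence[float],
                   threshold: float = 40.0) -> bool:
    # Model A: the weighted average of all TMA/iCMA scores (percentages)
    # must reach the overall threshold, usually 40%.
    weighted_average = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return weighted_average >= threshold

def passes_model_b(scores: Sequence[float], required: int = 5,
                   per_assignment_threshold: float = 30.0) -> bool:
    # Model B: the student must reach the per-assignment threshold
    # (usually 30%) in a minimum number of assignments, e.g. 5 out of 7.
    reached = sum(1 for s in scores if s >= per_assignment_threshold)
    return reached >= required

# Illustrative percentage scores for seven assignments, equally weighted.
scores = [55, 0, 28, 62, 41, 35, 70]
weights = [1.0] * len(scores)
print(passes_model_a(scores, weights))  # True: weighted average is about 41.6%
print(passes_model_b(scores))           # True: 5 of the 7 scores reach 30%

In both models the check is pass/fail: meeting the threshold demonstrates engagement, while the module result comes from the examinable component.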

The relative merits of summative assessment, purely formative assessment, and Model A and Model B formative thresholded assessment were investigated. The project was split into a number of smaller practitioner-led sub-projects. The methodology was largely data-driven, though the perceptions and opinions of students and associate lecturers (ALs) were also considered.

The main findings were as follows:

  • Many students and ALs have a poor understanding of our assessment strategies, including conventional summative continuous assessment. This is in line with the frequently reported finding that students have a poor understanding of the nature and function of assessment.
  • Many of the other findings stem from a similar lack of understanding. The increase in plagiarism following the move to formative thresholded assessment (with the accompanying re-use of TMA questions) was less marked than had been feared, but when tutor notes find their way into the public domain they are sometimes copied by students and submitted as their own work. This does not help the students to learn, nor does it prepare them for the examinable component, so they are only “cheating themselves”. However, the offending students often find this point difficult to understand.
  • No significant differences in engagement were seen between summative continuous assessment and Models A and B of formative thresholded assessment. Following a move to either model of formative thresholded assessment, more students were seen to omit the final assignment or to submit a partial assignment. However, overall retention and success rates have not altered as a result of the changes in assessment strategy (other factors, e.g. changing student populations, have had a considerably larger impact), and some students have been seen to appreciate the encouragement to concentrate on the formative aspects of the continuous assessment rather than on the minutiae of the grading.
  • There is a correlation between the number of assignments submitted and overall success. However, some students omitted TMAs without apparent impact on their final module result, and some of these students appear to have spent their limited time more profitably on revision.
  • Thus, overall, no evidence has been seen to support a return to summative continuous assessment. However, it has rightly been pointed out that examinations cannot authentically assess all aspects of university-level skills. The use of two components contributing to the “overall examinable score” (OES), e.g. an examination and an experimental write-up, seems a sensible way forward, with the formative thresholded components helping students to prepare for both components.
  • There was no evidence that the number of assessment points on a module affected student behaviour, but several instances were observed in which submission rates dropped because TMA cut-off dates fell too close to the due dates of examinable components on the same module or on modules frequently studied concurrently.
  • Some evidence has been seen of students appearing more likely to complete modules with end-of-module assessments rather than examinations. However, if this effect is real (as opposed to being a result of some modules being easier than others), it appears to influence engagement during a module rather than at its end, perhaps because students are less likely to feel frightened and overwhelmed on a module with an end-of-module assessment rather than an examination.
  • Students have been seen to engage considerably more, and more deeply, with formative iCMAs when they have thresholds and hard cut-off dates. Some students appear to benefit from repeating questions. Since the repeating of iCMA questions sits more comfortably within Model B and there is no evidence to support the notion that Model A is more effective in any other way, Model B formative thresholded assessment with hard cut-off dates seems the best approach for iCMAs.
  • In the interests of consistency, the Faculty should then consider whether Model B formative thresholded assessment is also a better approach for TMAs. Model A was initially proposed as an alternative because it was felt that otherwise students would be tempted to omit TMAs, but there is no evidence of a significant difference in student behaviour under the two models. Model B would also be straightforward to explain to students, and there would be less scope for confusion with summative assessment.
  • Overall, the evidence presented supports the conclusion that students are “conscientious consumers” (Higgins, Hartley & Skelton, 2002) doing exactly what they think they are “meant” to do. However, the devil is in the detail; if we give an advisory cut-off date but put the actual cut-off date at a later stage, we should not be surprised that students see the actual cut-off date as the one that matters and work to that date rather than the advisory one.
  • The project started by looking for evidence that a change in modules’ assessment strategies might alter assignment submission rates. However, since submission rates also give a measure of attrition, it is possible to use the project’s detailed inspection of assignment submission on a wide range of modules to investigate other factors affecting retention. The different behaviour of continuing and new students, especially early in a module, was not a surprise, but the extent of the difference in retention was shocking. It is clear that different modules behave in different ways, but no unambiguous evidence has been found to explain why. Similarly, no systematic difference in attrition between 30- and 60-credit modules has been found, a point which is worthy of further investigation.
Related Resources:

  • eSTEeM final report: Sally Jordan, Thresholded assessment. eSTEeM Final Report.pdf (1.71 MB)
  • Poster presentation: Cook, Butler & Jordan (2013) AHEC poster.pdf (58.92 KB)
  • Project presentation: Jordan & Haresnape (2013) AHEC presentation.ppt (599.5 KB)
  • Conference paper: Jordan (2013) Using e-assessment to learn about learning CAA.pdf (349.13 KB)
  • Poster presentation: Cook, Haresnape & Jordan (2014) eSTEeM Poster.pdf (149.98 KB)
  • Project presentation: Jordan (2014) CALRG presentation.ppt (1.38 MB)
  • Project presentation: Jordan (2014) VICE PHEC presentation.ppt (692 KB)
  • Workshop paper: Jordan (2014) EDEN workshop paper.pdf (185.68 KB)
  • Project presentation: Jordan (2014) EDEN presentation.ppt (376.5 KB)