
Innovative assessment

Student co-design of confidence-building formative assessment for Level 1 Computing & IT students

Project leader(s): 
Paul Piwek and Simon Savage
Faculty: 
STEM
Status: 
Current
Body: 

The pedagogic issue to be addressed is the difficulty students face when learning a complex skill such as programming and problem solving. According to Jenkins (2002), this is a slow and gradual process, with students learning at different paces. Additionally, students often start a programming course with the preconception that programming is difficult, which harms their motivation and can be reinforced if they are subjected to summative assessment too early.

In TM112 (Introduction to computing and information technology 2), several strategies were used to build student confidence and encourage sustained practice and reflection (Piwek et al., 2019). Among them was a new approach to formative assessment based on strictly formative quizzes. To encourage engagement, students were awarded a small number of marks for including, in their TMAs, evidence of engagement with the quizzes together with a personal narrative/reflection on that engagement. Since the quiz questions themselves were not marked, students were also encouraged to discuss their attempts and answers with other students.

The main tasks of this proposal will be to:

  • investigate the effectiveness of the student discussions that took place about the quiz questions.
  • involve students in further increasing that effectiveness by co-designing quiz questions specifically aimed at helping students gain understanding and at prompting in-depth peer discussion/dialogue. The aim is to better understand the student perspective on the design and presentation of the quizzes.

The main outcomes will be:

  • new and/or redesigned quiz questions for TM112
  • recommendations for design of formative assessment based on the co-design process
  • insights into the benefits and pitfalls of co-design activities with students

The impact of this research will consist of further improvements to student engagement with, and understanding of, formative assessment.


Reference

Jenkins, T. (2002) 'On the difficulty of learning to program', Proceedings of the 3rd Annual HEA Conference for the ICS Learning and Teaching Support Network, pp. 1-8.

Piwek, P., Wermelinger, M., Laney, R. and Walker, R. (2019) 'Learning to program: from problems to code', Third Conference in Computing Education Practice (CEP), 9 January 2019, Durham, UK.

Piwek, P. and Savage, S. (2019) Project poster (PDF)

The S112 assessment strategy: student behaviours and subsequent success in higher level study

Project leader(s): 
Jim Iley
Faculty: 
STEM
Status: 
Current
Body: 

S112 is the only Level 1 module within the OU to adopt Single Component: Exam assessment. This is a deliberate strategy to prepare students for examinations at higher levels. The module involves six assignments, which together contribute 39% to the overall examinable score (OES): three 'formative' assignments at 3% each, and three at 10% each that assess skills more difficult to address through an examination, viz. practical work, collaboration, and communication. The examination contributes the remaining 61%. To prepare students further for the exam, they are provided in advance with all the materials on which the questions will be based ('seen materials'). The pass mark is 40% of the OES; there are no thresholds.
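As a rough check of the weightings above, the following minimal sketch (hypothetical helper names, not OU systems code) shows how the six assignments and the exam combine into the overall score, and why a student can pass on the exam alone:

```python
# Illustrative only: weightings taken from the S112 description above.
ASSIGNMENT_WEIGHTS = [0.03] * 3 + [0.10] * 3  # 3 x 3% 'formative', 3 x 10% skills
EXAM_WEIGHT = 0.61                            # examination contributes 61%

def overall_score(assignment_marks, exam_mark):
    """Weighted overall score from six assignment marks and one exam mark (all 0-100)."""
    assert len(assignment_marks) == len(ASSIGNMENT_WEIGHTS)
    contribution = sum(w * m for w, m in zip(ASSIGNMENT_WEIGHTS, assignment_marks))
    return contribution + EXAM_WEIGHT * exam_mark

# The weights sum to 100%: 3*3 + 3*10 + 61 = 100
assert abs(sum(ASSIGNMENT_WEIGHTS) + EXAM_WEIGHT - 1.0) < 1e-9

# With no thresholds, a student who skips every assignment needs roughly
# 66% on the exam (0.40 / 0.61 ~ 0.656) to reach the 40% pass mark.
print(round(overall_score([0] * 6, 66), 1))  # → 40.3
```

This arithmetic is what makes the 'gaming' behaviour discussed below possible: the exam alone is sufficient to pass.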

The benefits of this strategy are that it:

  • provides students with a clear understanding of their ongoing achievement towards passing
  • provides stepping stones for progression and therefore contributes to developing student resilience
  • allows students to miss one or more assignments, and even to pass the module without submitting any assignments

The disadvantages are that:

  • students can ‘game’ the system and engage with the minimum of assessment (the exam) to pass the module
  • because of the preceding bullet, students may not be sufficiently prepared for subsequent study at Level 2

This proposal sets out to:

In Phase 1

  • analyse student assessment behaviours during the first presentation (17J) of S112 (i.e. What was their level of understanding of the assessment strategy from the outset and how did that influence their behaviour? Did they know that they only had to do a certain number of TMAs?)
  • identify how frequent any ‘gaming’ behaviour is
  • identify any previous module contraindications (SDK100, U116, S111)

In Phase 2

  • identify whether students' assessment behaviours on S112 have a subsequent impact on their performance in Level 2 modules

The findings have the potential to impact:

  • the kinds of advice students are provided with by the SST
  • approaches to single component assessment across the university, especially the use of SCA-exam to prepare students for higher level

Jim Iley presentation

Use of STACK to generate formative assessment for Level 3 pure mathematics

Project leader(s): 
Hayley Ryder
Faculty: 
STEM
Status: 
Archived
Body: 

In a study of the impact of exam revision resources, Cross, Whitelock and Mittelmeier (2016) found that 83.9% of students rated access to a sample examination paper as useful or very useful. Many mathematics modules have been running for a number of years and have a large stock of past examination papers. As a result, students studying mathematics at the OU have often developed revision strategies that assume access to many past examination papers (together with solutions). On newer modules this resource does not exist, and students have cited the lack of past papers as a reason for deferring or for not taking a new module. Examination-like mathematics questions and solutions are time-consuming and expensive to write and check, so it is not easy to produce a large number of sample papers.

Many mathematics modules use STACK, an online assessment system built on a computer algebra kernel, to generate short practice questions (SPQs) with feedback and solutions. STACK questions can be programmed so that the feedback for an incorrect answer depends on the mistake made by the individual student. These questions are used to build quizzes (SPQ-quizzes). For this project we used STACK to produce sets of long examination-like practice questions (ELPQs) with worked solutions and tailored feedback for a relatively new module, M303. These long questions were then used to build a randomly generated quiz resembling a past exam paper (the ELPQ-quiz), so that students could generate multiple instances of an examination-like quiz, with feedback and example solutions, for practice.
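The mistake-dependent feedback idea can be illustrated with a minimal sketch. This is hypothetical Python, not actual STACK authoring code (STACK questions are written against a computer algebra kernel); it simply mirrors the pattern of a randomised question whose feedback depends on which common error the student made:

```python
import random

def make_question(seed):
    """One randomised power-rule question with mistake-dependent feedback."""
    rng = random.Random(seed)                  # same seed -> same instance
    a, n = rng.randint(2, 9), rng.randint(2, 5)
    answer = (a * n, n - 1)                    # coefficient and power of f'(x)

    def grade(coeff, power):
        if (coeff, power) == answer:
            return "Correct."
        if (coeff, power) == (a * n, n):       # multiplied but kept the old power
            return "You multiplied by the power but did not reduce it by 1."
        if (coeff, power) == (a, n - 1):       # lowered the power but kept a
            return "You reduced the power but forgot to multiply by it."
        return "Incorrect; revise the power rule."

    return {
        "prompt": f"Differentiate f(x) = {a}x^{n} with respect to x.",
        "answer": answer,
        "grade": grade,
    }

# A quiz 'paper' is then just a list of such questions from fresh seeds.
paper = [make_question(s) for s in range(5)]
```

Each fresh seed yields a new instance of the same question template, which is essentially how a randomly generated ELPQ-quiz can present a different paper on every attempt.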

We used a mixed methods approach to our evaluation of this project, performing a quantitative analysis of the engagement with these questions by 660 students, sending a questionnaire to just over 200 students and carrying out a qualitative analysis of semi-structured interviews undertaken with 12 students.

A thematic analysis of the semi-structured interviews showed the lack of past papers to be a strong theme, along with the existence of established revision strategies that relied on access to many past papers.

The quantitative analysis showed that a higher percentage of students engaged with the ELPQ-quiz than with the traditional SPQ-quizzes, and those who did engage with the ELPQs used them more intensively. In addition, a significantly higher percentage of students who engaged well with the ELPQs maintained or improved their exam result for M303 Further pure mathematics, compared with their score in the Level 2 precursor module, M208 Pure mathematics.

Over 80% of survey respondents who took the exam and used the ELPQs agreed or strongly agreed that the ELPQ-quiz helped them structure their revision and that the ELPQs were helpful when they saw the exam. Over 90% of respondents who used the ELPQs agreed or strongly agreed that quizzes similar to the ELPQ-quiz should be implemented on other mathematics modules.

Related resources

Ryder, H. (2019) Use of STACK to generate formative assessment for Level 3 pure mathematics. eSTEeM Final Report (PDF)

Hayley Ryder poster
