
Supportive Automated Feedback for Short Essay Answers

Project website

The project website is at http://www.open.ac.uk/researchprojects/safesea/

What research questions the project addresses, aims & themes

The aim of this research is to provide an effective automated interactive feedback system that yields an acceptable level of support for university students writing essays in a distance or e-learning context.

To provide such a tool - which will be based on an existing system supporting the online writing and assessment of essays, OpenComment (Whitelock & Watt, 2008) - requires several problems to be solved, both in natural language processing and in educational theory and practice. The project is therefore a truly interdisciplinary one.

The natural language processing problems are how to 'understand' a student essay well enough to provide accurate and individually targeted feedback, and how to generate that feedback automatically.

The educational problem is how to develop and evaluate effective models of feedback. This requires research into the selection of the content, mode of presentation and delivery of the feedback.

The research questions this project addresses include:

  1. Can essay marking techniques be extended to detect passages on which a human marker would usually give some feedback?
  2. Is it possible to adapt existing methods of information extraction, summarisation and key phrase extraction in order to select content for such feedback? (A minimal illustrative sketch follows this list.)
  3. Will automatic sentence generation methods be able to deliver this feedback in a natural way?
  4. What effect does summarisation have on essay improvement?
  5. How does the provision of hints affect the essay currently being written and the student's future essay writing?
  6. What effect does automated feedback have on the student's levels of self-regulation and metacognition on the writing task?
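
As a concrete illustration of the kind of technique referred to in question 2, the sketch below shows a minimal, frequency-based key phrase extractor in Python. It is a sketch only: the function name, stopword list and scoring method are illustrative assumptions and do not describe how the project's own components work.

    # Illustrative sketch only: a naive frequency-based key phrase extractor.
    # It does not describe the project's actual components or methods.
    import re
    from collections import Counter

    STOPWORDS = {
        "the", "a", "an", "and", "or", "of", "to", "in", "is", "are",
        "that", "this", "it", "for", "on", "with", "as", "be", "by",
    }

    def key_phrases(essay_text, top_n=5):
        """Return the top_n most frequent content words in the essay."""
        words = re.findall(r"[a-z']+", essay_text.lower())
        content = [w for w in words if w not in STOPWORDS and len(w) > 2]
        return [word for word, _ in Counter(content).most_common(top_n)]

    sample = ("Feedback supports learning when it helps students set goals. "
              "Automated feedback on an essay can highlight the goals the "
              "student has not yet addressed in the essay.")
    print(key_phrases(sample))  # e.g. ['feedback', 'essay', 'goals', ...]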

How the research questions are addressed by the project (methodology and activity/environment)

The primary aspects of feedback which will be investigated are:

  • summarisation;
  • recognition of positive achievements;
  • location of errors of commission or omission;
  • misconceptions or problems connected with causal relationships;
  • tactical and strategic hints, relating both to the specific issues being addressed in the essay and to issues connected with 'metacognition' (i.e. self-awareness of the learning process);
  • self-regulation of the learning process.

More specifically, the methodology will involve:

  • Determining the effect of summarisation: Do the results indicate an association between the quality of summarisation and the effect on learner essays?
  • Exploring the effect of summarisation: Summarising the student's text is expected to prove effective in drawing attention to deficiencies in the overall argument. However, there has been little research on the effects of (quickly) giving students even partial summaries of their own essays, and little if any work on how the accuracy of the summary can affect learning. The aim is therefore to evaluate the effect of factors that are expected to play a role in producing a beneficial effect on the learner: measures of accuracy and completeness of the summary will be correlated with measures of learner activity and essay improvement (a worked illustration follows this list).
  • Investigating the effect of hints: In the current work package we focus on the effect of hints on the essay being written; longer-term effects are studied in work package 6. The expectation is that hints will be beneficial if they support learning and help to set goals for the learner (Kluger & DeNisi, 1996). The degree of specificity of the hints is known to be an important variable (Shute, 2009).
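
To indicate the kind of analysis described in the first two bullets, the sketch below correlates a summary-accuracy measure with an essay-improvement measure using a Pearson correlation. The figures and variable names are hypothetical and invented purely for illustration; they are not project data or project measures.

    # Illustrative sketch only: correlating a hypothetical summary-accuracy
    # measure with a hypothetical essay-improvement measure. Invented data.
    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation coefficient for two equal-length samples."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # One value per student: accuracy of the generated summary (0-1) and the
    # change in essay mark between draft and final submission.
    summary_accuracy = [0.62, 0.71, 0.55, 0.80, 0.68]
    essay_improvement = [3.0, 5.0, 2.0, 6.0, 4.0]

    print(round(pearson(summary_accuracy, essay_improvement), 2))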

Findings and outputs

The objectives of this project are twofold:

  1. To develop an appropriate approach to the automated analysis of the free text that students submit for feedback
  2. To develop an appropriate feedback model together with an assessment of its likely value

In particular, the project hopes to produce:

  • An essay assessment engine that will produce a profile of an essay, providing the basis for feedback and scoring
  • A feedback generator that is customised and corpus-informed and that uses sentence generation, summarisation and key phrase extraction (a minimal summarisation sketch follows this list)
  • A set of feedback models and a systematic evaluation of the feedback generated under various conditions, in terms of reliability and validity
  • Incorporation of these into a working system suitable for use by learners and teachers (OpenEssayist)
  • A field evaluation of the effectiveness of automated feedback on short essay answers in one or more arts-based areas
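
As an indication of what a summarisation component of the kind listed above might build on, the sketch below implements a very simple extractive summariser in Python. It is a sketch under broad assumptions and is not a description of how OpenEssayist itself works; the function and its scoring rule are hypothetical.

    # Illustrative sketch only: a minimal extractive summariser that returns
    # the sentences containing the most frequent content words.
    import re
    from collections import Counter

    def extractive_summary(text, n_sentences=2):
        """Return n_sentences of the text, chosen by word-frequency score."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(w for w in re.findall(r"[a-z']+", text.lower())
                       if len(w) > 3)
        def score(sentence):
            return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))
        top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
        # Keep the chosen sentences in their original order.
        return " ".join(s for s in sentences if s in top)

    draft = ("Feedback is most useful when it is timely. A summary of the "
             "essay can show the student which points dominate the argument. "
             "Points missing from the summary may need more work.")
    print(extractive_summary(draft, n_sentences=1))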

Project impact

This project takes into account a wide range of potential impacts across the whole of formal and informal education but, initially, the prime impacts are on learning in higher education institutions. The technology-enhanced learning community will be able to reuse the results when crafting feedback systems for ill-defined domains. The feedback research community will benefit from new results that shed light on the effect of feedback on university students. The e-learning community will be interested in adapting the approach to their own situations and utilising the resulting software.

Higher education institutions across the UK and around the world will benefit either by adopting and further developing the technologies within this project or by utilising the research results to inform their own feedback procedures for work carried out online.  In the current difficult funding climate for higher education, we expect a surge of interest in automated or semi-automated methods of learning and assessment.

Last, but not least, most students in higher education can benefit from the research, as will students in secondary schools in the future. 

Publications (and ORO feed)

Publications and other dissemination strategies are planned for the coming year.

Keywords

Feedback, assessment, essay, e-learning, natural language generation

People involved

Investigators

Dr Denise Whitelock (The Open University)

Professor Stephen Pulman (University of Oxford)

Professor John Richardson (The Open University)

Research Associate

Dr Nicholas Van Labeke (The Open University)

Project Co-ordinator

Bethany Alden (The Open University)

Project Consultant

Dr Stuart Watt (Information Balance)

Project partners and links

The Open University

University of Oxford

Funder(s)

EPSRC

Start Date and duration

1 September 2012 - 31 August 2014
