Technologies for STEM learning

Development and evaluation of a software tool for automated Java specification marking

Project leader(s): 
Anton Dil

Students studying M250, our second-year object-oriented programming module using Java, are required to complete Java programs that meet detailed syntactical, structural, functional and stylistic specifications.

Although software tools exist for code syntax, functionality and style checking, tools for structural specification checking are not widely available. The long-term goal of this project is to raise awareness of these various aspects of correctness in our assessment of students’ code and to support automated assessment of these aspects of code quality for tutors and students.

The project focused particularly on the development and evaluation of a structural specification tool (known as CheckM250), deployed in the 2017J presentation of M250, to allow tutors to check to what extent students’ code met a specification. The tool was provided for use in the module IDE, BlueJ, alongside traditional tutor marking notes. The project also explored the use of automated marking in the module Virtual Learning Environment (VLE), for quick feedback to students, and overcame technical obstacles in this context. 
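A structural check of the kind CheckM250 performs can be sketched using the Java reflection API. This is an illustrative sketch only, not the project's actual implementation; the specification (a public class with a public getBalance() method returning int) and all class and method names are hypothetical.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

/** Illustrative structural checker: verifies that a class declares
 *  the members a (hypothetical) specification requires. */
public class StructureChecker {

    /** Returns the problems found; an empty list means the class conforms. */
    public static List<String> check(Class<?> cls) {
        List<String> problems = new ArrayList<>();
        if (!Modifier.isPublic(cls.getModifiers())) {
            problems.add("class " + cls.getSimpleName() + " should be public");
        }
        try {
            Method m = cls.getDeclaredMethod("getBalance");
            if (m.getReturnType() != int.class) {
                problems.add("getBalance should return int");
            }
            if (!Modifier.isPublic(m.getModifiers())) {
                problems.add("getBalance should be public");
            }
        } catch (NoSuchMethodException e) {
            problems.add("missing method getBalance()");
        }
        return problems;
    }

    /** A deliberately non-conforming "student" class for demonstration. */
    public static class Account {
        private double balance;
        // Wrong return type and not public, so two problems are reported.
        double getBalance() { return balance; }
    }

    public static void main(String[] args) {
        for (String problem : check(Account.class)) {
            System.out.println(problem);
        }
    }
}
```

A real checker would of course read its specification from a file rather than hard-code it, and would cover fields, constructors, inheritance and visibility more generally.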

Tutor surveys and interviews were used to gather feedback on CheckM250 and on other kinds of marking tool support and traditional resources.

Automated structural checks on code were found to have multiple use cases:

  • supporting human markers in assessing students’ code;
  • helping markers review their own marking (as a kind of e-monitoring tool);
  • marking students’ code automatically in online assessment;
  • determining whether functional tests of the software can proceed;
  • letting students check their own code before submitting it for marking;
  • letting question setters check the completeness of questions set for students.

Tutors were variously found to favour marking tools, to distrust them, or to find them an obstacle. This appeared to depend less on the tool itself than on a predisposition for or against the use of tools. Similarly, tutors’ comparative rating of tools as aids to themselves versus as aids to students appeared to depend on their disposition towards tools.

Most tutors who used CheckM250 found it useful, and some reported that it increased their marking accuracy. Tutors who did not use the tool cited lack of time and the simplicity of the assignment it was trialled on. Some reservations were expressed about reliance on automated marking tools, both for markers and for students. The marking software was also shown to be useful in the VLE for automated student feedback.

The results provided indicators of topics that should be discussed with tutors and students in this context:

  • how automated code marking tools may best be used in tutor and student workflow;
  • how the outputs of the tools should be interpreted;
  • the potential benefits and pitfalls of automated marking;
  • the relationships between the outputs of various automated marking tools.

The project has also suggested ways forward in developing automated marking tools for Java code. 

Related resources

Dil, A., Truby, S. and Osunde, J. (2018) Development and evaluation of a tool for Java structural specification testing. eSTEeM Final Report. (PDF)

Dil, A., Truby, S. and Osunde, J. (2018) Development and evaluation of a tool for Java structural specification testing. Appendix A, Java specification checking: software notes. (PDF)

Dil, A., Truby, S. and Osunde, J. (2018) Development and evaluation of a tool for Java structural specification testing. Appendix B, Java specification checking: survey and interview results. (PDF)

Dil, A. and Truby, S. (2018) Evaluation of a tool for use on M250 “Object-oriented Java Programming”. Presentation from the 7th eSTEeM Annual Conference, 25-26 April 2018, Milton Keynes. (PowerPoint)

Dil, A. and Osunde, J. (2018) Evaluation of a tool for Java structural specification testing. Paper presented at the 10th International Conference on Education Technology and Computers, October 2018, Tokyo. (PDF)


Assessing The ‘Open Field Lab’: Evaluating Interactive Fieldcasts for Enhancing Access to Fieldwork

Project leader(s): 
Phil Wheeler, Julia Cooke, Kadmiel Maseyk and Trevor Collins

For the module S206/SXF206 Environmental Science we developed fieldcasts: live broadcasts in which tutors based in the field carry out a student-led field investigation. Key to the fieldcasts was a decision-making framework that, in conjunction with the Stadium Live platform, delivered authentic fieldwork aimed at building student confidence and a sense of belonging in field investigations. We used immediate and delayed student feedback to assess their efficacy.

Related resources

Cooke, J., Wheeler, P., Maseyk, K. and Collins, T. (2020) Assessing The ‘Open Field Lab’: Evaluating Interactive Fieldcasts for Enhancing Access to Fieldwork. eSTEeM Final Report. (PDF)

Poster by Phil Wheeler, Julia Cooke, Kadmiel Maseyk and Trevor Collins.

Scholarship Shorts - video highlighting the different approach to teaching fieldwork using fieldcasting.

Video length: 5 mins 8 secs


Visualising the code: are students engaging with programming at level 1?

Project leader(s): 
Elaine Thomas, Soraya Kouadri Mostéfaoui and Helen Jefferis

This project investigated the impact of using a visual programming environment on student engagement with programming at Level 1.

Programming is a subject that many students find difficult and it may be particularly challenging for distance learning students working largely on their own. Many ideas have been put forward in the literature to explain why students struggle with programming, including: the relative unfamiliarity of computer programming or ‘radical novelty’ (Dijkstra, 1989), cognitive load (Shaffer, 2004) and that the whole learning environment may be influential (Scott & Ghinea, 2013).

Our case study was TU100 My digital life, a Level 1 undergraduate Computer Science module at the Open University. The rationale for this work stems from the need for an introductory Computing and IT module that engages students with widely differing levels of prior experience of programming and of education generally. In TU100, the module team introduced ‘Sense’, a visual programming environment based on Scratch (MIT, 2007), which is used in conjunction with an electronic device, the SenseBoard.

In the first phase of the project we analysed the grades of 6,159 students in the final assessment across six presentations of the module to identify student performance in the programming task, in comparison with their overall performance on the module. The aim was to explore whether there was any difference between student engagement with the programming task in comparison with non-programming tasks. Our results suggest that there is no significant difference in levels of engagement between these tasks, and it appears that success, or otherwise, in one type of task is a good predictor of engagement with the other tasks.

In the second phase of the project we analysed the textual comments made by students in the Student Experience on a Module (SEaM) survey from two recent presentations of TU100, using key words relating to programming. Just under 30% of students who made textual comments gave feedback about Sense or the programming teaching. A total of 22.2% of the students made positive comments about the use of Sense or the programming teaching generally and 7.6% of students’ comments were negative. Of the students who made negative comments, a small number had struggled with the programming, while others thought that the teaching was pitched at too low a level. However, the majority of student comments in this area suggest that they had enjoyed the programming elements.
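The keyword filtering described above can be sketched as follows. This is a minimal illustration, not the project's analysis code, and the keyword list is hypothetical (the study's actual keywords are not given here).

```java
import java.util.List;
import java.util.Locale;

/** Illustrative filter for free-text survey comments. */
public class CommentFilter {

    // Hypothetical keywords relating to programming in TU100.
    private static final List<String> KEYWORDS =
            List.of("sense", "programming", "code", "scratch");

    /** True if a comment mentions any programming-related keyword. */
    public static boolean mentionsProgramming(String comment) {
        String lower = comment.toLowerCase(Locale.ROOT);
        return KEYWORDS.stream().anyMatch(lower::contains);
    }

    public static void main(String[] args) {
        System.out.println(mentionsProgramming("I enjoyed the Sense activities")); // true
        System.out.println(mentionsProgramming("The tutorials were helpful"));     // false
    }
}
```

Comments selected this way would still need manual coding as positive or negative, as was done in the study.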

The visual programming language used at Level 1 has been successful in engaging students in the study of programming. This study provides a firm basis for a similar analysis of student performance on the new Level 1 modules, which use a visual programming language in the first module followed by Python in the second, and for examining how well students cope with Level 2 programming.

Related resources

Thomas, E., Kouadri Mostéfaoui, S. and Jefferis, H. (2019) Visualising the code: are students engaging with programming at Level 1? eSTEeM Final Report. (PDF)

Thomas, E., Kouadri Mostéfaoui, S. and Jefferis, H. (2018) Visualising the code: a study of student engagement with programming in a distance learning context. In: Proceedings of the 11th International Conference on Networked Learning 2018 (Bajić, M., Dohn, D. N., de Laat, M., Jandrić, P. and Ryberg, T. eds.), Springer.

Thomas, E., Kouadri Mostéfaoui, S. and Jefferis, H. (2018) Visualising the Code: An investigation of student engagement with programming in TU100. 7th eSTEeM Annual Conference, The Open University, Milton Keynes. (PDF)

Thomas, E., Kouadri Mostéfaoui, S. and Jefferis, H. (2017) Investigation of student engagement with programming in TU100: The impact of using a graphical programming environment? 6th eSTEeM Annual Conference, The Open University, Milton Keynes, 25-26 April 2017. (PDF)