Archive for the ‘student engagement’ Category

Why learning analytics have to be clever

Friday, March 28th, 2014

 

I am surprised that I haven’t posted the figure below previously, but I don’t think I have.

It shows the number of independent users (dark blue) and usages (light blue) on a question-by-question basis, so the light blue bars include repeat attempts at whole questions.

This is for a purely formative interactive computer-marked assignment (iCMA), and the usage drops off in exactly the same way for any purely formative quiz. Before you tell me that this iCMA is too long, I think I’d agree, but note that you get exactly the same attrition – both within and between iCMAs – if you split the questions up into separate iCMAs. In fact, the signposting in this iCMA was so good that the usage sometimes INCREASES from one question to the next, for sections (in this case chemistry!) that students find hard.

The point of this post, though, is to highlight the danger of saying that a student clicked on one question and therefore engaged with the iCMA. Just how many questions did that student attempt? How deeply did they engage? The situation becomes even more complicated when you consider that around another 100 students clicked on this iCMA but didn’t complete any questions (again, this is common for all purely formative quizzes). So be careful: clicks on a resource of any sort do not tell you much about engagement.
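To make the distinction concrete, here is a minimal sketch (in Python) of the kind of aggregation behind figures like this one. The event log, field names and numbers are all invented for illustration – this is not the actual iCMA data structure:

```python
from collections import defaultdict

# Hypothetical attempt log: one record per question usage.
# The field names and values are invented; the real iCMA data look different.
attempts = [
    {"student": "s001", "question": 1},
    {"student": "s001", "question": 1},   # a repeat of a whole question
    {"student": "s001", "question": 2},
    {"student": "s002", "question": 1},
]

users_per_question = defaultdict(set)     # dark blue bars: independent users
usages_per_question = defaultdict(int)    # light blue bars: all usages, repeats included
questions_per_student = defaultdict(set)  # a crude measure of depth of engagement

for a in attempts:
    users_per_question[a["question"]].add(a["student"])
    usages_per_question[a["question"]] += 1
    questions_per_student[a["student"]].add(a["question"])

for q in sorted(usages_per_question):
    print(f"Q{q}: {len(users_per_question[q])} users, {usages_per_question[q]} usages")

# "Clicked on the iCMA" is not the same as "engaged with the iCMA":
for s, qs in sorted(questions_per_student.items()):
    print(f"{s} attempted {len(qs)} question(s)")
```

The point is simply that independent users, total usages and questions-attempted-per-student are three different measures, and only the last says much about depth of engagement.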

Same assignment, different students 3

Friday, May 17th, 2013

You’ll be getting the idea…

The figures below show, for each question, the number of students who got it right at first attempt (yellow), second attempt (green), third attempt (blue), or not at all (maroon). So the total height of each bar represents the total number of students who completed each question.

You can spot the differences for yourself, and I’m sure you will be able to work out which module is which! However, I thought you’d like to know that questions 24-27 are on basic differential calculus. Obviously still some work to do there…
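In case it helps anyone doing something similar, here is a minimal sketch of how this sort of breakdown can be computed. The outcome records are invented for illustration; the real analysis works from the iCMA system’s records:

```python
from collections import Counter, defaultdict

# Hypothetical outcome records: for each student and question, the attempt at
# which they first got it right (1, 2 or 3), or None if they never did.
outcomes = [
    {"question": 24, "correct_at": 1},
    {"question": 24, "correct_at": 3},
    {"question": 24, "correct_at": None},
    {"question": 25, "correct_at": 2},
]

bars = defaultdict(Counter)
for o in outcomes:
    key = o["correct_at"] if o["correct_at"] is not None else "never"
    bars[o["question"]][key] += 1

# One stacked bar per question: right at 1st attempt (yellow), 2nd (green),
# 3rd (blue), or not at all (maroon); the total is the number of students
# who completed the question.
for q in sorted(bars):
    c = bars[q]
    print(q, c[1], c[2], c[3], c["never"], "total:", sum(c.values()))
```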

Same assignment, different students 2

Thursday, May 16th, 2013

Following on from my previous post, take a look at the two figures below. They show how students’ overall score on an iCMA varied with the date they submitted. These figures are for the same two assignments as in the previous post (very similar assignments, rather different students).

The top figure (above) is depressingly familiar. The students who submit early all do very well – they probably didn’t need to study the module at all! The rest are rushing to get the assignment done, just before the due date – and lots of them don’t do very well.

I am very pleased with the lower figure. Here students are doing the assignment steadily throughout the period it is available – and with the exception of a small number who were probably prompted to have a go on the due date by a reminder email we sent, they do pretty similarly irrespective of when they submit. This is how assignments should perform!
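For anyone wanting to reproduce this kind of plot, a minimal sketch is below. The submission records are invented; in reality the dates and scores come straight from the iCMA system:

```python
import datetime
import matplotlib.pyplot as plt

# Invented (date submitted, overall score) pairs, one per student.
submissions = [
    (datetime.date(2013, 2, 1), 95.0),
    (datetime.date(2013, 3, 1), 72.5),
    (datetime.date(2013, 3, 14), 48.0),
    (datetime.date(2013, 3, 15), 55.0),   # the rush just before the due date
]

dates = [d for d, _ in submissions]
scores = [s for _, s in submissions]

plt.scatter(dates, scores)
plt.xlabel("Date submitted")
plt.ylabel("Overall iCMA score (%)")
plt.show()
```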

I’m aware that my interpretation may seem simplistic, but we have other evidence that the first batch of students are overcommitted – they are also younger and have lower previous qualifications – so it all fits.

Finally, following yesterday’s JISC webinar on learning analytics I’m beginning to think that this is how I should be describing the work that I’ve previously categorised as ‘Question analysis’ and ‘Student engagement’. However we describe this sort of analysis, we must do more of it – it’s powerful stuff.

Same assignment, different students

Sunday, May 12th, 2013

I’ve written quite a lot previously about what you can learn about student misunderstandings and student engagement by looking at their use of computer-marked assignments. See my posts under ‘question analysis’ and ‘student engagement’.

Recently, I had cause to take this slightly further. We have two interactive computer-marked assignments (iCMAs) that test the same material and are known to be of very similar difficulty. Some of the questions in the two assignments are exactly the same; most are slightly different. So when we see very different patterns of use, this can be attributed to the fact that the two iCMAs are used on different modules, with very different student populations.

Compare the two figures shown below. These simply show the number of questions started (by all students) on each day that the assignment is open. The first figure shows a situation where most questions are not started until close to the cut-off date. The students are behind, struggling and driven by the due-date (I know some of these things from other evidence).

The second figure shows a situation in which most questions are started as soon as the iCMA opens – the students are ready and waiting! These students do better by various measures – more on this to follow.
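The counting behind both figures is just a tally of question-start events per day; a minimal sketch, with invented dates, is below:

```python
from collections import Counter
import datetime

# Invented question-start events: one date per question started by any student.
starts = [
    datetime.date(2013, 1, 20),
    datetime.date(2013, 1, 20),
    datetime.date(2013, 2, 27),
    datetime.date(2013, 2, 28),
    datetime.date(2013, 2, 28),
]

starts_per_day = Counter(starts)
for day in sorted(starts_per_day):    # one bar per day the iCMA is open
    print(day, starts_per_day[day])
```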

Science started here

Monday, August 20th, 2012

Sadly, the final presentation of S154 Science starts here has now ended. It was a 10-credit module, so it didn’t fit well with the 30-credit study intensity that is necessary for English students to get funding. But it was a lovely little module – popular with students and tutors alike, and highly effective in preparing students to study the longer S104 Exploring science.

S154’s assessment strategy was written to be complementary to that of S104, so it was at first something of a mystery when a ‘different’ student behaviour was observed on S154. The figures below show the way in which three individual (but typical) students interacted with a lightly-weighted S154 iCMA. The red dot indicates the date on which the student first engaged with the iCMA and the blue dots indicate subsequent interactions.
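For what it’s worth, the data behind each plot can be derived from a simple interaction log, as in the sketch below. The students and dates are invented:

```python
from collections import defaultdict
import datetime

# Invented interaction log: (student, date) each time a student touched the iCMA.
interactions = [
    ("student1", datetime.date(2011, 11, 28)),
    ("student1", datetime.date(2011, 11, 29)),
    ("student3", datetime.date(2011, 10, 5)),
    ("student3", datetime.date(2011, 10, 12)),
    ("student3", datetime.date(2011, 10, 19)),
]

by_student = defaultdict(list)
for student, day in interactions:
    by_student[student].append(day)

for student, days in sorted(by_student.items()):
    days.sort()
    first, subsequent = days[0], days[1:]   # red dot, then blue dots
    print(student, "first engaged:", first, "subsequent:", subsequent)
```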

‘Student 1’ is typical of many students, on S154 and all other modules – the last-minute merchants! (spot the iCMA’s cut-off date…). ‘Student 2’ is also typical of many students on many modules – the student starts the iCMA and completes the questions that they can; then, over the next week or so, they use the feedback to improve on their answers to the other questions.

‘Student 3’ is typical for S154 but not for other modules. How odd, I thought! But it turns out that there is a very simple explanation. S154 was an introductory module, with careful scaffolding. When students had completed Chapter 2 they were advised to attempt the first 4 questions on the iCMA; when they had completed Chapter 3 they were advised to attempt the next 2 questions; when they had completed Chapter 4 they were advised to attempt the final 4 questions. And that was exactly what they did! It surprises some people, but I have found on many occasions that students actually do what they think they are expected to do.

More about guessing and blank/repeated responses

Tuesday, February 7th, 2012

Depressingly, this post reports a similar finding to the last one.

For the question shown (one of a series of linked questions on the Maths for Science formative-only practice assessment), 62% of students are right at the first attempt but 22% remain incorrect after the two allowed responses. At the response level, whilst 60.2% of responses are correct, the other options are selected in approximately equal numbers. The details are below:

  • P > 0.1: 12.4% of responses
  • 0.1 > P > 0.05: 14.0% of responses
  • 0.05 > P > 0.01: 60.2% of responses (the correct option)
  • P < 0.01: 13.5% of responses

So what’s this saying?

Use of capital letters and full stops

Wednesday, November 30th, 2011

For the paper described in the previous post, I ended up deleting a section which described an investigation into whether student use of capital letters and full stops could be used as a proxy for writing in sentences and paragraphs. We looked at this because classifying student responses as ‘a phrase’, ‘a sentence’, ‘a paragraph’ and so on is a time-consuming and labour-intensive task – but spotting capital letters and full stops is easier (and can be automated!).
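A minimal sketch of the sort of automated check I mean is below – deliberately crude, with invented example responses:

```python
def starts_with_capital(response: str) -> bool:
    """Crude proxy: does the response begin with an upper-case letter?"""
    text = response.strip()
    return bool(text) and text[0].isalpha() and text[0].isupper()


def ends_with_full_stop(response: str) -> bool:
    """Crude proxy: does the response finish with a full stop?"""
    return response.strip().endswith(".")


# Invented example responses, just to show the checks in action.
for r in ["The sandstone was deposited in layers.", "deposited in layers"]:
    print(starts_with_capital(r), ends_with_full_stop(r), "-", r)
```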

I removed this section from the paper because the findings were somewhat inconclusive, but I was nevertheless surprised by how many responses finished with a full stop, and especially by the large number that started with a capital letter. See the table below, for a range of questions in a range of different uses (sometimes summative and sometimes not).

Question | Module/use | Responses starting with a capital letter (% of total) | Responses finishing with a full stop (% of total)
A-good-idea | AYRF | 1678 (60.9%) | 1118 (40.6%)
A-good-idea | S154 10J | 622 (60.0%) | 433 (41.8%)
Oil-on-water | S154 10J | 500 (53.9%) | 294 (31.7%)
Metamorphic | SXR103 10E | 297 (41.6%) | 166 (23.2%)
Sedimentary | SXR103 10E | 317 (39.9%) | 178 (22.4%)
Sandstone | S104 10B | 954 (58.2%) | 684 (41.7%)
Electric-force | S104 10B | 673 (56.7%) | 445 (37.5%)

Answers that were paragraphs were found to be very likely to start with a capital letter and end with a full stop; answers written in note form or as phrases were less likely to do so. Answers in the form of sentences were somewhere in between.

The other very interesting thing was that capital letters and full stops were both (sometimes significantly) associated with correct rather than incorrect responses.
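For anyone wanting to test an association like this, a chi-squared test on a contingency table is one standard approach (I’m not claiming this is exactly the analysis we ran); the counts below are entirely invented:

```python
from scipy.stats import chi2_contingency

# An entirely invented 2x2 contingency table: rows are responses that do / do not
# end with a full stop, columns are correct / incorrect responses.
table = [
    [300, 120],   # ends with a full stop: correct, incorrect
    [180, 150],   # no full stop: correct, incorrect
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p:.4g}")
```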

Student engagement with assessment and feedback: some lessons from short-answer free-text e-assessment questions

Wednesday, November 30th, 2011

Sorry for my long absence from this blog. Those of you who work in or are close to UK Higher Education will probably realise why – changes in the funding of higher education in England mean that the Open University (and no doubt others) are having to do a lot of work to revise our curriculum to fit. I’m sure that it will be great in the end, but the process we are going through at present is not much fun. I always work quite hard, but this is a whole new level – and I’m afraid my blog and my work in e-assessment are likely to suffer for the next few months.

It’s not all doom and gloom. I’ve had a paper published in Computers & Education (reference below), pulling together findings from our observation of students attempting short-answer free-text questions in a usability lab, and our detailed analysis of student responses to free-text questions – some aspects of which I have reported here. It’s a long paper, reflecting a substantial piece of work, so I am very pleased to have it published.

The reference is:

Jordan, S. (2012) Student engagement with assessment and feedback: some lessons from short-answer free-text e-assessment questions. Computers & Education, 58(2), 818-834.

The abstract is: 

Students were observed directly, in a usability laboratory, and indirectly, by means of an extensive evaluation of responses, as they attempted interactive computer-marked assessment questions that required free-text responses of up to 20 words and as they amended their responses after receiving feedback. This provided more general insight into the way in which students actually engage with assessment and feedback, which is not necessarily the same as their self-reported behaviour. Response length and type varied with whether the question was in summative, purely formative, or diagnostic use, with the question itself, and most significantly with students’ interpretation of what the question author was looking for. Feedback was most effective when it was understood by the student, tailored to the mistakes that they had made and when it prompted students rather than giving the answer. On some occasions, students appeared to respond to the computer as if it was a human marker, supporting the ‘computers as social actors’ hypothesis, whilst on other occasions students seemed very aware that they were being marked by a machine.

Do take a look if you’re interested.

iCMA statistics

Tuesday, April 19th, 2011

This work was originally reported on the website of COLMSCT (the Centre for Open Learning of Mathematics, Science, Computing and Technology) – and other work was reported on the piCETL (the Physics Innovations Centre for Excellence in Teaching and Learning) website. Unfortunately, the whole of the OpenCETL website had to be taken down. The bare bones are back (and I’m very grateful for this) but the detail isn’t, so I have decided to start re-reporting some of my previous findings here. This has the advantage of enabling me to update the reports as I go.

I’ll start by reporting a project on iCMA statistics which was carried out back in 2009, with funding from COLMSCT, by my daughter Helen Jordan (now doing a PhD in the Department of Statistics at the University of Warwick; at the time she did the work she was an undergraduate student of mathematics at the University of Cambridge). Follow the link for Helen’s project report, but I’ll try to report the headline details here – well, as much as I can understand them!

Repeated and blank responses

Wednesday, March 30th, 2011

The figure shown on the left requires a bit of explaining. The three columns represent student responses at 1st, 2nd and 3rd attempt to a short-answer free-text question in formative use. Green represents correct responses; red/orange/yellow represent incorrect responses. The reason I’ve used different colours here is to enable me to indicate repeated responses: where a colour is identical from column to column, an incorrect response given at the first or second attempt was repeated exactly at the second and/or third attempt. The colour grey represents responses that were completely blank. The figure shows that

  • at first attempt, four responses (0.9% of the total of 449) were blank;
  • at second attempt, 43 responses (17.8% of the total of 241) were identical with responses given at the first attempt, with 7 responses (2.9%) blank;
  • at third attempt, 54 responses (27.4% of the total of 197) were identical with responses given at the second attempt, with 3 responses (1.5%) blank.
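The classification itself is straightforward: each response at the second or third attempt is compared with the response at the previous attempt, and blank responses are flagged separately. A minimal sketch, with invented responses, is below:

```python
# Invented attempt sequences, one list of responses per student, in attempt order.
attempts_by_student = {
    "s001": ["because the rock cooled slowly", "because the rock cooled slowly", ""],
    "s002": ["osmosis", "diffusion of water molecules"],
}

for student, responses in attempts_by_student.items():
    for n, response in enumerate(responses, start=1):
        if not response.strip():
            label = "blank"
        elif n > 1 and response == responses[n - 2]:
            label = "repeated"   # identical to the response at the previous attempt
        else:
            label = "new"
        print(student, f"attempt {n}: {label}")
```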

Reasons given by students (in interviews) for leaving the response box blank and repeating responses include just wanting to get to the final worked example, not understanding the question, and not understanding the feedback.