Recaps the Big Issues in Mobile Learning workshop in June last year in Nottingham, where evaluation emerged as one of the big issues. He’s now doing a whistle-stop tour of mobile learning, highlighting the evidence that mobile learning can be effective. But now he gets to the key points – how did we collect this evidence? Attitude surveys, interviews, observations etc. But always the same method – we handed out some PDAs, we sent the kids out to observe some birds and they came back and said that they’d enjoyed it.
So what are the issues in evaluating mobile learning? Mobility – tracking across locations. The fact that it may be distributed across participants in different locations. It may be informal, which makes it difficult to distinguish learning from other activities. It may be extended – so how can we evaluate long-term learning? It may involve a variety of personal and institutional technologies. There may be specific ethical issues as well.
One thing you need to establish is: what do you want to know?
Usability… there are well-tried usability methods. Hmm, yes, but why are we looking at usability? Is this not returning the focus to the technology? I suppose if you are looking toward eventually establishing which technologies work, then this is important.
Usefulness… this is difficult because it depends on the educational aims and context. He lists a few methods, such as field-based interviews, observations and walkthroughs followed up by ethnographic analysis, and critical incident studies including focus group replay. Also mentions logbooks and diaries. However, he hasn’t mentioned Participatory Video. Maybe I’m just a bit excited by this because I’ve just been to a workshop about it, but I think it has definite potential as a method for researching informal learning.
Attitude… Mike feels that general attitude surveys are of little use: almost all innovations are rated between 3.5 and 4.5 on a 5-point Likert scale. He mentions the Microsoft desirability toolkit (a set of cards; a slightly more sophisticated way of measuring attitudes).
He’s now going through the case studies: the Student Learning Organiser, MyArtSpace and PI, the new project run jointly by the OU in Milton Keynes and Nottingham University.
Learning organiser: methods used
· Focus groups
· Videotaped interactions
MyArtSpace – the aim was to engage children in a traditional school museum visit. An enquiry-based approach with co-creation of content was selected. 2006 – 3 test sites with 2000 children (maybe 3000). Children used the phone to create and share a record of their experiences of the museum.
Evaluation methods: micro level – usability issues; meso level – educational issues; macro level – organisational issues.
At each level they looked at what was supposed to happen, what actually happened and the differences between what was expected and what actually occurred.
Problems – kids loved collecting things, but teachers found that with more than 4 or 5 objects it became unmanageable. Do you constrain collection, and then attempt to manage it back in the classroom? Part of the learning experience is how you manage the complexity of the interaction. This is a learning opportunity, but it creates something of a conflict. Also, the museum saw it as another organisational burden.
The PI project – ethics were quite important, certainly when tracking children’s use of technology outside of the formal framework. You need their agreement and participation. Hmmm – participatory video again?
Eileen Scanlon asks: what do you think is the hardest thing about evaluating mobile technologies?
Mike Sharples – finding evidence of learning is the most difficult. You can find evidence that people are doing things, but it is really difficult to capture evidence of procedural or conceptual learning as it is happening. Most papers he’s read have captured everything else except whether people are learning.