In our report on Social Learning Analytics, we discuss social content indexing technologies, including image analysis. In this blog post, Suzanne Little provides a bit more insight into the rationale for exploring this…

It’s almost a truism that educational content these days is more than just text or the spoken word. Exciting and effective learning materials contain diagrams, illustrations, photographs, presentations, audio and video. Courses are delivered via broadcasts, streaming video, online slide sharing, interactive games and collaborative forums. The Open University, in particular, has a very rich archive of multimedia educational resources to offer, including videos, photographs, slideshow-based presentations, bundled educational archives and web pages.

Traditionally you would discover this type of material through a curated index built by librarians and educators, who would guide you to useful resources depending on your question or learning goal. This might be through formal metadata in a library system or specific links given in a course outline. The information age opened up resources by indexing the text content, which could then be searched by supplying a keyword or phrase (à la Google) that the learner thinks best describes what they are looking for.

Of course this puts a burden on the learner to have enough understanding of both the topic and the type of available material to craft a good search term. With the ever-increasing volumes of educational resources being made available, it is a challenge to find new material and forge appropriate learning pathways. The SocialLearn project is helping learners by developing tools to support the building and exploration of personal learning networks created with help from a learner's peers. But how can we make it easier for learners (and educators developing course material) to find resources that aren't well described using text, such as images, audio and video, particularly where material is reused in other contexts?

Visual search (or content-based multimedia indexing) can help when it is difficult to describe your interests in words (“search terms”) or when you want to browse for inspiration without a specific result in mind. Users can then find reuse of material in different contexts with different supporting materials, discover the source of a screenshot, or find items that share visual features and may provide new ways of understanding a concept. For example, starting from a set of slides, a visual search can identify a video of the lecture in which those slides are displayed; starting from a screenshot, the original source video or document can be identified. The integration of visual search with traditional search methods and social-network-based learning support provides exciting new ways to develop and explore learning pathways.
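To make the idea of content-based matching concrete, here is a minimal sketch (not the indexing technology used in the project, which relies on more sophisticated visual features) of how two images can be compared by content rather than by text. It reduces each image to a small "average hash" fingerprint and measures how many bits differ; the file names are purely illustrative.

```python
# Minimal illustration of content-based image matching via an "average hash":
# each image becomes a 64-bit fingerprint, and visually similar images
# (e.g. a slide and a video frame showing that slide) differ in only a few bits.
from PIL import Image

def average_hash(path, hash_size=8):
    """Shrink to a small greyscale grid and threshold each pixel at the mean."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming_distance(hash_a, hash_b):
    """Count differing bits; a smaller distance means more visually similar."""
    return sum(a != b for a, b in zip(hash_a, hash_b))

# Example: compare a slide image with a frame grabbed from a lecture video
# (file names are hypothetical).
slide_hash = average_hash("slide_04.png")
frame_hash = average_hash("lecture_frame_1032.png")
if hamming_distance(slide_hash, frame_hash) <= 5:  # threshold chosen for illustration
    print("These images are probably showing the same slide.")
```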

In the Multimedia Information Retrieval Group at the OU’s Knowledge Media Institute, we have been researching multimedia information retrieval and visual search for educational resources, and we have started to integrate this work with the SocialLearn platform. Suzanne Little will be presenting this work at the World Conference on Educational Multimedia (EdMedia) in Lisbon, Portugal next week (June 30th, 2pm), based on the paper “Navigating and Discovering Educational Materials through Visual Similarity Search”.

Image search in SocialLearn (interface mockup)

We’ve laid the plumbing that connects the image indexing and search technology with SocialLearn, and we have some proof-of-concept demos. This interface mockup shows how the indexing technology is surfaced through the SocialLearn Backpack, the toolbar that can be activated while browsing the web to access SocialLearn facilities. Images on the website have been extracted, and the user can select one of them to initiate a search for related images. A social learning dimension kicks in when, for instance, a learner’s social network is used to prioritise indexing, linked data from other OU datasets is used to infer potentially relevant sources, the learner’s own navigation history is mined to remind them where they have encountered the image before, or discourse analytics are used to present that image from a perspective different from the learner’s own.
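As a rough illustration of the kind of plumbing described above (an in-memory stand-in, not the actual SocialLearn service or its APIs), the sketch below reuses the fingerprint functions from the earlier example: images extracted while browsing are fingerprinted and added to an index, and selecting an image triggers a nearest-neighbour lookup that returns visually related resources. The class, file names and URLs are all hypothetical.

```python
# Illustrative in-memory index: fingerprints of harvested images are stored
# alongside the URL of the resource they came from, and a query image is
# matched against them by Hamming distance.
class ImageIndex:
    def __init__(self):
        self.entries = []  # list of (fingerprint, resource_url) pairs

    def add(self, image_path, resource_url):
        self.entries.append((average_hash(image_path), resource_url))

    def search(self, image_path, max_distance=8):
        query = average_hash(image_path)
        matches = [(hamming_distance(query, fp), url) for fp, url in self.entries]
        return sorted(m for m in matches if m[0] <= max_distance)

# Index images harvested from educational resources (paths and URLs hypothetical).
index = ImageIndex()
index.add("ou_video_frame_17.png", "http://example.open.ac.uk/video/lecture-3")
index.add("course_diagram.png", "http://example.open.ac.uk/course/t175/unit2")

# The learner selects an image on a web page; related resources are returned.
for distance, url in index.search("selected_image.png"):
    print(distance, url)
```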