The TELSTAR (Technology Enhanced Learning supporting STudents to achieve Academic Rigour) project ran from November 2008 to July 2010. It was run by the Open University and funded under the UK Higher Education Joint Information Systems Committee’s Institutional Innovation Programme (Phase 2 – Large-scale Institutional Exemplars). The project was carried out in collaboration with RefWorks-COS.
The key outputs of the project are:
- Open source code for integrating reference management tools into the Moodle online learning platform
- A toolkit for those looking at integrating reference management tools into their learning environment ‘ReMIT’, including a detailed description of the OU approach
For more details, the Final Report of the project may also be of interest, and is available on the TELSTAR project page on the JISC website.
Heather Sherman from Dawson’s talking about the ‘reading list’ functionality in their dawsonera application. Items can be added to a reading list from Dawson’s database of bibliographic records, and then turned into orders directly without re-keying data.
Can import from a spreadsheet – just need an ISBN basically.
At the moment just ‘book’ type data (but includes e-books), but looking at extending beyond this. Also interested in looking at possibility of an API or similar to make it possible to populate and/or publish reading lists.
Emerald have been looking at how to build ‘peer reviewed reading lists’. This is building reading lists – but currently from Emerald collection – although may look at including things from other publishers in the future.
This is a free of charge service (not a product) – something given back to the academic community. Intended to improve workflow, and assure quality via peer review.
List creation process:
- Identify key subjects
- Journal/book editor recruited
- Student recruited
- Student follows set methodology to create list (this is documented in a booklet)
- Editor reviews list, writes editorial
- Revised reading list and editorial submitted to Emerald
- Project team review list and request amendments before approving
- Lists distributed to faculty – i.e. to academics on the ground
- Measure usage
Case study carried out: a trial reading list was built on the syllabus of the ‘Services Marketing’ course taught by Dr Mei-Na Liao at Bradford School of Management. The list was created using a structure devised by the project team and peer-reviewed within the project, then sent to Dr Liao in July 2009. 21% of Dr Liao’s second reading list was taken from the Emerald list. Feedback was very positive.
Emerald see the benefits to end-users as:
- Workflow improved with this tailored teaching aid
- Complementary and personal service that adds value to Emerald’s products and provides a better service to end-users
- List reviewed and approved by experienced editors
- Value for money for librarians (increased usage of Emerald subscriptions)
- Opportunity to build relationships
Ian Mulvany going to talk about Mendeley, and particularly the new API… Ian started at Mendeley 3 weeks ago, so learning as he goes!
Mendeley – intends to “help researchers work smarter”. Desktop client software (on all main platforms – Win, Mac, Linux) – helps you manage your research papers. Also have a cloud based service – can ‘scrobble‘ usage from client, and also use cloud based storage for references and papers.
Mendeley now 16 months old – 400,000 users, and real-time data on 28m research papers. Using Hadoop (basically Open Source MapReduce).
Mendeley provide an API. You can register for an API key (invitation only until the end of July 2010) at http://www.mendeley.com/oapi, with documentation at http://www.mendeley.com/oapi/methods.
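As a rough illustration of what calling such an API might look like, here is a sketch in Python. The endpoint path, parameter names, and response shape below are assumptions for illustration only – the real method names are in the documentation linked above – and the example parses a canned response rather than making a live request.

```python
import json
from urllib.parse import urlencode

# Hypothetical base and endpoint path -- consult the Mendeley method
# documentation for the real ones.
BASE = "http://www.mendeley.com/oapi"

def build_search_url(terms, consumer_key):
    """Construct a (hypothetical) public document-search request URL."""
    query = urlencode({"consumer_key": consumer_key})
    return f"{BASE}/documents/search/{terms}/?{query}"

def parse_documents(payload):
    """Pull (title, year) pairs out of a JSON response body."""
    data = json.loads(payload)
    return [(d["title"], d.get("year")) for d in data.get("documents", [])]

# Canned response in the general shape such an API might return.
sample = '{"documents": [{"title": "An example paper", "year": 2009}]}'
```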
Finally Ian talking about a new graphical ‘build a reference style’ tool – which will be built into the next client release (October) – this will allow users to build new styles. Uses CSL behind the scenes, and once users have created new styles, they can share them back to a central list of styles.
Chris Clarke from Talis asks “How can we use collective intelligence to improve resource lists?”
Talis have a product, Talis Aspire – a hosted ‘resource list management’ service. Talis found resource list management was a collaboration between academics, library and students. Need all three involved – and academic engagement is key. Libraries also need to be able to get and manage stock. Talis wanted to avoid overheads – rekeying data etc. Also wanted to give students a very high quality experience.
‘Collective Intelligence’ – aggregating information across many users/uses to find patterns of use etc. and use this to generate information. E.g. ‘Which items are frequently referenced together by the experts?’; ‘If we buy this book, will learners actually use it?’; ‘What items do learners substitute when the guided resources are not available?’; ‘Can we guess the loan strategy upfront, instead of waiting for an item to be heavily borrowed?’
Can only do this across largish datasets – and Talis is able to aggregate over Talis Aspire customers who contribute their data. At the moment they have a trial dataset made up of 4 customers – but still millions of transactions.
Chris mentions Talis use MapReduce to process large quantities of data (this approach was developed by Google, although now there are open source implementations (Hadoop) and Amazon provide an elastic MapReduce service).
Four prototype APIs (all REST based):
- “Appears with” recommendations
  - Based on co-occurrences of items on resource lists. Academics who reference this, also reference…
- “Borrowed with” recommendations
  - Based on the patterns of what students actually borrow. Learners who borrowed this also borrowed…
- Show me how popular this item has been over time, across all institutions
- Which institutions actually have this item
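The “appears with” recommendation above boils down to counting co-occurrences of items across resource lists – the same word-count-style pattern that MapReduce scales to millions of transactions. A minimal sketch with toy data (item identifiers and list contents are invented):

```python
from collections import Counter
from itertools import combinations

# Toy resource lists. In Aspire this would be millions of transactions
# aggregated across contributing customers.
lists = [
    ["isbn:A", "isbn:B", "isbn:C"],
    ["isbn:A", "isbn:B"],
    ["isbn:B", "isbn:C"],
]

# Count every unordered pair appearing on the same list -- this is the
# "map and count" step a MapReduce job would distribute.
pair_counts = Counter()
for resource_list in lists:
    for a, b in combinations(sorted(set(resource_list)), 2):
        pair_counts[(a, b)] += 1

def appears_with(item, n=3):
    """Top-n items most frequently listed alongside `item`."""
    scores = Counter()
    for (a, b), count in pair_counts.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(n)]
```

The “borrowed with” API is the same computation run over loan transactions instead of list contents.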
Ben Charlton relating how List8D was a project started at a Dev8D event – aimed at developing a new ‘reading list’ system. It then got funding under the JISC Rapid Innovation programme, and is now being developed by the University of Kent for their live reading lists (hopefully from September). List8D has a Google Code page.
- Find resources
- Load Metadata
- Pass types (lists what types of resource the source contains – e.g. books, journals, etc.)
- Unique (says what unique key the source uses)
The interface can be restyled – but they generally advise sticking to basic changes like colours and logo, as the styling is a bit hacked together.
Matt also talking about applying ‘reference’ styling. Something they’ve not been able to spend a lot of time on but would love to do better – a very similar issue to TELSTAR here, I think.
For the next two days I’m at a ‘reading list hackday‘. This is a joint event between DevCSI, List8D and TELSTAR projects – all funded by JISC, and is definitely very much a ‘doing’ event – we are hoping to have people produce ideas, and realise some of those ideas as software – hopefully by the time we wrap up tomorrow.
What is a ‘Reading List’? Generally in an academic context it is a list of recommended or potential reading that tutors give to students. The format of these can vary wildly, and they can range in length from one book, to hundreds of books, articles, websites, etc. etc.
I’ve written quite a bit about how TELSTAR has been integrating reference management tools into Moodle, so what has this got to do with reading lists? The TELSTAR project treats the use of reading lists and the production of bibliographies in student essays as parts of the same workflow. Anecdotally we’ve found that the materials students are most likely to reference are those recommended by their tutors, so it makes a lot of sense to make it as easy as possible for students to record what has been recommended, what they have read, and finally what they are citing/referencing in their work.
So, part of the toolset TELSTAR has created is tools for tutors/lecturers to collect together lists of resources, and publish them on their course website. When they publish the lists, we can process the details to do a number of things such as adding links to online resources (using OpenURL) and providing a ‘styled’ version of the reference (whether that is in a formal citation style, or something simpler).
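Adding links via OpenURL, mentioned above, means encoding a reference’s metadata into a link resolver URL. A minimal sketch of building an OpenURL 1.0 (Z39.88-2004) link for a book – the resolver address is a made-up example, since each institution points at its own resolver:

```python
from urllib.parse import urlencode

# Hypothetical resolver address; each institution has its own.
RESOLVER = "http://resolver.example.ac.uk/openurl"

def book_openurl(title, isbn):
    """Build an OpenURL 1.0 (Z39.88-2004) KEV link for a book reference."""
    params = {
        "url_ver": "Z39.88-2004",                     # OpenURL version
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",   # referent is a book
        "rft.btitle": title,
        "rft.isbn": isbn,
    }
    return RESOLVER + "?" + urlencode(params)
```

The resolver then redirects the student to an appropriate copy – e.g. the library catalogue record or the e-book platform the institution subscribes to.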
I hope over the next couple of days we can get some ideas and even perhaps get some new developments to TELSTAR. However, the best thing we could get out of the event is the start of an activity developer and user community interested in using and developing the TELSTAR code.
The day has started with Mahendra Mahey (UKOLN and DevCSI) talking about the event – how it came about, and picking up on issues around reading lists – using examples raised on the newly established ‘Reading List Solutions‘ JISCMail list. Then each delegate was asked to give a ’60 second pitch’ outlining why they are here, and what they want to get out of the day. Mahendra summarised a number of themes coming out of discussion on the mailing lists as follows:
- Interoperability with several systems, particularly Library management systems (e.g. flagging items that are on list)
- Moving away from platform dependence
- Intended purpose and usefulness of reading lists
  - should/could read?
  - purchasing/collection management tool
- Academic vs Student created lists
- Lists as social network
- Reliable stable links / Keeping lists up to date
- Duplication of effort
- Metadata Magic
Last talk of the day from Kevin Ashley (from the Digital Curation Centre) – he says if you don’t know what data citation is now, he hopes he will be able to tell you why you should care about it and why it will be important in the future.
Kevin mentioning the DCC Curation Lifecycle Model – but today’s talk is focussing only on one aspect – Access, Use and Reuse.
So – why should we care about data citation? Kevin giving example of paper on LIDAR and RADAR images of ice clouds – in paper, only images – not the data used to create those images. Kevin showing how data can be misrepresented – showing graphs that don’t start at zero on one scale can lead to misleading conclusions.
So – data behind graphs can be very important. Kevin says that data used to support statements in publication should be as accessible as the publication – so statements and findings can be examined and challenged.
Kevin showing how you can misrepresent data – e.g. by taking a subset of results (that happens to favour a particular conclusion) – the data published is not always (all of) the data collected. Kevin mentioning a few texts on this – my favourite, which I was googling as he spoke, is ‘How to Lie with Statistics’ by Darrell Huff.
Kevin giving example of studying Biodiversity – requires many different data sources, some of which won’t be published, some of which won’t have been compiled through academic research…
All of these issues mean we really ought to care about data citation.
‘Data is Different’. With traditional bibliographic resources it has basically come from a ‘print’ paradigm – i.e. ‘published’ – we’ve moved online with many of these resources, but still fundamentally the same – you ‘publish’ something and then you cite it.
However, a data set may be added to on a continuing basis – a telescope may be collecting more and more data all the time. What you cite now may be different by tomorrow (Kevin draws a parallel to citing web resources like blogs).
So – approaches to dealing with this:
- Giving data digital object identifiers (e.g. datacite)
- Capturing data subsets at a point of publication
- Freezing those subsets somewhere
- Publication led
These approaches work well in certain areas:
- Dataverse (thedata.org) – submit your data, get a checksum (so you can check if it has changed since publication) and citation and publish
- Ebank/ecrystals – harvest, store, cite
- DataCite – working at national level with libraries and data centers
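The Dataverse-style checksum can be illustrated in a few lines: record a fingerprint of the data at publication time, and anyone can later verify that the cited data is byte-for-byte unchanged. (The toy dataset here is invented; SHA-256 is just one reasonable choice of hash.)

```python
import hashlib

def dataset_checksum(data: bytes) -> str:
    """Fingerprint recorded at publication/citation time."""
    return hashlib.sha256(data).hexdigest()

def unchanged(data: bytes, published_checksum: str) -> bool:
    """Verify the dataset is exactly what the citation saw."""
    return dataset_checksum(data) == published_checksum

# Fingerprint taken when the (toy) dataset was published.
published = dataset_checksum(b"temp,1.2\ntemp,1.3\n")
```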
However, data changes and can be very big – changing by the second, and petabytes in size. If you take a ‘publication’ approach, it may not be apparent that four different references to subsets of data are actually all part of the same dataset.
One way of dealing with the ‘big data’ issue – rather than making copies, keep change records, and create reference mechanisms that allow reference to a specific change point – Kevin mentioning Memento as a possible model for this.
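The change-record idea might be sketched like this (all names here are illustrative, not from any real system): keep an append-only log of changes, and cite a position in that log rather than a frozen copy of the data.

```python
# Minimal sketch: cite a change point in an append-only log instead of
# freezing a copy of a large, growing dataset.
class VersionedDataset:
    def __init__(self):
        self.log = []  # append-only change records

    def append(self, record):
        """Add a record; the returned position is a citable change point."""
        self.log.append(record)
        return len(self.log)

    def as_of(self, change_point):
        """Reconstruct the dataset exactly as a citation saw it."""
        return self.log[:change_point]

ds = VersionedDataset()
ds.append({"star": "A", "mag": 4.1})
cited_at = ds.append({"star": "B", "mag": 5.7})   # paper cites here
ds.append({"star": "C", "mag": 3.3})              # arrives after publication
```

A later reader calling `ds.as_of(cited_at)` sees only the first two records, even though the telescope has kept collecting.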
Another alternative is using ‘annotation’ rather than citations. When data sources have many (thousands) of contributors instead of citing data sources in publications, annotate data sources with publications. Example of ‘Mondrian’ approach where blocks of colour are assigned based on what types of annotation there are for different parts of the dataset. Turns data set into something that can be challenged in itself…
Kevin mentioning Buneman’s desiderata (see http://homepages.inf.ed.ac.uk/opb/homepagefiles/harmarnew.pdf)
Kevin concerned that the tools we have now aren’t quite ready for the challenges of data citation.
The first presentation after lunch was me, so you’ll have to wait for a blog post on ‘References on the Web’ until I get to write it up!
Now Helen Curtis going to talk about the links between Digital (and Information) literacy and Reference Management. [apologies to Helen I didn't capture this as well as I would have liked - she covered much more than I've got here] – her slides are at http://www.slideshare.net/helencurtis/
At the University of Wolverhampton, long history of using EndNote, used mainly by staff – researchers and postgraduates.
Few drivers to change this – in 2006 University introduced ‘Blended Learning Strategy’; seeing increased use of online resources; development of graduate attributes – including digital literacy. Also other drivers – impact of web technologies and growing concerns around academic conduct and plagiarism.
Role for reference management:
- significant for digital literacy
- use tools to develop information management skills
- less emphasis on learning particular s/w – more on behaviour and application
- Become much more relevant to undergraduate use
- new and emerging tools are web-based
Seeing move from personal list of references, aimed at researchers with large lists of references, to more flexible tools – sharing and collaboration becoming more significant.
- Teach principles of information and reference management
- Involvement in curriculum design/team teaching
- Linking use to assessment
- Using the tools to aid understanding of referencing and constructing a reference
- Using the tools as evidence of engagement with scholarly resources
- Exploiting the sharing collaboration features
Introduced a group of students to using EndNote Web – got very positive feedback – (paraphrased) ‘this was the first assignment where I didn’t lose marks on referencing’.
Most university courses offer some lists of ‘recommended reading’ to their students; in this session we’ve got three presentations on ‘reading list’ systems from librarians.
University of Plymouth: Aspire (Jayne Moss)
Wanted reading list system to help improve service to students, and manage stock better in library. Decided to work with Talis – felt they could work with the company.
The key features they were looking for were:
- had to be a tool for the academic
- had to be easy to use – intuitive
- designed to match the academic workflow
Worked with Talis, ran focus groups with academics, found out about the academic workflow – found a huge variety of practice. Boiled down to:
- Locate resource
- Capture details/Bookmark
- Create list
- Publish list
Integrated with DOI/Crossref lookup. Encourage academics to give access to librarians so they can check details etc.
Once you have list of ‘bookmarks’ can just drag them into reading list.
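The DOI/Crossref lookup mentioned above resolves a DOI to citation metadata. Crossref exposes a REST API for this; a sketch of the shape such a lookup takes, parsing a canned works-style response offline rather than making a live request (the sample record is invented):

```python
import json

def crossref_url(doi):
    """Crossref REST endpoint for a single work."""
    return f"https://api.crossref.org/works/{doi}"

def extract_citation(payload):
    """Pull the fields a bookmarking tool needs from a works response."""
    msg = json.loads(payload)["message"]
    authors = [f"{a['family']}, {a['given']}" for a in msg.get("author", [])]
    return {
        "title": msg["title"][0],
        "authors": authors,
        "doi": msg["DOI"],
    }

# Canned response in the general shape Crossref returns.
sample = json.dumps({"message": {
    "title": ["An Example Article"],
    "author": [{"given": "Jane", "family": "Doe"}],
    "DOI": "10.1234/example",
}})
```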
- very positive feedback
- easy to use
- links to lists embedded in their teaching site
- liked ability to add notes (which are private to them)
Students can also tag items – although Jayne not convinced this is used much
- Displays availability taken from universities library catalogue – much easier than in catalogue interface!
- Great way of engaging faculty
- Getting accurate reading lists
- Developed good relationship with Talis
- Get to influence ongoing development (e.g. link from adding item to reading list to creating order in library acquisitions system)
- Aspire built on semantic tech
  - enable academics to build ‘better’ lists
  - enable students to collaborate and connect lists – e.g. create an annotated bibliography
  - smarter workflows – e.g. link to library acquisitions
University of Lincoln: LearnBuild LibraryLink (Paul Stainthorp)
Paul reflecting that Lincoln only partially successful in implementing ‘reading lists’.
University of Lincoln – bought reading list system, funds were only available for short period, so had limited time to assess full requirements and how far chosen product met their requirements.
- filled a void
- improved consistency
- gave library an ‘in’ on launch of new VLE (Blackboard)
- hundreds of modules linked in by 2000
- students are using them – have usage stats from both LearnBuild and Blackboard
- some simple stock-demand prediction
Unfortunately there were quite a few areas not so successful:
- not intuitive; time-consuming
- software not being developed
- no community of users
- competing developments (EPrints, digitisation, OPAC, RefWorks)
- too closely linked to Blackboard module system
- Subject librarians don’t like it, but lack of uptake from academics means that it is the subject librarians who end up doing the work.
However, unless the library can demonstrate success, it is unlikely to get money to buy a better system… so the library is putting more effort into making it work.
Paul saying because they are in this situation, they have been thinking laterally, and going to come at it from a different angle. Library has an opportunity to do some ‘free’ development work – funding with no strings attached.
Created “Jerome” (patron saint of libraries) – a library unproject.
Taking some inspiration from the TELSTAR project (yay) – hope to use RefWorks webservices and regain some control for the library
The Open University: TELSTAR (Anna Hvass)
Anna talking about traditional course production mechanism at the OU – printed materials written and sent out to students. Although more delivery online now, still a huge team of people involved in writing and delivering an OU course – from writers to editors to media producers to librarians. Can take anything up to 2 years to produce a course.
Currently when creating resource lists there is a huge variation of practice – every course, faculty and librarian can have a different approach! Until TELSTAR there were several tools that could be used – but not integrated together, and not used consistently.
TELSTAR developed ‘MyReferences’ – a place where you can collect references, create bibliographies etc. Can also run ‘reference reports’, which give you previews of what references will look like in the course website.
You can create ‘shared accounts’ in MyReferences which you can use to share a single account between whole course team. Also include librarian, editors, etc. in shared accounts.
Can create and edit references. Once finished, the list can be pulled through into the course website, where references display with links to online resources. Students can also export references from lists in the course website – adding references to blogs, forum posts etc. using the ‘Collaborative activities’ export, exporting them to their own ‘MyReferences’ account, or exporting them to other packages via RIS files.
Once student has collected references in MyReferences they can create bibliographies etc.
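The RIS export mentioned above is a simple tagged text format (two-letter tag, two spaces, a hyphen, then the value, with `ER` closing each record). A minimal sketch of serialising one reference – the field selection and sample record are illustrative, not TELSTAR’s actual implementation:

```python
def to_ris(ref):
    """Serialise one reference as a RIS record (tag + '  - ' + value)."""
    lines = [f"TY  - {ref['type']}"]          # type must come first
    for author in ref.get("authors", []):
        lines.append(f"AU  - {author}")        # one AU line per author
    lines.append(f"TI  - {ref['title']}")
    if "year" in ref:
        lines.append(f"PY  - {ref['year']}")
    lines.append("ER  - ")                     # end-of-record marker
    return "\n".join(lines)

record = to_ris({
    "type": "BOOK",
    "authors": ["Huff, Darrell"],
    "title": "How to Lie with Statistics",
    "year": 1954,
})
```

Because RIS is so widely supported, a file like this can be imported into EndNote, RefWorks, Zotero, Mendeley and most other reference managers.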
- Makes it easier for course teams to work together – and gives them control which they like
- Once you have course teams working on lists together, many other aspects of library integration into courses come more easily
- Students don’t have to go to another system or another login to use it
Positive feedback from students and staff so far. Now looking at further developments – and at keeping on selling it to course teams!