
Posts Tagged → telstar

Linking Reading lists to Acquisitions

Heather Sherman from Dawson’s talking about how they have ‘reading list’ functionality in their dawsonenter application. Items can be added to the reading list from Dawson’s database of bibliographic records, and then turned into orders directly without re-keying data.

Can import from a spreadsheet – just need an ISBN basically.

At the moment just ‘book’ type data (but includes e-books), but looking at extending beyond this. Also interested in looking at possibility of an API or similar to make it possible to populate and/or publish reading lists.

Emerald ‘Reading Lists’

Emerald have been looking at how to build ‘peer reviewed reading lists’. Currently these lists draw only on the Emerald collection, although they may look at including material from other publishers in the future.

This is a free of charge service (not a product) – something given back to the academic community. It is intended to improve workflow, and to assure quality via peer review.

List creation process:

  • Identify key subjects
  • Journal/book editor recruited
  • Student recruited
  • Student follows set methodology to create list (this is documented in a booklet)
  • Editor reviews list, writes editorial
  • Revised reading list and editorial submitted to Emerald
  • Project team review list and request amendments before approving
  • Lists distributed to faculty – i.e. to academics on the ground
  • Measure usage

Case study carried out. Trial reading list built on the syllabus of the ‘Services Marketing’ course taught by Dr Mei-Na Liao at Bradford School of Management. The list was created using a structure devised by the project team, and peer-reviewed within the project. The list was sent to Prof. Liao in July 2009; 21% of Prof. Liao’s second reading list was taken from the Emerald list. Feedback was very positive.

Emerald see the benefits to end-users as:

  • Workflow improved with this tailored teaching aid
  • Complementary and personal service that adds value to Emerald’s products and provides a better service to end-users
  • List reviewed and approved by experienced editors
  • Value for money for librarians (increased usage of Emerald subscriptions)
  • Opportunity to build relationships


Ian Mulvany going to talk about Mendeley, and particularly the new API… Ian started at Mendeley 3 weeks ago, so learning as he goes!

Mendeley – intends to “help researchers work smarter”. Desktop client software (on all main platforms – Win, Mac, Linux) – helps you manage your research papers. They also have a cloud based service – you can ‘scrobble’ usage from the client, and also use cloud based storage for references and papers.

Mendeley is now 16 months old – 400,000 users, and real-time data on 28m research papers. Using Hadoop (an open source implementation of MapReduce).

Mendeley provide an API. You can register for an API key (invitation only until the end of July 2010) at http://www.mendeley.com/oapi; documentation is at http://www.mendeley.com/oapi/methods.
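
As a rough illustration of how a key-based REST API like this is typically called, here is a sketch that builds a request URL against the base above. The method path and the `consumer_key` query parameter are my assumptions for illustration, not taken from the Mendeley documentation.

```python
from urllib.parse import quote

MENDELEY_OAPI_BASE = "http://www.mendeley.com/oapi"  # base URL given in the post

def oapi_url(method_path, api_key):
    """Build a request URL for a hypothetical OAPI method.

    Both the method path and the 'consumer_key' parameter name are
    assumptions -- check /oapi/methods for the real calling convention.
    """
    return "%s/%s?consumer_key=%s" % (
        MENDELEY_OAPI_BASE,
        method_path.strip("/"),
        quote(api_key, safe=""),
    )
```

A client would then issue a plain HTTP GET against the returned URL.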

Finally Ian talking about a new graphical ‘build a reference style’ tool – which will be built into next client (October) – this will allow users to build new styles. Uses CSL behind the scenes and once users have created new styles, they can share back to central list of styles.

Resource Lists and Collective Intelligence

Chris Clarke from Talis asks “How can we use collective intelligence to improve resource lists?”

Talis have a product, Talis Aspire – a hosted ‘resource list management’ service. Talis found resource list management is a collaboration between academics, library and students. All three need to be involved – and academic engagement is key. Libraries also need to be able to get and manage stock. Talis wanted to avoid overheads – rekeying data etc. They also wanted to give students a very high quality experience.

‘Collective Intelligence’ – aggregating information across many users/uses to find patterns of use etc. and use this to generate information. E.g. ‘Which items are frequently referenced together by the experts?’; ‘If we buy this book, will learners actually use it?’; ‘What items do learners substitute when the guided resources are not available?’; ‘Can we guess the loan strategy upfront, instead of waiting for an item to be heavily borrowed?’

Can only do this across largish datasets – and Talis is able to aggregate over Talis Aspire customers who contribute their data. At the moment they have a trial dataset made up of data from 4 customers – but still millions of transactions.

Chris mentions Talis use MapReduce to process large quantities of data (this approach was developed by Google, although there are now open source implementations such as Hadoop, and Amazon provides an Elastic MapReduce service).

Four prototype APIs (all REST based):

  • “Appears with” recommendations
    • Based on co-occurrences of items on resource lists. Academics who reference this, also reference…
  • “Borrowed with” recommendations
    • Based on the patterns of what students actually borrow. Learners who borrowed this also borrowed…
  • Loans
    • Show me how popular this item has been over time, across all institutions
  • Holdings
    • Which institutions actually have this item
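
The ‘appears with’ idea above can be sketched as a plain co-occurrence count over resource lists. This is illustrative only – Talis run this kind of aggregation at scale with MapReduce – and the item identifiers are invented.

```python
from collections import Counter
from itertools import combinations

def appears_with(resource_lists):
    """Count how often each pair of items co-occurs on a resource list."""
    pairs = Counter()
    for items in resource_lists:
        # sorted() gives each unordered pair one canonical key
        for a, b in combinations(sorted(set(items)), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(pairs, item, n=3):
    """Top-n items most often appearing alongside `item`."""
    scores = Counter()
    for (a, b), count in pairs.items():
        if a == item:
            scores[b] += count
        elif b == item:
            scores[a] += count
    return [other for other, _ in scores.most_common(n)]
```

Given the lists `[["isbn1", "isbn2", "isbn3"], ["isbn1", "isbn2"]]`, `recommend(pairs, "isbn1")` ranks `isbn2` above `isbn3`, since the first pair co-occurs twice.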

Chris has posted further description of the API functions to the Reading List Solutions JISCMail list.


Ben Charlton relating how List8D was a project started at a Dev8D event – aimed at developing a new ‘reading list’ system. They then got funding under the JISC Rapid Innovation programme, and it is now being developed by the University of Kent for their live reading lists (hopefully from September). List8D has a Google Code page.

Now Matt Spence demonstrating the basic functionality. List8D has a very nice looking interface (although Ben notes that the use of complex CSS and JavaScript creates performance issues in IE). List8D allows you to search several sources at once to add items – with a hierarchy, so you can display e.g. library catalogue results first. It also includes some admin functions – e.g. a ‘request a scan’ feature to get an item digitised for online delivery (nice idea), and a ‘note for librarian’.

Some technical details – built using the Zend framework. Connections to other systems are handled via bits of code the List8D project christened ‘Metatrons’ – each metatron has a few functions:

  • Find resources
  • Load Metadata
  • Pass types (lists what types of resource the source contains – e.g. books, journals, etc.)
  • Unique (says what unique key the source uses)
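
List8D itself is built in PHP on Zend, but the shape of this connector interface can be sketched language-neutrally. The method names and the toy catalogue below are my own rendering of the four functions listed above, not the project’s actual API.

```python
from abc import ABC, abstractmethod

class Metatron(ABC):
    """Connector to one external source (library catalogue, journal index, ...)."""

    @abstractmethod
    def find_resources(self, query):
        """Search the source and return matching resource identifiers."""

    @abstractmethod
    def load_metadata(self, resource_id):
        """Fetch full metadata for one resource."""

    @abstractmethod
    def pass_types(self):
        """List the resource types the source contains (books, journals, ...)."""

    @abstractmethod
    def unique(self):
        """Name the unique key the source uses (e.g. 'isbn')."""

class CatalogueMetatron(Metatron):
    """Toy implementation backed by an in-memory dict of invented records."""

    RECORDS = {"isbn:123": {"title": "Example Book", "type": "book"}}

    def find_resources(self, query):
        return [rid for rid, rec in self.RECORDS.items()
                if query.lower() in rec["title"].lower()]

    def load_metadata(self, resource_id):
        return self.RECORDS[resource_id]

    def pass_types(self):
        return ["book"]

    def unique(self):
        return "isbn"
```

The list system can then search every registered metatron with one loop, displaying results in whatever hierarchy it prefers.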

The interface can be restyled – but they generally advise sticking to basic changes such as colours and logo, as the styling is a bit hacked together.

Matt also talking about applying ‘reference’ styling. Something that they’ve not been able to spend a lot of time on but would love to be able to do better – very similar issue to Telstar here I think.

Reading List Hackday

For the next two days I’m at a ‘reading list hackday’. This is a joint event between the DevCSI, List8D and TELSTAR projects – all funded by JISC – and is very much a ‘doing’ event: we are hoping to have people produce ideas, and realise some of those ideas as software, hopefully by the time we wrap up tomorrow.

What is a ‘Reading List’? Generally in an academic context it is a list of recommended or potential reading that tutors give to students. The format of these can vary wildly, and they can range in length from one book, to hundreds of books, articles, websites, etc. etc.

I’ve written quite a bit about how TELSTAR has been integrating reference management tools into Moodle, so what has this got to do with Reading Lists? The TELSTAR project has seen the use of reading lists, and the production of bibliographies in student essays as all part of the same workflow. Anecdotally we’ve found that the materials that students are most likely to reference are those that they’ve been recommended by their tutors, so it makes a lot of sense to make it as easy as possible for students to make a record of what has been recommended, what they have read, and finally what they are citing/referencing in their work.

So, part of the toolset TELSTAR has created is tools for tutors/lecturers to collect together lists of resources, and publish them on their course website. When they publish the lists, we can process the details to do a number of things such as adding links to online resources (using OpenURL) and providing a ‘styled’ version of the reference (whether that is in a formal citation style, or something simpler).

I hope over the next couple of days we can get some ideas and even perhaps some new developments to TELSTAR. However, the best thing we could get out of the event is the start of an active developer and user community interested in using and developing the TELSTAR code.

The day has started with Mahendra Mahey (UKOLN and DevCSI) talking about the event – how it came about – and picking up on issues around reading lists, using examples raised on the newly established ‘Reading List Solutions’ JISCMail list. Then each delegate was asked to give a ’60 second pitch’ outlining why they are here, and what they want to get out of the day. Mahendra summarised a number of themes coming out of discussion on the mailing list as follows:

  • Interoperability with several systems, particularly Library management systems (e.g. flagging items that are on list)
  • Moving away from platform dependence
  • Intended purpose and usefulness of reading lists
    • should/could read?
    • purchasing/collection management tool
  • Academic vs Student created lists
  • Lists as social network
  • Reliable stable links / Keeping lists up to date
  • Duplication of effort
  • Metadata Magic

Linking and Persistence

I’m currently working on a deliverable which relates to the provision of ‘persistent links’ to resources. This is part of that report and I’d be interested in feedback. As well as the text I’ve included some specific questions at the end – I’d be very interested in responses:

When providing links to online resources it is clearly desirable that the links will work over long periods of time. However, it is common for resources to be identified and located by multiple URLs over time. This creates a challenge when forming a reference to an online resource.

This report will not attempt to cover all aspects of persistent identifiers, which are well covered elsewhere, particularly by Emma Tonkin’s 2008 article on the topic in Ariadne. However, it will consider approaches discussed within the TELSTAR project.

Digital Object Identifiers (DOIs)

A DOI name “provides a means of persistently identifying a piece of intellectual property on a digital network and associating it with related current data in a structured extensible way.” (from http://www.doi.org/faq.html#1)

On the web, a given DOI can be ‘resolved’ via a DOI System proxy server – the most commonly used being http://dx.doi.org. A DOI can be resolved by appending the DOI to the proxy server URL. For example:

DOI Name: doi:10.10.123/456
URL for resolution: http://dx.doi.org/10.10.123/456
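
Since the resolution rule is purely mechanical, it can be expressed as a one-line helper (the DOI below is the example from the text, not a real DOI):

```python
DOI_PROXY = "http://dx.doi.org/"  # the most commonly used DOI System proxy

def doi_to_url(doi_name):
    """Turn a DOI name into a resolvable URL by appending it to the proxy."""
    # strip an optional "doi:" prefix before appending
    if doi_name.lower().startswith("doi:"):
        doi_name = doi_name[4:]
    return DOI_PROXY + doi_name
```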

In the majority of cases such a URL will resolve to the full text of the resource on a publishers website. However, there are examples of a DOI resolving to other services – such as a page listing a number of different URLs for the identified resource when it is available through multiple routes.

DOIs are being widely adopted to identify journal articles, with a smaller amount of use to identify books, book chapters and other types of resource (see http://www.crossref.org/06members/53status.html for a breakdown of the different resources being identified by DOIs). The DOI has become part of some commonly used citation styles such as APA.

Linking to online versions of articles using the DOI has a major drawback. Because the standard behaviour of DOI resolution services is to link to the ‘publisher’ version of the paper, it does not take into account the ‘appropriate copy’ problem. In brief, the ‘appropriate copy’ problem is that there may be a number of different routes to a resource, but typically members of an institution will only be able to use a subset of those routes, depending on institutional subscriptions and services. It was the ‘appropriate copy’ problem that led to the development of the OpenURL standard.

PURLs (Persistent URLs)

A PURL is “an address on the World Wide Web that points to other Web resources. If a Web resource changes location (and hence URL), a PURL pointing to it can be updated.” (from http://purl.oclc.org/docs/faq.html#toc1.5)

PURLs were created in recognition that web resources can change location (and so URL) . A PURL can be assigned to a web resource and if the web resource changes location the PURL can be updated to point to the new location (URL) for the resource.

PURLs can be created through the use of appropriate software, either by hosting the software or by using a public PURL server such as that hosted by OCLC.


OpenURL

Unlike DOIs and PURLs, OpenURLs are not specifically persistent identifiers for a resource. The OpenURL framework standard (ANSI/NISO Z39.88) enables the creation of applications that transfer packages of information over a network. The only significant implementation of the standard is the transfer of metadata related to bibliographic resources.

OpenURL has seen widespread adoption by University Libraries in combination with ‘OpenURL resolver’ software. This ‘resolver’ software typically uses the metadata available from an OpenURL (transported over http) and provides a link to the ‘appropriate copy’ based on the library’s subscription information.

OpenURLs are also commonly used by ejournal platforms to enable inbound links to specific resources (typically journal articles).

As the metadata related to a publication tends to be persistent over time OpenURLs can be seen as ‘persistent’ in one sense. However, OpenURLs in themselves simply provide a transport mechanism for metadata, and how they are ‘resolved’ and what they resolve to depends on the resolver software and the information available to that resolver. This means the result of resolving an OpenURL can change over time.
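
For illustration, here is how a KEV (key/encoded-value) OpenURL describing a journal article might be constructed. The resolver base URL is a made-up institutional address; the `ctx_ver` and `rft.*` keys follow the Z39.88-2004 KEV journal format.

```python
from urllib.parse import urlencode

def journal_openurl(resolver_base, **article):
    """Build an OpenURL 1.0 KEV query describing a journal article.

    Metadata keys (atitle, jtitle, volume, spage, ...) are mapped into
    the rft.* namespace defined by the KEV journal format.
    """
    params = {
        "ctx_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    }
    params.update(("rft." + key, value) for key, value in article.items())
    return resolver_base + "?" + urlencode(params)
```

Because the same metadata can be sent to any institution’s resolver simply by swapping `resolver_base`, each library can direct its own users to its own ‘appropriate copy’.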

Managed URLs

It is possible to enable ‘persistence’ of links to online resources by introducing and managing a level of redirection. Using a ‘managed’ URL which in turn redirects to the location of the resource it is possible to then use the managed URL in place of the current location of the resource. If the resource is moved the managed URL can be updated to point at the new location of the resource.

The Open University currently uses a number of different types of Managed URLs depending on the type of resource being linked to. These mechanisms are described below in the section on the “Current Linking strategy at the Open University”.

[the following paragraphs are not part of the report, but conclude with some questions which I’m looking for answers to, so comments would be welcome]

An example of a ‘managed URL’ at the Open University is the use of a system called ROUTES. ROUTES is an implementation of the Index+ software from System Simulation.

This is used to give a ‘managed URL’ to freely available web resources. When a resource is added to ROUTES, its URL is recorded in the record. For example see the ROUTES record for the BBC Homepage.

Once a resource has been added to ROUTES, a ROUTES URL is used in place of the resource’s primary URL in Open University course material. This ROUTES URL returns an HTTP 302 status (i.e. a redirect) to the resource’s primary URL as recorded in ROUTES. Then, if the resource moves in the future, the ROUTES record can be updated, but the ROUTES URL being used in OU course material does not change.
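
The redirect mechanism can be sketched as a lookup table plus a 302 response. The table entry below is invented for illustration; real ROUTES records obviously live in a database rather than a dict.

```python
# Toy stand-in for the ROUTES database: managed path -> current primary URL.
MANAGED_URLS = {
    "/routes/bbc-homepage": "http://www.bbc.co.uk/",  # hypothetical entry
}

def resolve_managed_url(path):
    """Return (http_status, location) for a request to a managed URL.

    A known path gets a 302 redirect to the recorded primary URL;
    updating the table entry 'moves' every published link at once.
    """
    target = MANAGED_URLS.get(path)
    if target is None:
        return 404, None
    return 302, target
```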

So, my questions are:

  • Can we talk about ROUTES URLs as PURLs, or are there important differences between what the PURL software is doing and what ROUTES does?
  • If so, what are these differences?
  • Does the more generic term ‘managed URL’ fit the bill?

Service Usage Model (SUM) for Citation Management

One of the workpackages in the TELSTAR project involves working towards development of a Service Usage Model (SUM) that will be offered as a contribution to the e-Framework.

The e-Framework for Education and Research is “an international initiative that provides information to institutions on investing in and using information technology infrastructure. It advocates service-oriented approaches to facilitate technical interoperability of core infrastructure as well as effective use of available funding. …The e-Framework maintains the content to assist other international education and research communities in planning, prioritising and implementing their IT infrastructure in a better way.”

We feel that it is quite important to attempt to model the work that is being done in the TELSTAR project by describing it in a controlled and systems-neutral way in order that other F/HEIs that have a similar business need have the opportunity to adopt similar methodologies regardless of the technical systems they may have available.

We are using the templates provided by the e-Framework to describe the business-level capabilities, the business processes or workflows, the technical functionality, the structure and arrangement of the functions, applicable standards, design decisions, data sources and services used.

We have started with a ‘top-level’ SUM which is a broad view of the whole area of what we have called “Citation Management”. We aim to follow up with 6 more detailed SUMs that represent the 6 business processes that the project is addressing. These are:

  • Add references
  • Aggregate references
  • Import/export references
  • Create bibliography
  • Manage bibliography
  • Recommend resources

We would welcome any comments on the top-level SUM over the next few weeks, and will add drafts of the detailed SUMs as they are developed. You can read and comment on the Citation Management SUM at https://e-framework.usq.edu.au/users/wiki/CitationManagement.

Innovations in Reference Management 2010

Today the TELSTAR project is running an event on ‘Innovations in Reference Management’. As we’ve been working on the project we’ve found that there seems to be a lack of ‘community’ to discuss and collaborate around the practice of Reference Management. Even in terms of products there doesn’t seem to be a strong ‘user group’ (in an organised fashion at any rate) for any of the major Reference Management packages such as RefWorks, EndNote, Zotero etc.

As we talked to others across the HE community about the project, we found that there was a lot of interest in what we were doing, and that there was quite a lot of innovation going on around the practice of Reference Management and the use of the relevant software. We felt there would be real value in running an event to highlight some of this work.

So, IRM10 (follow #irm10 on Twitter, or see an archive of all tweets at http://www.twapperkeeper.com/irm10/) is happening today – we’ve got a great programme (see http://www.open.ac.uk/telstar/event/programme) and I hope it will be an interesting day. I’ll be posting throughout the day on this blog, and we are recording all the sessions so we’ll be posting these later.

Innovations in Reference Management

Following on from my earlier post, I’m now very pleased to announce the Innovations in Reference Management event, which is taking place on 14th January 2010, at Kents Hill in Milton Keynes. This free event will include talks looking at how reference management relates to real-time impact metrics, social bookmarking, digital preservation, the semantic web, and also showcasing some innovative use of reference management software at the Open University and the University of Lincoln.

You can visit the event page for more information, full programme details, and to register.