Author Archives: Chris

About Chris

Chris looks after Open Research Online (ORO) on a day to day basis. He has worked in this role since 2011 and can advise on using ORO to maximise dissemination of research outputs and Open Access publishing generally.

Open Access Week 2016

The 9th Global Open Access Week runs from 24th to 30th October; this year’s theme is “Open in Action”. Library Services is marking Open Access Week with three events, and all sessions are open to all.

[Image: Open Access Week logo]

 

Open Research Data & Open Research Data Online (ORDO)

Wendy Mears (Research Support Librarian) will be introducing the new research data store that enables you to publish completed research data and get a permanent, citable DOI for your work. Based on the established Figshare platform, ORDO makes it easy to link to supporting data from other publications, and provides an accessible shop window on University research. ORDO can also be used for live data storage by individual users or collaborative project groups.

Further Information: Tuesday 25th October, 10-11am Library Presentation Room
Booking Information: https://www.eventbrite.co.uk/e/open-research-data-and-open-research-data-online-ordo-tickets-28332219431

Getting to Grips with Open Access Publishing

Chris Biggs (Research Support Librarian) will explore Open Access Publishing. We will cover both the Gold and Green routes to Open Access, the benefits of Open Access and the different Open Access Policies researchers now operate under.

Further Information: Wednesday 26th October, 10-11am Library Presentation Room
Booking Information: https://www.eventbrite.co.uk/e/getting-to-grips-with-open-access-publishing-tickets-28332426049

Claiming your research publications: ORCIDs at the OU

Chris Biggs (Research Support Librarian) will give an introduction to Open Researcher & Contributor IDs (ORCIDs), the non-proprietary identifier for researchers that has become the de-facto standard in the community. We will explore why they are a good idea and the time saving benefits for researchers. Please bring along a mobile device – there will be time to sign up for ORCIDs, add items to your ORCID record and configure it to auto-populate with new publications.

Further Information: Friday 28th October, 10-11am Library Presentation Room
Booking Information: https://www.eventbrite.co.uk/e/claiming-your-research-publications-orcids-at-the-ou-tickets-28332611604

Social Media, Open Access and the Institutional Repository

The impact of engaging with social media in conjunction with Open Access papers in a repository is not new and was perhaps first illustrated by Melissa Terras back in 2012 in her blog post Is blogging and tweeting about research papers worth it? The Verdict where she writes:

“The papers that were tweeted and blogged had at least more than 11 times the number of downloads than their sibling paper which was left to its own devices in the institutional repository. QED, my friends. QED.”

When I review the top downloads of publications in ORO every month I see papers that have received more downloads than usual, and I can try to work out why that might be. Some months we can see how the presence of research outputs in MOOCs or OU modules increases the number of downloads of research publications. But this month there are two striking examples of how social media impacts the dissemination of research publications.

The top 50 downloads from August are listed below:

[Image: Top 50 ORO downloads, August 2016]

The first output that interested me was at Number 6: Ferguson, Rebecca; Coughlan, Tim and Herodotou, Christothea (2016). MOOCS: What The Open University research tells us. Institute of Educational Technology, The Open University, Milton Keynes.  This received 305 downloads and had only been added to the repository on the 12th August this year.  First analysis revealed that 12% of referrals in August were from Twitter and another 12% from Facebook (33% were internal ORO referrals and another 19% were from Google).  So something had happened on Twitter and Facebook that helped cause a spike in downloads of the item.
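Those referral shares boil down to bucketing referrer URLs by domain and counting. A minimal sketch of that kind of tally (the domain-to-source mapping and the sample log are invented for illustration – this is not ORO’s actual analytics code):

```python
from collections import Counter
from urllib.parse import urlparse

# How each referring domain is bucketed (an invented, simplified mapping).
CATEGORIES = {
    "t.co": "Twitter",          # Twitter's link shortener
    "twitter.com": "Twitter",
    "facebook.com": "Facebook",
    "oro.open.ac.uk": "Internal",
    "google.com": "Google",
    "scholar.google.com": "Google",
}

def categorise(referrer_url):
    """Map a referrer URL to a traffic source, defaulting to 'Other'."""
    host = urlparse(referrer_url).netloc
    if host.startswith("www."):
        host = host[4:]
    return CATEGORIES.get(host, "Other")

def referral_shares(referrer_urls):
    """Return each source's share of referrals as a rounded percentage."""
    counts = Counter(categorise(u) for u in referrer_urls)
    total = sum(counts.values())
    return {src: round(100 * n / total) for src, n in counts.items()}

# Invented sample log: four referrals to one ORO item.
sample = [
    "https://t.co/abc123",
    "https://www.facebook.com/groups/moocs/",
    "https://oro.open.ac.uk/view/faculty/",
    "https://www.google.com/search",
]
print(referral_shares(sample))  # each source gets 25%
```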

The first trace of Twitter activity was from Rebecca Ferguson (@R3beccaF) herself on 12th August:

This was followed by another tweet by Gabriel Dumouchel (@gdumouchel) on the 18th August:

A blogpost by Willem van Valkenburg was also published on the same day:

[Image: Willem van Valkenburg’s blog post]

and a Facebook post by Hubert Lalande on the 19th. Finally, there was a tweet on 13th September by MOOC Knowledge (@MOOCknow):

And if you map that activity against the daily download log, this is what you get:

[Image: timeline of social media activity mapped against daily downloads of the MOOC report]

The second item to grab my attention was at Number 12: Gray, Joshua; Franqueira, Virginia N. L. and Yu, Yijun (2016). Forensically-Sound Analysis of Security Risks of using Local Password Managers. In: 1st International Workshop on Requirements Engineering for Investigating and Countering Crime, 13 September 2016, Beijing, IEEE. This had been added to ORO on 26th July and received 200 downloads during August. The referrals were even more intriguing, as nearly half (48%) were from Twitter (a further 18% were internal and 11% were from Google). So, to the Twitter trail.

On 13th August the ORO record was tweeted by K.M.Gallagher (@ageis):

This was followed on the 15th by Brandon Smith (@muckrakery), with a response from Julia Angwin (@JuliaAngwin):

It was also posted on EventRegistry on 23rd August:

[Image: EventRegistry post]

Finally, it was tweeted by the conference organiser (@iRENIC_workshop) as Best Long Paper (but with no link!)

And if you map all that activity onto the daily downloads, this is what you get:

[Image: timeline of social media activity mapped against daily downloads of the password managers paper]

OK, so my trawl through the social media isn’t exhaustive – I’m sure there are activities I’ve missed, but I think it’s still instructive:

  • Using social media can have an enormous impact on the reach of an Open Access publication
  • The greatest dissemination of a research output may not be the result of an author’s (or co-author’s) intervention on social media – it may come from someone completely off the radar
  • Twitter and Facebook usage can both impact the reach of any particular research output; they aren’t mutually exclusive and both serve the required function
  • Not all tweets are equal – some are more valuable than others
  • And always add a link to the paper!

Finally, looking at the tweets and posts I was struck by how those that had the most impact on downloads were also the most eye-catching. These were tweets with photos of the abstract of the conference item, or posts with the cover of the MOOC report. The images certainly make them stand out in the timeline, and there is some thinking to suggest tweets with images and links are more likely to get noticed.

ORO Annual Report

The end of the University year is the time to publish the Annual Report from the Institutional Repository.  The main headlines are:

  • Increase in all deposits and downloads on 2014-15
  • Steep increase in full text deposits (representing embargoed papers) on 2014-15, in line with the requirement to meet the HEFCE Open Access Policy for the next REF
  • ORO is a high performing institutional repository, ranked 8th of 142 UK repositories by the Ranking Web of Repositories

I’ve rendered two PDF versions of the report below the image – one using the old faculty structure and another with the new faculty structure.

[Image: ORO Annual Report 2015-16]

PDF Version of the Annual Report: ORO Annual Report 2015-16

PDF Version of the Annual Report (New Faculty version): ORO Annual Report 2015-16_New Faculty

 

 

Repository Downloads and Site Visits

In July 2016, 44,215 items were downloaded from ORO; this compares to 48,009 in June and 62,084 in May. Downloads (and site visits) decline over the summer months and match the academic year, with greater activity during term time and quieter periods over the holidays.

The downloads pattern (below) is spiky and based on a dataset from Feb 2013 onwards but it does show a dip over the summer months and at Christmas.

[Image: ORO monthly downloads, February 2013 onwards]

The site visits pattern (below) has a larger data set from Jan 2011 onwards and shows the seasonal variations more consistently.

[Image: ORO monthly site visits, January 2011 onwards]

Well, that’s all well and good, but it’s not so insightful, is it? Maybe not, but there are a couple of other observations we can make when we look at the data in this way.

(1) How come there are more downloads per month than site visits? Surely you need to visit the site before you can download something? Well, most of the downloads come straight from Google or Google Scholar, where you can download the full text directly from the search results page. So these downloads aren’t counted as site visits in themselves (by Google Analytics).

(2) Site visits peaked in 2012/13, dropped in 2014 and have steadily consolidated since then. I was alarmed by the drop in 2014 – some colleagues at other institutional repositories thought this was the effect of REF 2014, with high usage leading up to submission in 2013 and a drop-off after submission in 2014. Maybe that’s the case; maybe it’s also the effect of (1), with the repository effectively becoming invisible to users accessing content via Google and Google Scholar.

(3) Why are the download stats so spiky? Well, the above counts (from IRUS) are the best we have to go on, and they are COUNTER compliant. Nevertheless, they don’t all represent individual clicks from humans accessing known research content – some downloads are from automated harvesters. These robotic downloads are frequently detected and filtered out of download counts by IRUS, but others may go undetected and be counted until they are identified and filtered out. Moreover, there are genuine research reasons for mass harvesting of repository content, such as text mining a research corpus.
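A very crude version of that kind of robot filtering can be sketched as a user-agent check (the marker list and sample log are invented for illustration; IRUS’s COUNTER-compliant processing is far more sophisticated than this):

```python
# Substrings that mark a user agent as an automated harvester
# (an invented, minimal blocklist for illustration).
ROBOT_MARKERS = ("bot", "crawler", "spider", "harvester")

def is_robot(user_agent):
    """Heuristically flag automated agents by substring match."""
    ua = user_agent.lower()
    return any(marker in ua for marker in ROBOT_MARKERS)

def human_download_count(download_log):
    """Count downloads whose user agent doesn't look automated."""
    return sum(1 for ua in download_log if not is_robot(ua))

# Invented sample log of user-agent strings, one per download.
log = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Googlebot/2.1 (+http://www.google.com/bot.html)",
    "TextMiningHarvester/0.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11)",
]
print(human_download_count(log))  # 2
```

The obvious weakness – and the reason real download spikes stay spiky – is that any harvester not matching the blocklist is counted as human until someone spots it.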

ORO July Downloads

The Top 50 Downloads for July are below. It’s the first month we’ve recorded this against the new faculty structure – we can see Faculty affiliations and, where currently available, School affiliations. We’ll have to work this out a bit in ORO – we have the problem of representing researchers who have now left the OU and were never represented in the current structure. We also need to think about how to represent IET and the Learning and Teaching Innovation Portfolio. But we’ll get there – and we do have the potential to report at greater granularity at School level (e.g. FBL), so I’m hoping we can do some refined quarterly ORO updates.

The Top 50 itself shows a smaller number of downloads than last month, and I’ll expand on that in an upcoming post. But overall, with 44,215 downloads in July (as recorded by the IRUS service), ORO continues to be a very well used repository, ranked 14th of the 110 repositories using the IRUS service.

[Image: Top 50 ORO downloads, July 2016]

Using ORO for Learning and Teaching

The focus for ORO remains the dissemination of OU research outputs to the widest possible audience.  However, there is a use for ORO that we sometimes forget: ORO can be an Open Access resource bank for module production.

When she was at the OU as Director, Research, Scholarship and Quality, Astrid Wissenberg gave a presentation on using research outputs in modules. At the time I was aware of around a dozen ORO items being used in OU modules, and gave Astrid some examples for the presentation, which she titled Sprinkling gold dust: challenging students with cutting edge research.

I’m aware from colleagues that ORO items are quite heavily used in some IET modules, so I thought I’d have another look to see if there had been an increase in the use of ORO items in OU Modules.  Below is a list of known library links to ORO records provided for module production – we found 18 different items associated with 23 different modules:

[Image: list of ORO items used in OU modules]

So there is continued usage of ORO items in module production… and maybe there is a lot more off my radar – I’d love to know about it!

I’ve also noticed at least two ORO items being used in FutureLearn MOOCs: Regine Hampel (2014). Making meaning online: computer-mediated communication for language learning. was used in Understanding Language and Rosie Flewitt, David Messer, and Natalia Kucirkova (2015). New directions for early literacy in a digital age: the iPad. was used in Childhood in the Digital Age.  There seems an obvious match between Open Access research and Open Access teaching (see blog post Open Access research in Open Access courses).

The benefits of using Open Access materials from ORO for teaching include:

  • Linking to ORO items is free
  • OU researchers involved in module production are in a great position to identify relevant work for use in modules
  • OU researchers can deposit the Author’s Accepted Manuscript of their work in ORO at any time – even if it has already been published.  Adding the item to ORO makes it available for use in module production.

Of course, ORO isn’t the only place to go. If you are looking for Open Access content, CORE is the best place to start, with over 36 million Open Access articles aggregated from repositories across the world.

And it works both ways: one of the most downloaded items in ORO is Nigel Cross (2001). Designerly ways of knowing: design discipline versus design science. Design Issues, 17(3) pp. 49–55 – it gets several hundred downloads a month! One of the reasons is that it is used in a Masters programme in Interaction Design and Technologies at the University of Gothenburg. So getting the full text into ORO may increase the dissemination of a research output via university curricula across the globe.

 

 

Get your ORCID working for you!

[Image: ORCID logo]

As I write, over 2,300,000 ORCIDs have been registered by researchers globally.  ORCIDs have gained traction with publishers, funders and universities as a way of persistently identifying individual researchers across different infrastructures and enabling the flow of data across them.  This will save everyone time and effort and increase the quality of datasets in scholarly communications.

However, simply registering for the 16-character alphanumeric code at ORCID.org is not enough to get the real benefit of ORCID. A certain amount of configuration will get your ORCID record up to date and keep it up to date without additional intervention. Here’s how…

Getting it up to date – one off additions

1. Add works by search and link.

Go to your ORCID record. In the Works section click Add works > Search & link, then identify a useful source of publications data from the list:

  • CrossRef Metadata Search – all articles with a Digital Object Identifier (DOI) should be retrievable from CrossRef Metadata Search so that’s a very good place to start (you will need to authorise CrossRef to read your ORCID record and Add works to it).
  • Scopus to ORCID – import publications associated to your name/Scopus ID in Scopus. There is good coverage here, especially in STEM, not so good in AHSS but Elsevier are improving coverage in these disciplines (you will need to authorise Scopus to read your ORCID record and Add works to it).
  • Europe PubMed Central – import publications from Europe PMC database of biomedical and life sciences research literature (you will need to authorise Europe PMC to read your ORCID record and Add works to it).
  • ResearcherID – if you have a ResearcherID in Web of Science you can import all your recorded publications by linking your ResearcherID to your ORCID (you will need to authorise ResearcherID to read your ORCID record).  You can also add publications to your ResearcherID from ORCID – see this guide from Queensland University of Technology on Linking your publications to your ORCID ID via ResearcherID.
  • MLA International Bibliography – given the gaps in Scopus, a great source for researchers in literature and modern languages (you will need to authorise MLA to read your ORCID record and Add works to it).
  • Additionally, research datasets with a DOI can be added to your ORCID record by going to the DataCite source, where you can search for and locate datasets using the Search and Link – DataCite option (you will need to authorise the linking and log into DataCite using your ORCID ID in the top right corner!)

2. Add works by importing a BibTeX file – e.g. you can add all your records in ORO by exporting them as a BibTeX file and then importing them to ORCID.

  • In ORO go to Browse > OU Author.
  • In your list of publications click “Export as BibTeX”.
  • You will get a webpage of all your publications in BibTeX format.
  • If you are using Google Chrome, right click and “Save as”, choose an appropriate name and save as a text document somewhere handy.
  • If you are using IE, right click, then Select all > Copy.  Open NotePad, paste the data into a file, click File > Save as, choose an appropriate name and save somewhere handy.
  • Go to your ORCID profile.
  • In the Works section click Add works > Import BibTeX.
  • Find your file and import the publications.
  • Click Save or Ignore for each publication depending on whether or not you want to retain it in your ORCID record.
  • Further instructions can be found on the ORCID website: http://support.orcid.org/knowledgebase/articles/390530-import-works-from-bibtex-files-website-user
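For the curious, a BibTeX export is just a text file of records. The sketch below (plain Python; the sample entry and its keys are invented, not a real ORO export) lists the entries a file contains, which can be a handy sanity check before importing:

```python
import re

# A minimal invented sample of what a BibTeX export might look like.
SAMPLE_BIBTEX = """\
@article{oro12345,
  author  = {Smith, Jane},
  title   = {An Example Paper},
  journal = {Journal of Examples},
  year    = {2015},
}

@incollection{oro67890,
  author    = {Smith, Jane and Jones, Bob},
  title     = {A Chapter},
  booktitle = {An Edited Volume},
  year      = {2016},
}
"""

# A BibTeX record opens with "@type{key," at the start of a line.
ENTRY_RE = re.compile(r"^@(\w+)\{([^,]+),", re.MULTILINE)

def list_entries(bibtex_text):
    """Return (entry_type, citation_key) pairs found in a BibTeX string."""
    return ENTRY_RE.findall(bibtex_text)

print(list_entries(SAMPLE_BIBTEX))
# [('article', 'oro12345'), ('incollection', 'oro67890')]
```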

(NOTE: we are working on functionality to automatically update ORO with publications in ORCID and vice versa.)

3. Add works manually – don’t do this!  …unless you really have to and the works can’t be added by any of the methods above.

Keeping it up to date – automated additions

The real benefit of ORCID is that it facilitates the automated flow of data across systems – so set up your ORCID record to automatically update when new records are added from trusted data sources.

In the works section, go to search and link to add these feeds to your ORCID record.

  • CrossRef Auto-update – any new article with a Digital Object Identifier (DOI) that is associated with your ORCID will be automatically pulled into your ORCID record.
  • Scopus Author ID Updates – any new items in Scopus associated to your Scopus ID will be pulled into your ORCID record. (Pending clarification from Elsevier – Chris 20/02/2017.  Auto – updates are not currently configured in Scopus – this is a longer term goal for Scopus & Mendeley – Chris 26/05/2017)
  • DataCite Auto-update – any new dataset with a Digital Object Identifier (DOI) that is associated with your ORCID will be automatically pulled into your ORCID record.

Again you will need to make sure you have authorised these trusted organisations to make changes to your ORCID record.

De-duplication

Having added publications to your ORCID record you may expect to find duplication of some publications. If you import items that share an identifier (e.g. a DOI) ORCID should combine them in a single listing and allow you to select your preferred version for display.

If this de-duplication doesn’t work you can easily remove duplicates by using the bin icon.
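The underlying idea – treating a shared identifier such as the DOI as the matching key – can be sketched in a few lines (the record structure and DOIs here are invented for illustration; ORCID’s actual matching is more involved):

```python
def dedupe_by_doi(records):
    """Keep the first record seen for each DOI as the preferred version.

    Records without a DOI are kept as-is, since there is no
    identifier to match on.
    """
    seen = set()    # DOIs already listed
    deduped = []
    for rec in records:
        doi = rec.get("doi")
        if doi is None:
            deduped.append(rec)
        elif doi not in seen:
            seen.add(doi)
            deduped.append(rec)
        # else: duplicate of an already-listed record; skip it
    return deduped

# Invented sample records.
records = [
    {"title": "An Example Paper", "doi": "10.1234/example.1"},
    {"title": "An Example Paper (preprint)", "doi": "10.1234/example.1"},
    {"title": "A report with no DOI", "doi": None},
]
print(len(dedupe_by_doi(records)))  # 2
```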

Checking Access Permissions

At any stage in the process of authorising trusted organisations you can check who can and cannot add items to your ORCID profile by viewing Account Settings on your ORCID record which will list all the organisations you have granted permission to.  At any point you can withdraw these permissions.

You are in control… and use your ORCID

ORCID.org is very clear in its commitment to researcher control and privacy – things can only happen to your ORCID record if you expressly give permission for them to happen.

Whenever you can add your ORCID – whether to bids, publications, datasets or peer review – add it!  Adding your ORCID to your scholarly works, of whatever type, asserts your ownership of them and allows them to be pushed around the scholarly communications infrastructure without further intervention.

ORO Downloads – June 2016

I’m a bit reluctant to publish lists of top downloads from ORO as they only tell the story of those items that get an exceptional number of downloads.  Sometimes these numbers are questionable and can be the result of non-human downloads that haven’t been filtered out by either the EPrints software (on which ORO runs) or the Jisc service IRUS, which we also use to capture download counts.  But more importantly, such lists don’t capture the more modest downloads accruing on ORO – the repository has, if you like, a long tail, where the majority of downloads are actually gained by lots and lots of outputs each getting smaller numbers of downloads.

So I’ve expanded our list from 15 to 50 to see what we capture.  It’s still the exceptions (the top 50 makes up about 0.6% of the total Open Access items in ORO) but there are some interesting stories to tell.

[Image: Top 50 ORO downloads, June 2016]

First and foremost is the top of the list: Petre, Marian (2013). UML in practice. In: 35th International Conference on Software Engineering (ICSE 2013), 18-26 May 2013, San Francisco, CA, USA.  Marian’s paper has been very popular in ORO for a very long time, but last month was extraordinary, with 1,784 downloads. Fortunately, I think we can explain this: ORO is telling me that the item is being referred to from Wikipedia, and it appears the paper was added as a reference to the Wikipedia page on UML sometime in August last year.  Additionally, in June, traffic to the ORO item appears to have been referred from social media sites like Twitter and Facebook, and sites like Y Combinator (a start-up incubator) and Feedly (an aggregator service).  I think it’s fair to say the presence of the paper on Wikipedia has led to its greater dissemination across various web platforms – maybe it’s time for the OU to have a Wikipedian in Residence!

Secondly we have the fifth item on the list: Sharples, M.; Adams, A.; Alozie, N.; Ferguson, R.; FitzGerald, E.; Gaved, M.; McAndrew, P.; Means, B.; Remold, J.; Rienties, B.; Roschelle, J.; Vogt, K.; Whitelock, D. and Yarnall, L. (2016). 创新教学报告2015 —探索教学、学习与评价的新形式 [Innovating Pedagogy 2015]. 开放学习研究 [Open Learning] (2016.1) pp. 1–18.  Now this is the Chinese translation of the 2015 Innovating Pedagogy Report which appears in the Chinese language journal Open Learning.  Last year we wouldn’t have accepted this item in ORO as is.  Rather than a discrete item with associated metadata in the source language we would have just added the file alongside the original English language version.  However, last year during Open Access week we were challenged to accept these items as discrete records in ORO to support discoverability and make ORO a more global resource.  We changed our policy and the benefit is evident here.  Interestingly, it seems that the majority of downloads, at least in June, are coming from the U.S.A. (187) rather than China (43).

Thirdly, there are three theses on the list.  Theses occasionally rank higher than this and make the top 15 – but they are consistently highly downloaded.  Institutional repositories have a major role in supporting the dissemination of materials that are not published via the standard routes of academic publishing – I’m thinking particularly of theses and reports that may not get a good platform for dissemination elsewhere.  So whilst there has been such an emphasis recently on ORO and the HEFCE Open Access Policy, we shouldn’t lose sight of the key function ORO can play in the dissemination of these other research outputs.

Finally, and perhaps most fundamentally, this Top 50 draws into stark contrast the benefits different faculties get from ORO.  There is only one item from the Science Faculty on this list.  The route to Open Access for Science is well supported by disciplinary repositories and Gold Open Access publishing (frequently, but not always, funded by the RCUK block grant).  Notwithstanding the requirement for a university to be aware of, and showcase, all its published research outputs, the value of the institutional repository can be discipline specific – and we need to pay close attention to that when advocating its usage.

 

Does using ORO provide a citation advantage?

There have been several studies indicating that deposit in an institutional repository (like our own ORO) does provide a citation advantage.  The one I usually refer to when I’m promoting ORO is Lars Kullman’s “The Effect of Open Access on Citation Rates of Self-archived Articles at Chalmers”, which finds that “self-archived articles have a 22% higher citation rate than articles that were not self-archived”.

Testing this in relation to ORO is something I’ve often wanted to do, but haven’t got round to before now, not least because it’s a tricky thing to investigate.  There are a number of things to bear in mind when evaluating the impact that deposit in a repository may have on citations of any particular paper, including:

  1. The disciplinary variance of citations – some disciplines cite more heavily than others, and this might skew the results if a certain discipline is over-represented in the dataset being analysed.
  2. Citations increase over time – obviously, older papers that have been in circulation longer will accrue more citations.
  3. Items deposited in the institutional repository as Open Access may also be Open Access in another repository (e.g. arXiv) or published Gold Open Access on the publisher’s website.  So it might not be their presence in ORO that is making the difference!
  4. Authors may self-select which papers are made Open Access in a repository, i.e. only the better papers are made openly available.  And these (you would hope!) are the ones that will get more citations.

And at this point I normally give up and go back to the nitty-gritty of running ORO – I’m not a researcher.  However, we’ve recently been using the bibliometrics tool SciVal (an Elsevier product based on Scopus data) and evaluating how it can support individuals, research groups and the University in benchmarking their research.  So I decided to run some data through it and see what came out. This is what I found.

Dataset | Total Outputs | Citations per Publication | Field Weighted Citation Impact
ORO research outputs with DOI | 10,008 | 20.9 | 1.88
OU research outputs with DOI | 11,684 | 16.1 | 1.65
OU research outputs | 16,627 | 14.9 | 1.48
  • Date range: 1996-2015
  • Field Weighted Citation Impact (FWCI) – a relative measure of citations, normalised by discipline, publication type and publication date, where 1 is the average; e.g. a FWCI of 1.48 means an output has been cited 48% more than expected
  • All tables ranked by FWCI
  • Data available here: ORB_OROSciValData
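To make that definition concrete: FWCI is simply actual citations divided by the citations expected for comparable outputs (same discipline, publication type and age). A toy calculation with invented numbers:

```python
def fwci(actual_citations, expected_citations):
    """Field Weighted Citation Impact: actual citations divided by the
    citations expected for outputs of the same discipline, type and
    publication date. 1.0 is the average."""
    return actual_citations / expected_citations

# Invented example: a paper with 37 citations, in a field where
# comparable papers of the same age average 25 citations.
value = fwci(37, 25)
print(round(value, 2))  # 1.48 -> cited 48% more than expected
```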

OK, let’s unpick that a bit.

  • OU research outputs is everything in Scopus/SciVal that has been affiliated to an OU researcher.  SciVal/Scopus covers only a subset of everything that is published and is skewed towards the sciences – coverage of the Arts and Social Sciences is not great!
  • OU research outputs will capture all outputs from authors with an OU affiliation – these may not be research contracted staff (e.g. academic-related staff or Associate Lecturers), some of whom may not be able to deposit in ORO.
  • ORO currently captures around two thirds of everything that gets published by OU researchers.
  • ORO captures research outputs of current OU researchers published before they joined the OU – so some ORO items will not have an OU affiliation and therefore not be in the SciVal/Scopus dataset.
  • The ORO research outputs with DOI are only those items in ORO with a DOI – there are a lot of items in ORO that don’t have DOIs – but that’s the only way I could import them into SciVal to do the analysis!  I don’t know, but maybe there is a correlation between possession of a DOI and the citability of an output.

So that’s some context, but the data there does seem to indicate that items in ORO get a citation advantage compared to all OU research outputs indexed in SciVal/Scopus.  So I dug a bit deeper and tried to get some data on those publications that were only on ORO or only in the SciVal/Scopus OU dataset.

Dataset | Total Outputs | Citations per Publication | Field Weighted Citation Impact
ORO outputs only (with DOI) | 3,003 | 27.0 | 2.03
OU research outputs not in ORO (with DOI) | 4,679 | 13.0 | 1.41

That appears to be even more emphatic, but it may well be self-selecting – people may only be putting on ORO the stuff they want people to see!

My appetite well and truly whetted, I tried to see whether being Open Access in ORO differed from just being in ORO (i.e. metadata only).

Dataset | Total Outputs | Citations per Publication | Field Weighted Citation Impact
ORO Open Access with DOI | 2,926 | 16.1 | 1.98
ORO research outputs with DOI | 10,008 | 20.9 | 1.88
ORO Metadata only with DOI | 7,116 | 22.8 | 1.84

Note: you’ll see the numbers don’t add up – that’s because there were a number of duplicates that appeared in both sets (i.e. they had been deposited in ORO as both open access and metadata only). 

Not much at first glance; it might appear that just getting the metadata visible to Google is enough to get the research output noticed and cited.  However, the FWCI value for Open Access is a bit higher than for metadata only, especially when considered in relation to Citations per Publication.  I’ve written here before about how Arts and Social Sciences full text gets downloaded more than STEM and, given differences in citation practices, I wonder if that is happening here too?

Finally, I tried to tie this up by seeing whether appearance in high impact factor journals had a determining influence in the varying citation measures across these datasets.  So I used one of the metrics in SciVal (Publications in Top 10 Journal Percentiles% – SNIP) to see whether this might be the case.

Dataset | Total Outputs | Citations per Publication | Field Weighted Citation Impact | Publications in Top 10 Journal Percentiles % – SNIP
ORO outputs only (with DOI) | 3,003 | 27.0 | 2.03 | 25.4
ORO Open Access with DOI | 2,926 | 16.1 | 1.98 | 22.1
ORO research outputs with DOI | 10,008 | 20.9 | 1.88 | 24.3
ORO Metadata only with DOI | 7,116 | 22.8 | 1.84 | 25.2
ORO and OU with DOI | 2,926 | 18.2 | 1.81 | 23.9
OU research outputs with DOI | 11,684 | 16.1 | 1.65 | 22.0
OU research outputs | 16,627 | 14.9 | 1.48 | 20.1
OU research outputs not in ORO (with DOI) | 4,679 | 13.0 | 1.41 | 18.6

And generally yes, there is a trend that those outputs published in journals in the top percentiles in their subject area do get a higher FWCI value.

However, there is one notable exception: items that are Open Access in ORO have a higher FWCI value despite having a lower journal percentile ranking than metadata-only deposits.  They also have a very similar journal percentile ranking to, but a significantly higher FWCI value than, all OU research outputs with a DOI.

So I think there are a couple of things to take away:

  • All items deposited in ORO do appear to get more citations than those that aren’t; however, this may be because these items are more likely to be published in the more prestigious journals of the field, rather than because they have been deposited in ORO.  So it’s likely that there is a self-selecting element here – researchers may be more inclined to deposit items in ORO if they appear in more prestigious journals.
  • Open Access items in ORO appear to contradict this general trend: Open Access deposit in ORO may lead to more citations irrespective of the apparent quality of the journal they appear in.

 

Why embargo periods are bad for academic publishers

The recent EU Competitiveness Council’s Conclusions on The transition towards an Open Science system calls for “immediate open access as the default by 2020, using the various models possible and in a cost-effective way, without embargoes or with as short as possible embargoes”.  That’s both great and ambitious.

The International Association of Scientific, Technical and Medical Publishers (STM) responded to this statement with concern about “suggested embargo periods which do not take into account the long-term sustainability of continued quality content generation”.  The conventional wisdom is that no embargo periods, or very short embargo periods, undermine the existing subscription-based business model, as Author Accepted Manuscripts will be freely available at the same time as, or very soon after, the final published versions.  These published versions either require an institutional subscription to access or a hefty one-off fee for downloading individual articles.  Given this choice, why would anyone pay for the research article?

Entertainingly, the Secretary-General of the League of European Research Universities (LERU), Kurt Deketelaere, has called the STM response “2,5 pages of nonsense”, and those academic publishers that require embargo periods would do well to reflect on their reliance on this argument when justifying them.

Firstly, the BIS report on Open Access in September 2013 states that “there is no available evidence base to indicate that short or even zero embargoes cause cancellation of subscriptions.”  Referencing the PEER (Publishing and the Ecology of European Research) project of 2012, the report goes on to state that “traffic to journal websites increased when articles were made available through a publicly accessible repository, possibly because interest grew as articles were disseminated more widely.”  So the logic may, in the first instance, be flawed.

Secondly, publisher embargoes are driving researchers away from legitimate ways of making their research outputs more freely available.  As a practitioner, that is what I see on the ground.  When researchers deposit their Author Accepted Manuscript in our institutional repository only to find that there is a 12 or 24 month embargo before that output is freely available, they turn around and say… “Well, that’s not Open Access, is it?”  And they are right.

Faced with this scenario, what options are there for researchers?  Well, plenty.  The two obvious ones are Academia.edu and ResearchGate, both bursting at the seams with content that shouldn’t really be there.  Failing that, researchers can upload papers to personal websites, they can share articles using #icanhazpdf, or they might not even have to do anything and get their papers shared for them via Sci-Hub.

Academic publishers like Elsevier, Wiley and Taylor & Francis should reflect that long embargo periods are limiting the effectiveness of legitimate Green Open Access and accelerating alternative models of content sharing that are much more threatening to them than the perceived impact of Green Open Access.