CoPILOT workshops

DELILA Project Blog

Nancy and I gave two workshops before Easter at the LILAC conference at the University of Manchester and at the OER conference at the University of Nottingham. Nick Sheppard wrote up our session at OER13 on the official blog. Our slides from LILAC13 are on their website. In both cases we wanted to find out from participants how a community of practice for sharing IL resources might work in practice. The findings will feed into the work of the CoPILOT committee who are working to set this up in the UK. CoPILOT is now a sub-group of the CILIP Information Literacy Group and we have an enthusiastic group who are helping us with this endeavour. More soon but do have a look at the IL-OER wiki we have set up.



Research records – filling the gaps with Google Scholar + Zotero

The stated aim of our Symplectic implementation – and integration with the repository – is to make it easier to maintain a constant, up-to-date picture of research activity across the University. Historically, however, research management has been somewhat variable across the institution; frankly, I knew this already, and the repository had become the de facto research management tool but is itself far from comprehensive. Nor are the automatic data sources (Web of Science, Scopus and PubMed) likely to solve the problem, with variable results depending, for example, on the subject area and types of publication. I have also been importing existing records from EndNote libraries where they exist, but there are still large swathes of research missing over the past 10 years or so that we are trying to cover, especially less formal publications.

Other than automated search, the easiest way to get data into Symplectic is by importing RefMan (RIS) or BibTeX, both of which can be exported from Google Scholar – but only as single records (so far as I can tell), unless you use Zotero in Firefox…

1. Install Zotero in Firefox – https://addons.mozilla.org/en-US/firefox/addon/zotero/
2. Go into settings in Google Scholar (top right)
3. Bibliography manager -> Check “Show links to import citations into” and select your preferred output (RefMan/BibTeX etc.) and save preferences
4. Now a search in Google Scholar should show a folder icon in the address bar. Click the folder.


5. A small window drops down that shows the Google Scholar citations, with an empty check box in front of each citation
6. Select the citations that you need and click “OK”

7. A small window pops up that indicates the records are being saved into Zotero
8. Open the Zotero window with the icon at the bottom of the browser, where the records should be displayed (you can keep searching and sending additional records to Zotero for eventual export)
9. Highlight (select) the Zotero records that you wish to export. Right-click on the selection and select “Export selected items”

Choose the appropriate format (in my case RIS) and save the file to the desktop with an appropriate name for subsequent import to Symplectic / research management system of choice. Ta da!

Records in Google Scholar aren’t necessarily the most reliable, so care will need to be taken with this process, but it’s certainly worth exploring as a method of filling the gaps in our research records. A quick sanity check on the exported file before import, along the lines of the sketch below, wouldn’t hurt.
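For what it’s worth, here is a minimal sketch of such a check using only the Python standard library; the file name is hypothetical, and a proper RIS parsing library would be more robust:

```python
from collections import Counter

# Parse the RIS export into a list of {tag: [values]} dicts.
# RIS lines look like "TI  - Some title"; "ER  -" ends a record.
records, current = [], {}
with open("scholar_export.ris", encoding="utf-8") as f:  # hypothetical file name
    for line in f:
        line = line.rstrip()
        if line.startswith("ER  -"):          # end of one record
            records.append(current)
            current = {}
        elif "  - " in line:
            tag, value = line.split("  - ", 1)
            current.setdefault(tag, []).append(value)

print(f"{len(records)} records parsed")

# Flag records missing a title (TI/T1) or year (PY) before import
for i, rec in enumerate(records, 1):
    if not (rec.get("TI") or rec.get("T1")):
        print(f"record {i}: missing title")
    if not rec.get("PY"):
        print(f"record {i}: missing year")

# Google Scholar often returns near-duplicates; flag repeated titles
titles = Counter(t.lower() for r in records
                 for t in r.get("TI", []) + r.get("T1", []))
for title, n in titles.items():
    if n > 1:
        print(f"possible duplicate ({n}x): {title}")
```

Nothing clever, but enough to catch empty or doubled-up records before they pollute Symplectic.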

Turning a Resource into an Open Educational Resource (OER)

As this is the inaugural Open Education Week (whaddya mean you didn’t know?!) here’s a great 5-minute animation from OER IPR Support giving an overview of the IPR and licensing issues you need to be aware of when creating and repurposing Open Educational Resources.

Uploaded to the Leeds Met repository under the terms of CC-BY-SA 😉

Turning a Resource into an Open Educational Resource (OER) – Leeds Met Repository Open Search.

Bibliosight – querying Web of Science from the desktop

The Bibliosight project, as part of JISCRI, officially completed at the end of November 2009. However, due to issues beyond our control, specifically the fact that Thomson Reuters’ Web Services were not fully released until October 2009 and therefore not available to us within project timescales, final deliverables were not available at that time.

I am pleased to report that the project has now produced a desktop client that is able to utilise Thomson Reuters’ “Web Services Lite” to query Web of Science directly from the desktop. The code is available to download from http://code.google.com/p/bibliosight/ (note: this is code only, not a product distribution, which would require access to WS Lite anyway; there is some very basic info in there on what you’d need to get it running) – see http://bibliosightnews.wordpress.com/2009/12/23/final-progress-post/ for more information.

As Bibliosight is now officially complete I am not contributing further to Bibliosight News, but am posting here to explore practical uses of the client and also the limitations of WSLite. This is prompted because, as a novice user of EndNote who has recently been exploring how to export from Web of Science into EndNote, I am no longer convinced that the client provides us with a solution beyond what could already be achieved with EndNote alone. This in no way denigrates the fantastic work that Mike has done developing the client, and I’m sure there are plenty of practical uses of WSLite in general and our Bibliosight client in particular; it is just that I am thinking very much of an integrated workflow for research management and populating the repository, and, at Leeds Met, EndNote is firmly established in the research administration process. I may also gently question the limitations that Thomson Reuters has placed on WSLite (which is free) given that, as subscribed users of WoS, we are already able to retrieve more data from WoS by export than via this free API.

The primary use case that evolved through Bibliosight was as follows:

  • Retrospectively download all Leeds Met records from WoS
  • Run the query on a regular basis to retrieve new Leeds Met records in WoS

It was decided that the easiest way to achieve this was via a client that could query WoS from the desktop and return records as XML; this XML could then be converted by XSLT into an appropriate format for ingest into intraLibrary and/or other repository platforms and/or EndNote.
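As an illustration of that conversion step, here is a minimal Python/lxml sketch; the element names in the stylesheet and the file names are hypothetical, since the real WSLite schema should be taken from Thomson Reuters’ documentation:

```python
from lxml import etree

# Minimal sketch: transform WSLite result XML into a flat format for ingest.
# Element names below (record, title, source) are hypothetical stand-ins
# for whatever the WSLite response schema actually uses.
XSL = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/">
    <records>
      <xsl:for-each select="//record">
        <item>
          <title><xsl:value-of select="title"/></title>
          <source><xsl:value-of select="source"/></source>
        </item>
      </xsl:for-each>
    </records>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(XSL))
result = transform(etree.parse("wslite_results.xml"))  # hypothetical file name
result.write_output("ingest.xml")                      # ready for repository import
```

In practice the stylesheet would live in its own file and map to the intraLibrary or EndNote import schema, but the mechanics are the same.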

However, the data elements that are returned by WSLite are limited to:

  • Authors — all authors, book authors, and corporate authors
  • Article Title
  • Source — includes the source title, subtitle, book series and subtitle, volume, issue, special issue, pages, article number, supplement number, and publication date
  • Keywords — all author-supplied keywords
  • UT — a unique article identifier provided by Thomson Reuters

In addition, a single query is limited to just 100 results; additional queries can be submitted in succession but this is inconvenient with the current application.
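Working around the 100-record cap is straightforward in principle – page through the result set in 100-record chunks. A sketch, with run_query standing in for whatever call the client actually makes against WSLite (its signature is an assumption):

```python
from typing import Callable, List

def fetch_all(run_query: Callable[[str, int, int], List[dict]],
              query: str, page_size: int = 100) -> List[dict]:
    """Page through a WSLite query that caps each response at `page_size`.

    run_query(query, first_record, count) is hypothetical shorthand for
    the client's underlying WSLite call.
    """
    first, records = 1, []
    while True:
        batch = run_query(query, first, page_size)
        records.extend(batch)
        if len(batch) < page_size:   # a short page means no more results
            break
        first += page_size
    return records
```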

As a subscribed user, I am able to log in to Web of Science, perform a query and export directly to EndNote (a maximum of 500 records); this includes most of the data available from WSLite. It also includes <ref-type>, an EndNote-specific numerical value – I don’t think we can get an equivalent from WSLite – and an abstract.

N.B. It is possible to submit a query for Source Publication (SO), though this is the title of a specific publication so doesn’t help to identify <ref-type>.

The issue of abstracts is an interesting one and I recently posted a naive question to the UKCORR discussion list:

The T&C for Web Services explicitly disallows including the abstract – which I can’t get anyway (!) – but are WoS abstracts not simply author-produced abstracts harvested from publisher’s websites in which case shouldn’t I be able to use them?

I got a couple of helpful responses from Alison Sutton, Repository Manager at the University of Reading and Leslie Carr of Southampton:

Alison said that they explicitly asked Thomson Reuters if they could use their abstracts in their repository and were told they could not, because publishers don’t give Thomson the right to distribute them, which is why they are not included in WSLite. Les, however (while at pains to emphasise that he is not in fact a lawyer!), suggested that there is no copyright on journal article abstracts in the UK and that although Thomson cannot grant a licence to use them, you do not actually need one.

I suspect we need a real legal eagle to establish whether or not there is any legal reason why we could not use an abstract procured from WoS, which (in most cases) is exactly the same – right down to the minutiae of the ASCII code – as the abstract on the publisher’s website, which I believe is supplied by the author in the first place.

The only data element that doesn’t appear to be returned by the export method, but IS returned by WSLite, is the unique identifier UT, which we will need for AMR (Article Match Retrieval) to return citation counts (though it is returned when exporting in HTML, for example).
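For the record, AMR is a simple XML-over-HTTP service keyed on identifiers like UT. The sketch below is written from memory of the service – the endpoint, namespace and field names are assumptions to be checked against Thomson Reuters’ AMR documentation, and any authentication details are omitted:

```python
import urllib.request

AMR_URL = "https://ws.isiknowledge.com/cps/xrpc"  # assumed endpoint

# Request timesCited for a single item identified by its UT.
# Structure and names are assumptions; replace UT_GOES_HERE with a real UT.
REQUEST = """<?xml version="1.0" encoding="UTF-8"?>
<request xmlns="http://www.isinet.com/xrpc41">
  <fn name="LinksAMR.retrieve">
    <list>
      <map/>
      <map>
        <list name="WOS"><val>timesCited</val></list>
      </map>
      <map>
        <map name="cite_1"><val name="ut">UT_GOES_HERE</val></map>
      </map>
    </list>
  </fn>
</request>"""

req = urllib.request.Request(AMR_URL, REQUEST.encode("utf-8"),
                             {"Content-Type": "application/xml"})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # raw XML including the citation count
```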

The long-term value of WSLite – via Bibliosight or some other implementation – would be in a more intuitive, integrated process for the end user. Though Bibliosight, perhaps, is not there yet, the project output will still provide value for the community – also, I think, as a case study for Thomson Reuters. While they certainly have their commercial imperatives, when we met with them back in September (and as I blogged at the time) I was given the impression that the company has been practising something of a balancing act, weighing their commercial interests against providing appropriate value-added services to their subscribers under existing licensing agreements.

A quick look at JorumOpen

As anyone with even a passing interest in UKOER will know, JorumOpen went live earlier this week and I, for one, was dying to see just what the good folk at Mimas and Edina have come up with in their customised DSpace installation (and possibly “borrow” one or two ideas for Leeds Met Open Search!).

JorumOpen Home is at http://open.jorum.ac.uk/xmlui/ and allows the user to browse OER by FE or HE subject; alternatively there are links to browse by Communities & collections/Issue date/Authors/Titles and Keyword. There is also a simple search box and a link to an Advanced search form.

The results page offers different functionality depending on the search – for example, browsing by subject heading displays “Recent Deposits” and allows the user a simple/advanced search, or a browse by Titles/Authors/Dates within that subject heading (I like this hierarchical search functionality); there is also an RSS button to subscribe to updates within the collection.

Results themselves comprise a hyperlinked title, author/author affiliation and date of deposit, as well as a thumbnail graphic where available.

The record page is worth looking at in detail (this item – http://open.jorum.ac.uk/xmlui/handle/123456789/567):

Show full item record (link) – Full Dublin Core metadata record

Share (AddThis button) – third-party social network service allowing the record to be emailed to a friend or posted to various social networking sites.

The simple record comprises:

Title/Author/Description/Keywords/Persistent Link/Date

Then there are three buttons:

“Export resource” requires a valid email address: “As some resources are quite large in size it can take some time to prepare them for download. Due to this we required you to supply a valid email address so that you can be notified when your download is ready.” An email then arrives from support@jorum.ac.uk informing you that “The item export you requested from the repository is now ready for download.” and including a link to download the compressed file, which comprises all files associated with the resource.*

“Preview content package” which allows the user to quickly view the different files and components of the resource in their browser without downloading (though it doesn’t work for .zip files)

“Download original content package” does exactly what it says on the tin and downloads a compressed file of all files associated with the resource.*

* I’m not entirely sure what the difference is between “export” and “download” – though the exported zip is bigger and contains more files (dublin_core.xml as well as imsmanifest.xml, for example) – maybe someone can enlighten me? In the meantime, comparing the file listings of the two archives directly might shed some light, as in the sketch below.
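A throwaway comparison using only the Python standard library (the zip file names are hypothetical):

```python
import zipfile

# List the contents of the "export" and "download" zips for the same item
export = set(zipfile.ZipFile("jorum_export.zip").namelist())
download = set(zipfile.ZipFile("jorum_download.zip").namelist())

print("only in export:  ", sorted(export - download))
print("only in download:", sorted(download - export))
print("in both:         ", len(export & download), "files")
```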

CC Licence Note – briefly explains implications of CC and links to relevant anchor later in the record.

Files in this item – allows the user to expand a list of files and download them individually (this particular item comprises 16 .zip and 2 .docx)

Creative Commons Licence – Link to relevant CC licence (opens in a nifty little window.)

Terms of service – Link to Jorum terms of service (also opens in a nifty little window)

This item appears in the following collections – linked to appropriate search terms in browse tree

Show full item record (repeated link from top of page) – Full Dublin Core metadata record

This item has been viewed x times – presumably counts visits to the record page

All in all, first impressions are pretty favourable and there are certainly some ideas that I would like to explore for Leeds Met Open Search. I’ve already included the AddThis button on the development server and plan to go live with it as soon as it has been approved by the powers that be (there are one or two issues with user tracking by this third-party service – Mike has disabled the Flash tracking that the widget injects into the page by default, but it will still track each click-through.)

I’m also keen to explore how we might manage packaged content in a similar way to JorumOpen (preview content and download options for individual files). Currently we have very little packaged content in the repository, and the default download link is just for an individual file; I do know that intraLibrary is able to manage content packages, however, and that a package download link is exposed by SRU, so I think we should be able to achieve this.

Browse by date (of deposit) should also be achievable, I think, but browse by author is a little more problematic via SRU (both for research and OER) as there is no authority file for authors.
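For reference, an SRU searchRetrieve request is just an HTTP GET with standard parameters, so a date-ordered query might look something like the sketch below – the base URL and CQL index name are assumptions to be checked against intraLibrary’s SRU documentation:

```python
import urllib.parse
import urllib.request

BASE = "https://repository.example.ac.uk/sru"  # hypothetical endpoint

params = urllib.parse.urlencode({
    "operation": "searchRetrieve",   # standard SRU parameters
    "version": "1.1",
    "query": 'dc.date >= "2009"',    # CQL; the index name is an assumption
    "maximumRecords": "20",
    "recordSchema": "dc",            # ask for Dublin Core records
})

with urllib.request.urlopen(BASE + "?" + params) as resp:
    print(resp.read().decode("utf-8")[:1000])  # first chunk of the SRU XML
```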

I’m not sure about recording page visits – will need to speak to Mike.

Now I just need to figure out the most efficient way of getting our UniCycle resources into JorumOpen – I will look at the deposit process in a later post (depositors can log in from JorumOpen Home via the UK Federation) and I think Jorum are still exploring harvesting RSS feeds from ukoer projects, though, as discussed in a recent post, our feed is not currently suitable for this.

Software announcement

I can finally announce that intraLibrary from Intrallect has been selected as the software platform for LeedsMet Repository.

Originally designed as a Learning Object repository, intraLibrary is the platform behind JORUM and will need some repurposing to also serve as an Open Access research archive. During our rigorous selection process and after careful liaison with Intrallect, however, we have been satisfied that such repurposing is achievable and that the software will ultimately provide the best all round solution for our requirements. We now join Oxford Brookes University’s CIRCLE project in using this software to implement a single repository for research outputs and Learning Objects.

intraLibrary will be implemented and configured over the next few months and I intend to start uploading research material almost immediately. An official launch, however, is still some way off while the necessary customisation is carried out. For demonstration purposes, priorities will be:

  • Development of appropriate workflows for ingest of research materials.
  • Integration of an SRU interface to facilitate open search and retrieval of research content.
  • Work with Intrallect to incorporate embargo functionality in line with publisher restrictions.
  • Work with Intrallect to incorporate report functionality (number of hits/downloads etc) that can be used in advocacy to the university community.

As initially prioritised in the project plan and due to the considerable amount of customisation to be undertaken, our early emphasis will be on research outputs; appropriate liaison will also continue within the university regarding LeedsMet repository and Learning Objects.

PERSoNA (Personal Engagement with Repositories through Social Networking Applications)

The JISC funded PERSoNA project will develop alongside our main repository project (software to be announced very soon!) and the related Streamline project with the main aim of building a community of users and promoting use of the repository amongst that community via social networking applications.

Such applications may include social network websites (Facebook is the most well known, but Nature Network is a comparable site exclusively for scientists), blogs, wikis and social bookmarking sites (e.g. del.icio.us).

We hope to facilitate an interactive environment whereby staff using the repository are able to connect with one another to recommend and share resources in a way that will mitigate the anonymity of the web and build a community of trust.

The first stage of the project is to identify a pilot group of stakeholders to consult on their current use of social networking applications.

Project updates will appear on the dedicated blog called, you guessed it, PERSoNA News.

Repository Patterns?

I have posted something that may be of interest to this project on the Streamline Blog here. I don’t know the complete remit of the repository, but it might be worth considering creating Repository Patterns as a project outcome. This is essentially drawing up guidelines for repository implementation and uptake, based on your experience of the process. Interesting or irrelevant – what do you think?
