ER&L 2011 sold out well in advance of the registration deadline and attracted over 400 attendees (huge for this conference). There was a very active Twitter presence under the hashtag #erl11; ER&L organizers tweeted the conference under the Twitter name @ERandL. For a while, at least, you can see the entire conference Twitter stream at:
See data about the Twitter stream at:
"LibX 2.0: New Realities, Directions, and Possibilities"
Kyrille Goldbeck and Sony Vijay, Virginia Tech
LibX 2.0 is coming. The new version supports all the features of LibX 1.0 and adds many more, including options for full and simple views and a simplified build process that makes updates much easier for both administrators and LibX users. Users will be able to switch editions on the fly, which could be a boon for collection development work: quickly switch editions to see whether another library has purchased an item you are contemplating. The first release will support Firefox and Chrome. When asked about IE support, the presenters indicated that newer versions of IE have made it very difficult to keep even existing LibX functionality working, much less add more. [Having built and tested LibX 1.0 for IE, I can testify that this is true: it's complicated to install and touchy to work with if you have any expectation of consistency.] The bigger news in this session was the upcoming release of the LibApp builder, a tool that will allow non-programmers to create LibX-like apps and publish them for their own libraries and the LibX community. This is awesome news, and I can't wait to try out the builder when it's released.
"Metrics-Based Journal Value Analysis"
Chan Li, California Digital Library
The presenter reported on an overall weighted value metric and algorithm in use at the University of California Libraries since 2009. The metric is designed to account for many individual journal metrics in the marketplace (including Impact Factor, Eigenfactor, cost per use, and many others) and is weighted to even out the influence of any single metric in an evaluation of journal quality and necessity. This work is important because it attempts to bring a scientific approach to journal title review, accounts for multiple methods of measuring journal quality, and emphasizes analysis of the information produced by the algorithm rather than the mere collection of data.
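The general shape of such a weighted metric can be sketched as follows. This is an illustration only, with invented metric names, weights, and numbers; it assumes simple min-max normalization and does not reproduce the actual CDL algorithm or its weighting scheme.

```python
# Hypothetical sketch of a weighted journal value score. Metrics are
# normalized to 0-1 so no single metric dominates, then combined with
# weights. All names and numbers here are made up for illustration.

def normalize(values):
    """Scale a list of raw metric values to the 0-1 range (min-max)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def weighted_value(journals, metrics, weights):
    """Return {title: score}, the weighted sum of normalized metrics.

    `metrics` maps metric name -> {title: raw value};
    `weights` maps metric name -> weight (assumed to sum to 1).
    """
    normalized = {}
    for name, per_title in metrics.items():
        titles = list(per_title)
        scaled = normalize([per_title[t] for t in titles])
        for title, norm in zip(titles, scaled):
            normalized.setdefault(title, {})[name] = norm
    return {t: sum(weights[m] * normalized[t].get(m, 0.0) for m in weights)
            for t in journals}

# Made-up example; uses-per-dollar is oriented so that higher always
# means "more valuable".
metrics = {
    "impact_factor":   {"J1": 2.5, "J2": 0.8},
    "uses_per_dollar": {"J1": 0.9, "J2": 3.1},
}
weights = {"impact_factor": 0.5, "uses_per_dollar": 0.5}
scores = weighted_value(["J1", "J2"], metrics, weights)
```

The point of the normalization step is exactly the "evening out" the presenter described: a metric with a large raw range (like cost) cannot swamp one with a small range (like Impact Factor).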
"Tipping the Cow: Reorganizing Staff to Support Electronic Resources"
John McDonald, Claremont Colleges Library
The presenter discussed the well-known fact (at least among those who handle and support electronic resources) that our technical services departments are still mostly configured to handle print collections. To address this problem the Claremont Colleges Library underwent a radical reorganization led by its administration. Through retirement incentives, job reassignments, and withdrawing all budgetary support for print-based work (no more binding, journal issue check-in, microform purchases, or processing of printed government documents, among other activities), the Library gradually changed its cultural focus from supporting print to supporting electronic resources. This support extends all the way through Library Instruction, the work of former subject specialists, and a new approach to collections management. The presenter pointed out that, even in a more traditionally organized library, all staff still come into contact with e-resources problems. Unless e-resources knowledge is distributed beyond the e-resources librarian or staff, that person becomes an unwilling bottleneck in getting e-resources problems fixed. He emphasized that all staff need some e-resources knowledge to remove such bottlenecks and best support users.
Standards Panels and Updates:
KBART (Knowledge Bases And Related Tools):
Chad Hutchens, University of Wyoming Libraries; Julie Zhu, AIP
KBART is a recommended practice for exchanging holdings information for knowledgebases. The idea is to prevent the need for e-resources personnel to compare publisher and agent holdings information multiple times before recording access in a knowledgebase (such as an ERMS or a link resolver). Current participants are available at the KBART registry: http://sites.google.com/site/kbartregistry/
Holdings information from registrants is available via tab-delimited text files in order to maintain accessibility for all institutions. A long-term goal of the project is automated transfer of this information, possibly via XML files, along the lines of the SUSHI model. Publishers and platform providers also face problems with providing correct holdings information, especially when dealing with backfiles or small publishers that have no standard approach to metadata management.
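Because KBART files are plain tab-delimited text with a header row, they can be consumed with nothing more than a standard CSV reader. A minimal sketch, assuming a handful of field names from the KBART recommended practice (e.g. publication_title, print_identifier, date_first_issue_online); the sample row is invented:

```python
# Read a KBART tab-delimited holdings file into one dict per title.
# The column names follow the KBART recommended practice; the data
# below is a fabricated example, not a real holdings feed.
import csv
import io

sample = (
    "publication_title\tprint_identifier\tonline_identifier\t"
    "date_first_issue_online\tdate_last_issue_online\ttitle_url\n"
    "Journal of Examples\t1234-5678\t8765-4321\t1995\t2010\t"
    "http://example.com/joe\n"
)

def read_kbart(handle):
    """Yield one dict per title row, keyed by the header fields."""
    return list(csv.DictReader(handle, delimiter="\t"))

rows = read_kbart(io.StringIO(sample))
```

This simplicity is the point of the tab-delimited requirement: any institution can produce or consume a KBART file without special tooling, which is why the richer automated (possibly XML-based) transfer is only a long-term goal.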
IOTA (Improving OpenURLs Through Analytics)
Adam Chandler, Cornell University
The IOTA project has amassed quite a data set of incoming OpenURLs at http://www.openurlquality.org/. The goal of IOTA is to give OpenURL suppliers (e.g. citation databases, among others) incentive to supply high-quality OpenURLs that will allow users to get to the items they want. The presenter cited a 2010 study published in Library Technology Reports (Trainor & Price, 46(7)) that demonstrates the need for OpenURL metrics. According to the study, of failed OpenURL requests, 1/3 of errors are caused by local link resolvers, 1/3 are caused by poor-quality incoming OpenURLs, and 1/3 have other causes. Out of an estimated 1 billion OpenURLs generated daily (H. van de Sompel), even 1/3 is a significant number that can be influenced by OpenURL metrics. Attendees and their institutions were encouraged to examine the data freely available on the project's site and to contribute their own link resolver data to the project. A measurement tool that assigns a score to producers of OpenURLs, based on how likely OpenURLs from that source are to get users to their desired items, is slated for release later this year.
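To make the "quality of an incoming OpenURL" idea concrete: an OpenURL is essentially a key/value query string, so one crude proxy for quality is how many core citation elements it actually carries. The toy check below is not the IOTA scoring algorithm (which was unreleased at the time); it only assumes some common KEV field names from the OpenURL 1.0 standard, such as rft.jtitle and rft.issn.

```python
# Toy completeness check for an OpenURL key/value (KEV) query string.
# NOT the IOTA measurement tool; just an illustration of rating an
# OpenURL by how many citation elements it supplies.
from urllib.parse import parse_qs

KEY_FIELDS = ["rft.jtitle", "rft.atitle", "rft.issn",
              "rft.volume", "rft.issue", "rft.spage", "rft.date"]

def completeness(openurl_query):
    """Return the fraction of KEY_FIELDS present with a non-empty value."""
    params = parse_qs(openurl_query)
    present = sum(1 for f in KEY_FIELDS if params.get(f, [""])[0].strip())
    return present / len(KEY_FIELDS)

# Fabricated example: 5 of the 7 fields are present.
q = ("rft.jtitle=Serials+Review&rft.issn=0098-7913"
     "&rft.volume=37&rft.spage=1&rft.date=2011")
score = completeness(q)
```

A real scorer would also have to weigh which fields matter for which genre (articles need a start page, books need an ISBN), which is presumably part of what makes the forthcoming tool more than a field count.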
CORE (Cost Of Resource Exchange)
Bob McQuillan, Innovative Interfaces
CORE is a NISO recommended practice for exchanging cost data between systems. While the practice was originally conceived to facilitate transfer of cost data between an ILS and an ERMS, the need for this protocol goes beyond these two systems alone. CORE is functional now but has not seen much adoption. The presenter emphasized that adoption takes place when providers of cost information and system providers are aware of customer demand. Attendees were encouraged to talk to their vendors and system suppliers if interested in seeing them adopt and comply with CORE.
ESPReSSO (Establishing Suggested Practices Regarding Single Sign-On)
Heather Staines, Springer; David Kennedy, Johns Hopkins University Libraries
The presenters discussed the goals and limits of the project. The goals are to improve the single sign-on (SSO) experience for users by standardizing its terminology and user interfaces and to establish best practices for SSO relationships among customers, publishers, and aggregators. The project is not attempting to create a new SSO technology or to standardize the technology of SSO. Current SSO examples in the marketplace are Shibboleth and Athens; at K-State we use neither of these.
Slides available on request:
ER&L provides attendees with a flash drive containing most of the conference session slides. If you would like to see slides from these or other sessions, please contact me. The full conference program can be found at http://www.electroniclibrarian.com/conference/2011