Evidence Summary
Faculty Decisions on Serials Subscriptions Differ Significantly from
Decisions Predicted by a Bibliometric Tool
A Review of:
Knowlton, S. A., Sales, A. C., & Merriman, K. W. (2014). A comparison of faculty and bibliometric valuation of serials subscriptions at an academic research library. Serials Review, 40(1), 28-39. http://dx.doi.org/10.1080/00987913.2014.897174
Reviewed by:
Sue F. Phelps
Reference Librarian
Washington State University Vancouver Library
Vancouver, Washington, United States of America
Email: asphelps@vancouver.wsu.edu
Received: 3 Dec. 2015 Accepted: 2 Feb. 2016
© 2016 Phelps.
This is an Open Access article distributed under the terms of the Creative
Commons Attribution-Noncommercial-Share Alike 4.0 International License
(http://creativecommons.org/licenses/by-nc-sa/4.0/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
Abstract
Objective – To compare faculty choices of serials subscription
cancellations to the scores of a bibliometric tool.
Design – Natural experiment. Data were collected about faculty valuations of serials. The
California Digital Library Weighted Value Algorithm (CDL-WVA) was used to
measure the value of journals to a particular library. These two sets of scores
were then compared.
Setting – A public research university in the United States
of America.
Subjects – Teaching and research faculty, as well as serials
data.
Methods – Experimental methodology was used to compare
faculty valuations of serials (based on their journal cancellation choices) to
bibliometric valuations of the same journal titles (determined by CDL-WVA
scores), in order to identify the match rate between the faculty choices and the
bibliometric data. Faculty were asked to select titles to cancel that totaled
approximately 30% of the budget for their disciplinary fund code. This “keep”
or “cancel” choice was the binary variable for the study. Usage data were
gathered for articles downloaded through the link resolver for titles in each
disciplinary dataset, and CDL-WVA scores were determined for each journal
title based on utility, quality, and cost effectiveness.
Titles within each
dataset were ranked from highest to lowest CDL-WVA score within each fund
code, with subscription cost used to break ties between titles with the same
score. The journal titles treated as “keep” decisions for the comparison were
those ranking above the approximately 30% marked for cancellation, under both
the faculty choices and the CDL-WVA rankings.
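To make this ranking and matching procedure concrete, the sketch below reconstructs it in Python with pandas. It is illustrative only: the column names (fund_code, cdl_wva, cost, faculty_choice) are hypothetical, the tie-break direction on subscription cost is an assumption the article does not specify, and the authors' actual code is not published.

import pandas as pd

def bibliometric_decisions(df: pd.DataFrame, cancel_share: float = 0.30) -> pd.Series:
    """Within each fund code, rank titles by CDL-WVA score (descending,
    with subscription cost breaking ties) and mark the lowest-ranked
    titles 'cancel' until they total ~cancel_share of the fund's budget."""
    decisions = pd.Series(index=df.index, dtype="object")
    for _, group in df.groupby("fund_code"):
        # Ascending cost as the tie-break is an assumption.
        ranked = group.sort_values(["cdl_wva", "cost"], ascending=[False, True])
        budget = ranked["cost"].sum()
        cancelled_cost = 0.0
        for idx in ranked.index[::-1]:  # walk up from the bottom of the ranking
            if cancelled_cost < cancel_share * budget:
                cancelled_cost += ranked.at[idx, "cost"]
                decisions.loc[idx] = "cancel"
            else:
                decisions.loc[idx] = "keep"
    return decisions

The match rate then corresponds to the share of titles on which the two decisions agree, e.g. (bibliometric_decisions(df) == df["faculty_choice"]).mean().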
The researchers estimated
the odds ratio between a faculty choice to keep a title and a CDL-WVA score
indicating that the title should be kept. The p-value
for that result was less than 0.0001, indicating a negligible probability
that the association arose by chance. They also applied logistic
regression to quantify the association between the numeric CDL-WVA score and
the binary faculty choice; the p-value for this relationship was also less than 0.0001.
A quadratic model plotted alongside the linear model followed a similar
pattern, with a p-value of 0.0002, indicating that the quadratic
model’s fit was likewise unlikely to be explained by chance.
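The reported statistics can be outlined with standard tools, as in the following self-contained sketch on synthetic data. It uses Fisher's exact test as one common way to obtain an odds ratio with a p-value, and a likelihood-ratio test as one standard way to compare the nested linear and quadratic logistic models; the study does not publish its code, so every variable name here is illustrative.

import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2, fisher_exact

rng = np.random.default_rng(0)
score = rng.uniform(0.0, 1.0, 500)                  # stand-in CDL-WVA scores
p_keep = 1.0 / (1.0 + np.exp(-(4.0 * score - 2.0)))
faculty_keep = rng.binomial(1, p_keep)              # binary faculty choice
cdl_keep = (score > 0.4).astype(int)                # stand-in CDL-WVA 'keep' flag

# 2x2 agreement table between the two binary decisions, then the
# odds ratio and its p-value via Fisher's exact test.
table = np.array([[np.sum((faculty_keep == 1) & (cdl_keep == 1)),
                   np.sum((faculty_keep == 1) & (cdl_keep == 0))],
                  [np.sum((faculty_keep == 0) & (cdl_keep == 1)),
                   np.sum((faculty_keep == 0) & (cdl_keep == 0))]])
odds_ratio, or_pvalue = fisher_exact(table)

# Logistic regression of the faculty choice on the numeric CDL-WVA score.
linear_fit = sm.Logit(faculty_keep, sm.add_constant(score)).fit(disp=False)

# Quadratic alternative, compared to the linear model with a
# likelihood-ratio test on the extra squared term.
X_quad = sm.add_constant(np.column_stack([score, score ** 2]))
quad_fit = sm.Logit(faculty_keep, X_quad).fit(disp=False)
lr_stat = 2.0 * (quad_fit.llf - linear_fit.llf)
lr_pvalue = chi2.sf(lr_stat, df=1)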
Main Results – The
authors highlight three principal findings. First, the match rate between
faculty valuations and bibliometric scores for serials was 65%. This exceeds the
50% rate that would indicate random association, but it also indicates a
statistically significant difference between faculty and bibliometric
valuations. Second, the match rate with the bibliometric scores was higher for
titles faculty chose to keep (73%) than for those they chose to cancel
(54%). Third, the match rate increased with higher bibliometric scores.
Conclusions – Though
the authors identify only a modest degree of similarity between faculty and
bibliometric valuations of serials, they note greater agreement among
higher-valued serials than among lower-valued ones. With that in mind,
librarians might focus future faculty review on the lower-scoring titles,
bearing in mind that unique faculty interests may drive selection at that
level and would need to be balanced against the mission of the library.
Commentary
With the rising cost of serials and the repeated need to
choose what to keep and what to cut, the authors of this study present a
distinctive process for involving faculty in the decision-making. They state
that faculty selector models for monographs have historically been
“conceptual rather than data driven” (p. 29); however, librarians have
designed data-driven tests to compare library selections to those of faculty.
Though the literature reports ways of integrating faculty choices into serials
decisions, there have been no experiments comparing faculty valuations to
bibliometric valuations. It was with this intention that the authors designed
this study, which was set in a local context and could be replicated at other
institutions.
The researchers chose
the California Digital Library Weighted Value Algorithm (CDL-WVA) to assess the
value of the journals in this study because it integrates multiple datasets,
including local usage, local citations, journal ranking measures, and cost
effectiveness. Because it is designed to measure the value of journals for a
specific library, rather than value in general, it was judged an appropriate
comparator for this study.
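In outline, a weighted value algorithm of this kind reduces to a weighted combination of normalized factor scores. The following sketch is a generic illustration only; the weights are hypothetical stand-ins, not the CDL's published formula.

def weighted_value(utility: float, quality: float,
                   cost_effectiveness: float,
                   weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Combine normalized (0-1) factor scores into one value score.
    The weights are hypothetical, not CDL-WVA's actual values."""
    w_u, w_q, w_c = weights
    return w_u * utility + w_q * quality + w_c * cost_effectiveness

# e.g. weighted_value(0.8, 0.6, 0.7) -> 0.71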
The study was evaluated
using Glynn’s (2006) critical appraisal checklist for library and information
research. The overall score was 84%, indicating that the study is valid.
The population, data collection, study design, and results sections rated 75%,
80%, 100%, and 83% respectively, all within the range of validity. The
complexity of the methodology weighed heavily in the scoring of the results.
Though the researchers
express regret that the study lacked actionable conclusions, they present an
interesting approach to serials valuation and to faculty participation in
serials selection. They make a compelling case for using the CDL-WVA as a
bibliometric tool and show how its data can be used to make a fair comparison
with faculty valuations.
There are some minor
concerns, however. One is the two-year gap between the data collected from
faculty and the data from the link resolver. Faculty choices depend heavily
on the specific faculty members who responded to the library’s request for
cancellation choices, and those responses may change within two years with
faculty turnover, shifting research interests, and the courses being taught at
the time. Even so, the rationale that the faculty choices were predictive is
reasonable under these circumstances. Though the article describes the
methodology in detail, some readers may want additional detail on the dispersal
of journal funds across disciplines at the institution studied, and on how
varied journal usage across disciplines factors into the analysis of the data.
These concerns in no
way detract from the value of the methodology. Overall, this study offers a
model for serials evaluation that other libraries could replicate, as further
serials cuts are likely in the future.
Reference
Glynn, L. (2006). A critical appraisal tool for library and information research. Library Hi Tech, 24(3), 387-399. http://dx.doi.org/10.1108/07378830610692154