Guest Editorial
Current Themes in Academic Library Assessment: Select
Papers from the 2010 Library Assessment Conference
Martha Kyrillidou
Senior Director, Statistics and Service Quality Programs
Association of Research Libraries
Washington, District of Columbia, United States
Email: martha@arl.org

Damon Jaggars
Associate University Librarian for Collections and Services
Columbia University
New York, New York, United States
Email: djaggars@columbia.edu
© 2013 Kyrillidou and Jaggars.
This is an Open Access article distributed under the terms of the Creative
Commons Attribution-Noncommercial-Share Alike License 2.5 Canada (http://creativecommons.org/licenses/by-nc-sa/2.5/ca/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
The 2010 Library Assessment Conference was the third
iteration of an ongoing partnership of three institutions, the Association of
Research Libraries (ARL), the University of Washington Libraries, and the
University of Virginia Library. The event was attended by more than 450
registrants and featured more than 68 peer-reviewed papers (Hiller, Justh, Kyrillidou, & Self, 2011) and enticing keynotes (Hiller, Kyrillidou, & Self, 2011). The Library Assessment conferences build upon ARL’s rich history of describing research libraries, traditionally through its statistics collection and more recently through the qualitative ARL Profiles (Potter, Cook, & Kyrillidou, 2011). These forums advance the
cause of assessment and data-driven decision making by engaging the academic
library community in an active and elevated discourse about the strategic and
policy issues that demonstrate the value of the library and its linkages to the
academy.
ARL’s historical trajectory includes decades of
gathering the ARL Statistics and the ARL Annual Salary Survey, but the need
to investigate new assessment measures began fermenting in the 1990s. In 1999,
the chair of the ARL Statistics and Assessment Committee, Carla Stoffle, initiated a gathering of ARL directors interested
in defining “New Measures” (Blixrud, 2003), which
became the foundation for the follow-up activities ARL pursued, as well as the
“Living the Future” conferences (Bowlby, 2011)
launched at the University of Arizona. In fall 2000, ARL sponsored the
Measuring Service Quality forum (Heath & Kyrillidou,
2001), where the latest thinking on assessment was brought together and
immortalized in a special issue of Library
Trends. At this gathering, innovative developments in library assessment
were presented, notably the new survey protocol branded as LibQUAL+®.
In early 2002, the ARL E-Metrics gathering was held in Arizona (Shim & McClure, 2002), with follow-up work that influenced the data collected on electronic resources and led ARL to become one of the founding members of COUNTER. In early 2001, ARL and OCLC co-sponsored a forum on academic library
performance in the digital age, which led to the implementation of the Balanced
Scorecard at the University of Virginia Library (Self, 2003). Later that year
Steve Hiller and Jim Self presented a review of library surveys at the 4th Northumbria International Conference on Performance
Measurement in Libraries and Information Services, an event co-sponsored by ARL
(Stein, Kyrillidou, & Davis, 2002). Following
this event, a collaboration between the University of Virginia, the University
of Washington, and ARL was initiated with the establishment of an onsite
consultation service, initially known as “Making Library Assessment Work” and
later recast as “Effective, Sustainable, and Practical Assessment” (Hiller, Kyrillidou, & Self, 2008). In 2005, Martha Kyrillidou, Steve Hiller, and Jim Self realized that the growing
library assessment community needed a venue for exchanging information and
ideas, both formally and informally. Thus was born the Library Assessment
Conference (see the conference website at www.libraryassessment.org). The success of the conference reflects the growth and success of academic libraries, institutions that are navigating transformative changes and positioning themselves to help users achieve both short-term research and learning goals and longer-term outcomes and lifelong achievement.
The papers
presented at the 2010 Library Assessment Conference demonstrated that the areas
where evidence based decision making is needed are multi-faceted and complex.
The following selection of articles features several emerging facets of library assessment and highlights activities that have shaped, and continue to shape, the ARL agenda. The collection opens with an article by Dupont and Yakel entitled “What’s so special about special collections? or, Assessing the value special collections bring to academic libraries,” which discusses one of the major challenges research libraries face in the 21st century: justifying and increasing the value of special collections activities in an era of diminishing resources. The authors “recommend
shifting from collection-centric to user-centric approaches and identifying
appropriately precise metrics that can be consistently and widely applied to
facilitate cross-institutional comparisons.” They suggest the potential
benefits of “substituting the commonly used ‘reader-day’ metric with a ‘reader-hour’ metric and correlating it with item usage data in order to gauge the intensity of reading room use.” They also “discuss attempts to assess the impact
of instructional outreach through measures of student confidence in pursuing
research projects that involve primary sources.” This discussion is timely as
the assessment of special collections is a key element of the ARL agenda and
efforts are underway to capture their value in more effective ways by
developing a set of simple annual indicators.
The transformations taking place in academic libraries are deep, and issues of organizational culture are explored in a set of three articles on implementations of the ClimateQUAL protocol, a standardized staff survey focused on organizational culture and diversity assessment that ARL has developed in collaboration with the University of Maryland. Phipps, Franklin, and Sharma examine a recent implementation at the
University of Connecticut; DeFrank and Hillyer describe the ClimateQUAL
implementation at the University of Nebraska-Omaha; and Mengel,
Smith, and Uzelac discuss their experience at Johns
Hopkins University. Our society is changing rapidly, and our professional ranks face pressure to reflect the increasing diversity of the larger society; issues of fairness, justice, and psychological safety are among the areas on which these assessment efforts focus. The overarching argument in this set of
articles is that a healthy organizational climate ultimately links to healthy
customer service.
Library service is key. Collections and access have merged in the minds of our users, and, when it comes to information discovery, product and process are seen as one and the same. In this environment it is imperative for libraries to have good data and to use it effectively for service improvement and marketing. The article by Porat
highlights some emerging organizational themes and activities at the University
of Haifa while discussing the relationship of marketing and assessment
functions and demonstrating how this relationship has been strengthened.
Marketing and assessment are two sides of the same coin: each needs the other, and yet the two do not face in the same direction. Assessment “tries” to be objective, while marketing (or advocacy) is unabashedly biased and partial to the message we want to promote. Having a straight face on both sides of the coin is important!
A set of four articles focuses on the measurement of library service
quality from the LibQUAL+® perspective (Harvey &
Lindstrom; Fox & Doshi; Neurohr,
Ackerman, O’Mahony, & White; and Rutner & Self). Library as place, a key dimension of service quality measured by LibQUAL+®, is explored by Harvey and Lindstrom and by Fox and Doshi, while Neurohr et al. focus on the analysis of the qualitative comments received through this survey protocol. Rutner and Self replicate an earlier study that demonstrated the insatiable appetite of faculty for journals, especially in a research library setting. No research library in the world will ever have enough journals for the voracious researchers at the top research institutions. Or do we foresee a world where this situation might change?
In the area of standardized protocols for assessing the merged environment where library and information technology have come together stands MISO (Measuring Information Services Outcomes). Developed within a group of liberal arts institutions whose libraries and IT services have merged, MISO provides a way to assess these merged services from the perspectives of faculty and students. A group of five collaborators
from sponsoring institutions analyze data collected by 38 colleges and smaller
universities that participated in the MISO Survey between 2005 and 2010. The survey gathers input from faculty, staff, and students about the importance of, use of, and satisfaction with campus library and computing services.
There is an increasing interest in capturing the value that libraries
provide to parent institutions, and a number of methods of assessing library value are being tested; both the Association of College and Research Libraries and ARL have broad-ranging initiatives under development to capture that value. A set of three papers, by Jubb, Rowlands, and Nicholas; by King and Tenopir; and by Jantti and Cox, presents examples of efforts to shift the focus of library assessment in this direction. These articles, representing three different continents and countries (UK/Europe, US/North America, and Australia), reflect a global movement towards capturing library value in terms of the fulfillment of institutional missions. They focus on library outcomes in relation to research, faculty productivity, and student performance, respectively. Work in this area is emerging, and most of it is taking place at the research and development stage,
though the example from Australia stands as a strong application of theory put
into practice, with a disciplined and collaborative process that demonstrates
where the future of library assessment activities may be heading.
This work on measuring library value also demonstrates the importance of
having a robust, stable, and well-architected technical infrastructure that
enables a better understanding of user behavior in an era when users
increasingly enter the “library” through its website. The work of Joe Zucca at the University of Pennsylvania stands as a fine
example of capturing useful data through sound data architecture and
demonstrates how linkages between resource allocation and user behavior can be
drawn. In his article, “Business intelligence infrastructure for academic
libraries,” Zucca poses a strategic challenge to ARL to be “an effective broker, providing a space for
potential partners to begin addressing the challenge of creating and governing
a critical new infrastructure for managing library services.” He envisions a world where libraries can share assessment data more easily to understand their users and to serve them and their institutional missions more effectively.
Lastly, Lewis, Mengel, Hiller, and Tolson describe the experience of four ARL libraries in
building library scorecards using the Balanced Scorecard framework. The article
covers “an introduction to the Balanced Scorecard and its key components; an
overview of the ARL initiative and the process used to develop scorecards at
each library; an exploration of the concept of a standardized suite of measures
for ARL libraries based on a commonality of key objectives; and a review of
organizational challenges faced by the sites during their implementations.” The
authors emphasize the importance of communication and organizational
development activities and conclude that “the Balanced Scorecard forces an
organization to have new, sometimes challenging, conversations
and to analyze aspects of its current and future state that may have otherwise
gone unexamined. Ultimately, the Scorecard may substantially shift an
organization’s strategic direction or dramatically change how its human capital
and other resources are allocated. The Scorecard is, by its very nature, a
change driver.”
Leading change is a theme articulated in all of the articles in this
collection. The authors demonstrate the continued commitment of libraries and
their staffs to push towards more “effective, sustainable and practical
assessment,” the enduring subtitle and underlying purpose of the Library
Assessment Conferences.
Acknowledgement
Preliminary versions of the papers published in this Feature section
were originally published in the Proceedings
of the 2010 Library Assessment Conference. See: http://libraryassessment.org/bm~doc/proceedings-lac-2010.pdf
References
Blixrud, J. (2003). Mainstreaming new measures. ARL: A Bimonthly Report on Research
Libraries Issues and Actions from ARL, CNI and SPARC, (230/231), 1-7.
Retrieved 18 May 2013 from http://www.arl.org/storage/documents/publications/arl-br-230-231.pdf
Bowlby, R. (2011). Living the future: Organizational
performance assessment. Journal of
Library Administration, 51(7-8), 618-644. doi:10.1080/01930826.2011.601267
Heath, F., & Kyrillidou, M. (2001). Introduction. Library
Trends, 49(4), 541-547.
Hiller, S., Justh, K., Kyrillidou, M., & Self, J. (Eds.). (2011). Proceedings of the 2010 Library Assessment
Conference. Washington, DC: Association of
Research Libraries. Retrieved 18 May 2013 from http://libraryassessment.org/bm~doc/proceedings-lac-2010.pdf
Hiller, S., Kyrillidou, M., & Self, J. (2008). When the evidence isn’t enough: Organizational factors that
influence effective and successful library assessment. Performance Measurement and Metrics, 9(3),
223-230. doi:10.1108/14678040810928444
Hiller, S., Kyrillidou, M., & Self, J.
(Eds.). (2011). Library Quarterly, 81(1),
3-128. Retrieved 18 May 2013 from http://www.jstor.org/stable/10.1086/657499
Potter, W.
G., Cook, C., & Kyrillidou, M. (2011). ARL profiles: Research libraries 2010.
Washington, DC: Association of Research Libraries. Retrieved 18 May 2013 from http://www.libqual.org/documents/admin/ARL_Profiles_Report_2010.pdf
Self, J. (2003). Using data to make choices: The Balanced Scorecard at the University of Virginia Library. ARL: A Bimonthly Report on Research Libraries Issues and Actions from ARL, CNI and SPARC, (230/231), 28-29. Retrieved 18 May 2013 from http://www.arl.org/storage/documents/publications/arl-br-230-231.pdf
Shim, W., & McClure, C. R. (2002). Improving database vendors’ usage statistics reporting through collaboration between libraries and vendors. College & Research Libraries, 63(6), 499-514. Retrieved 18 May 2013 from
http://crl.acrl.org/content/63/6/499.full.pdf
Stein, J., Kyrillidou, M., & Davis, D.
(Eds.). (2002). Proceedings of the 4th
Northumbria International Conference on Performance
Measurement in Libraries and Information Services, Pittsburgh, Pennsylvania,
August 12-16, 2001. Washington, DC: Association of Research Libraries.
Retrieved 18 May 2013 from http://www.libqual.org/documents/admin/4np_secure.pdf