Evidence Summary
Analysis of Question Type Can Help Inform Chat Staffing Decisions
A Review of:
Meert-Williston, D., & Sandieson, R. (2019). Online Chat Reference: Question Type and the Implication for Staffing in a Large Academic Library. The Reference Librarian, 60(1), 51–61. http://www.tandfonline.com/doi/full/10.1080/02763877.2018.1515688
Reviewed by:
Heather MacDonald
Health and Biosciences Librarian
MacOdrum Library
Carleton University
Ottawa, Ontario, Canada
Email: heather.macdonald@carleton.ca
Received: 7 Feb. 2020 Accepted: 30 Mar. 2020
© 2020 MacDonald.
This is an Open Access article distributed under the terms of the Creative
Commons‐Attribution‐Noncommercial‐Share Alike License 4.0
International (http://creativecommons.org/licenses/by-nc-sa/4.0/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
DOI: 10.18438/eblip29727
Abstract
Objective – To determine the types of online chat questions received in order to inform staffing decisions for the chat reference service, in light of the library’s service mandate.
Design – Content analysis of consortial online chat questions.
Setting – Large academic library in Canada.
Subjects – Analysis included 2,734 chat question transcripts.
Methods – The authors analyzed transcripts of chat questions from patrons at the institution for the period September 2013 to August 2014. They coded transcripts by question type using a coding tool they created. For transcripts that fit more than one question type, they chose the most prominent type.
Main Results – The authors coded the chat questions as follows: service (51%), reference (25%), citation (9%), technology (7%), and miscellaneous (8%). The majority of service questions were informational, followed by account-related questions. Most of the reference questions were ready reference; only 16% of them (4% of all chat questions) were in-depth. After removing miscellaneous questions, those that required a high level of expertise (in-depth reference, instructional, copyright, or citation) equaled 19%.
Conclusion – At this institution, one in five chat questions required a high level of expertise. Library assistants with sufficient expertise could effectively answer circulation and general reference questions. With training, they could triage complex questions.
Commentary
This evidence summary used the CAT critical appraisal tool (Perryman & Rathbun-Grubb, 2014).
The authors clearly state the objectives for this study. However, the library’s mandate is not stated explicitly. Because the authors’ conclusion takes their service mandate into account, a clear statement of that mandate would have been helpful. It appears that the mandate is to provide as high a level of expertise as possible (complete reference service) rather than simply directing users to resources. The literature review provides adequate background on staffing the reference desk, staffing chat reference, and whether question type should affect staffing regardless of medium.
The data cover an entire year, which provides a broad view of the types of questions asked at the institution. The authors developed a comprehensive coding scheme to evaluate the questions, which they provide in an appendix. However, they do not discuss how the coding scheme was developed, whether they pilot tested it for reliability, or whether the coding was done in duplicate.
Percentages and raw data are provided in tabular and graphic form. They are easy to read and present the results clearly. To calculate the proportion of questions requiring a high level of expertise, the authors remove the miscellaneous questions, arriving at 19%. It is not clear why the miscellaneous questions should be removed; when they are left in, the percentage drops to 17%.
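For readers checking the two figures, the arithmetic is consistent, assuming the removed miscellaneous category is the 8% reported in the results: the 17% of all questions requiring a high level of expertise, divided by the 92% remaining once miscellaneous questions are excluded, gives 0.17 / 0.92 ≈ 0.185, which rounds to the authors’ 19%.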
One potential limitation of this article is the absence of analysis by student status. However, this information may not be collected automatically. The authors note a potential critique of their study: the lack of a comparison of virtual and in-person questions. They suggest this would be an interesting study on its own. The Bishop and Bartlett (2013) study that the authors cite analyzed question type across a variety of media (chat, email, phone, and in-person). The authors also note that the types of questions asked may be influenced by the medium itself. Fennewald (2006) found that the distribution of question types differed between in-person and online questions.
The authors state that an institution should consider cost versus outcome when making staffing decisions for chat. However, they do not articulate what their cost and outcome variables are (presumably staffing costs and service quality, respectively). Including a statement such as the following would have summed up their study nicely: with fewer than 20% of questions requiring a high level of expertise, the library can maintain a high-quality chat service by staffing it with trained library assistants rather than librarians. The authors do discuss other factors that could influence staffing decisions in addition to question type: total staff, staff expertise levels, library service mandate, and patron expectations. This is noteworthy because studies mentioned in the literature review found similar question-type distributions but used different staffing models.
This paper adds a comprehensive analysis of chat question types to the growing body of literature. Question type can help determine staffing for chat, but other factors should also be considered.
References
Bishop, B. W., & Bartlett, J. A. (2013). Where do we go from here? Informing academic library staffing through reference transaction analysis. College & Research Libraries, 74(5), 489–500. https://doi.org/10.5860/crl-365
Fennewald, J. (2006). Same questions, different venue: An analysis of in-person and online questions. The Reference Librarian, 46(95/96), 20–35. https://doi.org/10.1300/J120v46n95_03
Perryman, C., & Rathbun-Grubb, S. (2014). The CAT: A generic critical appraisal tool. In JotForm – Form Builder. Retrieved from http://www.jotform.us/cp1757/TheCat