Evidence Summary
A Review of:
Mawhinney, T., & Hervieux, S. (2022). Dissonance between Perceptions
and Use of Virtual Reference Methods. College & Research Libraries, 83(3),
503–525. https://doi.org/10.5860/crl.83.3.503
Reviewed by:
Kathy Grams
Associate Professor of Pharmacy Practice
Massachusetts College of Pharmacy and Health Sciences
Boston, Massachusetts, United States of America
Email: kathy.grams@mcphs.edu
Received: 22 Aug. 2023    Accepted: 11 Nov. 2023
© 2023 Grams.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.
DOI: 10.18438/eblip30426
Objective – To investigate the differences between users’ perceptions of virtual reference tools (chat, email, and texting) and how these tools are actually used.
Design – Multimodal research comprising a descriptive summary of user perspectives on virtual reference tools and a descriptive and correlational analysis of coded transcript characteristics (question complexity, question category, and the presence of a reference interview and of instruction) compared across the virtual reference methods.
Setting – A large university library in Montréal, Québec,
Canada.
Subjects – A summary of in-person interview results from 14
virtual reference users and a sample of chat (250), email (250), and texting
(250) transcripts.
Methods – The authors describe their research as part of a larger project. In Phase One, which was published in a previous report,1 the first author interviewed 14 users about their preferences among virtual reference tools and the factors that influenced their use. Participants were interviewed in fall 2019 and were eligible if they had used one or more virtual reference methods. In Phase Two, users’ perceptions of virtual reference tools were compared with an analysis of question complexity in a sample of chat, email, and texting transcripts. Transcripts were collected from January 1, 2018, to December 31, 2019. Each text conversation was grouped as a single transcript. A total of 250 text transcripts were collected and matched in number with random samples of chat and email transcripts, for 750 transcripts in all. The transcripts were coded by question type, question complexity, and the presence of a reference interview and of instruction. The READ Scale was used to categorize questions by complexity, and questions rated READ 3 or above were deemed complex. A codebook was used for consistency and intercoder reliability. A random 10% of transcripts were coded by both authors with 84% agreement; after discussion, agreement reached 100%. The remaining 90% of the transcripts were coded by the first author. The chi-square test of independence (χ²) was used to determine whether the frequency of each coded category differed by delivery method, and Cramér’s V was used to determine the strength of the associations.
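To illustrate the intercoder reliability check described above, a minimal sketch in Python follows; the individual coder ratings shown are hypothetical, since the study reports only the aggregate agreement figures:

def percent_agreement(codes_a, codes_b):
    # Share of items assigned the same code by both coders
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Hypothetical READ Scale ratings from two coders on the same 10 transcripts
coder_1 = [3, 2, 3, 4, 1, 3, 2, 3, 5, 3]
coder_2 = [3, 2, 3, 3, 1, 3, 2, 4, 5, 3]
print(f"{percent_agreement(coder_1, coder_2):.0%}")  # prints 80%

In the study, disagreements on the shared 10% of transcripts were then resolved through discussion, bringing agreement to 100% before the first author coded the remainder.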
Main Results – The authors state that the main findings signify “dissonance between users’ perceptions of virtual reference methods and how they actually use them.” Results from the user interviews suggest that participants felt chat and texting should be used for basic questions and email for more complex ones. They appreciated the quick answers from texting for questions such as library hours and the back-and-forth nature of chat for step-by-step instruction, but did not believe these methods were suited to complex questions. Participants expressed that an email to the library liaison, rather than to the library’s general email address, is best for research questions. Of note, library liaison emails were not collected as part of the virtual reference tools for this research project. The results from the transcript evaluation revealed that chat interactions were in fact used for complex questions, as reflected by the READ Scale ratings. Questions were categorized from READ 1 (requiring the least amount of effort) to READ 5 (requiring considerable effort and time) with the following results: READ 1 - 0% chat, 0% email, 13% text; READ 2 - 4% chat, 8% email, 43% text; READ 3 - 72% chat, 75% email, 38% text; READ 4 - 20% chat, 15% email, 6% text; and READ 5 - 4% chat, 2% email, 0% text.
The authors demonstrated a moderate strength of association between the delivery method and the READ Scale rating (V = 0.41), the presence of a reference interview (V = 0.43), the question category (V = 0.34), and the presence of instruction (V = 0.21). There were significant differences in complexity between delivery methods (p < 0.001): email and chat transcripts were more complex than text transcripts, and chat transcripts were marginally more complex than email. Chat transcripts also more frequently included a reference interview and instruction (p < 0.001). The types of questions were divided into 10 categories: reference/research, library systems, problem with access, interlibrary loan, known item, access policies, collection acquisitions, library physical facilities, hours, and other. The most popular question types for chat transcripts were reference/research questions (24%), library systems (17%), problem with access to e-resources (14%), interlibrary loans (14%), and known items (13%). The most popular question types for email were reference/research (18%), library systems (16%), problem with access (15%), and access policies (16%). The most popular for text transcripts were reference/research (15%), library systems (18%), library physical facilities (18%), and hours (16%).
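The association between delivery method and complexity can be approximately reproduced from the rounded READ Scale percentages above. The following is a minimal sketch in Python using SciPy; the counts are reconstructed from the reported percentages (roughly 250 transcripts per method) and are therefore approximations, not the authors’ raw data:

import numpy as np
from scipy.stats import chi2_contingency

# Approximate counts per READ level (columns READ 1-5), reconstructed
# from the rounded percentages reported above (~250 transcripts per method)
observed = np.array([
    [0, 10, 180, 50, 10],    # chat
    [0, 20, 188, 38, 5],     # email
    [33, 108, 95, 15, 0],    # text
])

chi2, p, dof, expected = chi2_contingency(observed)

# Cramer's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))
n = observed.sum()
v = (chi2 / (n * (min(observed.shape) - 1))) ** 0.5
print(f"chi2 = {chi2:.1f}, p = {p:.3g}, V = {v:.2f}")

Under these reconstructed counts, V comes out near the reported 0.41, consistent with a moderate association between delivery method and question complexity.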
Conclusion – Mawhinney and Hervieux establish that dissonance exists between users’ perceptions of virtual reference services and their use of them. After researching the types of questions and the level of complexity associated with each virtual reference tool, the authors provide a list of practical implications for improving documentation and workflow and make suggestions for staffing needs. They recommend offering multiple reference methods, training staff on the reference interview and on the virtual methods chosen, advertising virtual resources, and making chat available in the places on the website where users conduct research. They found that their institution had a high number of questions categorized as access policies, and they suggested that easier ways to report problems be considered.
Commentary

This research was appraised using the critical review form for qualitative studies developed by Letts et al. (2007).
Mawhinney and Hervieux conducted a comprehensive literature review on perceptions of virtual reference services. They describe a conflict in the literature regarding the use of chat as a virtual tool: chat has been reported both as unsuitable for reference and research questions and as acceptable for all types of questions, and chat exchanges have been reported to be both more and less complex than email exchanges. The justification for their research was clear, and the participant users and virtual tool samples were clearly described.
The authors discuss appropriate limitations to their research. One limitation they mention is that transcripts of library liaison emails were not collected as part of the research project; they conclude that such emails would likely be rated as more complex on the READ Scale and suggest that further investigation would be needed to confirm this.
The authors note that this research was done at one institution and may not apply to others. They mention that the perception of question complexity, that is, how users perceived their own questions, may vary among users. Another limitation concerns transcript collection: in August 2019, McGill Library moved from QuestionPoint to LibChat, and these two virtual reference services differ in delivery and in the way they account for text transactions. The authors state that they accounted for these differences by including an equal number of chats, emails, and texts from QuestionPoint and LibChat.
The last limitation mentioned was that the study collected perceptions of virtual tools prior to the COVID-19 pandemic. The authors note that the use of all virtual tools increased during the pandemic, that the staffing of virtual reference was reevaluated, and that there was a need to make virtual reference more visible. What the authors did not mention is that perceptions of virtual reference may have changed after COVID-19 as well; users may have adapted.
The study aimed to investigate differences between users’ perceptions of chat, email, and texting as virtual reference tools and their actual use of them, and its design raises the possibility of other limitations.
Mawhinney and Hervieux (2022) describe McGill University at the time of publication as a publicly funded institution with an enrollment of 40,000 students. Participants were recruited through online and on-campus solicitation and were described as both men and women; as undergraduate (5), master’s (4), and doctoral (2) students; and as faculty (2) and alumni (1). The sample size was described by the first author as being based on “theoretical saturation,” with interviews discontinued when the author did not gain “additional insights” from them.1 The interview questions were appropriate for eliciting how question type influenced the user’s choice of virtual reference method. However, 14 participants is a very small sample and may not reflect the perceptions of users at an institution of this magnitude, which is a potential source of bias.
Potential bias is also reflected in the analysis of the text, chat, and email transcripts. Not all questions were coded independently by two authors: the first author, who interviewed all participant users, coded 90% of the transcripts. Although there was an 84% match on the first 10% of questions coded, with 100% agreement after discussion, it is possible that the percent complexity per virtual reference tool could shift slightly in the overall results. This, however, is unlikely to change the overall message. A count and a measure of complexity also do not indicate whether the user obtained a complete answer or whether the problem was resolved after using the virtual tool. The authors state that transcripts were assessed for level of complexity, question category, and the presence of a reference interview and instruction, but there is no description of whether the use of the virtual tool was successful. A user could potentially use text or chat for a complex question and then move to the general library email or a library liaison email because they did not receive a complete answer or their issue was not resolved. This may be outside the scope of this research. Mawhinney and Hervieux provide suggestions useful to library practice that can help address this.
The authors suggest improved policies and workflows. They recommend that librarians staff virtual reference services such as chat and general email, and that library assistants and/or students staff text reference. Given the number of complex questions, the authors suggest that more training is needed on the reference interview and on the chosen methods of virtual reference, and that the user be made aware when a question needs to be transferred to a subject specialist. They support placing virtual reference tools where users conduct library research.
Librarians can have an impact on virtual reference services, including how they are used, where they are located, and how they are staffed to respond to complex questions.
References

Letts, L., Wilkins, S., Law, M., Stewart, D., Bosch, J., & Westmorland, M. (2007). Critical review form – Qualitative studies (version 2.0). http://www.peelregion.ca/health/library/eidmtools/qualreview_version2_0.pdf

Mawhinney, T. (2020). User preferences related to virtual reference services in an academic library. The Journal of Academic Librarianship, 46(1), 102094. https://doi.org/10.1016/j.acalib.2019.102094

Mawhinney, T., & Hervieux, S. (2022). Dissonance between Perceptions and Use of Virtual Reference Methods. College & Research Libraries, 83(3), 503–525. https://doi.org/10.5860/crl.83.3.503