Research Article
Local Users, Consortial Providers: Seeking Points of Dissatisfaction with a Collaborative Virtual Reference Service
Kathryn Barrett
Social Sciences Liaison Librarian
University of Toronto Scarborough Library
Toronto, Ontario, Canada
Email: kathryn.barrett@utoronto.ca
Sabina Pagotto
Client Services and Assessment Librarian
Scholars Portal, Ontario Council of University Libraries
Toronto, Ontario, Canada
Email: sabina@scholarsportal.info
Received: 16 Aug. 2019 Accepted: 7 Oct. 2019
© 2019 Barrett and Pagotto. This
is an Open Access article distributed under the terms of the Creative Commons‐Attribution‐Noncommercial‐Share Alike License 4.0
International (http://creativecommons.org/licenses/by-nc-sa/4.0/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
DOI: 10.18438/eblip29624
Abstract
Objective – Researchers at an academic library consortium examined whether the
service model, staffing choices, and policies of its chat reference service
were associated with user dissatisfaction, aiming to identify areas where the
collaboration is successful and areas which could be improved.
Methods – The
researchers examined transcripts, metadata, and survey results from 473 chat
interactions originating from 13 universities between June and December 2016.
Transcripts were coded for user, operator, and question type; mismatches
between the chat operator's and the user's institutions, and reveals of such a
mismatch; how busy the shift was; proximity to the end of a shift or service
closure; and reveals of such aspects of scheduling. Chi-square tests and a
binary logistic regression were performed to compare variables to user dissatisfaction.
Results – There
were no significant relationships between user dissatisfaction and user type,
question type, institutional mismatch, busy shifts, chats initiated near the
end of a shift or service closure time, or reveals about aspects of scheduling.
However, revealing an institutional mismatch was correlated with user
dissatisfaction. Operator type was also a significant variable; users expressed
less dissatisfaction with graduate student staff hired by the consortium.
Conclusions – The study largely reaffirmed the consortium’s service model, staffing
practices, and policies. Users are not dissatisfied with the service received
from chat operators at partner institutions, or with service provided by
non-librarians. Current policies for scheduling, handling shift changes, and
service closure are appropriate, but best practices related to disclosing
institutional mismatches may need to be changed. This exercise demonstrates
that institutions can trust the consortium with their local users’ needs, and
underscores the need for periodic service review.
Introduction
Chat
reference has become increasingly common since its inception in the mid-1990s,
and is now an integral part of library reference services (Radford & Kern,
2006). A study by Yang and Dalal (2015) found that
48% of college and university libraries in North America offer a chat service.
Almost a quarter of these libraries provide chat service through a consortium,
and the trend toward collaboration is increasing (Pomerantz, 2006; Yang & Dalal, 2015).
Chat reference is more resource-intensive than traditional
in-person service due to labor and software costs (Weak & Luo, 2014). Many
institutions find it difficult to launch or maintain a local chat service for
budgetary or staffing reasons, especially if usage is low (Eakin &
Pomerantz, 2009; Helfer, 2003; Radford & Kern, 2006). In an effort to make
chat reference more cost-efficient and sustainable, many libraries have joined consortial arrangements (Coffman & Arret,
2004b; Peters, 2002; Powers, Nolen, Zhang, Xu, & Peyton, 2010). By coming
together, libraries can mitigate the risks of launching a new service, build a
centralized infrastructure, share costs and staffing demands, extend service
hours, and tap into a larger target audience to increase service usage
(Bailey-Hainer, 2005; Breeding, 2001; Coffman & Arret,
2004a).
Service
quality is often a point of concern with consortial
chat reference services (Meert & Given, 2009).
Many libraries express doubt that staff from outside their institution can
respond to their users’ questions effectively, especially queries that are
local in nature (Berry, Casado, & Dixon, 2003; Bishop, 2011). The
appropriate staffing for collaborative chat services is also a matter of
debate. Approximately 39% of academic libraries rely on paraprofessional staff
or library school students to staff a consortial chat
reference service (Devine, Bounds-Paladino, & Davis, 2011). While expanding
the operator pool beyond librarians is a cost-effective way to make up staffing
deficits and extend service hours into the evenings and weekends (Blonde,
2006), there is some resistance to the practice, as librarians are considered
the appropriate staffing level for answering research and reference questions
(Weak & Luo, 2014).
Most of the literature on service quality in consortial chat services focuses
on the completeness and correctness of librarians' responses and staff members'
adherence to behavioral guidelines.
Although some studies have reported on user satisfaction, no studies have
investigated factors affecting user dissatisfaction in the consortial
context. This paper attempts to fill the gap by reporting on an evaluation of
an academic library consortium’s chat reference service. Using transcript
analysis and exit survey responses, the researchers examined whether the consortium’s
collaborative service model, staffing choices, and policies contributed to user
dissatisfaction.
Effectiveness of the Consortial Model
Collaborative
chat reference requires participants to respond to questions concerning
unfamiliar libraries or locations. This adds a layer of complexity to the
reference transaction, as answering questions from across the consortium may
require local knowledge, the practical, collective knowledge that is rooted in
a particular place and based on the immediacy of experience (Geertz, 1983, p.
75). Researchers have tried to estimate the proportion of chat questions that
require local knowledge. Bishop (2011) refers to these queries as
location-based questions, and defines them as questions that concern the
geography of a library location or its attributes, such as its policies,
services, or collections (Bishop, 2012, 2013). Eight studies have reported the
quantity of location-based questions; they accounted for an average of 35% of
total chat volume (Berry et al., 2003; Bishop, 2011, 2012; Bishop & Torrence, 2008; Coté, Kochkina, & Mawhinney, 2016; Hyde & Tucker-Raymond,
2006; Kwon, 2007; Sears, 2001).
Evidence
regarding consortial partners’ ability to answer
location-based questions is mixed. Kwon (2007) found that local-specific
questions are answered less completely than non-local queries and noted lower
user satisfaction among patrons with local-specific questions. Bishop (2011)
recorded a 45% referral rate for location-based questions, with non-local
librarians referring significantly more than local librarians. However, the
correctness of responses to location-based questions does not differ greatly
between local and non-local librarians (Bishop, 2012).
Researchers
have also examined the quality of service provided by consortial
chat services. Meert and Given (2009) assessed the
chat service of an academic library participating in a 24/7 consortium,
comparing local and consortial staff’s adherence to
the library’s in-house reference quality standards. Adherence was high overall,
with local staff meeting standards more often than non-local staff (94% vs.
82%, respectively). Consortial staff were less likely
to answer questions in real time and made referrals at a higher rate than local
staff. Similarly, an evaluation of Oregon’s statewide chat consortium uncovered
that guidelines were met in 62% of interactions, but staff had difficulties
working with non-local users, including making referrals (Hyde &
Tucker-Raymond, 2006). While consortial operators
often rely on referrals as a strategy to handle non-local users’ queries
(Bishop, Sachs-Silveira, & Avet, 2011), user
satisfaction with referrals is significantly lower than for completed chats.
Referred users experienced the same degree of satisfaction as patrons who
received a partial answer or no answer at all (Kwon, 2006).
Despite
these weaknesses, consortial staff are capable of
answering users’ questions accurately, although they may take a different
approach than local chat operators. Brown (2017) examined transcripts at a
community college participating in QuestionPoint’s
24/7 Reference. He found that answers from consortial
back-up staff were largely correct, but they often provided more information
rather than taking on an instructional role. Peer-review of transcripts from
the statewide NCKnows chat consortium found that
external staff from the 24/7 Reference company received similar scores for
skill in research and information use to local librarians, but were rated lower
on engagement with the user (Pomerantz, Luo, & McClure, 2006).
Users
are largely satisfied with the service provided by consortial
or collaborative chat reference services. For example, the University of
Maryland University College’s chat service, which partially outsources staffing
to provide 24/7 service, has a 90% approval rating (Rawson, Davis, Harding,
& Miller, 2012). Kwon (2007) examined exit survey responses for a large
public library system’s chat reference service and found that the results were
positive: 65% of users were satisfied with the answer provided, 68% stated that
the librarian’s handling of the question was excellent, and 77% of patrons
would use the service again. Satisfaction did not differ significantly based on
the user’s question type.
In
addition to overall satisfaction, one study compared satisfaction with
different types of staff members within a collaboration. Hill, Madarash-Hill, and Allred (2007) compared user satisfaction
with local librarians, librarians from partner libraries in the local area, and
staff from Tutor.com’s Librarians by Request on
Southeastern Louisiana University’s chat service. Local librarians received higher
satisfaction scores than external librarians overall, but the partner
librarians did receive higher satisfaction scores than local librarians in some
categories. Notably, satisfaction scores for external librarians concerning the
quality of answers, friendliness, overall service, and willingness to return
rose over time, indicating that non-local librarians’ performance improves as
familiarity with non-local libraries and campuses grows.
Appropriateness and Effectiveness of Student Staffing
There
has been significant debate about the appropriateness of using student
employees to staff in-person and online reference services. Several studies
have argued that relying on professional librarians alone to staff a reference
desk or chat service is cost-ineffective (Bracke et
al., 2007; Bravender, Lyon, & Molaro,
2011; Ryan, 2008). Case studies have also reported a high proportion of simple
directional or technology questions at the reference desk, suggesting that many
transactions do not require the skills of a librarian (Bishop & Bartlett,
2013; Ryan, 2008; Stevens, 2013). However, there are conflicting findings about
the most common question types on chat. Bravender et
al. (2011) and Cabaniss (2015) reported that
reference questions accounted for 17.7% and 23.3% of chats on their respective
services, leading them to recommend staffing models in which graduate students
or reference assistants handle the majority of chats. However, other
researchers have reported that complex research or reference questions occur in
40%–66% of chats, supporting staffing by professional librarians (Coté et al., 2016; Fuller & Dryden, 2015; Morais & Sampson, 2010).
Studies
assessing the quality of service provided by student workers have largely been
positive. At the reference desk, case studies have shown that student employees
receive comparable satisfaction ratings to librarians and score well on
measures of approachability and helpfulness (Faix,
2014; Stevens, 2013). On chat reference, transcript analysis by Lux and Rich (2016)
found that student employees offered quality assistance in 88% of transactions.
While the reference librarians outperformed the student workers in most
measures of comparison, the margin between them was not large. Keyes and Dworak (2017) also found that librarians outperformed
students in their transcript analysis study. However, there was no significant
association between staffing type and patron ratings. Both research teams
argued that student workers are capable of providing chat reference services
and can improve on their weaknesses through training. In particular, many
student workers deviate from the Reference and User Services Association’s
(RUSA) best practices; they often fail to conduct a thorough reference
interview and communicate in an overly informal style (Barrett & Greenberg,
2018; Langan, 2012). Guiding students through the
reference interview to provide appropriate behavioral benchmarks and reviewing
transcripts can increase awareness of reference standards among student
workers (Langan, 2012; Ward, 2003).
The
Ontario Council of University Libraries (OCUL) is a consortium representing the
libraries of all 21 universities in the province of Ontario, Canada.
Collectively, these universities have a student population of over 480,000,
representing approximately one third of the university population of Canada.
OCUL
leverages collective resources to purchase, manage, and preserve electronic
collections, and provides access to them through a digital infrastructure
offered by Scholars Portal (SP), the consortium’s service arm. OCUL’s largest
member, the University of Toronto Libraries (UTL), acts as the service
provider. SP supports a wide range of content repositories, member services,
and technical services in the areas of collections, resource sharing, research
services, and digital preservation.
Ask a
Librarian is a virtual reference service managed by SP that connects students,
faculty members, and researchers from participating university libraries across
Ontario with real-time library and research assistance through chat. The
service launched in 2011 as a partnership among seven OCUL libraries and has
since expanded to 15 of the 21 OCUL members. The service reaches approximately
400,000 full-time equivalent students and handles roughly 25,000 chats per
year. Since 2014, the service has also been offered in French under the name Clavardez avec nos bibliothécaires (“Chat with our Librarians”) at five
libraries.
Ask a
Librarian is open 67 hours per week during the academic year. Staffing is
managed through a collaborative model in which libraries provide staffing hours
relative to their student populations and service usage patterns. During
evenings and weekends, staffing is supplemented by part-time virtual reference
operators (VROs), generally second-year LIS students or recent graduates, hired
by OCUL directly.
In
2012, one year after the initial implementation of Ask a Librarian, SP staff
conducted a research project investigating the types of questions asked on the
service, the academic status and location of users, and overall user
satisfaction. In 2017, after an influx of new partners, the introduction of
bilingual service, and changes in chat software, a joint research team at SP
and UTL began another research project, building upon the previous work. This
major transcript analysis sought to investigate a wide range of questions about
virtual reference.
As
one segment of the broader analysis, this paper focuses on the service model,
policies, and practices of Ask a Librarian as a consortial
virtual reference service. The aim was to determine whether the current
collaborative model is providing appropriate and satisfactory service to local
users. Since user feedback tends to be very positive overall, the researchers
intentionally sought out points of dissatisfaction in order to highlight any
weaknesses in the service. To that end, the research questions were:
R1: Are
dissatisfaction levels higher for some types of users or some categories of
questions?
R2: Do users
experience increased levels of dissatisfaction when served by an operator from
another institution? Do levels of dissatisfaction increase if the user is made
aware that the operator is from another institution?
R3: Do users
experience increased levels of dissatisfaction when served by student staff?
R4: Do busy
shifts have an effect on user dissatisfaction?
R5: Do questions
submitted around shift change times or the service’s closure have higher rates
of user dissatisfaction? Do levels of dissatisfaction increase if the user is
told that a shift change or service closure is
approaching?
The
answers to these questions will help determine if the current collaborative
model, as well as our policies and procedures around issues such as staffing
levels and instructions to operators for handling events like shift changes,
are appropriate and successful.
Methods
The
researchers received approval for this study from the University of Toronto’s
Research Ethics Board and OCUL’s Ask a Librarian Data Working Group.
The
researchers reviewed chats that took place between June 1 and December 1, 2016.
During this period, 9,424 chats were submitted to the service. Complete chat
transcripts, responses to the question initiation form, and chat metadata were
available for each interaction through the chat software. Of the 9,424 chats
that took place during this period, 1,395 interactions (14.8%) had a
corresponding completed exit survey.
Only
chats with completed exit surveys were eligible for sampling. Four of the eight
exit survey questions assess the user’s satisfaction with the interaction; only
responses to these questions were examined in this study. The researchers used
an Excel spreadsheet to identify chat interactions that had corresponding exit
surveys with only satisfied responses, and interactions that had exit surveys
with either neutral or dissatisfied responses. The exit survey questions and
examples of satisfied, neutral, and dissatisfied responses are listed in the
Appendix.
A
total of 473 chats were sampled according to the following procedures:
Data Preparation
The researchers compiled the chat session metadata,
responses to the question initiation form, and exit survey responses pulled
from the chat software into an Excel spreadsheet. Chat session metadata
included operator type, whether the user and operator were from the same
institution, the time the chat was initiated, and whether the shift was busy.
The question initiation form included user type and question type. The exit
survey responses related to user dissatisfaction.
The
researchers anonymized the spreadsheet data according to standards set by the
consortium’s Data Working Group. Any identifying information, such as the
identity of the chat operator, the user, or the institutional affiliation of
either individual, was removed. The same process was used to anonymize the
corresponding chat transcripts.
The
researchers recorded information related to the study variables in the same
spreadsheet containing the data extracted from the software.
User Type
Users
identified their status with the university through a mandatory question
initiation form. The options were: undergraduate student, graduate student,
faculty, alumni, or other.
Operator Type
The
operator(s) who participated in the chat interaction were listed in the chat
metadata. The researchers recorded whether they were librarians,
paraprofessionals, part-time virtual reference operators employed by the
consortium, students (graduate student workers employed directly by
participating libraries), or of different types.
Question Type
Users were asked to provide a detailed description
of their question in a mandatory question initiation form. The researchers
coded their responses by question type according to a schema that was
previously developed by local researchers (Maidenberg,
Greenberg, Whyte Appleby, Logan, & Spence, 2012). The question type
categories are: accounts, citation, e-resources, facilities, computing,
miscellaneous, non-library, policies, research, and writing.
Institutional Mismatch and Institutional Mismatch Reveal
The
institutional affiliation of the operator and user were listed in the
software’s chat metadata. The researchers recorded whether the participants in
the chat were associated with the same institution or whether there was a
mismatch. Through transcript analysis, the researchers recorded chats in which
the operator disclosed that they did not have the same institutional
affiliation or home campus as the user.
Busy Shift
The
chat session metadata listed the time at which the chat was initiated. From
this information, the researchers determined the shift during which the chat
took place. Shifts are an hour in length. The researchers consulted SP’s chat
volume statistics to determine how many chats were submitted during that same
shift. Busyness was determined based on the number of chats submitted during
the shift, compared to the number of operators scheduled to be online during
the shift. A shift was considered busy if more than three chats were submitted
for every available operator.
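This rule is straightforward to operationalize; a minimal sketch in Python, assuming hypothetical counts drawn from the chat volume statistics and the operator schedule, might look as follows:

```python
def is_busy_shift(chats_in_shift: int, operators_online: int) -> bool:
    """A shift is considered busy if more than three chats were submitted
    for every operator scheduled to be online during that shift."""
    return chats_in_shift > 3 * operators_online

# Example: 10 chats handled by 3 scheduled operators (10 > 9), so the shift is busy;
# 6 chats for 2 operators is exactly three per operator, so it is not busy.
print(is_busy_shift(10, 3))  # True
print(is_busy_shift(6, 2))   # False
```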
Aspects of Scheduling
The chat session metadata recorded the time at which
the chat was initiated. The researchers recorded whether the chat began during
the last 10 minutes of the shift or within 10 minutes of the time the service
was scheduled to close. Through transcript analysis, the researchers also noted
whether the operator disclosed any information about their shift schedule or
about the service’s hours (i.e., whether they were about to go off shift or the
service was closing soon).
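As an illustration, the two timing flags could be derived from the chat start time roughly as sketched below. The sketch assumes shifts begin on the hour and uses a hypothetical 10:00 p.m. closing time; in the study, these values came from the session metadata and the service schedule.

```python
from datetime import datetime, time, timedelta

def near_shift_end(start: datetime) -> bool:
    """True if the chat began during the last 10 minutes of an hour-long shift."""
    return start.minute >= 50

def near_closing(start: datetime, closing: time = time(22, 0)) -> bool:
    """True if the chat began within 10 minutes of the service's closing time."""
    close_dt = datetime.combine(start.date(), closing)
    return timedelta(0) <= close_dt - start <= timedelta(minutes=10)

chat_start = datetime(2016, 10, 3, 21, 53)
print(near_shift_end(chat_start), near_closing(chat_start))  # True True
```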
Based
on the exit survey responses associated with the chat interaction, the
researchers recorded whether the user was dissatisfied or not dissatisfied.
Users were considered dissatisfied if they answered at least one of the four
exit survey questions related to satisfaction (Appendix) with a neutral or
dissatisfied response.
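A minimal sketch of this coding rule, assuming each of the four satisfaction-related answers has already been labelled satisfied, neutral, or dissatisfied according to the scheme in the Appendix:

```python
def is_dissatisfied(labels: list[str]) -> bool:
    """A user is coded as dissatisfied if at least one of the four
    satisfaction-related answers is neutral or dissatisfied."""
    return any(label in {"neutral", "dissatisfied"} for label in labels)

print(is_dissatisfied(["satisfied", "satisfied", "neutral", "satisfied"]))    # True
print(is_dissatisfied(["satisfied", "satisfied", "satisfied", "satisfied"]))  # False
```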
Question
type was coded by two members of the research team. The researchers coded an
initial test set of 42 transcripts and achieved substantial intercoder
agreement, as measured by Cohen’s Kappa, K
= 0.794. After discussing discrepancies, the researchers coded a second test
set of 44 transcripts. They achieved near perfect agreement, as measured by
Cohen’s Kappa, K = 0.876.
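For readers wishing to replicate this check, Cohen's kappa can be computed from two coders' labels with standard tools; the sketch below uses scikit-learn and invented example labels drawn from the question type schema:

```python
from sklearn.metrics import cohen_kappa_score

coder_a = ["research", "citation", "e-resources", "research", "policies"]
coder_b = ["research", "citation", "research",    "research", "policies"]

# Kappa corrects the raw agreement rate for agreement expected by chance
print(round(cohen_kappa_score(coder_a, coder_b), 3))
```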
Transcripts
As
part of a larger service evaluation project, transcripts were coded for 30
variables hypothesized to affect user dissatisfaction, including two variables
in the present study: institution mismatch reveal and schedule reveal. The
four-member research team coded a test set of 15 transcripts using a draft
codebook and coding form to establish intercoder reliability. The team met to
discuss discrepancies, refined the definitions and examples in the codebook,
and then coded a second test set of 10 transcripts. The researchers assessed
intercoder reliability using average pairwise percent agreement, with an
acceptance threshold of 80%. For the second test set, average pairwise percent
agreement was 93.3% for institution mismatch reveal and 95% for schedule
reveal.
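Average pairwise percent agreement is the mean, over every pair of coders, of the proportion of transcripts that the pair coded identically. A small sketch with hypothetical binary codes for the institution mismatch reveal variable:

```python
from itertools import combinations

def avg_pairwise_agreement(codings: list[list[int]]) -> float:
    """Mean, across all coder pairs, of the share of items the pair coded identically."""
    scores = [sum(x == y for x, y in zip(a, b)) / len(a)
              for a, b in combinations(codings, 2)]
    return sum(scores) / len(scores)

# Four coders, five transcripts (1 = mismatch revealed, 0 = not revealed)
team = [[1, 0, 0, 1, 0],
        [1, 0, 0, 1, 0],
        [1, 0, 1, 1, 0],
        [1, 0, 0, 1, 0]]
print(avg_pairwise_agreement(team))  # 0.9
```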
Data Compilation and Analysis
Once
transcript coding was completed, the data from the coding form was merged with
the spreadsheet containing the chat metadata, survey responses, and information
for the other study variables. Pearson chi-square tests of independence were
conducted in SPSS to determine if there were significant relationships between
variables, with a significance level of p
< 0.05 set a priori. The researchers then entered the variables into a
binary logistic regression model to determine the strength and directionality of
the variables’ effects.
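The tests themselves were run in SPSS. For illustration only, a rough Python analogue using pandas, SciPy, and statsmodels is sketched below; the file and column names are hypothetical rather than the study's actual variable names, and dissatisfaction is assumed to be coded 0/1.

```python
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# Hypothetical file: one row per sampled chat, with the coded study variables
df = pd.read_csv("coded_chats.csv")

# Chi-square test of independence between dissatisfaction and one variable
crosstab = pd.crosstab(df["dissatisfied"], df["operator_type"])
chi2, p, dof, expected = chi2_contingency(crosstab)
print(chi2, dof, p)

# Binary logistic regression with all study variables entered together
model = smf.logit(
    "dissatisfied ~ C(user_type) + C(operator_type) + C(question_type)"
    " + mismatch + mismatch_reveal + busy_shift + near_end + schedule_reveal",
    data=df,
).fit()
print(model.summary())
```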
Results
The
researchers ran eight Pearson chi-square tests of independence to determine if
there was a significant relationship between user dissatisfaction and aspects
of Ask a Librarian’s service model and staffing and scheduling practices. Two
variables had a significant relationship with user dissatisfaction at an alpha
level of 0.05: operator type, χ2
(4, N = 473) = 25.513, p < 0.001, and institution mismatch
reveal, χ2 (1, N = 473) = 4.323, p = 0.038. The remaining variables were not significantly related
to dissatisfaction. The results of each chi-square test of independence are
available in Table 1.
Next,
we entered the variables into a binary logistic regression, in order to
determine how well the variables, taken together, can explain or predict
dissatisfaction, as well as to understand the significance, strength, and
directionality of the individual variables’ effects. The overall model was
statistically significant, χ2
(22, N = 473) = 63.087, p < 0.001, meaning that it was
statistically reliable in distinguishing between satisfied and dissatisfied
patrons. The model did not have strong predictive power, represented by a Nagelkerke R2
of 0.167. Nagelkerke’s R2 is a measure relating to the goodness of fit of the
model, and can range from 0 to 1. The model was correct in predicting the
outcome (i.e., whether the user was dissatisfied) in 64.9% of cases.
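Nagelkerke's R2 is the Cox and Snell R2 rescaled so that its maximum possible value is 1, and it can be computed from the fitted and intercept-only (null) log-likelihoods. A sketch, continuing the hypothetical statsmodels fit above, since that library does not report the statistic directly:

```python
import numpy as np

def nagelkerke_r2(llf: float, llnull: float, n: int) -> float:
    """Cox & Snell R-squared, 1 - exp((2/n) * (llnull - llf)), divided by its maximum."""
    cox_snell = 1 - np.exp((2 / n) * (llnull - llf))
    max_cox_snell = 1 - np.exp((2 / n) * llnull)
    return cox_snell / max_cox_snell

# Continuing the hypothetical fit from the earlier sketch:
# print(nagelkerke_r2(model.llf, model.llnull, int(model.nobs)))
```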
In the regression model, there were two significant
explanatory variables at the 0.05 alpha level: operator type and institutional
mismatch reveal. Within the operator type category, the part-time virtual reference
operator type was a significant, negative variable within the model (b =
-1.065, p = 0.008). This means that
dissatisfaction decreased if users were served by graduate student staff or
recent graduates hired by the consortium. The other operator types did not
significantly contribute to dissatisfaction. Institutional mismatch reveal was
a positive variable in the model, indicating that users were more likely to be
dissatisfied if the operator revealed they were not at the user’s home
institution (b =
0.875, p = 0.009).
Table 1
Summary of One-Tailed Chi-Square Tests of Independence by Variable
| Variable | Category | Dissatisfied, Observed | Dissatisfied, Expected | Not Dissatisfied, Observed | Not Dissatisfied, Expected | Pearson χ2 | df. | Sig. |
| User type | | | | | | 8.010 | 4 | .091 |
| | Undergraduate student | 129 | 120.7 | 134 | 142.3 | | | |
| | Graduate student | 56 | 55.5 | 65 | 65.5 | | | |
| | Faculty | 13 | 11.9 | 13 | 14.1 | | | |
| | Alumni | 7 | 8.7 | 12 | 10.3 | | | |
| | Other | 12 | 20.2 | 32 | 23.8 | | | |
| Operator type | | | | | | 25.513 | 4 | .000* |
| | Librarian | 80 | 78.5 | 91 | 92.5 | | | |
| | Paraprofessional | 74 | 60.6 | 58 | 71.4 | | | |
| | Part-time virtual reference operator | 25 | 44.0 | 71 | 52.0 | | | |
| | Student | 24 | 24.8 | 30 | 29.2 | | | |
| | Mixed | 14 | 9.2 | 6 | 10.8 | | | |
| Question type | | | | | | 14.714 | 9 | .099 |
| | Accounts | 14 | 18.8 | 27 | 22.2 | | | |
| | Citation | 28 | 20.6 | 17 | 24.4 | | | |
| | E-resources | 12 | 16.5 | 24 | 19.5 | | | |
| | Facilities | 6 | 5.5 | 6 | 6.5 | | | |
| | Computing | 5 | 6.0 | 8 | 7.0 | | | |
| | Miscellaneous | 8 | 9.6 | 13 | 11.4 | | | |
| | Non-library | 2 | 3.2 | 5 | 3.8 | | | |
| | Policies | 16 | 18.4 | 24 | 21.6 | | | |
| | Research | 124 | 117.4 | 132 | 138.6 | | | |
| | Writing | 2 | 0.9 | 0 | 1.1 | | | |
| Institutional mismatch | | | | | | 0.073 | 1 | .787 |
| | Match | 84 | 82.6 | 96 | 97.4 | | | |
| | Mismatch | 133 | 134.4 | 160 | 158.6 | | | |
| Institutional mismatch reveal | | | | | | 4.323 | 1 | .038* |
| | Revealed | 34 | 26.6 | 24 | 31.4 | | | |
| | Did not reveal | 183 | 190.4 | 232 | 224.6 | | | |
| Busy shift | | | | | | .745 | 1 | .388 |
| | Busy | 34 | 30.7 | 33 | 36.3 | | | |
| | Not busy | 183 | 186.3 | 223 | 219.7 | | | |
| Chat initiated within 10 minutes of end of shift / service closure | | | | | | 2.773 | 1 | .096 |
| | Initiated within 10 minutes | 41 | 34.4 | 34 | 40.6 | | | |
| | Not initiated within 10 minutes | 176 | 182.6 | 222 | 215.4 | | | |
| Reveal of aspects of scheduling | | | | | | 3.202 | 1 | .074 |
| | Revealed | 39 | 32.1 | 31 | 37.9 | | | |
| | Did not reveal | 178 | 184.9 | 225 | 218.1 | | | |
Note. df.
= degrees of freedom; Sig. = significance.
*Denotes
that relationship is significant at an alpha level of 0.05.
Table 2
Summary of Binary Logistic Regression
| Variable | Category | b | S.E. | Wald | df. | Sig. | Exp(b) |
| User type | | | | 6.993 | 4 | .136 | |
| | Undergraduate student | .558 | .579 | .930 | 1 | .335 | 1.747 |
| | Graduate student | .331 | .588 | .317 | 1 | .573 | 1.393 |
| | Faculty | .840 | .696 | 1.455 | 1 | .228 | 2.316 |
| | Other | -.368 | .648 | .322 | 1 | .570 | .692 |
| Operator type | | | | 26.860 | 4 | .000* | |
| | Librarian | .125 | .339 | .135 | 1 | .713 | 1.133 |
| | Paraprofessional | .548 | .349 | 2.459 | 1 | .117 | 1.730 |
| | Part-time virtual reference operator | -1.065 | .400 | 7.099 | 1 | .008* | .345 |
| | Mixed | .777 | .595 | 1.703 | 1 | .192 | 2.175 |
| Question type | | | | 12.147 | 9 | .205 | |
| | Accounts | -21.661 | .000 | 28257.649 | 1 | .999 | .000 |
| | Citation | -20.417 | .000 | 28257.649 | 1 | .999 | .000 |
| | E-resources | -21.657 | .000 | 28257.649 | 1 | .999 | .000 |
| | Facilities | -21.012 | .000 | 28257.649 | 1 | .999 | .000 |
| | Computing | -21.812 | .000 | 28257.649 | 1 | .999 | .000 |
| | Miscellaneous | -21.354 | .000 | 28257.649 | 1 | .999 | .000 |
| | Non-library | -22.236 | .000 | 28257.649 | 1 | .999 | .000 |
| | Policies | -21.355 | .000 | 28257.649 | 1 | .999 | .000 |
| | Research | -21.000 | .000 | 28257.649 | 1 | .999 | .000 |
| Institutional mismatch | | -.299 | .225 | 1.757 | 1 | .185 | .742 |
| Institutional mismatch reveal | | .875 | .337 | 6.750 | 1 | .009* | 2.399 |
| Busyness of the shift | | -.007 | .293 | .001 | 1 | .981 | .993 |
| Chat initiated within 10 minutes of end of shift / service closure | | .284 | .276 | 1.059 | 1 | .304 | 1.328 |
| Reveal of aspects of scheduling | | .363 | .297 | 1.494 | 1 | .222 | 1.437 |
Note. b =
coefficient, S.E. = standard error, Wald = Wald chi-square test (which tests
the null hypothesis); df. = degrees of freedom; Sig. = significance; Exp(b) = odds ratio.
*Denotes
that relationship is significant at an alpha level of 0.05.
Discussion
This
analysis did not find a statistically significant relationship between
dissatisfaction and user or question type (research question 1), indicating
that Ask a Librarian provides a consistent level of service to all patrons and
satisfactorily answers all types of library- and research-related questions.
The results largely reaffirm the consortium’s service model, staffing
practices, and policies. Dissatisfaction levels did not show relationships with
most of the factors examined, indicating that overall service is appropriate
and satisfactory. In particular, busy shifts and chats initiated near shift
change times or service closure (research questions 4 and 5) had no
relationship with dissatisfaction, suggesting that Ask a Librarian’s scheduling
practices and policies for handling shift changes are appropriate.
Consortial Service Quality and Institutional Mismatch
The
analysis found no relationship between institution match and dissatisfaction,
indicating that users can be served by operators across the consortium without
compromising patron satisfaction. This fits into the literature that finds that
users tend to be satisfied with consortial or
collaborative chat reference (Kwon, 2007; Rawson et al., 2012).
The nature of OCUL as a purchasing, advocacy, and
service-providing consortium means that there are deep levels of collaboration
between institutions, which tend to have access to similar resources. This may
make it easier for operators from one institution to successfully answer
questions from another, consistent with the findings of Hill et al. (2007) that
satisfaction scores for external librarians in collaborative chat improved as
their familiarity with the user’s library increased. Therefore, the finding
that users are satisfied by service from operators at partner institutions is
not necessarily generalizable to all consortia, and particularly large, multi-type
consortia such as the one Bishop (2011, 2012) found inadequate for answering
local questions.
The reveal of an institution mismatch was associated
with user dissatisfaction. This is an area that has not been widely studied and
the authors were unable to find other literature to help provide context,
making this a fruitful area for potential future research. This finding
especially requires further investigation to rule out confounding factors.
Users may simply be more dissatisfied when they learn that they are not being
served by their own local library, but the authors’ current hypothesis is that
operators are more likely to reveal that they are from another institution if
they are unable to answer the user’s question, or if the chat is otherwise going
poorly. Pending more analysis, SP will consider changing Ask a Librarian
policies to recommend against revealing an institution mismatch unless
absolutely necessary.
Appropriate and Effective Student Staffing
The
results show that users do not express dissatisfaction with the service of
non-librarians, and in fact show a slight preference for graduate student staff
hired by the consortium. This aligns with earlier literature indicating users
find student staff to be approachable and helpful (Stevens, 2013) and that they
provide high-quality assistance via chat, although not as high-quality as
librarians (Keyes & Dworak, 2017; Lux & Rich,
2016). However, it is also important to note that the student staff of Ask a
Librarian are all LIS graduate students who have taken at least one reference
course. As such, they may perform more like librarians than undergraduate
students and non-LIS graduate students staffing similar services (for example,
in terms of following RUSA best practices). However, as noted above, this study
did not examine response completeness or accuracy as other studies have done.
This finding reinforces Ask a Librarian’s use of
student staff to supplement evening and weekend shifts as an appropriate way to
extend reference services beyond the normal working hours of reference
librarians.
Beyond
the generalizability of specific findings, there are a few limitations to this
study. In examining consortial service quality, the
researchers did not identify whether the questions required local knowledge, as
Bishop (2011, 2012) and other researchers have done. Satisfaction was reported
by users in an exit survey, which was only presented when the operator ended
the chat, or when the user clicked an “end chat” button; users who simply
closed the window did not see it. Self-reported satisfaction scores are also
not always reliable measures, as they are subject to user bias, and user
satisfaction is only one measure of an interaction’s success. This study did
not examine other quality metrics, such as response accuracy or completeness or
adherence to behavioural standards like RUSA guidelines. Other factors,
including those discussed in the Further Research section below, influence user
satisfaction and therefore may complicate the relationships discussed here. The
quantitative analysis for this study did not include any moderating variables
that may partially explain relationships.
Further Research
The
research team is already conducting further analysis on the same dataset,
building on previous knowledge of what affects dissatisfaction in reference
transactions. Articles on how operator behaviour and communication styles
impact user dissatisfaction are already published (Logan & Barrett, 2019;
Logan, Barrett, & Pagotto, 2019), and work has
begun to study instruction and referrals in chat.
More in-depth research is needed to flesh out the
nuances of the relationships uncovered in this paper. Qualitative research, in
particular, could complement these findings by disentangling what leads users
to give low scores on the exit survey.
Finally, while Ask a Librarian is a bilingual
service, the number of French interactions was so small that it was not
feasible to analyze any differences between English and French user
satisfaction. This is an area the researchers hope to examine in more depth in
the future.
Conclusions
As a
collaborative chat service, Ask a Librarian was
launched to leverage shared resources and provide cost-effective reference
service to Ontario university libraries. Its service model and policies were
developed based on standards and best practices informed by other virtual
reference practitioners. Now that Ask a Librarian has grown into a mature
service, a review is important to ensure that the model and policies are backed
by evidence.
The
study largely reaffirmed the consortium’s service model, staffing practices,
and policies. Users are not dissatisfied with the service received from chat
operators at partner institutions or with service provided by non-librarians.
Current policies for scheduling, service closure, and handling shift changes
are appropriate. Best practices related to disclosing institutional mismatches
may need to be changed, as these reveals were associated with higher levels of
dissatisfaction. This is an area that merits further investigation.
No
areas of weakness were uncovered, indicating that Ask a Librarian provides
appropriate and satisfactory service to all different user types and for all
different question types. Overall, this research demonstrates that institutions
can trust the consortium with their local users’ virtual reference needs.
Acknowledgments
The
authors would like to acknowledge the other members of the research team,
Judith Logan (University of Toronto Libraries) and Amy Greenberg (Scholars
Portal).
References
Bailey-Hainer,
B. (2005). Virtual reference: Alive & well. Library Journal, 130(1), 46-47.
Barrett,
K., & Greenberg, A. (2018). Student-staffed virtual reference services: How
to meet the training challenge. Journal
of Library & Information Services in Distance Learning, 12(3-4),
101-119. https://doi.org/10.1080/1533290X.2018.1498620
Berry,
T. U., Casado, M. M., & Dixon, L. S. (2003). The local nature of digital
reference. Southeastern Librarian, 51(3),
8-15. Retrieved from https://digitalcommons.kennesaw.edu/seln/vol51/iss3/5
Bishop,
B. W. (2011). Location-based questions and local knowledge. Journal of the American Society for
Information Science and Technology, 62(8), 1594-1603. https://doi.org/10.1002/asi.21561
Bishop,
B. W. (2012). Can consortial reference partners
answer your local users’ library questions? portal:
Libraries and the Academy, 12(4), 355-370. https://doi.org/10.1353/pla.2012.0036
Bishop,
B. W. (2013). Location-based questions: Types and implications for consortial reference services. Proceedings of the Annual Conference of CAIS. Retrieved from https://journals.library.ualberta.ca/ojs.cais-acsi.ca/index.php/cais-asci/article/view/546/496
Bishop,
B. W., & Bartlett, J. A. (2013). Where do we go from here? Informing
academic library staffing through reference transaction analysis. College & Research Libraries, 74(5),
489-500. https://doi.org/10.5860/crl-365
Bishop,
B. W., Sachs-Silveira, D., & Avet, T. (2011).
Populating a knowledge base with local knowledge for Florida’s Ask a Librarian
reference consortium. The Reference
Librarian, 52(3), 197-207. https://doi.org/10.1080/02763877.2011.555289
Bishop,
B. W., & Torrence, M. (2008). Virtual reference
services: Consortium versus stand-alone. College
& Undergraduate Libraries, 13(4), 117-127. https://doi.org/10.1300/J106v13n04_08
Blonde,
J. (2006). Staffing for electronic reference: Balancing service and sacrifice.
In R. D. Lankes, M. D. White, E. G. Abels, & S. N. Haque (Eds.), The Virtual Reference Desk: Creating a Reference Future (pp.
75-87). New York, NY: Neal-Schuman Publishers, Inc.
Bracke, M. S., Brewer,
M., Huff-Eibl, R., Lee, D. R., Mitchell, R., &
Ray, M. (2007). Finding information in a new landscape: Developing new service
and staffing models for mediated information services. College & Research Libraries, 68(3), 248-267. https://doi.org/10.5860/crl.68.3.248
Bravender, P.,
Lyon, C., & Molaro, A. (2011). Should chat
reference be staffed by librarians? An assessment of chat reference at an
academic library using LibStats. Internet Reference Services Quarterly, 16(3), 111-127. https://doi.org/10.1080/10875301.2011.595255
Breeding,
M. (2001). Providing virtual reference service. Information Today, 18(4), 42-43.
Brown,
R. (2017). Lifting the veil: Analyzing collaborative virtual reference
transcripts to demonstrate value and make recommendations for practice. Reference & User Services Quarterly, 57(1),
42-47. https://doi.org/10.5860/rusq.57.1.6441
Cabaniss, J. (2015). An
assessment of the University of Washington’s chat reference services. Public Library Quarterly, 34(1), 85-96. https://doi.org/10.1080/01616846.2015.1000785
Coffman,
S., & Arret, L. (2004a). To chat or not to
chat—taking another look at virtual reference: Part 1. Searcher, 12(7), 38-46.
Coffman,
S., & Arret, L. (2004b). To chat or not to chat:
Taking yet another look at virtual reference. Searcher, 12(8), 49-56.
Coté, M., Kochkina,
S., & Mawhinney, T. (2016). Do you want to chat? Reevaluating
organization of virtual reference service at an academic library. Reference & User Services Quarterly, 56(1),
36-46. https://doi.org/10.5860/rusq.56n1.36
Devine,
C., Bounds Paladino, E., & Davis, J. A. (2011). Chat reference training
after one decade: The results of a national survey of academic libraries. The Journal of Academic Librarianship, 37(3),
197-206. https://doi.org/10.1016/j.acalib.2011.02.011
Eakin,
L., & Pomerantz, J. (2009). Virtual reference, real money: Modeling costs
in virtual reference services. portal:
Libraries and the Academy, 9(1), 133-164. https://doi.org/10.1353/pla.0.0035
Faix, A.
(2014). Peer reference revisited: Evolution of a peer-reference model. Reference Services Review, 42(2),
305-319. https://doi.org/10.1108/RSR-07-2013-0039
Fuller,
K., & Dryden, N. H. (2015). Chat reference analysis to determine accuracy
and staffing needs at one academic library. Internet
Reference Services Quarterly, 20(3-4), 163-181. https://doi.org/10.1080/10875301.2015.1106999
Geertz,
C. (1983). Local knowledge: Further
essays in interpretive anthropology. New York: Basic Books.
Helfer,
D. S. (2003). Virtual reference in libraries: Status and issues. Searcher, 11(2), 63-65.
Hill,
J. B., Madarash-Hill, C., & Allred, A. (2007).
Outsourcing digital reference: The user perspective. The Reference Librarian, 47(98), 57-74. https://doi.org/10.1300/J120v47n98_06
Hyde,
L., & Tucker-Raymond, C. (2006). Benchmarking librarian performance in chat
reference. The Reference Librarian, 46(95-96), 5-19. https://doi.org/10.1300/J120v46n95_02
Keyes,
K., & Dworak, E. (2017). Staffing chat reference
with undergraduate student assistants at an academic library: A standards-based
assessment. The Journal of Academic
Librarianship, 43(6), 469-478. https://doi.org/10.1016/j.acalib.2017.09.001
Kwon,
N. (2006). User satisfaction with referrals at a collaborative virtual
reference service. Information Research,
11(2). Retrieved from http://informationr.net/ir/11-2/paper246.html
Kwon,
N. (2007). Public library patrons’ use of collaborative chat reference service:
The effectiveness of question answering by question type. Library and Information Science Research, 29(1), 70-91. https://doi.org/10.1016/j.lisr.2006.08.012
Langan, K. (2012).
Training millennials: A practical and theoretical approach. Reference Services Review, 40(1), 24-48.
https://doi.org/10.1108/00907321211203612
Logan,
J., Barrett, K., & Pagotto, S. (2019).
Dissatisfaction in chat reference users: A transcript analysis study. College & Research Libraries, 80(7), 925-944. https://doi.org/10.5860/crl.80.7.925
Logan,
J., & Barrett, K. (2019). How important is communication style in chat
reference? Internet Reference Services
Quarterly, 23(1-2), 41-57. https://doi.org/10.1080/10875301.2019.1628157
Lux,
V. J., & Rich, L. (2016). Can student assistants effectively provide chat
reference services? Student transcripts vs. librarian transcripts. Internet Reference Services Quarterly, 21(3-4),
115-139. https://doi.org/10.1080/10875301.2016.1248585
Maidenberg, K., Greenberg,
A., Whyte-Appleby, J., Logan, J., & Spence, M. (2012). Reference query coding key. Retrieved from http://hdl.handle.net/1807/94126
Meert, D.
L., & Given, L. M. (2009). Measuring quality in chat reference consortia: A
comparative analysis of responses to users’ queries. College & Research Libraries, 70(1), 71-84. https://doi.org/10.5860/0700071
Morais, Y.,
& Sampson, S. (2010). A content analysis of chat transcripts in the
Georgetown Law Library. Legal Reference
Services Quarterly, 29(3), 165-178. https://doi.org/10.1080/02703191003751289
Peters,
T. A. (2002). E-reference: How consortia add value. Journal of Academic Librarianship, 28(4), 248-250. https://doi.org/10.1016/S0099-1333(02)00310-5
Pomerantz,
J. (2006). Collaboration as the norm in reference work. Reference & User Services Quarterly, 46(1), 45-55. https://doi.org/10.5860/rusq.46n1.45
Pomerantz,
J., Luo, L., & McClure, C. R. (2006). Peer review of chat reference
transcripts: Approaches and strategies. Library
& Information Science Research, 28(1), 24-48. https://doi.org/10.1016/j.lisr.2005.11.004
Powers,
A. C., Nolen, D., Zhang, L., Xu, Y., & Peyton, G. (2010). Moving from the
consortium to the reference desk: Keeping chat and improving reference at the
MSU Libraries. Internet Reference
Services Quarterly, 15(3), 169-188. https://doi.org/10.1080/10875301.2010.500939
Radford,
M. L., & Kern, M.
K. (2006). A multiple-case study investigation of the discontinuation of nine
chat reference services. Library &
Information Science Research, 28(4), 521-547. https://doi.org/10.1016/j.lisr.2006.10.001
Rawson,
J., Davis, M. A., Harding, J., & Miller, C. (2012). Virtual reference at a
global university: An analysis of patron and question type. Journal of Library & Information
Services in Distance Learning, 7(1-2), 93-97. https://doi.org/10.1080/1533290X.2012.705624
Ryan,
S. M. (2008). Reference transactions analysis: The cost-effectiveness of
staffing a traditional academic reference desk. The Journal of Academic Librarianship, 34(5), 389-399. https://doi.org/10.1016/j.acalib.2008.06.002
Sears,
J. (2001). Chat reference service: An analysis of one semester’s data. Issues in Science and Technology
Librarianship, 32, 200-206. https://doi.org/10.5062/F4CZ3545
Stevens,
C. R. (2013). Reference reviewed and re-envisioned: Revamping librarian and
desk-centric services with LibStARS and LibAnswers. The
Journal of Academic Librarianship, 39(2), 202-214. https://doi.org/10.1016/j.acalib.2012.11.006
Ward,
D. (2003). Using virtual reference transcripts for staff training. Reference Services Review, 31(1), 46-56.
https://doi.org/10.1108/00907320310460915
Weak,
E., & Luo, L. (2014). Collaborative virtual reference service: Lessons from
the past decade. Advances in
Librarianship, 37, 81-112.
https://doi.org/10.1108/S0065-2830(2013)0000037008
Yang,
S. Q., & Dalal, H. A. (2015). Delivering virtual
reference services on the web: An investigation into the current practice by
academic libraries. Journal of Academic
Librarianship, 41(1), 68-86. https://doi.org/10.1016/j.acalib.2014.10.003
Appendix
Exit Survey Questions Assessing User
Satisfaction
The
following questions were included in the current study. Responses in bold were
identified as dissatisfied, responses in italics were classified as neutral,
and those with no text effects were considered satisfied.
1. The service provided by the librarian was
   a. Excellent
   b. Good
   c. Satisfactory
   d. Poor
   e. Very poor
2. The library provided me with
   a. Just the right amount of assistance
   b. Too little assistance
   c. Too much assistance
3. This chat service is
   a. My preferred way of getting library help
   b. A good way of getting library help
   c. A satisfactory way of getting library help
   d. A poor way of getting library help
   e. A last resort for getting library help
4. Would you use this service again?
   a. Yes
   b. No
The
following questions also appear on the exit survey, but were not included in
this study.
1. Was this your first time using the service?
   a. Yes
   b. No
2. Where were you when you chatted with us today?
   a. Off campus
   b. On campus but not in the library
   c. In the library
3. How did you find out about this service? (Users could select more than one response.)
   a. Library website
   b. Librarian
   c. Library instruction session
   d. Friend
   e. Professor or TA
   f. Promotional material (poster, flyer, etc.)
   g. Social media
   h. Other (free text response)
4. Other feedback or suggestions (free text response)