Dissatisfaction in Chat Reference Users: A Transcript Analysis Study

Judith Logan, Kathryn Barrett, and Sabina Pagotto*

This study aims to identify factors and behaviors associated with user dissatisfaction with a chat reference interaction to provide chat operators with suggestions of behaviors to avoid. The researchers examined 473 transcripts from an academic chat reference consortium from June to December 2016. Transcripts were coded for 13 behaviors that were then statistically analyzed with exit survey ratings. When present in the chat, three behaviors explained user dissatisfaction: clarification, transfers, and referrals. The absence of three more behaviors also explained dissatisfaction: ending the chat mutually; maintaining a professional tone; and displaying interest or empathy.

Introduction

Any library staff member who has ever answered online reference questions over chat knows that, sadly, not all interactions are positive. Sometimes a user arrives at the chat after spending several frustrating hours trying to find articles for their paper. Or they might resent “wasting” time getting help from the library when the link resolver should work seamlessly. If we are honest, there are times when we are not at our best either. Perhaps we are working with multiple users and lose track of one of the chats. Or maybe we are rushing because the end of our shift is approaching and we have a meeting scheduled in another part of the library directly afterward. Whatever the reason, and despite the best intentions of everyone involved, sometimes the chat just does not go well. While it is tempting to forget these less-than-ideal interactions as soon as possible, they could be mined for valuable insights about why some interactions go bad in an effort to find things we can do to prevent that from happening.
Chat reference researchers have done excellent work identifying behaviors that positively affect user satisfaction with a chat interaction, allowing us to create and validate best practices. We now know that, in text-based, synchronous, online reference (hereafter referred to as “chat”), asking follow-up questions, maintaining word contact, and instruction will all contribute to a user’s positive assessment of the chat interaction.1 Since the “dos” of chat are so well covered, it might now be time for us to turn our attention to the “don’ts.” It is important to study dissatisfaction separately from satisfaction to avoid overlooking variables that negatively affect a user’s experience of chat reference interactions. While a library staff member’s choice to do X during a chat may increase user satisfaction, the absence of X in a chat does not necessarily lead to dissatisfaction. Studying dissatisfaction allows us to explore the effect of the lack of certain behaviors, as well as to identify other behaviors that might have negative satisfaction consequences.

* Judith Logan is User Services Librarian at the University of Toronto, email: judith.logan@utoronto.ca; Kathryn Barrett is Social Sciences Liaison Librarian at the University of Toronto Scarborough Library, email: kathryn.barrett@utoronto.ca; Sabina Pagotto is Client Services & Assessment Librarian at Scholars Portal, email: sabina@scholarsportal.info. ©2019 Judith Logan, Kathryn Barrett, and Sabina Pagotto, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC. College & Research Libraries, November 2019.
For the purposes of this paper, the authors will refer to the library staff member as the “operator” and the individual chatting with them as the “user.” The present study examined a corpus of chat interactions that occurred between June and December 2016 on a consortial reference service in Ontario, Canada. All eligible interactions included a prechat survey, a transcript of the interaction, an exit survey, and metadata about the chat. Researchers coded transcripts for operator behaviors and compared them to the user’s self-reported satisfaction as collected in the exit survey. We hoped to discover operator behaviors that should be avoided. The following research question guided this project:

• What operator behaviors are associated with dissatisfaction…
  □ At the beginning of the chat?
  □ Anytime during the chat?
  □ At the end of the chat?

Literature Review

User satisfaction is a popular metric in chat transcript studies, as it offers practitioners actionable findings for improving users’ perception of the service received. To study factors that influence satisfaction, researchers often look to operators’ behavior. The RUSA guidelines are a convenient set of behaviors to study, as they are taught in library and information programs and are widely accepted as best practice.2

Operator Behavior and User Satisfaction

The reference interview comes at the beginning of the chat and is regarded by practitioners as the basis of a successful interaction.
Researchers at Carnegie Mellon University observed that clarifying questions were phrased as open-ended in 17 percent of interactions and as closed in 46 percent of interactions, which users were positive about in almost all cases.3 These actions are represented in the RUSA guidelines as 3.1.7, “Uses open-ended questions to encourage the patron to expand on the request or present additional information,” and 3.1.8, “Uses closed and/or clarifying questions to refine the search query.”4 RUSA 3.1.5, “Rephrases the question or request and asks for confirmation to ensure accurate understanding,” has also been included in some work.5 A case study at Texas A&M revealed that this behavior was used by only 10 percent of operators in the chats studied, even though 82 percent of transcripts included evidence of user satisfaction.6 In the early stages of a chat, the operator’s intention to help the user also comes through. Keyes and Dworak found that, in 5 percent of transcripts studied, the operator referred the question without attempting to assist the user.7 Ninety percent of the users in that study who responded to an exit survey reported that their overall experience was good or great.8 An operator’s manner has also been shown to relate to satisfaction.
Operator courteousness corresponds to RUSA 3.1.1, “Communicates in a receptive, cordial, and supportive manner.”9 Kwon and Gregory’s seminal work found that “listening to questions in a cordial and receptive manner” was significantly associated with user satisfaction as expressed in exit surveys.10 A peer assessment of chat transcripts at the University of Kansas showed that the majority had exemplary courtesy (73.1% in 2015 and 65.9% in 2016) and concluded that this behavior was important to an interaction’s success.11 A study at a large-scale consortial academic virtual reference service in Ontario, Canada found that tone significantly correlated with satisfaction.12 Pomerantz, Luo, and McClure also examined courtesy and found that the largest share of interactions fell in the second-highest category at 47.4 percent, with direct or indirect evidence of user satisfaction in 61 percent of transcripts.13 They also observed that operators were mostly either neutral (43.1%) or second-highest rated (32.8%) in enthusiasm, which relates to RUSA’s 2.0 section on interest.14 Though not a RUSA behavior, Prieto argues that using emotional intelligence, which includes empathy, can help chat operators seem more welcoming and supportive.15 He argues that a more informal communication style can help “create a more relaxed and authentic environment.”16 There is mixed evidence to corroborate this statement at present, however.
Waugh interviewed students and asked them to compare a formal-style chat to an informal-style one, with conflicting results.17 Three interviewees would only return to the more formal operator, citing professionalism and trustworthiness, while two preferred the informal operator’s approachability.18 A linguistic analysis found that students tended to use more informal communication features than librarians did, but that librarians who mirrored the students’ language were more likely to be rated as very helpful, especially regarding the use of contractions, ellipses, capitalization, and punctuation.19

What Radford and Radford call the “closing ritual” is also often included in chat satisfaction studies.20 Usually, researchers represent this phase of the chat with two RUSA behaviors, 5.1.1, “Asks the patron if his/her questions have been completely answered,” and 5.1.2, “Encourages the patron to return if he/she has further questions,” which are commonly called a “satisfaction check” and an “invitation to return.”21 Both of these were found to be significantly associated with satisfaction in Kwon and Gregory’s study.22 Some projects also touch on the way the chat ends with RUSA 5.1.7, “Takes care not to end the reference interview prematurely.”23 Lux and Rich found that all three closing behaviors were present in only 31 percent of chats with librarian-operators and 25 percent of chats with student-operators, which may have contributed to the positive comments and thanks received by 70 percent of librarian operators and 81 percent of student operators.24

Sometimes an operator’s behavior can be influenced by their limitations. Referring a user to another service point or library staff member is a common way for operators to direct users to someone with the necessary expertise. RUSA 4.1.9 defines this behavior as “Recognizes when to refer patrons for more help. This might mean a referral to a subject librarian, specialized library, or community resource.”25 Kwon found that users of a public library chat reference service whose questions ended in a referral fell into the middle level of satisfaction, along with those who only received partial answers.26 More recent work by Ward and Jacoby has found that the more complicated a question was, the more likely a referral would be needed, though no satisfaction component was included in that study.27

Similarly, in collaborative chat reference settings, operators may be matched with users who are not from their institutions. In a statewide chat consortium, Bishop found that nonlocal operators performed comparably to local operators, though questions related to employment, library cards, and log-ins had low rates of correct responses.28 He also observed that many nonlocal operators “put forth extra effort to answer virtual questions as if they were a local librarian”29 but did not count how often operators revealed that they were not local to the user, nor how satisfied users were with local and nonlocal operators.

Operators are also sometimes limited by time. On many chat services, operators are scheduled for a specific shift and are relieved by other operators when their shift is over. The chat may need to be transferred to a new operator if the first operator cannot continue chatting. The authors were unable to find sources discussing the influence of transfers on user satisfaction. Finally, sometimes an operator cannot complete a user’s information request: because their institution does not have the resource the user is looking for, because the user asked for something that contravened library policy, or because of technical problems, among many other explanations. To the authors’ knowledge, having to tell the user that the operator cannot do something they requested has not been studied.
Dissatisfaction

Actively dissatisfied users represent a small proportion of chat reference users in most populations. Strong evidence of dissatisfaction with an answer was found in only 2.8 percent of interactions studied by Pomerantz, Luo, and McClure.30 Similarly, only 2 percent of users surveyed at Southern Illinois University said they would not use the chat service again.31 Marstellar and Mizzy found so few unfavorable patron responses in their study (five of 270 transcripts) that they were unable to perform planned cross tabulations.32 Only 0.8 percent of respondents who participated in a chat reference pilot indicated that they would not use the service again, a measure Durrance asserts is a strong indicator of a reference transaction’s success.33 Illinois State University observed much higher rates of dissatisfaction, with 14.3 percent of survey respondents indicating that they were dissatisfied or very dissatisfied.34 This dissatisfaction seemed to stem from the quality of the answers provided and the knowledge of the librarian, as these were identified as dissatisfactory in 7.3 percent and 5.4 percent of responses, respectively.35 Similarly, Kwon observed that 12.6 percent of users were not satisfied with the answer they received.36

Phase III of the Library Visit Study, a long-term study of reference service provision and user satisfaction at Western University in Ontario, Canada, includes some of the only work looking specifically at dissatisfaction in chat reference interactions.37 In it, MLIS students posed real questions to public and academic library service points, both in person and virtually, then reported on their experiences in a reflection of the interaction and a questionnaire. Nilsen identified three operator behaviors that were associated with user dissatisfaction:38

1. Bypassing the reference interview (for instance, not asking a single question to clarify the user’s information need);
2. Unmonitored referrals (such as referring the user to an information source without checking to make sure that it contained the desired information); and
3. Failure to ask follow-up questions (for example, not checking to see if the operator answered the user’s question satisfactorily or inviting the user to return later for further assistance).

The reference interview was missing in 80 percent of virtual reference interactions, while 70 percent were missing follow-up questions and 38 percent contained unmonitored referrals.39 The operator’s helpfulness, friendliness, and understanding of the information need were not correlated with the user’s willingness to return to the service.40

Methodology

Background and Setting

Scholars Portal is the service arm of the Ontario Council of University Libraries, a consortium representing the 21 university libraries in Ontario, Canada. Scholars Portal’s technical infrastructure preserves and provides access to information resources collected and shared by member libraries. Scholars Portal also develops and manages a wide range of digital services, including Ask a Librarian: a collaborative, bilingual chat reference service. Ask a Librarian accepts library- and research-related questions from students, faculty, staff, and alumni at participating universities 67 hours per week during the academic year. The service reaches approximately 375,000 full-time equivalent students and receives more than 25,000 chats per year. The service is staffed primarily by librarians and paraprofessional library staff during daytime hours. Graduate student library assistants (GSLAs) from library or information studies programs also staff the service during evenings and weekends. At the time of the study, Ask a Librarian used LivePerson’s LiveEngage chat software.
The researchers received approval for this study from the University of Toronto’s Research Ethics office and through the consortium’s Data Working Group before beginning. Users were informed that their interactions could be used for research in the privacy policy that was included in the prechat survey. Operators were informed during training.

Data Collection and Sampling

A total of 9,424 chat interactions occurred between June 1, 2016, and December 1, 2016. All interactions included a transcript of the conversation between the user and the chat operator, metadata about the interaction, and a prechat survey. An optional exit survey was presented to users when the operator terminated the chat or when the user clicked an end chat button, but not when the user closed the browser window without ending the chat first. These data were routinely archived by Scholars Portal staff. Of the 9,424 chat interactions, 1,395 interactions (14.8%) included a completed exit survey. Four of the eight survey questions were designed to gauge a user’s satisfaction with the interaction:

• The service provided by the librarian was
  □ Excellent
  □ Good
  □ Satisfactory
  □ Poor
  □ Very poor
• The librarian provided me with
  □ Just the right amount of assistance
  □ Too little assistance
  □ Too much assistance
• This chat service is
  □ My preferred way of getting library help
  □ A good way of getting library help
  □ A satisfactory way of getting library help
  □ A poor way of getting library help
  □ A last resort for getting library help
• Would you use this service again?
  □ Yes
  □ No

Responses in bold were identified as dissatisfied, while those in italics were identified as neutral. Those with no text effects were deemed satisfied. The researchers noted in an Excel spreadsheet which interactions contained only satisfied responses and which included neutral or dissatisfied responses. Two samples were selected for the present study:

1. 256 interactions with satisfied exit survey responses were randomly selected using Excel. This represents 18 percent of all eligible interactions in the period (n = 1,395) with completed exit surveys. The confidence interval is 5.52 with a confidence level of 95 percent.
2. All interactions with dissatisfied or neutral exit survey responses (n = 217) were purposively selected. Homogeneous purposive sampling was determined to be appropriate: very few of the interactions with completed exit surveys displayed neutral or dissatisfied sentiments (16%), so all available interactions would provide valuable data.

Data Preparation

Once the sample interactions were identified, we anonymized the spreadsheet data and interaction transcripts using a checklist provided by the consortium’s Data Working Group. We removed any information that would identify the user, the operator, or the institutional affiliation of either party.

Coding Transcripts

For the transcripts, we surveyed the literature for variables that would be relevant and created variables we hypothesized would be worth investigating. The result was a codebook with thirty variables. Only those variables included in this study’s analysis will be described:

1. Opening Behaviors: Behaviors that usually occur near the beginning of the chat.
  1.1 Clarification: Did the operator ask at least one open- or closed-ended question about the user’s information need?
  1.2 Confirmation: Did the operator confirm a mutual understanding of the user’s information need?
  1.3 Attempt to resolve: Did the operator try to help the user with their information need?
2. Closing Behaviors: Behaviors that usually occur near the end of the chat.
  2.1 Satisfaction check: Did the operator make sure that the user was satisfied with the answer they received?
  2.2 Invitation to return: Did the operator invite the user to come back if they had more questions?
  2.3 Chat ended mutually: Is there evidence that both the user and operator knew and agreed that the chat was ending?
3. Anytime Behaviors: Behaviors that can occur at any time during the chat.
  3.1 Institution match reveal: Did the operator reveal that they did not work at the same institution as the user?
  3.2 Transfer: Was the chat transferred from one operator to another?
  3.3 Tone: Was the operator professional and courteous?
  3.4 Referral: Did the operator recommend that the user contact another service point or individual?
  3.5 Interest and empathy: Did the operator make it clear that they cared about the user and/or the user’s question?
  3.6 Informality: Did the operator use an informal writing style (such as sentence fragments, emoji, or contractions)?
  3.7 “No”: Did the operator make the user aware that their information need could not be met?

A more detailed explanation of each variable is available as an appendix.

Following best practices established in the field, all four members of the research team coded a test set of 15 transcripts using a draft codebook and a coding form that fed into a spreadsheet created using Google Forms.41 We then met to discuss discrepancies in our choices, refined the codebook and coding form, and coded a further 10 transcripts. We analyzed the intercoder reliability for all variables, having predetermined a threshold of 80 percent average pairwise percent agreement. A few variables fell below this threshold, so we repeated the process a third time with an additional 15 transcripts. After the third round, two variables included in this study were still below 80 percent agreement: formality (64%) and interest/empathy (67%). Having established a strong level of intercoder reliability for most of the variables, the researchers moved on to the transcript coding stage.
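The reliability statistic used above, average pairwise percent agreement, is straightforward to compute: for every pair of coders, take the share of transcripts they coded identically, then average over all pairs. The sketch below is illustrative only, not the research team's actual analysis script; the coder names and codes are invented.

```python
from itertools import combinations

def avg_pairwise_agreement(codings):
    """Average pairwise percent agreement for one coded variable.

    `codings` maps each coder to their codes for the same transcripts,
    in the same order. Agreement for a pair of coders is the proportion
    of transcripts they coded identically; the statistic averages this
    proportion over all coder pairs.
    """
    scores = []
    for a, b in combinations(codings, 2):
        matches = sum(x == y for x, y in zip(codings[a], codings[b]))
        scores.append(matches / len(codings[a]))
    return sum(scores) / len(scores)

# Invented example: three coders, five transcripts, binary codes
codes = {
    "coder1": [1, 0, 1, 1, 0],
    "coder2": [1, 0, 0, 1, 0],
    "coder3": [1, 1, 1, 1, 0],
}
print(avg_pairwise_agreement(codes))  # about 0.73, below the 80% threshold
```

A variable scoring below the predetermined 0.80 threshold on this measure would, as described above, be sent through further rounds of multiple coding.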
All variables were coded by a single researcher, with the exception of the two variables that did not have acceptable pairwise percent agreement. These were coded over three rounds by at least two researchers to address concerns with intercoder reliability:

• Round 1: Each researcher independently coded their assigned transcripts for all variables.
• Round 2: Each researcher was assigned a set of transcripts that they had not yet seen and independently coded only the two variables that did not have acceptable pairwise percent agreement.
• Round 3: Each researcher was assigned a set of transcripts they had not seen in either of the previous rounds and resolved any conflicts between the previous coders for the two variables with poor intercoder reliability.

This procedure was informed by best practices in qualitative research. Barbour advises that multiple coding can address concerns with intercoder reliability and increase thoroughness.42

Exit Survey Free Text

The researchers examined all free text responses included in the sample’s exit surveys. We classified comments related to operator behavior using the codes we employed in the transcript analysis portion of the study.

Data Compilation

Once each transcript had been coded, we combined the coded data spreadsheet with the prechat and exit surveys, metadata, and exit survey free text themes in a single spreadsheet and prepared it for SPSS input. A research design consultant based at the Education Commons at the University of Toronto’s Ontario Institute for Studies in Education advised us which statistical tests to run.

Results

Demographic Characteristics

Since our study used exit surveys that were self-selected, we investigated the demographic characteristics of the eligible and ineligible segments of the population to determine if they were skewed.
A comparison is presented in table 1, which shows that the percent share of each demographic group is very similar for both the eligible and the ineligible groups. A chi-square test of independence revealed that there was no significant association between user status and the presence of a completed exit survey (χ2 = 10.062, p = 0.074). A Pearson chi-square is a test of independence that determines if there is a statistically significant relationship between categorical variables (that is, variables with no hierarchy or order, only names). There were too few French-language chats to obtain a Pearson chi-square, so we used a test appropriate for small samples, Fisher’s Exact test, which was also not significant at p = 0.783.

TABLE 1
Comparison of Demographic Characteristics of Eligible and Ineligible Chat Interactions during the Study Period

| User Status      | Ineligible (No Completed Exit Survey) N | % | Eligible (Completed Exit Survey) N | % |
|------------------|------:|------:|----:|------:|
| Alumni           | 291   | 3.6%  | 60  | 4.3%  |
| Faculty Member   | 371   | 4.6%  | 81  | 5.8%  |
| Graduate Student | 2,232 | 27.8% | 353 | 25.3% |
| Other            | 672   | 8.4%  | 116 | 8.3%  |
| Undergraduate    | 4,449 | 55.5% | 785 | 56.3% |

TABLE 2
Summary of Chi-square Tests of Independence by Variable

| Category          | Variable                 | χ2     | df | Significance |
|-------------------|--------------------------|-------:|---:|-------------:|
| Opening Behaviors | Clarification            | 6.127  | 1  | 0.013        |
|                   | Confirmation             | 2.882  | 1  | 0.09         |
|                   | Attempt to Resolve       | 14.888 | 1  | <0.001       |
| Closing Behaviors | Satisfaction Check       | 7.852  | 1  | 0.005        |
|                   | Invitation to Return     | 0.122  | 1  | 0.727        |
|                   | Chat Ended Mutually      | 33.304 | 1  | <0.001       |
| Anytime Behaviors | Institution Match Reveal | 4.323  | 1  | 0.038        |
|                   | Tone                     | 14.483 | 1  | <0.001       |
|                   | Transfer                 | 5.990  | 1  | 0.014        |
|                   | Referral                 | 17.328 | 1  | <0.001       |
|                   | Interest and Empathy     | 22.692 | 1  | <0.001       |
|                   | Informality              | 3.958  | 1  | 0.047        |
|                   | “No”                     | 13.690 | 1  | <0.001       |

Coded Variables

The researchers used chi-square tests of independence to determine if there was a relationship between user dissatisfaction and the observed operator behaviors. As shown in table 2, 11 variables had a significant relationship with dissatisfaction at the p < 0.05 level.
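Tests like these can be reproduced from a contingency table of behavior presence against satisfaction. The sketch below is a minimal, illustrative implementation with invented counts, not the study's data; it relies on the df = 1 identity p = erfc(√(χ2/2)) rather than a statistics library. (In practice, scipy.stats.chi2_contingency and scipy.stats.fisher_exact cover the Pearson and Fisher's Exact tests described above.)

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test of independence for a 2x2 table.

    Returns (chi2, p). With one degree of freedom, the chi-square
    statistic is the square of a standard normal variate, so the
    p-value is erfc(sqrt(chi2 / 2)).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2, math.erfc(math.sqrt(chi2 / 2))

# Invented counts: rows = behavior absent / present,
# columns = satisfied / dissatisfied (column totals mirror the study's
# 256 satisfied and 217 dissatisfied or neutral interactions)
table = [[200, 120],
         [56, 97]]
chi2, p = chi_square_2x2(table)
print(f"chi2 = {chi2:.3f}, p = {p:.5f}")
```

A significant chi-square only signals that behavior and satisfaction are associated; it is the regression step that establishes the direction and strength of each association.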
Only Confirmation and Invitation to Return were not significantly associated with dissatisfaction. We entered the 11 significantly associated variables into a binary logistic regression model to determine the strength of each variable’s effect and whether the association was positive or negative. These 11 variables were clarification, attempt to resolve, satisfaction check, mutual chat ending, institution match reveal, tone, transfer, referral, interest and empathy, informality, and “no.” The overall model was statistically significant at χ2 (11) = 99.045 with a p-value of < 0.001. This means that the model was statistically reliable in distinguishing between satisfied and dissatisfied patrons. The Nagelkerke R2 was used to determine how useful the variables were in predicting dissatisfaction. The Nagelkerke R2 was 0.252, indicating that the model has sufficient explanatory power but does not have strong predictive power. It was correct in predicting the outcome 68.5 percent of the time. Those results are summarized in table 3.

TABLE 3
Summary of Binary Logistic Regression Model

| Category          | Variable                 | β      | Std. Error | Wald   | df | Significance |
|-------------------|--------------------------|-------:|-----------:|-------:|---:|-------------:|
| Opening Behaviors | Attempt to Resolve       | –0.52  | 0.33       | 2.481  | 1  | 0.115        |
|                   | Clarification            | 0.679  | 0.22       | 9.492  | 1  | 0.002        |
| Closing Behaviors | Satisfaction Check       | –0.166 | 0.219      | 0.571  | 1  | 0.45         |
|                   | Chat Ended Mutually      | –0.92  | 0.227      | 16.378 | 1  | <0.001       |
| Anytime Behaviors | Institution Match Reveal | 0.397  | 0.324      | 1.503  | 1  | 0.22         |
|                   | Tone                     | –1.287 | 0.409      | 9.906  | 1  | 0.002        |
|                   | Transfer                 | 1.031  | 0.412      | 6.249  | 1  | 0.012        |
|                   | Referral                 | 0.528  | 0.248      | 4.539  | 1  | 0.033        |
|                   | Interest and Empathy     | –0.689 | 0.215      | 10.26  | 1  | 0.001        |
|                   | Informality              | –0.137 | 0.213      | 1      | 1  | 0.522        |
|                   | “No”                     | 0.367  | 0.237      | 2.385  | 1  | 0.122        |

R2 = 0.189 (Cox & Snell), 0.252 (Nagelkerke). Model χ2 (11) = 99.045, p < 0.001.

Opening Behaviors

Of the three opening behaviors, only two were significantly associated with dissatisfaction in the chi-square tests: clarification (p < 0.05) and attempt to resolve (p < 0.001). The regression model showed that attempting to resolve the question was not a significant explanatory variable (β = –0.52, p = 0.115). Clarification was a positive, statistically significant variable in the model (β = 0.679, p = 0.002). When a coefficient (β value) is positive in the regression model, it indicates that the presence of the variable in the chat explained increases in user dissatisfaction. Conversely, a negative coefficient would explain decreases in dissatisfaction. In the exit survey, users expressed frustration when they perceived that the operator did not understand their information need, something we interpreted as relating to clarification:

I might use this service again (hopefully the person I talk to is more helpful next time).… even though I gave the person enough info, he wasn’t super helpful and he didn’t really understand what I wanted.

We found that users mentioned that the operator had attempted to resolve their issue more frequently than they criticized a lack of operator effort:

Present: Chat didn’t solve my issue. The librarian did try though so I was satisfied that she made the effort.

Absent: My issue remains unsolved and they were not able to help because Ask a Librarian was closing in 8 minutes. A little upset that my answer was that through trial and error I’ll find online articles.

Closing Behaviors

Invitation to return was not significantly related to dissatisfaction in the chi-square tests, but satisfaction check (p < 0.05) and mutual chat ending (p < 0.001) were both related. However, in the regression model only mutual chat ending was a statistically significant variable (p < 0.001). The coefficient of mutual chat ending was negative, indicating that the variable explained decreases in dissatisfaction (β = –0.92). The exit surveys confirmed that the way the chat ended was important for users:

Librarian left too quickly. I was not able to ask any additional questions and the librarian immediately left.
I couldn’t read the librarian’s reply before the chat ended.

Anytime Behaviors

The chi-square tests of independence suggested that all behaviors in this category were related to dissatisfaction. Revealing an institutional mismatch, using an informal communication style, and saying “no” were not significant explanatory variables for dissatisfaction in our regression model, however. Transfers (β = 1.031, p = 0.012) and referrals (β = 0.528, p = 0.033) were both positive, significant variables in the model, meaning that their presence explained increases in dissatisfaction. The presence of a professional tone (β = –1.287, p = 0.002) and of interest or empathy (β = –0.689, p = 0.001) were both strongly negatively associated with dissatisfaction, suggesting that they explain decreases in dissatisfaction.

Exit survey comments corroborated that an operator’s manner was important to users. The absence of a professional tone and a lack of warmth or empathy were both cited as reasons users were dissatisfied:

This was my first time using this chat but I’ve used other live chats before and even though they couldn’t always help me they were a lot warmer with their reply. ‘I doubt it’ isn’t the best to use when trying to help someone during a chat service.

This was extremely unhelpful. I have a quiz and I had difficulty finding an article. I thought the response was extremely rude and unhelpful.

Delays in having their question answered—whether because users were referred to another service point or transferred to another operator—were also common themes:

This was my second time using this option and both times I am told the shift is ending and request to transfer me.

It seemed to take longer than it should (about 30 minutes with 2 librarians) to find out that Library didn’t have an online subscription to a journal.
Several comments were made about the operator not being from the user’s institution:

The operators are not [University] librarians, and hence are not aware of the resources at [University], which was the subject of my question.

I thought I was connecting with someone at my institution. I think this service is good for general question[s] but not really for institution specific questions. So it was a bit of a let down.

Discussion of Findings

The purpose of this study was to identify operator behaviors that contribute to user dissatisfaction. A series of chi-square tests of independence on 473 chat transcripts with completed exit surveys (of which 217 had dissatisfied or neutral responses) found 11 behaviors that were significantly associated with dissatisfaction. Further investigation with a binary logistic regression revealed that only six of these had strong explanatory power. Three of these behaviors had positive associations, meaning that their presence in the chat explained increases in user dissatisfaction. Those behaviors were (1) clarification; (2) the operator transferring the chat to another operator; and (3) the operator referring the user to another service point. A further three behaviors were negative explanatory variables, meaning that their presence explained decreases in user dissatisfaction. Put another way, the absence of these behaviors explained increases in dissatisfaction: (4) ending the chat mutually; (5) maintaining a professional tone; and (6) showing interest in the question or empathy with the user.

These results add an interesting layer to Nilsen’s Library Visit Study.43 Phase III of that study concluded that bypassing the reference interview, providing unmonitored referrals, and failing to ask follow-up questions were associated with user dissatisfaction.
Of the two variables included in our study that relate to the reference interview, clarification and confirmation, only clarification was significantly associated with dissatisfaction, but it was a positive association. In the exit survey comments, dissatisfied users said they were frustrated when the operator did not understand their information need, similar to comments collected by Nilsen.44 This suggests that users want to be clearly understood but do not want to spend time explaining themselves in depth. We hypothesize that this is a function of chat as a reference medium. It can be difficult to express complex concepts textually, so perhaps users are frustrated at being unable to make their needs clear in a fast, easy way.

Our study also corroborates Nilsen’s designation of referrals as a dissatisfying behavior. We found that the presence of a referral in a chat was a strong positive explanator of user dissatisfaction. Nilsen’s third behavior, asking follow-up questions, was represented in our study by two variables, satisfaction check and invitation to return. Though satisfaction check was not independent from dissatisfaction, it was ultimately not found to be a statistically significant predictor of dissatisfaction. Invitation to return had no association with dissatisfaction. It should be noted, though, that Nilsen did not count automated messages as true invitations to return.45 Our study allowed these “canned” messages because operators on the service were trained to use them as a time-saving measure. Since so many operators use these automated messages as directed, the researchers felt discounting them would result in a less usable dataset.

The way a chat ends explains the user’s dissatisfaction, according to our findings. Specifically, chat endings that were not mutual explained user dissatisfaction.
Previous stages of the Library Visit Study have focused on chat termination, drawing on Nolan’s theories.46 Nolan offers time as a policy-institutional factor that influences an operator’s decision to terminate a chat. As with transfers, chat operators may wish to rush a closing or simply leave a chat because their shift is ending. The medium of chat might also cause a user to be slow in responding to an operator if they are multitasking with multiple browser windows open, causing an operator to believe they no longer wish to continue the interaction. Though the present research project cannot explain why these unsatisfactory terminations occurred, it suggests that mutual chat endings will help avoid dissatisfaction in users.

The difference in findings between our study and Nilsen’s could be accounted for by the methodology. The Library Visit Study was unobtrusive; individuals recruited by the researchers to act as users initiated and then reported on reference interactions with operators who did not know they were being observed.47 The “users” in that study were always aware that their purpose was to evaluate the operator, even though they were directed to ask a question that mattered to them personally. They may have reflected on the interaction differently if it had occurred organically. Further, our study used obtrusive means to measure user dissatisfaction. The interactions collected were in no way influenced by the researchers, though the users were aware they were evaluating the operator in their exit surveys. We also suspect that the user’s investment in reference as a professional practice is important. In the Library Visit Study, the participants were recruited from an MLIS program. Library science students may have different expectations about the operator’s behavior than “civilians” would. They may have knowledge of the reference interview, giving them a rubric with which to judge the interaction that users in our study might not have.
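Before the regression, each behavior was screened with a chi-square test of independence. As an illustration of that screening step, the sketch below computes a plain Pearson statistic (standard library only, without continuity correction, so it may differ slightly from the study’s SPSS output) from the interest-or-empathy counts reported in the Discussion of Findings; the 256 satisfied chats are inferred from the 473 transcripts minus the 217 dissatisfied responses:

```python
# 2x2 contingency table: rows = behavior absent/present,
# columns = satisfied/dissatisfied chats.
# Counts follow the study's reported interest-or-empathy figures:
# absent in 103 of 256 satisfied and 135 of 217 dissatisfied chats.
observed = [
    [103, 135],              # behavior absent
    [256 - 103, 217 - 135],  # behavior present
]

def chi_square(table):
    """Pearson chi-square statistic for a table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

stat = chi_square(observed)
# With 1 degree of freedom, the 0.05 critical value is about 3.84,
# so a statistic this large indicates a significant association.
print(f"chi-square = {stat:.2f}")
```

A statistic near 22.7 on 1 degree of freedom is far beyond the 3.84 cutoff, consistent with the study’s finding that this behavior was not independent of dissatisfaction.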
Logically, behaviors associated with delaying completion of the user’s information need could result in dissatisfaction. Connaway, Dickey, and Radford found that convenience and immediacy were the most valued factors of chat reference.48 Transferring the user to another operator or referring the user to another service point delays the user from getting a definite answer. Our research suggests that both explain dissatisfaction. Unfortunately, the operator often has good reasons for these behaviors. At Ask a Librarian, transfers usually happen at shift change time when the operator needs to leave but the user wants to continue chatting. Similarly, referrals are common in consortia when the operator may not have the subject expertise or local knowledge needed to complete the question. The operator may provide a referral to save the user’s time, something they may appreciate in the near future even if it is frustrating in the present. Both of these scenarios may feel to the user like unnecessary delays, despite the operator’s best intentions.

We suspect that telling the user “no” was not an explanatory variable because it does not delay the user. It provides a definite answer that the user can act upon immediately. This is good for operators, as saying “no” is something that often cannot be avoided. A common example from our practice is when the user’s institution does not have access to a particular article or book. Having confirmation that this is the case, the user can choose to place an interlibrary loan request or find another source, both decisions that can be made immediately.

Our results showed that the operator’s manner had a strong influence on user dissatisfaction. Interactions where the operator was rude or abrupt, and/or failed to show empathy to the user or express interest in their question, were more likely to result in dissatisfied scores.
Maintaining a professional demeanor is a basic expectation of customer service. Instances where it was not maintained in the sampled transcripts were dismaying to the researchers. We found several chats where the operator’s first response to the user’s query was “I doubt it” or “we can try,” which came off as very abrupt ways to begin a chat. Failing to show interest and empathy, too, was much more common than we had anticipated. It was absent in 103 (40%) of satisfied chats and 135 (62%) of dissatisfied chats. We hypothesize that these instances may be born of the operator’s desire to “get down to business,” something that might be more common if the operator is chatting with multiple users or if their shift is about to end. It can be easy to forget about the user’s relational needs as a chat operator when they are not physically in front of you and you are stressed out from managing multiple chats at once.

Happily, there seemed to be more leeway in the operator’s communication style. Informality was not a predictor of dissatisfaction in our study, though it is possible that some types of users could prefer one style more than others. The present study did not distinguish types of users, but this might be a fruitful avenue for future research.

Limitations
Our methodology carries a few limitations that should be noted when considering the generalizability of our findings. We used exit surveys to gauge the user’s satisfaction or dissatisfaction with a chat interaction. This approach can be problematic because it only measures the user’s feelings in the moment. The exit survey response rate is also a consideration. Our chat software only presented the exit survey to users who completed the chat and did not prematurely close the browser window. Users who left by closing the browser window without clicking an “end chat” button would not have been invited to participate. Next, user satisfaction is only one measure of a chat’s success.
Other transcript analysis studies have assessed the quality of the answer provided by the operator.49 Finally, our quantitative analysis does not include potential confounding variables in the regression model.

Conclusion
There are many factors that can cause a user to leave an interaction less satisfied than operators might like. Though it is impossible to control for all of them, our research suggests that there are some things operators can do to decrease the likelihood that a user will leave dissatisfied:
• Avoid being abrupt or rude. The user has no visual or tonal cues, so ensuring that your words are polite and welcoming is even more important than in face-to-face reference.
• Avoid being “all business” during the chat. Users appreciate your interest and empathy.
• Avoid transferring the user to another operator. Though not always possible, staying with the user as long as you can reduces delay for them.
• Avoid referring the user to another service point or staff member. You might be able to contact other service points on their behalf instead.
• Avoid terminating the chat before the user is ready. Wait until they acknowledge your closing messages before you leave.

Our research also indicates that there are some unavoidable behaviors that are associated with dissatisfaction. Asking clarifying questions is necessary for understanding the user’s information need and thus must be employed. Finally, our research suggests that operators should worry less about revealing that they are from a different institution than the user, how hard they attempt to resolve the question, telling the user “no,” how formal or informal their communication style is, inviting the user to return, and performing a satisfaction check. While these behaviors might make a difference to satisfaction, our study found that they made no significant difference to dissatisfaction.
Acknowledgments
Thank you to Olesya Falenchuk at the OISE Education Commons for her help with planning the statistical analysis and interpreting SPSS’s outputs. Thank you to Amy Greenberg for her contributions to the research team, especially coding the chat transcripts.

APPENDIX. Variables Coded in Transcripts
Each variable is listed under its category with the inspiration for including it, the codebook description, and an example from the transcripts.

Opening Behaviors

Clarification. Inspiration: RUSA 3.1.8.50 Codebook description: The operator asked an open- or closed-ended question about the user’s information need. Example: Operator: Could you tell me a little more about your topic and what you have found so far, [Patron]?

Confirmation. Inspiration: RUSA 3.1.5.51 Codebook description: The operator confirmed that their understanding of the user’s information need was correct, usually by paraphrase or closed-ended question. Example: Operator: Okay, you are asking about citing online archival material, specifically whether you should be indicating that your sources are online ones. Is that correct?

Attempt to Resolve. Inspiration: Keyes and Dworak found that, in 5 percent of interactions in their study, the operator failed to make sufficient effort.52 Codebook description: The operator provided a bare minimum of support to the user. The operator’s effort should have matched the complexity of the question. Common examples of non-attempts include: providing a link with no context as an answer; trying something obvious then giving up; not looking for local instructions. Example: Operator: Sorry, I can’t access that information online. Are you able to visit the info desk at [Branch]? That would be the best way to find out. — Operator: What do you mean a booking? User: Like when I am booking a study room. Operator: I’m not from [University] and I don’t see any operators from [University]. I looked this up and could not find information. You should call the library and ask. Sorry.

Closing Behaviors

Satisfaction Check. Inspiration: RUSA 5.1.1.53 Codebook description: The operator checked to see if they answered the user’s question or if they were satisfied with some element of the service. Example: Operator: Is that what you’re looking for?

Invitation to Return. Inspiration: RUSA 5.1.2.54 Codebook description: The operator invited the user to return either with a “canned message” or in the operator’s own words. Example: Operator: Thank you for using Ask a Librarian chat. Remember to come back if you have more questions.

Chat Ended Mutually. Inspiration: Duinkerken, Stephens, and MacDonald included premature endings in their study. The categories were drawn from our professional practice.55 Codebook description: Both the operator and the user acknowledged and agreed that the chat was ending. Example: Operator: Is there anything else I can do for you today? User: No, thank you. Have a good day :) Operator: You as well! <User closed chat>

Anytime Behaviors

Institution Match Reveal. Inspiration: Bishop, Kwon, and other researchers in consortial chat settings examine nonlocal operators.56 Our professional practice made us wonder if the outcomes would be the same whether or not a user realizes they are chatting with a nonlocal operator. Codebook description: The operator revealed to the user that they are not from the user’s institution, or campus if within the same institution. The operator must have explicitly stated this; coder inferences are not sufficient. Example: Operator: Are you looking for [an] article then? Is CCT Computer and Tech, sorry I’m not from [University].

Tone. Inspiration: RUSA 3.1.2.57 Codebook description: The operator maintained a professional and courteous tone. They were never rude, abrupt, inappropriate, or unprofessional. Example: Operator: how you doing, User? User: just great. you? Operator: LIVING THE DREAM! Operator: :0 — Operator: You’ll need to sift through the results. i’ll give you the libguide in a sec… Operator: So off topic but i live in [Town] too!! User: I think I found an article that could be good Operator: Great! User: whoa: where does it say I’m from [Town]? User: but cool! Operator: It tells me the country, city, [state/province], internet provider and that you use [Telecom]. :) Operator: And that you’re an undergrad. Operator: And on chrome. :)

Transfer. Inspiration: Our professional practice includes transferring users to operators at the end of chat shifts. We were curious to see if this affected dissatisfaction. Codebook description: The user is transferred from at least one operator to another during the course of the chat. No warning messages are required to qualify. Example: Operator1: Great; thanks! I’ll take a look, but just to let you know my shift is ending in a few minutes. I’d be happy to transfer you to another librarian, though, who can help you further. User: That’s awesome; thank you! Operator1: Ok, I’ll transfer you over to [Operator2]. Please give me a few moments… User: Okay. Thanks :) System: Please wait while I transfer the chat to ‘[Operator2].’ System: You are now chatting with ‘[Operator2].’

Referral. Inspiration: Ward and Jacoby, among many others, studied referrals in chat reference.58 Codebook description: The operator shared contact information or advised the user to contact another staff member or a different service point to complete the question. Example: Operator: Also, if still no luck by tomorrow. We do have a librarian who may know about spss. Unfortunately, she isn’t working tonight. But you can find her contact info here. <a href=“URL”>[Librarian]</a>

Interest and Empathy. Inspiration: RUSA 2.0.59 Codebook description: The operator made it clear that they cared about the user and/or the user’s question. Behaviors like “small talk” (such as “how are you,” talking about the weather), exhibiting kindness (examples: sympathizing with problems, acknowledging difficulties), showing support (encouraging the user), and offering the user congratulations are examples of interest and empathy. Example: Operator: That looks like an interesting question! Which course or department is this for? I want to get you the best resources. :) — Operator: Sorry. That’s frustrating; they should give PDF copies.

Informality. Inspiration: Waugh had subjects compare a chat with a formal operator and one with an informal operator and collected impressions.60 Codebook description: The operator tended to use more informal language during the chat, including: sentence fragments; emojis; contractions; abbreviations; lack of punctuation; lack of capitalization; “prosodic features” like ellipsis for passage of time; reactions (like “lol”); multiple punctuation for emphasis (such as more than one question mark at the end of a question or an exclamation point at the end of a statement). Example: Operator: Okay. No worries. Technology troubles again :) — Operator: Aaaand it won’t because we only have the digital content for this journal from 1965–1984 Patron: :’( Operator: Yup, that sucks. But that’s how you can get around the technical problem that’s happening right now. Proquest still wouldn’t have been able to find this article. If you really really need it, you can request it via interlibrary loan.

“No”. Inspiration: Our professional practice made us curious as to whether users were only dissatisfied because they could not complete their information needs. Codebook description: At some point in the chat, the user found out they could not do something they wanted due to technical, policy, library collection, or any other reason. The operator did not actually have to say the word “no.” Example: Operator: Okay. I’m afraid I don’t think there is anything I can do to help with this. I don’t have access to the back end to be able to see why it is missing. — Operator: The bad news is we don’t have this article online—even though we do have more recent online volumes of the journal. The good news, though, is you can get the article in print at the [Branch Science Library].

Notes
1. Nahyun Kwon and Vicki L. Gregory, “The Effects of Librarians’ Behavioral Performance on User Satisfaction in Chat Reference Services,” Reference & User Services Quarterly 47, no. 2 (2007): 137–48; Klara Maidenberg and Dana Thomas, “Do Patrons Appreciate the Reference Interview? Virtual Reference, RUSA Guidelines and User Satisfaction” (2014 Library Assessment Conference, Seattle, WA, 2014), 697–705; Steven Baumgart, Erin Carrillo, and Laura Schmidli, “Iterative Chat Transcript Analysis: Making Meaning from Existing Data,” Evidence Based Library and Information Practice 11, no. 2 (June 20, 2016): 39–55, https://doi.org/10.18438/B8X63B.
2. Reference & User Services Association (RUSA), “Guidelines for Behavioral Performance of Reference and Information Service Providers” (Sept. 29, 2008), available online at www.ala.org/rusa/resources/guidelines/guidelinesbehavioral [accessed 30 November 2018].
3. Matthew R. Marsteller and Danianne Mizzy, “Exploring the Synchronous Digital Reference Interaction for Query Types, Question Negotiation, and Patron Response,” Internet Reference Services Quarterly 8, no. 1/2 (2003): 149–65, https://doi.org/10.1300/J136v08n01_13.
4. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
5. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
6. Wyoma van Duinkerken, Jane Stephens, and Karen I. MacDonald, “The Chat Reference Interview: Seeking Evidence Based on RUSA’s Guidelines,” New Library World 110, no. 3/4 (2009): 107–21, https://doi.org/10.1108/03074800910941310.
7. Kelsey Keyes and Ellie Dworak, “Staffing Chat Reference with Undergraduate Student Assistants at an Academic Library: A Standards-Based Assessment,” Journal of Academic Librarianship 43, no. 6 (2017): 469–78.
8. Keyes and Dworak, “Staffing Chat Reference with Undergraduate Student Assistants at an Academic Library.”
9. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
10. Kwon and Gregory, “The Effects of Librarians’ Behavioral Performance on User Satisfaction in Chat Reference Services,” 145.
11. Greta Valentine and Brian D. Moss, “Assessing Reference Service Quality: A Chat Transcript Analysis,” in At the Helm: Leading Transformation (Baltimore, MD: ACRL, 2017), 67–75.
12. Maidenberg and Thomas, “Do Patrons Appreciate the Reference Interview?”
13. Jeffrey Pomerantz, Lili Luo, and Charles R. McClure, “Peer Review of Chat Reference Transcripts: Approaches and Strategies,” Library & Information Science Research 28, no. 1 (2006): 24–48.
14. Pomerantz, Luo, and McClure, “Peer Review of Chat Reference Transcripts”; RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
15. Adolfo G. Prieto, “Humanistic Perspectives in Virtual Reference,” Library Review 66, no. 8/9 (2017): 695–710, https://doi.org/10.1108/LR-01-2017-0005.
16. Prieto, “Humanistic Perspectives in Virtual Reference,” 701.
17. Jennifer Waugh, “Formality in Chat Reference: Perceptions of 17- to 25-Year-Old University Students,” Evidence Based Library and Information Practice 8, no. 1 (2013): 19–34, https://doi.org/10.18438/B8WS48.
18. Waugh, “Formality in Chat Reference.”
19. Jack M. Maness, “A Linguistic Analysis of Chat Reference Conversations with 18–24 Year-Old College Students,” Journal of Academic Librarianship 34, no. 1 (2008): 31–38, https://doi.org/10.1016/j.acalib.2007.11.008.
20. Marie L. Radford and Gary P. Radford, Library Conversations: Reclaiming Interpersonal Communication Theory for Understanding Professional Encounters (Chicago, IL: Neal-Schuman, an imprint of the American Library Association, 2017).
21. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
22. Kwon and Gregory, “The Effects of Librarians’ Behavioral Performance on User Satisfaction in Chat Reference Services.”
23. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
24. Vera J. Lux and Linda Rich, “Can Student Assistants Effectively Provide Chat Reference Services? Student Transcripts vs. Librarian Transcripts,” Internet Reference Services Quarterly 21, no. 3/4 (2016): 115–39, https://doi.org/10.1080/10875301.2016.1248585.
25. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
26. Nahyun Kwon, “User Satisfaction with Referrals at a Collaborative Virtual Reference Service,” Information Research 11, no. 2 (2006): 70–91.
27. David Ward and JoAnn Jacoby, “A Rubric and Methodology for Benchmarking Referral Goals,” Reference Services Review 46, no. 1 (2018): 110–27, https://doi.org/10.1108/RSR-04-2017-0011.
28. Bradley Wade Bishop, “Can Consortial Reference Partners Answer Your Local Users’ Library Questions?” portal: Libraries & the Academy 12, no. 4 (2012): 355–70.
29. Bishop, “Can Consortial Reference Partners Answer Your Local Users’ Library Questions?” 367.
30. Pomerantz, Luo, and McClure, “Peer Review of Chat Reference Transcripts.”
31. Stephanie J. Graves and Christina M. Desai, “Instruction via Chat Reference: Does Co-Browse Help?” Reference Services Review 34, no. 3 (2006): 340–57.
32. Marsteller and Mizzy, “Exploring the Synchronous Digital Reference Interaction for Query Types, Question Negotiation, and Patron Response.”
33. Jo Kibbee, David Ward, and Wei Ma, “Virtual Service, Real Data: Results of a Pilot Study,” Reference Services Review 30, no. 1 (2002): 25–36, https://doi.org/10.1108/00907320210416519; Joan C. Durrance, “Reference Success: Does the 55 Percent Rule Tell the Whole Story?” Library Journal 114, no. 7 (Apr. 15, 1989): 31–36.
34. Cassidy R. Sugimoto, “Evaluating Reference Transactions in Academic Music Libraries,” Music Reference Services Quarterly 11, no. 1 (2008): 1–32, https://doi.org/10.1080/10588160802157124.
35. Sugimoto, “Evaluating Reference Transactions in Academic Music Libraries.”
36. Kwon, “User Satisfaction with Referrals at a Collaborative Virtual Reference Service.”
37. Kirsti Nilsen, “The Library Visit Study: User Experiences at the Virtual Reference Desk,” Information Research 9, no. 2 (2004), available online at www.informationr.net/ir/9-2/paper171.html [accessed 25 September 2018]; Kirsti Nilsen, “Comparing Users’ Perspectives of In-Person and Virtual Reference,” New Library World 107, no. 3/4 (2006): 91–104, https://doi.org/10.1108/03074800610654871.
38. Nilsen, “The Library Visit Study”; Nilsen, “Comparing Users’ Perspectives of In-Person and Virtual Reference.”
39. Nilsen, “The Library Visit Study.”
40. Nilsen, “Comparing Users’ Perspectives of In-Person and Virtual Reference,” 96.
41. Baumgart, Carrillo, and Schmidli, “Iterative Chat Transcript Analysis.”
42. Rosaline S. Barbour, “Checklists for Improving Rigour in Qualitative Research: A Case of the Tail Wagging the Dog?” BMJ 322, no. 7294 (May 5, 2001): 1115–17, https://doi.org/10.1136/bmj.322.7294.1115.
43. Nilsen, “The Library Visit Study”; Nilsen, “Comparing Users’ Perspectives of In-Person and Virtual Reference.”
44. Nilsen, “The Library Visit Study.”
45. Nilsen, “The Library Visit Study.”
46. Catherine Sheldrick Ross and Patricia Dewdney, “Negative Closure: Strategies and Counter-Strategies in the Reference Transaction,” Reference & User Services Quarterly 38, no. 2 (1998): 151–63; Christopher W. Nolan, “Closing the Reference Interview: Implications for Policy and Practice,” RQ (1992).
47. Nilsen, “The Library Visit Study.”
48. Lynn Sillipigni Connaway, Timothy J. Dickey, and Marie L. Radford, “‘If It Is Too Inconvenient I’m Not Going after It’: Convenience as a Critical Factor in Information-Seeking Behaviors,” Library & Information Science Research 33, no. 3 (2011): 179–90.
49. Deborah L. Meert and Lisa M. Given, “Measuring Quality in Chat Reference Consortia: A Comparative Analysis of Responses to Users’ Queries,” College & Research Libraries 70, no. 1 (Jan. 1, 2009): 71–84, https://doi.org/10.5860/crl.70.1.71; Kate Fuller and Nancy H. Dryden, “Chat Reference Analysis to Determine Accuracy and Staffing Needs at One Academic Library,” Internet Reference Services Quarterly 20, no. 3/4 (2015): 163–81; Marie L. Radford and Lynn Silipigni Connaway, “Not Dead Yet! A Longitudinal Study of Query Type and Ready Reference Accuracy in Live Chat and IM Reference,” Library & Information Science Research 35, no. 1 (2013): 2–13.
50. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
51. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
52. Keyes and Dworak, “Staffing Chat Reference with Undergraduate Student Assistants at an Academic Library.”
53. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
54. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
55. Duinkerken, Stephens, and MacDonald, “The Chat Reference Interview.”
56. Bishop, “Can Consortial Reference Partners Answer Your Local Users’ Library Questions?”; Kwon, “User Satisfaction with Referrals at a Collaborative Virtual Reference Service.”
57. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
58. Ward and Jacoby, “A Rubric and Methodology for Benchmarking Referral Goals.”
59. RUSA, “Guidelines for Behavioral Performance of Reference and Information Service Providers.”
60. Waugh, “Formality in Chat Reference.”