Measuring Quality in Chat Reference Consortia: A Comparative Analysis of Responses to Users' Queries

Deborah L. Meert and Lisa M. Given

Deborah L. Meert is Liaison Librarian in the Macdonald Campus Library at McGill University; e-mail: deborah.meert@mcgill.ca. Lisa M. Given is Associate Professor in the School of Library and Information Studies at the University of Alberta; e-mail: lisa.given@ualberta.ca.

Academic libraries have experienced growing demand for 24/7 access to resources and services. Despite the challenges and costs of chat reference service and consortia, many libraries are finding the demand for these services worth the cost. One key challenge is providing and measuring quality of service, particularly in a consortial setting. This study explores the quality of service provided in one academic library participating in a 24/7 chat reference consortium by assessing transcripts of chat sessions against in-house reference quality standards. Findings point to both similarities and differences between the chat interactions of local librarians and those of consortia staff.

Chat reference services are available to patrons in many academic libraries throughout North America. To save money and extend monitoring time, many libraries are opting to join consortia, which allow patrons' questions to be monitored by reference librarians at different institutions based on criteria such as hours of availability. Users' questions can be answered by any of the consortium's libraries. Despite the increasing popularity of chat reference (and consortia), the authors found that many academic librarians express doubts about the ability of staff from an outside institution to answer their users' questions effectively. To date, the literature has not examined whether library staff can adequately support other institutions' reference needs. This paper reports on one study designed to explore this question in the context of a consortia-based chat reference service used by a large Canadian university library.

Chat Reference Services: An Overview of the Literature

The library and information studies literature documents various opinions about the capabilities and challenges of chat reference, as well as some assessment of service quality and patron satisfaction. This section briefly examines the core literature, including the few papers that address chat reference consortia.

Meeting Patrons' Needs: The Chat Reference Context

Jana Ronan and Carol Turner note that academic libraries report a decline in in-person reference desk traffic since the early 1990s, despite increases in enrollment.[1] Fran Wilson and Jacki Keys note the same trend and point to the proliferation of new online resources and technologies, as well as users' increasing desire to access digital materials and services, as contributing factors.[2] Although patrons still need reference services, the nature of those needs has changed. Chat reference is merely one digital service now available to academic library patrons. However, despite its popularity, agreeing on a definition of "chat reference" is problematic.
Some librarians view it as an add-on to "real" (that is to say, in-person) reference services, while others see it as an integral part of a "changing information culture, central to the continued vitality of reference at the point of service."[3] If users' online (24/7) access continues to proliferate, do librarians have a responsibility to be present in this environment "as role models and facilitators of scholarship conducted with integrity"?[4]

Most librarians agree that it is important to provide service to users who are not physically in the library when they require assistance, and that this need increases as online resources increase. How best to meet these needs, and the ability of chat reference (especially collaborative services) to do so, remains unresolved in the literature.[5] As Ian Lee notes, "academic libraries have gone into cyberspace and maybe the librarian has to meet the student there."[6] Indeed, libraries are beginning to use a variety of new technologies for reference services (for instance, creating virtual reference desks in Second Life). However, without research that examines the array of digital services on offer, librarians cannot make effective financial and staffing decisions. This project addresses this gap as it pertains to chat reference consortia.

Chat Reference Consortia: New Territory for Reference Assessment

Libraries are increasingly exploring collaborative ventures, to save time and money and to make the best use of existing resources. However, with respect to chat reference consortia, Lee notes that, while some librarians feel these services represent exciting developments, others feel they are overrated.[7] Steve McKinzie states that the profession's infatuation with technology has caused librarians to make more out of chat reference than it is worth, noting that chat reference does not meet users' needs efficiently or deepen their research capability.[8]

Strengths of Chat Reference and Consortia

Chat reference not only allows librarians to answer remote users' questions in real time, but it also allows staff to demonstrate online resources with "co-browsing" software. As users may be in computer labs, unable to phone or physically seek immediate help, chat reference may be more helpful than waiting for an e-mail response. Kathy Dempsey suggests that, when users are given the choice of using nonlibrary online resources (for instance, found via Google) to answer their question immediately or postponing their question until they can go to the library (or hear from the librarian by phone or e-mail), users typically choose the nonlibrary source.[9] Chat consortia also push the boundaries of traditional service hours and locations by stepping in when local librarians are busy with other patrons or libraries are closed. In addition, some users do not (or cannot) use the traditional reference desk because of a disability, anxiety, or a language barrier.[10] Wilson and Keys note that people with certain types of hearing, vocal, or mobility challenges are also hesitant to approach reference librarians in person, because they may feel guilty about needing more time to have questions answered.[11]

The Challenges of Chat Reference and Consortia

It is not unusual for a new service or technology to present challenges. Chat reference and consortia services face numerous issues, but many institutions are successfully addressing them.
The two most problematic areas are: 1) the technology itself and 2) the perception that digital reference cannot adequately address complex or "serious" questions. Similarly, Ciccone and VanScoy note the "feast or famine" nature of chat reference, where librarians can be inundated with questions one moment and then receive none for hours. This prompts some libraries to question the cost-benefit ratio of belonging to a chat reference consortium.[12]

Staffing, interpersonal communication, and quality of service within and between institutions are just a few additional concerns raised by librarians. Ciccone and VanScoy note that 24/7 service is not something most institutions can provide independently, but that joining a consortium can make it possible.[13] However, many libraries worry that the quality of answers will decrease and that the libraries in their consortium will not understand their local institution's mission and curricular context. They also question the ability of any one librarian (or nonprofessional staff member) to be familiar with numerous different policies, services, and collections across consortium institutions.[14] Some librarians also raise concerns about the lack of nonverbal communication cues (such as facial expressions and tone of voice).[15]

How Do You Assess the Quality of Chat Reference?

Library managers regularly assess service quality by reviewing transcripts, creating policies, and monitoring users' feedback. However, few libraries have developed formal assessment tools. Ciccone and VanScoy, for example, identify two of the challenges managers face: 1) defining "quality" virtual reference service, especially when offered in collaboration with other institutions; and 2) defining "good service" from the user's perspective.[16] Procedures for assessing chat reference quality are starting to appear in the literature.[17] Libraries that provide chat reference via consortia must also develop appropriate assessment tools to determine quality within this type of service context.

Wilson and Keys note that another assessment-related challenge within a consortium is the diversity of skills, knowledge, experience, and approaches to customer service that different institutions bring to the chat reference format.[18] Defining a "successful" interaction is particularly problematic. Can a chat reference transaction and a traditional reference desk transaction be judged by the same criteria? Will librarians, users, and institutions define success in similar ways? David Ward examined some of these questions by focusing on the "completeness" of transcripts to ascertain the effectiveness of answers to short, subject-based questions.[19] Online transactions may well require the creation of new measures to assess "quality" and "success" in virtual environments. Chat reference transcripts offer library managers new ways of evaluating certain aspects of reference service, despite concerns raised about patron and employee privacy.[20] As one of Ronan's survey respondents notes, "Each session becomes a tangible artifact that is invaluable for studying user and reference staff behaviour, the research process, and resource usage."[21]

The Current Research

Many guides are emerging that offer "best practice" standards, evaluation tools, and marketing strategies for chat reference services, addressing usage statistics, user satisfaction, and interpersonal communication.
Marie Radford has published three interesting studies that look at communication and/or accuracy in chat reference interactions.[22] In the introduction to her 2003 study, Radford asserts that "evaluating virtual reference services is both greatly needed and sorely lacking... Research projects that evaluate individual chat sessions on a micro level…are very few in number."[23] However, little research addresses quality assessment of consortia, particularly comparative studies of chat reference transcripts between local and nonlocal staff.

Research Design and Methods

This study involved the development and application of a new measure for assessing the quality of chat reference interactions, with a focus on comparing process results for local versus consortia library staff. The setting was the University of Alberta Libraries, where chat reference services are provided by local and consortia library staff members. Library staff at the university (referred to here as UofA staff) who engage in chat reference services include professional librarians (that is to say, they have MLIS degrees), MLIS students, and nonprofessional staff. Consortia staff (referred to here as non-UofA staff) responsible for chat reference services include reference librarians from college and university libraries across North America, as well as staff of 24/7 Reference. The goal was to compare the process and quality of online chat reference answers as provided by UofA and non-UofA chat reference staff.

The University of Alberta is Canada's third largest research university and houses Canada's second largest academic library system. 24/7 Reference was originally started by professional librarians but is now owned and run by OCLC. It provides chat reference software for libraries and also offers membership in a chat reference consortium. Policy procedures for 24/7 Reference can be found on its Web site, www.questionpoint.org.

Goals of the Project

The goal of the first part of the study was to examine whether UofA and non-UofA chat reference staff answered UofA patrons' questions using processes and measures of quality similar to those set by UofA reference management for their in-house reference interactions. The goal of the second part of the study was to determine how many questions were answered in "real time" (by both UofA and non-UofA staff) or "deferred" (that is, where users had to wait for staff to contact them, at another time, with an answer), as well as the reasons particular questions were deferred. As one of the benefits of chat reference is to allow for real-time interaction with users, it is important to assess how often real-time answers are provided.

Transcript Selection and Data Preparation

Chat reference transcripts from the first year that the consortium service was instituted were collected. Transcripts from October 1 to April 30 were used; the data set was provided in chronological order and separated by month, allowing for comparisons over the academic year. Copies of the original transcripts were made, and student and librarian identifiers were removed by the manager of the chat reference service, so that individuals were anonymized prior to the researchers' analysis. In total, 2,983 transcripts were gathered from October 1 to April 30.
Of these, 604 transcripts were removed as they were incomplete or otherwise inappropriate for this analysis (for example, patrons ending the transaction prematurely). Also, interactions between UofA staff and non-UofA users were excluded from the study, as the measures of quality were developed for UofA's patrons. A total of 2,379 transcripts were included in the final data set; 1,402 logged interactions between a UofA staff member and a UofA user, with 977 documenting interactions between a non-UofA staff member and a UofA user. As there were fewer "non-UofA staff" transcripts than "UofA staff" transcripts, a sample of the 1,402 transcripts was drawn using a disproportionate stratified random sampling technique. This approach made the data set more manageable for data analysis and allowed for stratification of the population into two subpopulations, with a minimum number of respondents in each of the "UofA staff" and "non-UofA staff" categories. As the transcripts were already grouped by month, this strategy was applied separately for each one-month period. This resulted in a final sample size of 478 transcripts; with a total population of 2,379, a sample size of 477 provides for a confidence level of 99 percent, with a confidence interval of 5.28. Table 1 provides a month-by-month breakdown of the full sample, across staff categories.

TABLE 1
Breakdown of Transcripts (N = 478) in Study Sample, by Month and Staff Sub-categories

Month        U of A    Non-U of A    Total
October        40          37          77
November       37          34          71
December       31          33          64
January        40          30          70
February       37          31          68
March          34          32          66
April          33          29          62
Total         252         226         478

To obtain this sample from the complete collection, each month of transcripts was sampled separately. First, all of the October transcripts were divided into the two subgroups (UofA staff; non-UofA staff); if both types of staff interacted with the user during the transaction, the transcript was assigned to the category of the first staff member to engage with the user. Each subgroup was then divided into four "Question Categories" (created by the authors as broad but descriptive categories encompassing most questions asked), and a random sample of 10 transcripts was selected from each of the resulting (that is, eight) groups. This process was repeated across the seven months reflected in the data set. The four "Question Categories," which categorize the types of questions asked (or information requested) by users, are as follows:

1. Library User Information (e.g., What's my PIN number?)
2. Request for Instruction (e.g., How do I access an online article?)
3. Request for Academic Information (e.g., Where can I find information on genetics research?)
4. Miscellaneous/Nonlibrary (e.g., Can I pay my tuition online?)

Each complete transcript was coded as reflecting one of the four question categories. If a user asked more than one type of question within a single reference interaction, the question and answer that composed the majority of the interaction was used to assign a question category to that transcript.

Unfortunately, there were not always enough transcripts per month to provide a sample of 10 transcripts for each question category each month (especially for Question Category #4). Therefore, as Table 1 shows, some months have fewer than 40 transcripts.
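For readers who wish to replicate the sampling, the following Python sketch is a minimal illustration only, not the authors' actual tooling. It mirrors the per-month, per-subgroup, per-category sampling described above and checks the reported 99 percent confidence figure with a standard finite-population margin-of-error formula; the field names "month", "staff_group", and "question_category" are hypothetical labels for coding that was done by hand in the study.

    # Illustrative sketch (Python); assumptions noted in comments, not the authors' software.
    import math
    import random
    from collections import defaultdict

    def stratified_sample(transcripts, per_cell=10, seed=1):
        """Draw up to `per_cell` transcripts at random from each cell defined by
        month, staff subgroup (UofA vs. non-UofA), and question category.
        The dictionary keys used here are hypothetical metadata fields."""
        rng = random.Random(seed)
        cells = defaultdict(list)
        for t in transcripts:
            cells[(t["month"], t["staff_group"], t["question_category"])].append(t)
        sample = []
        for items in cells.values():
            rng.shuffle(items)
            sample.extend(items[:per_cell])  # some cells hold fewer than 10 transcripts
        return sample

    def margin_of_error(population, sample_size, z=2.58, p=0.5):
        """Finite-population margin of error (in percentage points) at a
        99% confidence level (z = 2.58)."""
        fpc = (population - sample_size) / (population - 1)  # finite-population correction
        return 100 * z * math.sqrt(p * (1 - p) / sample_size * fpc)

    # With N = 2,379 and n = 477 this prints roughly 5.28, consistent with the
    # confidence interval reported above (assuming this is how it was computed).
    print(round(margin_of_error(2379, 477), 2))

The margin-of-error check is an assumption about how the reported figure of 5.28 was derived; the sampling function simply makes explicit the eight-cells-per-month structure reflected in Table 1.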
In some months, there were not enough transcripts for Question Category #4 to be considered statistically significant; however, when all the transcripts for Category #4 are combined, the results are statistically significant. Therefore, data are presented here with all seven months combined rather than presented for each month individually.

Data Analysis: Part One

To address the goal of the first part of the study, the transcripts were analyzed to examine the process by which chat reference staff provided responses to users' questions. These responses were coded as to whether they did or did not meet the standards set by UofA reference management governing in-house reference transactions. These standards are as follows (per Question Categories 1–4):

Reference Transaction Standards Set by University of Alberta Reference Management

Question Category 1: Library User Information (e.g., What's my PIN number?) Was correct information (that is, information that accurately answered the question) given to the user? If an answer was not provided, was the user referred to an authoritative source that could provide an answer (for instance, referred to an academic department or university Web site)?

Question Category 2: Request for Instruction (e.g., How do I use a database?) Were correct, step-by-step instructions given (or demonstrated) to the user regarding their query? If users required further instruction, were they referred to another authoritative source (for example, asked to make an appointment with a librarian)?

Question Category 3: Request for Academic Information (e.g., Where can I find information on genetics research?) Was correct information (that is, information that accurately answered the question) given to the user? If an answer was not provided, was the user referred to an authoritative source that could provide an answer to the question (such as a scholarly journal)? If the staff member could not answer the user's question, or if the user required additional information, was the user referred to a subject specialist?

Question Category 4: Miscellaneous/Nonlibrary (e.g., Can I pay my tuition online?) Was correct information (that is, information that accurately answered the question) given to the user? If an answer was not provided, was the user referred to an authoritative source that could provide an answer to the question (for instance, referred to an academic department or university Web site)?[24]

Each transcript received either a "yes" or "no" allocation based on the standards for each Question Category. Comparative analyses were then conducted to see if the UofA and non-UofA chat reference staff interactions differed in their abilities to meet these process standards.

Data Analysis: Part Two

To address the goal of the second part of the study, each transcript was also coded with a "yes" or "no" designation as to whether the user received an answer from the staff member in "real time." If the transcript was coded "no," the data were further analyzed to determine why the user did not receive a real-time response.
These reasons were grouped into five categories:

Reasons Users' Questions Were Not Answered in "Real Time"

Reason 1: Technical difficulties (for instance, system disconnection or software not responding).

Reason 2: Information is not available to the staff member at the time of the transaction (for example, a database is unavailable, or the academic department where the information is housed is closed).

Reason 3: The user's question requires an in-depth reference interview/search or a subject specialist (example question: Can you help me write a business proposal?).

Reason 4: The staff member does not know the answer and must forward the question to another institution, department, or staff member (for example: Do you know what poem contains the line "By the dawn's early light"?).

Reason 5: The staff member does not have time to answer the question.

Comparative analyses were also conducted to see if the number of questions being answered in "real time" was the same or different between UofA and non-UofA chat reference staff transcripts. Further analysis compared the reasons why questions were not answered in "real time" across UofA and non-UofA chat reference staff.

Findings and Discussion

Research Question, Part One: Do UofA and non-UofA chat reference staff answer UofA patrons' questions using processes and measures of quality similar to those set by UofA reference management?

TABLE 2
Total Percentage of Transcripts, by Question Category, that Met the Standards

              Library User     Request for      Academic         Misc.            All Categories
              Information      Instruction      Information      Non-library      Combined
U of A        97% (68 of 70)   97% (68 of 70)   90% (63 of 70)   93% (39 of 42)   94% (238 of 252)
Non-U of A    76% (53 of 70)   84% (56 of 67)   87% (61 of 70)   83% (15 of 18)   82% (185 of 225)

When the data presented in Table 2 are examined, it can be seen that UofA staff met the standards 94 percent of the time for all question categories combined. This high percentage suggests that UofA staff are meeting the standards set by their managers. Non-UofA staff met these same standards only 82 percent of the time for all categories combined. Differences between these groups are most significant when each question category is examined separately.

The first question category, "Library User Information," requires knowledge of, or access to, information about library procedures, policies, standards, and records. UofA staff met the standards for answering this type of question 97 percent of the time, while non-UofA staff met the standards only 76 percent of the time. Interestingly, much of the information that was not provided to patrons by non-UofA staff was, indeed, available online; either this information was not found by the staff member or it was not used during the reference transaction. The UofA Libraries provided 24/7 Reference with an "information page" of policies, scripts, and "best practices" to support non-UofA staff who may need to respond to administrative or frequently asked questions, but these questions were still not always answered by non-UofA staff. That said, there were also a number of questions that were not addressed on the "information page" (such as "Where can I watch a video in the library?"). Although this information is available online at the UofA Libraries Web site, it may be more difficult to find, even for someone familiar with the site. Also, some of the information required to answer these types of questions was not available to the non-UofA staff member.
For example, one of the students' most commonly asked questions in this category was "What's my PIN number?" This information is not available online; however, some UofA staff can access student records or can phone other individuals who can access student records. As UofA staff typically serve on chat reference during regular campus business hours, finding this information would be relatively easy. During evening and weekend hours (which is when UofA MLIS students work in chat reference), the circulation desks are open, so PIN numbers would be accessible. However, non-UofA staff often answer questions at times when they cannot contact a UofA department to obtain an answer. Further, it is not common practice for a non-UofA staff member to contact the UofA by telephone to obtain information, even during normal business hours. If this type of information cannot be made available to all staff, at all hours, it will not be possible for all individuals to accurately respond to the user's request. For the types of questions that can be answered by non-UofA staff, it is essential that this information is clearly and publicly available and that these staff members access and use that information to answer patrons' questions. Providing alternative sources to non-UofA staff (for instance, phone numbers for department contacts) would also increase the success rate for meeting the standards for answering these types of questions.

In question category two, "Request for Instruction," the UofA staff also had a high success rate, with 97 percent meeting the standard. Non-UofA staff also fared well in this category, meeting the standards 84 percent of the time; however, this is well below the UofA staff performance level. The transcripts show that non-UofA staff most commonly stated that they could not help users because they were unfamiliar with the resources the UofA library owned or accessed. This reason was also commonly cited in question category three, "Request for Academic Information," where non-UofA staff performed only slightly better. It would appear that non-UofA staff were slightly more able or willing to use an unfamiliar resource themselves to find information for a user than they were to provide instruction for a resource with which they were unfamiliar. However, if non-UofA staff were uncomfortable providing instruction to users on how to use these resources, or provided some instruction but knew it was not as thorough as it should have been, they could still increase success in this question category by forwarding the user's question to an authoritative source.

The data for question category three, "Request for Academic Information," proved quite interesting, especially for UofA staff. UofA staff met the standards for this question category 90 percent of the time (their lowest score for all the question categories), while non-UofA staff met the standards 87 percent of the time (their highest score and only 3 percent lower than UofA for meeting the standards). The results for this category suggest that non-UofA library staff appear almost equally competent in answering questions requesting academic information as UofA library staff, even though non-UofA staff voice concern over not being familiar with UofA resources.
Although the numbers appear consistent with regard to the non-UofA staff's tendency to meet the standards across categories, they do not appear to be consistent with the UofA staff's tendency to meet the standards.

The responses to question category four, "Miscellaneous Non-Library Information," are very similar to those in question category one, "Library User Information." This category also contains questions asking for administrative or factual information, but about the university in general rather than the library itself. Most questions asked in this category sought information that could be found on the university Web site and/or by contacting departments on campus. Interestingly, non-UofA staff performed better in answering the general campus questions than the library-related questions included in category one. This may reflect better use and/or layout of the university's Web pages; however, if that were the case, one might expect the UofA staff to show a similar rise in performance on this question category, but they did not. UofA staff met the standards only 93 percent of the time for this category, 4 percent lower than the level seen in category one. This might make sense, considering that these are library staff, who would be more familiar with the library's Web pages than with the general university Web pages. However, one would still expect this percentage to be closer to the percentage in question category one for UofA staff, since they work chat reference at times when they can obtain general campus information by telephone during regular business hours.

Part Two, Question One: How many questions are actually answered in "real time" by both the UofA library staff and non-UofA chat reference staff?

TABLE 3
Percentage of Transcripts, by Question Category, Answered in "Real Time"

              Library User     Request for      Academic         Misc.            All Categories
              Information      Instruction      Information      Non-Library      Combined
U of A        91% (64 of 70)   93% (65 of 70)   86% (60 of 70)   86% (36 of 42)   89% (225 of 252)
Non-U of A    59% (41 of 70)   78% (52 of 67)   74% (52 of 70)   55% (10 of 18)   69% (155 of 225)

The results for this section showed significant differences between the numbers of questions answered in real time by UofA and non-UofA staff across every question category. Generally, UofA staff answered 89 percent of their questions in real time, while non-UofA staff answered 69 percent of their questions in real time. Typically, UofA staff are encouraged to forward questions to a subject specialist when they feel a specialist can best answer a patron's question. However, this policy seems counter to the intended goal of offering real-time, 24/7 access to chat reference service, as users must wait for an answer to their question. Technically, the transcripts for these types of interactions would meet the reference standards, as individuals were referred to another authoritative source. However, the value of real-time interaction must also be taken into account in assessing the value (and quality) of chat reference service.
For this part of the study, then, referring a patron to a specialist was classified as not answering the user's question in real time; however, if the staff member did answer the question but also forwarded the transcript to another person (for instance, to see if a subject specialist might add something more to the answer), the transcript was coded as being answered in real time. Indeed, if UofA staff members answered users' questions to the best of their ability at the time the question was asked during the chat session, and then forwarded the question to a subject specialist for follow-up, they could continue to favor their local culture of forwarding questions to specialists yet still answer most questions in real time. This would allow UofA staff members to meet the standards for part one of this study while retaining a high degree of performance in answering questions in real time.

Part Two, Question Two: Why are questions deferred (not answered in real time)?

TABLE 4
Raw Data for Transcripts Not Answered in Real Time by U of A Staff

Question Category     Not Answered      Technical     Information      In-Depth or          Does Not       Doesn't Have
                      in Real Time      Difficulty    Not Available    Subject Specialist   Know Answer    Time to Answer
Lib User Info         9% (6 of 70)         1              1                1                    3               0
Request Instruction   7% (5 of 70)         3              1                0                    1               0
Request Information   14% (10 of 70)       2              1                2                    5               0
Misc. Non-Library     14% (6 of 42)        1              1                0                    4               0
Total                 11% (27 of 252)      7              4                3                   13               0

TABLE 5
Raw Data for Transcripts Not Answered in Real Time by Non-U of A Staff

Question Category     Not Answered      Technical     Information      In-Depth or          Does Not       Doesn't Have
                      in Real Time      Difficulty    Not Available    Subject Specialist   Know Answer    Time to Answer
Lib User Info         41% (29 of 70)       0             17                0                   12               0
Request Instruction   22% (15 of 67)       7              2                2                    3               1
Request Information   26% (18 of 70)       4              0                2                    9               3
Misc. Non-Lib         44% (8 of 18)        0              3                0                    4               1
Total                 31% (70 of 225)     11             22                4                   28               5

For question category one, "Library User Information," UofA staff did not answer 6 of 70 questions in real time, 3 of these because the staff member did not know the answer owing to lack of expertise. For the same question category, non-UofA staff did not answer 29 of 70 questions in real time, a significant difference, with 12 of these due to the staff member not knowing the answer owing to lack of expertise and 17 because the information was not available at the time of the transaction. It is not surprising that UofA staff would naturally have more expertise in answering local library administrative questions than non-UofA staff, although many of these answers can be found on the library's Web site. The differences in this question category for this part of the study relate directly to the results and reasons for the differences in this question category in part one of this study.

UofA staff also answered most questions in the second question category, "Request for Instruction," in real time; only 5 of 70 questions were not answered in real time, with 3 of these due to technical difficulty. Non-UofA staff performed much better in this question category than in the first question category; only 15 of 67 questions were not answered in real time, with 7 because of technical difficulty.
Considering the potential for this question category to use the "co-browsing" feature of the software more often than the other question categories, this is not surprising, as using the co-browsing feature requires more technical capability on the part of the staff members' and users' computers. There is more potential for technical difficulties to occur when co-browsing, and this question category, "Request for Instruction," would tempt staff members to use this feature more often (for instance, to demonstrate database use) for patrons in real time.

In the third question category, "Request for Academic Information," UofA staff did not answer 10 of 70 questions in real time; 5 of these 10 were due to the staff member not knowing the answer to the question owing to lack of expertise. As in part one of the study, this was their most challenging question category, both for not meeting the standards and for not answering questions in real time. Non-UofA staff did not answer 18 of 70 questions in real time for this question category, with 9 of those because the staff member did not know the answer.

Two subcategories were created for staff members not answering a question in real time because of "Not Knowing the Answer": 1) "Lack of Expertise"; or 2) "Cultural Barrier" (for instance, not understanding the Canadian educational context). In this question category, only 1 of the 9 questions was not answered in real time by non-UofA staff because of a cultural barrier. In fact, as will be discussed later, the "Cultural Barrier" subcategory accounted for only 3 transcripts in total, across all question categories, not being answered in real time by non-UofA staff.

The fourth question category also correlates with part one of the study for both the UofA and non-UofA staff members. UofA staff did not answer 6 of 42 questions in real time, with 4 of these due to the staff member not knowing the answer. Non-UofA staff did not answer 8 of 18 questions, with 4 of these because the staff member did not know the answer. Again, for this category, for both types of staff members, half of the questions not answered in real time were due to the staff member not knowing the answer, and the numbers were greater for non-UofA staff than for UofA staff, again suggesting that UofA staff had access to administrative information in ways that non-UofA staff did not.

The data show that the deferment category "Does Not Know Answer" was the reason cited for almost half of the questions not answered in real time by both UofA and non-UofA staff members. Distinguishing between a staff member forwarding a question because they did not know the answer (deferment category 4) and forwarding it to a subject specialist (deferment category 3) was important, particularly to account for times when the question legitimately could not be answered in the chat format (for instance, because of the length of time needed to answer it) versus times when the question could have been answered if the staff member had appropriate knowledge. Deferment category 3 represents questions not suitable for the chat reference format. The fact that deferment category 4 is high for both groups might indicate that it is "typical" not to be able to answer certain questions; however, it would be interesting to see if this is the situation at physical reference desks as well.
Performing part two of this study at the physical reference desk of the UofA, and comparing the results to the UofA chat reference data, may show whether this is actually the case.

TABLE 6
Breakdown of Transcripts Not Answered in Real Time, by Deferment Category

              Not Answered      1 Technical      2 Information     3 In-Depth or         4 Does Not       5 Doesn't Have
              in Real Time      Difficulty       Not Available     Subject Specialist    Know Answer      Time to Answer
U of A        11% (27 of 252)   26% (7 of 27)    15% (4 of 27)     11% (3 of 27)         48% (13 of 27)   0% (0 of 27)
Non-U of A    31% (70 of 225)   16% (11 of 70)   31% (22 of 70)    6% (4 of 70)          40% (28 of 70)   7% (5 of 70)

Another significant reason why questions were not answered in real time by UofA staff was deferment category 1, "Technical Difficulty." Technical problems could occur on the librarian's end or the user's end and could be due to problems with the hardware, software, or server. As Table 6 shows, technical difficulty accounted for 26 percent (7 of 27) of the UofA questions not answered in real time, compared with 16 percent (11 of 70) for non-UofA staff. This does not necessarily mean that UofA staff have more technical difficulties than non-UofA staff; rather, it means that technical difficulties account for a larger percentage of the reasons that UofA staff do not answer questions in real time when compared to non-UofA staff. For UofA staff, this is the second largest reason why questions are not answered in real time. This indicates that solving technical difficulties should be a priority if UofA reference management wants to increase the number of questions that UofA staff answer in real time.

The second largest reason for non-UofA staff not answering questions in real time was deferment category 2, "Information Not Available," which accounted for 31 percent (22 of 70) of their questions not answered in real time. This category does not include the possibility that the non-UofA staff member did not use, or was not able to find, information. It includes only transcripts where questions were asked that the staff member could not answer because the information was not available to them at the time of the transaction (for instance, where they could not provide a PIN number because on-campus departments were closed). If deferment category 4, "Does Not Know Answer," is in fact "typically" high for reference situations, then the deferment category "Information Not Available" is the most significant reason that non-UofA staff do not meet the standards and do not answer questions in real time. Unfortunately, this reason may not be within their control to change.

Deferring a question to a subject specialist or for in-depth research time (deferment category 3) did not account for a large number of questions not answered in real time for either UofA (at 11 percent) or non-UofA (at 6 percent) staff. Additionally, only 7 percent of non-UofA transcripts were not answered in real time because the staff member did not have enough time (deferment category 5); this never occurred with UofA staff in the sample. It could be that there are many more staff members, both UofA and non-UofA, monitoring the chat service during daytime hours than during the late evening and weekend hours, when only non-UofA staff are monitoring.
However, if even just 5 out of every 70 transcripts show that users are turned away because staff do not have time to help, those users may never return; with thousands of transactions, this could adversely affect a large number of students. This is an important issue to consider when assessing the value of consortia systems.

Conclusions and Implications for Reference Management

In this study, the UofA chat reference staff met the standards expected by their own reference management 94 percent of the time, while non-UofA chat reference staff met them 82 percent of the time. UofA staff performed better in all question categories than non-UofA staff; however, the size of the difference varies according to the type of question asked by the user. Overall, UofA staff answered 89 percent of questions in real time, while non-UofA staff answered 69 percent of questions in real time: again a significant difference, with a variety of circumstances influencing it.

The most significant suggestion for future decision making that this study offers is this: if UofA reference management can provide adequate and easily accessible information to non-UofA staff (and if non-UofA staff use this information), allowing them to answer most questions regarding library user information correctly and in real time, this would decrease the number of questions failing to meet UofA reference management standards and would increase the number of questions answered in real time by non-UofA staff.

The data presented here can be used by other, similar academic institutions to guide decisions about joining and managing a chat reference consortium. Although the consortium staff scored lower than the home university staff on quality of answers and on answering questions in real time, the differences should be significantly lessened by following the suggestions offered in this study. Specifically, consortium staff should have the information they need to answer the most commonly asked types of questions, particularly those described in the "Library User" question category. If this provision is made, the quantitative differences between the groups, in both quality of answers and the number of answers provided in real time, would likely decrease.

The manager of the UofA's chat reference service at the time of this study created an information page that would offer non-UofA staff the facts, policies, and procedures they need to answer the types of questions that this study showed were not being answered correctly or in real time. Pages of this kind were also being created by 24/7 Reference for all libraries in the consortium, which should decrease the difference in quality of answers between the local and nonlocal staff of all institutions in the consortium. Repeating this study with these measures in place would be informative and should provide further assurance that high standards of quality can be achieved by nonlocal staff in a chat reference consortium.

There are many considerations when deciding whether to participate in a chat reference consortium. This study has attempted to produce data that may help answer questions about quality and to offer suggestions on how to achieve and maintain it. If quality of responses is a concern when considering a consortium, this study should demonstrate that it need not be, provided precautions are taken to give the nonlocal librarians the information they need to answer questions accurately and in real time.
New technologies are being created and implemented every day that will help to make the chat reference librarian's job even easier. Voice over IP is already being considered, as is the use of instant messenger "buddy lists" so librarians can call for reference "backup." Another interesting proposal is the "meta-search tool." Most librarians are familiar with the desperate look of a student in the stacks or reference area who appears perplexed or lost, and it is quite normal to ask that student if he or she needs assistance. Imagine the scenario of a student searching the databases and coming up with failed search after failed search. A failed search could be electronically routed to the chat reference librarian, a virtual "digital intervention."[25]

It is important for libraries to support their costly resources if they want them to be used. Tenopir quotes Barbara Dewey, Dean of Libraries at the University of Tennessee, as saying, "The cost of content without service is irrelevance."[26] In five years' time, chat reference might look very different, and it might be capable of more precise and effective information provision. Perhaps time, experience, and technology can close the gap between local and nonlocal success in meeting standards for answering users' questions, both effectively and in real time.

Notes

1. Jana Ronan and Carol Turner, Chat Reference (Washington, D.C.: Association of Research Libraries, 2002).
2. Fran Wilson and Jacki Keys, "AskNow! Evaluating an Australian Collaborative Chat Reference Service: A Project Manager's Perspective," Australian Academic and Research Libraries 35 (June 2004): 81–94.
3. Edana McCaffrey Cichanowicz, "Live Reference Chat from a Customer Service Perspective," Internet Reference Services Quarterly 8, no. 1/2 (2003): 28.
4. Corey M. Johnson, "Online Chat Reference: Survey Results from Affiliates of Two Universities," Reference and User Services Quarterly 43, no. 3 (2004): 238.
5. Marshall Breeding, "Providing Virtual Reference Service," Information Today 18 (Apr. 2001): 42–43; Johnson, "Online Chat Reference."
6. Ian J. Lee, "Do Virtual Reference Librarians Dream of Digital Reference Questions? A Qualitative and Quantitative Analysis of Email and Chat Reference," Australian Academic and Research Libraries 35 (June 2004): 95.
7. Lee, "Do Virtual Reference Librarians Dream of Digital Reference Questions?"
8. Steve McKinzie, "Virtual Reference: Overrated, Inflated, and Not Even Real," Charleston Advisor 4 (Oct. 2002): 56.
9. Kathy Dempsey, "Here's Your Guide to VR: Use It to Stay Relevant," Computers in Libraries 23 (Apr. 2003): 6.
10. Laura Jacobi, "Chatting at Gallaudet," Library Journal 129 (Spring 2004): 3.
11. Wilson and Keys, "AskNow!," 81–94.
12. Karen Ciccone and Amy VanScoy, "Managing an Established Virtual Reference Service," Internet Reference Services Quarterly 8, no. 1/2 (2003): 95–105.
13. Ibid.
14. Cichanowicz, "Live Reference Chat."
15. Johnson, "Online Chat Reference," 237–47.
16. Ciccone and VanScoy, "Managing an Established Service."
17. Julie Arnold and Neal Kaske, "Evaluating the Quality of a Chat Service," Libraries and the Academy 5, no. 2 (2005): 177–93; Marilyn Domas White, Eileen G. Abels, and Neal K. Kaske, "Evaluation of Chat Reference Service Quality," D-Lib Magazine 9, no. 2 (Feb. 2003), available online at www.dlib.org/dlib/february03/white/02white.html [Accessed 3 February 2008].
18. Wilson and Keys, "AskNow!," 81–94.
19. David Ward, "Measuring the Completeness of Reference Transactions in Online Chats: Results of an Unobtrusive Study," Reference and User Services Quarterly 44, no. 1 (2004): 46–56.
20. Johnson, "Online Chat Reference."
21. Ronan, "Staffing Real-Time," 33.
22. Marie Radford, "Hmmm…Just a Moment While I Keep Looking: Interpersonal Communication in Chat Reference," RUSA 10th Annual Reference Research Forum (2004), available online at www.ala.org/ala/rusa/rusaourassoc/rusasections/rss/rsssection/rsscomm/rssresstat/2004refreschfrm.cfm [Accessed 10 December 2007]; Marie Radford, "In Synch? Evaluating Chat Reference Transcripts," Virtual Reference Desk: 5th Annual Digital Reference Conference (2003), available online at www.webjunction.org/do/DisplayContent/jsessionid=F3D25772218194BEB7652D4CFD1AE98F?id=12664 [Accessed 10 December 2007]; Marie Radford, "Yo Dude! YRU Typin So Slow?" Virtual Reference Desk: 6th Annual Digital Reference Conference (2004), available online at www.webjunction.org/do/DisplayContent?id=12497 [Accessed 10 December 2007].
23. Radford, "In Synch?"
24. Kathryn Arbuckle, Wanda Quoika-Stanka, and Kathy West, Reference Management Standards (Edmonton: University of Alberta Libraries, 2005).
25. Ciccone and VanScoy, "Managing an Established Service"; Cichanowicz, "Live Reference Chat."
26. Ronan, "Staffing Real-Time"; Johnson, "Online Chat Reference"; Tenopir, "Rethinking."