Coding into the Great Unknown: Analyzing Instant Messaging Session Transcripts to Identify User Behaviors and Measure Quality of Service

Sarah Maximiek, Erin Rushton, and Elizabeth Brown

Sarah Maximiek is Subject Librarian for Political Science, Public Administration and Government Documents; Erin Rushton is Subject Librarian/Coordinator of Digital Reference Services; and Elizabeth Brown is Scholarly Communications and Library Grants Officer, all at Binghamton University, SUNY; e-mail: maximiek@binghamton.edu, erushton@binghamton.edu, ebrown@binghamton.edu. The authors would like to extend their gratitude to their past and current colleagues who helped with the transcript coding project: Nancy Abashian, Abigail Bordeaux, Katharine Bouman, Tina Clemo, Angelique Jenks-Brown; and to Alesia McManus and Dave Vose for their assistance on the project. © Sarah Maximiek, Erin Rushton, and Elizabeth Brown

After one year of providing virtual reference service through an instant messaging (IM) service, Binghamton University (BU) Libraries, under the purview of its Digital Reference Committee (DRC), undertook a study of collected session transcripts. The goals of this work were to determine who was using the IM service and why; whether staffing for the service was adequate and met our in-person reference standards; and whether improvements to the libraries' existing reference services were needed. The findings revealed that 31 percent of identifiable users were students and 5 percent were campus community members. The analyses also revealed that many used the service for complex questions, not just the ready reference, policy, and directional questions that had been expected. The most common question types were Web site navigation help (29% of all sessions), instructional questions (23%), and research assistance (22%). The American Library Association Reference & User Services Association (RUSA) Guidelines for the Behavioral Performance of Reference and Information Service Providers were used to measure quality of service. The findings revealed that approachability, showing interest, and listening were each demonstrated in over 80 percent of sessions, indicating these activities can be demonstrated effectively in a virtual environment. The study also found that questions were correctly answered 84 percent of the time. The study provided valuable insight into how patrons approach and locate information on our Web site and demonstrated a need for additional training, improved site design and navigational aids, and future discussions of staffing alternatives for the IM service.

Introduction

Binghamton University, part of the State University of New York system, is a doctoral-degree-granting research institution with an enrollment of over 14,300 students and 800 faculty members. Binghamton University Libraries consist of four library locations. The Glenn G. Bartle Library serves the humanities, social sciences, and fine arts. The Science Library serves the sciences and engineering and houses the University Map Collection. The University Downtown Center Library/Information Commons opened in fall 2007 and serves the College of Community and Public Affairs. The Library Annex is a high-density facility housing over 350,000 volumes in all subject areas. In 2005 the libraries' DRC was charged with initiating IM reference service at the Bartle Library and Science Library. Each library created and supported accounts on AOL, MSN, and Yahoo!
and monitored this service at the reference desk alongside in-person, e-mail, and telephone reference. A more detailed description of the DRC's experiences in implementing and maintaining the IM reference service through Trillian was documented in an earlier published article titled "Connecting to Students: Launching Instant Messaging Reference at Binghamton University."1 A year after the service was launched, the DRC began developing a method to analyze IM transcripts to accomplish the following objectives:
• Evaluate quality of service and recommend improvements
• Produce quantitative and demographic data describing usage trends
• Recommend changes for library services in reference, Web design, and collections based on identified needs of virtual users.

Literature Review

A literature review was conducted to see how others had measured quality of service in virtual reference. The review found that most studies focused on the evaluation of transcripts from commercial chat vendors such as QuestionPoint, and that data analysis centered on collecting basic statistical data. Since this literature provided minimal guidance for a study evaluating quantitative and qualitative data, the DRC developed a unique methodology for data collection. The analysis incorporated evaluative factors from the literature review as well as additional qualitative and quantitative measures not previously studied.

The reference desk, whether physical or virtual, is one of the most visible library services, and the interaction with librarians, as well as the quality and delivery of the information provided, can significantly impact a patron's overall perception of the library. Librarians have employed a variety of research methods to evaluate reference services, such as having library students pose as patrons or having researchers observe reference desk transactions. Some researchers believe that these techniques alter the desk behaviors of both patron and librarian.2 Less intrusive methods for evaluating reference services became possible through the availability of e-mail and chat transcripts.

An early example of transcript analysis was conducted at Auburn University Libraries. Sears3 manually saved transcripts from the libraries' chat service infoChat, a text-based chat system provided by HumanClick (www.humanclick.com/), and then coded transcripts by day of the week, user affiliation, and type of question. Results showed that 60.1 percent of questions were related to the libraries' policies, procedures, resources, and/or services, while only one research question was asked. This led Sears to question whether the chat medium was conducive to research-based questions.

At Murdoch University in Perth and Macquarie University in Sydney, Lee4 conducted an evaluation of the libraries' real time/real talk chat service called "Online Librarian." Forty-seven chat transcripts and 47 e-mail reference transcripts were examined for a number of quantitative and qualitative measures, including population characteristics, question type, and the presence of disjointed communication in chat conversations. Lee reported that research and reference inquiries were more common in chat, while administrative questions were found more frequently in e-mail. Lee also reported that reference interviews were more common in chat than in e-mail transactions.
Arnold and Kaske5 from the University of Maryland, College Park, analyzed 351 chat transcripts to determine the types of questions asked and by whom. The researchers also evaluated the correctness of the answers. Arnold and Kaske reported that the most common types of questions were policy and procedural (41.25%), followed by "specific search" questions (19.66%). Students, at 41 percent, were the most frequent users of the service, while "outsiders" (individuals not affiliated with the university) asked 25.1 percent of questions. This led the researchers to question whether the service should be limited to University of Maryland customers only. They reported that 91.72 percent of questions were answered correctly.

Ryan and others6 from the Louisiana State University Libraries reviewed 349 chat reference transcripts from LiveAssistance, the libraries' chat service, to evaluate the service's strengths and weaknesses. The authors coded the transcripts in two different areas: the type of question and "customer service." Customer service related to the librarian's performance (for instance, whether he or she provided a salutation), the types of chat features employed (such as pushing pages), and the resources used. Most questions were informational or known-item questions (example: "does the library own...?"), and the authors wondered whether patrons realized that the chat medium could be used for in-depth questions. The authors also found that librarians almost always greeted the patron but were less consistent in providing adequate closing language, such as asking the user if there were any more questions. The authors also found that librarians provided compensation for visual cues, such as "please wait while I check the catalog," only 31 percent of the time.

Shachaf and Horowitz7 used Reference & User Services Association (RUSA) behavioral guidelines and International Federation of Library Associations (IFLA) digital reference guidelines to evaluate the effectiveness of e-mail virtual reference. The researchers sent a total of 324 queries to fifty-four participating libraries. Overall, the researchers found that few transcripts adhered completely to both sets of guidelines, with objective behavior (90.4%) and clarity of writing (90.4%) observed in a majority of transactions. Behaviors observed in less than 50 percent of the transactions included explaining the search strategy (IFLA and RUSA), rephrasing the question (RUSA), and asking what the user had already tried (RUSA). The researchers suggested that the lower frequencies of some behaviors could be a result of the types of questions encountered. The researchers also found no correlation between user satisfaction and adherence to either set of guidelines.

Desai and Graves8 analyzed transcripts and conducted a survey to determine to what extent instruction was or could be offered and whether patrons wanted or expected instruction during an IM reference transaction. The results showed that librarians provided instruction in 83 percent of the cases when it was possible and 95 percent of the time when a patron specifically requested instruction. The analysis revealed that students indicated a willingness to learn, even when they had not specifically requested instruction.

Kipnis9 from Thomas Jefferson University analyzed 102 IM transcripts to examine question types and usage patterns. Kipnis also looked for instances of IM shorthand and evidence of greetings from the patrons and/or librarian.
The most common type of question was "document delivery," and the use of IM shorthand by patrons was relatively rare. The researchers also noted that librarians introduced themselves 72 percent of the time.

The literature reviewed revealed that most transcript analysis studies have focused primarily on commercial chat reference services and are often limited to variables such as usage statistics (for instance, the time of day a question was asked), user demographics, and types of questions asked. This indicated there was an opportunity to conduct a more comprehensive study examining multiple variables in an IM environment.

Study

As noted in the literature review, most IM transcript analyses are limited to studying selected elements of the transaction. The DRC wanted to study as many quantitative and qualitative factors as possible, since doing so would provide a unique opportunity to learn about usage patterns and the information needs of users. The factors that the DRC decided to study included:
a. Demographics
b. Session length
c. Sessions by day and time
d. Types of reference questions
e. Resources used to answer questions
f. RUSA guidelines for behavioral performance
g. Correctness and completeness of answers

Methodology

After finalizing which factors to evaluate, a system was created for data input and analysis. The DRC chose Microsoft Access for the analyses because it could be used to create a data input form as well as generate queries for analysis. Seven reference department staff volunteered to assist with evaluating the transcripts. Each volunteer obtained Human Subjects Education Certification, and the data analysis project received Human Subjects Research approval.

The Libraries downloaded 284 IM sessions that occurred between June 2005 and June 2006. For privacy reasons, identifiable information such as IM user names, personal names, instructors' names, and e-mail addresses was removed from the transcripts prior to the analysis. The transcripts were printed and hand-numbered. A coding key (see Appendix) was then created to assist staff evaluating transcripts and to ensure consistency. Transcripts that contained reference behavior more complex than a catalog search for an item or a simple directional question were analyzed by two volunteers. The analysis data created by these double-coded transcripts were compared and incorporated into a single data record by the DRC.

Results

User Demographics

The libraries' IM service is publicly available from the libraries' Ask a Librarian (http://library.binghamton.edu/research/askalibrarian) Web page. User demographics were gathered from the transcripts through self-identification (for instance, a user says, "I'm an undergraduate student"), librarian query (for instance, a librarian asks, "Are you a student here?"), or clues provided in the transcript (for instance, a user says, "I'm in Biology 101 and I need this book for a class"). Due to the challenges in identifying users, the DRC labeled 64 percent of users as "unknown." Thirty-one percent were identified as students, and 5 percent as campus community users (faculty or staff). Of the 31 percent student population, 11 percent were identified as undergraduates, 4 percent as graduate students, and 16 percent simply as "student." It would appear from these data that the IM service attracts more undergraduate students than graduate students, faculty, or staff.
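For illustration only, the following minimal Python sketch shows how self-identification cues of this kind could be tallied across anonymized transcripts. The cue phrases and the flat list-of-strings transcript format are our own assumptions for demonstration; the DRC's actual coding relied on human judgment, not pattern matching.

```python
import re

# Illustrative cue phrases for self-identified user demographics.
# These patterns are assumptions for demonstration, not the DRC's coding rules.
DEMOGRAPHIC_CUES = {
    "undergraduate": re.compile(r"\bundergrad(uate)?\b", re.IGNORECASE),
    "graduate": re.compile(r"\bgrad(uate)? student\b", re.IGNORECASE),
    "faculty/staff": re.compile(r"\b(faculty|professor|staff)\b", re.IGNORECASE),
}

def classify_user(transcript_text: str) -> str:
    """Return the first demographic cue found, else 'unknown'."""
    for label, pattern in DEMOGRAPHIC_CUES.items():
        if pattern.search(transcript_text):
            return label
    return "unknown"

def tally_demographics(transcripts: list[str]) -> dict[str, int]:
    """Count sessions per demographic category across all transcripts."""
    counts: dict[str, int] = {}
    for text in transcripts:
        counts[classify_user(text)] = counts.get(classify_user(text), 0) + 1
    return counts

# Example: two sessions, one self-identified, one not.
sessions = [
    "user: Hi, I'm an undergraduate student and I need this book for class",
    "user: where are the e-reserves?",
]
print(tally_demographics(sessions))  # {'undergraduate': 1, 'unknown': 1}
```

A heuristic like this would only ever supplement human coding: as the results above show, most sessions contain no usable cue at all and stay "unknown."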
Traffic

IM usage patterns were calculated from session transcripts and were compared with Reference Desk activity and traffic when possible. Statistics showed the lightest IM traffic in the early morning hours (8 a.m.–noon), higher usage in the early afternoon (noon–3 p.m.), peak use during the mid to late afternoon (3 p.m.–6 p.m.), and lower usage beginning in the early evening hours (6 p.m.–9 p.m.). Reference staff anticipated lower usage on Friday and Saturday from experience with walk-in traffic. Table 1 shows IM transactions by day of the week. The weekend, Friday through Sunday, had less activity than weekdays. These data mirror patterns observed at both Reference Desks: weekday traffic is high, with a slowdown beginning on Friday, bottoming out on Saturday, and building again on Sunday as students prepare for the week ahead.

Table 1. IM Transactions by Day of the Week
Day         Total   % of Total
Monday      51      18%
Tuesday     54      19%
Wednesday   45      16%
Thursday    49      17%
Friday      39      14%
Saturday    17      6%
Sunday      28      10%

Use of IM Services

Reference question categories were based on those defined by Katz,10 with some minor modifications. Multiple categories could be assigned to a transcript to accommodate complex or multiquestion sessions. An example of this might be a session where a patron asked if the libraries owned a specific item (Research or Subject) and then asked where it was located (Directional). As shown in figure 1, the most frequent types of questions encountered concerned Web site Navigation (29%), followed by Instructional (23%) and Research or Subject (22%). Interestingly, each of these question types requires significant patron interaction, with multiple exchanges necessary to correctly communicate relevant information. Directional, policy, and bibliographic assistance questions were less common. This runs contrary to the perceived nature of IM service, which would seem to be better suited for quick, factual questions and requests.

[Figure 1. Frequency of Questions by Question Category: percentage of sessions in each category (Computer/Technical, Research or Subject, Out of Scope, Other, Web site Navigation, Instructional, Policy, Directional, Bibliographic).]

The mean IM session length was 1 hour 9 minutes, and the longest session was 4 hours. Longer sessions usually occurred when librarians offered assistance and patrons then searched on their own, checking back in with the librarian as needed. The mode session length was 2.52 minutes, indicating that IM transactions tended to be relatively brief. Initially there was concern that research and subject assistance questions would lead to lengthy, cumbersome sessions that were better answered through an in-person transaction. However, the session-length data show that, while more research-oriented, instructional, and navigational questions were encountered than anticipated, most sessions were succinct.
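As a concrete illustration of how descriptive statistics like those above can be computed, here is a minimal Python sketch that derives session lengths and day-of-week counts from start and end timestamps. The tuple-based record format and the sample values are our assumptions for demonstration, not the DRC's Access schema.

```python
from collections import Counter
from datetime import datetime
from statistics import mean, mode

# Each record holds session start and end times; this simple tuple format
# is an illustrative assumption, not the DRC's actual data layout.
sessions = [
    ("2006-02-06 14:02", "2006-02-06 14:05"),  # Monday, 3 minutes
    ("2006-02-07 15:30", "2006-02-07 15:33"),  # Tuesday, 3 minutes
    ("2006-02-10 16:00", "2006-02-10 17:10"),  # Friday, 70 minutes
]

FMT = "%Y-%m-%d %H:%M"

def session_minutes(start: str, end: str) -> float:
    """Length of a session in minutes."""
    delta = datetime.strptime(end, FMT) - datetime.strptime(start, FMT)
    return delta.total_seconds() / 60

lengths = [session_minutes(s, e) for s, e in sessions]
by_day = Counter(datetime.strptime(s, FMT).strftime("%A") for s, _ in sessions)

print(f"mean length: {mean(lengths):.1f} min")  # mean length: 25.3 min
print(f"mode length: {mode(lengths):.1f} min")  # mode length: 3.0 min
print(by_day)  # Counter({'Monday': 1, 'Tuesday': 1, 'Friday': 1})
```

Note how a few long, intermittent sessions pull the mean far above the mode, the same pattern the study observed.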
Quality

For this factor, the DRC modified Arnold and Kaske's11 model in "Evaluating the Quality of a Chat Service." As shown in figure 2, the DRC found that 84 percent of questions were answered correctly, similar to the results obtained by Arnold and Kaske. Ten percent of these correctly answered questions were "correct but not complete," indicating that a correct answer was provided but other activities, such as a referral to a colleague or a request for additional questions, were not offered. Seven percent of the questions were answered incorrectly, indicating some need for additional reference staff training, particularly in the areas of online reference interview techniques and referrals.

[Figure 2. Correctness and Completeness of Answers: pie chart of sessions coded correct and complete; correct but not complete (10%); not correct and not complete (9%); not correct but complete (5%); and no transaction (2%).]

The DRC had hoped the transcripts would show whether using non-MLS graduate students and staff to monitor IM might impact the quality of service. Unfortunately, 90 percent of sessions were marked "unknown" for the staff member demographic, and any relationships between formal staff training and effectiveness in answering questions could not be measured. Coding volunteers assigned a demographic category for patrons only when it was self-identified in an IM session. While a closer look at scheduling and transcript data would give more information on demographics, privacy and ethical considerations would preclude such efforts.

The number of unanswered IM sessions and time lapses during sessions can be indicators of service quality. When reference staff took longer than one minute to first respond to an IM, it was counted as a "time lapse." A time lapse could occur for multiple reasons. Due to the variety of in-person and online reference services available, both reference staff and patrons could have multiple conversations occurring when an IM session was initiated. It could also take a few moments for reference staff to notice an IM and respond to the patron. Fifty-seven sessions (20% of all IM sessions) had a time lapse, with the numbers varying slightly between Bartle Library (19%) and the Science Library (23%). Time lapses ranged from one minute to 144 minutes. A scatter plot diagram indicated that the 144-minute delay was an anomaly. When this data point was removed, the mean time lapse was recalculated at 1.53 minutes, with a maximum length of 74 minutes. Nonresponses to IM sessions were also measured: Bartle Library had an 8 percent nonresponse rate and the Science Library an 11.7 percent nonresponse rate.
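The outlier handling just described can be illustrated with a short Python sketch; the lapse values below are hypothetical stand-ins, not the study's data.

```python
from statistics import mean

# Hypothetical first-response lapses (minutes) for sessions where staff took
# longer than one minute to respond; the values are illustrative only.
lapses = [1, 1.5, 2, 3, 74, 144]

print(f"mean with outlier:    {mean(lapses):.2f} min")   # 37.58 min
# Inspecting the distribution (the study used a scatter plot) flags the
# 144-minute value as an anomaly; recompute the mean without it.
trimmed = [x for x in lapses if x != 144]
print(f"mean without outlier: {mean(trimmed):.2f} min")  # 16.30 min
```

A single extreme value can dominate a small sample's mean, which is why the DRC reported the recalculated figure.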
The RUSA Guidelines for Behavioral Performance, as developed by the RUSA RSS Management of Reference Committee,12 served as standards for effective reference transactions in both the physical and remote worlds. For each guideline, the DRC chose behaviors that could be discovered in transcripts, and their presence or absence was coded. As seen in figure 3, at least one indicator of approachability, showing interest, and listening was observed in more than 80 percent of sessions, indicating that these activities can be demonstrated in a virtual environment.

[Figure 3. Behaviors Demonstrated and Not Demonstrated during IM Sessions.]

Considering the results of all the data collected, IM has been a successful service. We were pleased with the high percentage of correctly answered questions, considering the number of variables: the high level of walk-in desk traffic, the use of graduate students to monitor the service, and the oft-quoted "55 percent" barometer of traditional reference service.13 There are repeat users, and activity has increased since the service began. Comments from the transcripts indicate that patrons find the service useful and convenient. Challenges that remain include dropped and inactive sessions, incorrectly answered questions, and a lack of proper referrals to colleagues.

Lessons Learned

Discuss alternative methods for staffing IM services during peak hours.
IM traffic appears to mirror walk-in desk traffic, and the busiest times for both services are the same. To ensure neither service is compromised, scheduling staff to monitor IM in their offices may reduce the number of lapsed responses and missed IMs. Staff on reference duty could also monitor IM on a dedicated computer close to the reference desk, which would allow them to assist with desk activity and also devote more attention to IM services when the need arises.

Offer continuous training on IM reference.
Our goal is to help staff adapt and evolve traditional reference interview techniques to the virtual environment. Since a significant portion of questions received through IM were research/subject and instructional questions that required the information gleaned from the reference interview, this skill is essential to successful IM practice.

Use feedback from transcripts to improve the libraries' Web site usability and design.
The most common patron questions concerned Web navigation, followed by instructional questions. Users frequently have difficulty locating the desired resource or link on the libraries' Web site. Even after finding the needed resource, they are often unsure how to search it effectively and locate relevant information. Web pages and navigational aids need to be designed with consideration for how patrons access information. Examples of this include ensuring multiple access points to research tools, using clear language free of jargon, testing Web page usability with a diverse population of users, and placing instructional tools such as tutorials at the point of need.

Continue to monitor the impact of IM on all reference services using online data collection tools.
The libraries have collected reference transaction data using DeskTracker™ since July 2007. Date, time, resources used, service used (in-person, phone, e-mail, IM), and length of question can be collected and analyzed for all reference service points. The DRC anticipates that information gathered with DeskTracker™ will be invaluable in collecting IM usage data, identifying sources used to answer questions, and indicating whether reference staff frequently need to refer questions to colleagues. The DRC also anticipates that future transcript analyses will be much quicker to compile due to the extent of demographic and qualitative data collected. While these data are useful, they will not provide evidence of user behaviors or determine whether questions were correctly and completely answered. Nevertheless, the DRC considers that DeskTracker™ data will provide sufficient information to make effective decisions concerning staffing and support of the libraries' virtual reference services.

Continue to explore and expand virtual reference services.
Based on the popularity of the IM service, the DRC expanded the libraries' virtual reference services to include MeeboMe, a chat-messaging widget, and a text-messaging reference service. As virtual reference technologies continue to evolve, the DRC will evaluate new tools and services that can be used to enhance reference services.
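As a sketch of the kind of cross-service analysis described above, the following Python example tallies transaction counts and average lengths per service point from an exported transaction log. The CSV column names here are our assumption for illustration; they do not reflect DeskTracker's actual export format.

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical transaction export; the column layout is our assumption,
# not DeskTracker's actual schema.
EXPORT = """date,service,length_minutes
2008-03-03,in-person,5
2008-03-03,IM,12
2008-03-04,IM,3
2008-03-04,phone,4
"""

totals: dict[str, list[float]] = defaultdict(list)
for row in csv.DictReader(StringIO(EXPORT)):
    totals[row["service"]].append(float(row["length_minutes"]))

for service, lengths in sorted(totals.items()):
    print(f"{service}: {len(lengths)} transactions, "
          f"avg {sum(lengths) / len(lengths):.1f} min")
# IM: 2 transactions, avg 7.5 min
# in-person: 1 transactions, avg 5.0 min
# phone: 1 transactions, avg 4.0 min
```

Grouping by service point in this way supports exactly the staffing comparisons the DRC describes, without requiring manual transcript review.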
Conclusions

When the DRC undertook its transcript analysis project, it underestimated the length of time and commitment needed to successfully analyze and code IM transcripts. Challenges included the tedious and time-consuming work of downloading, printing, and identifying transcripts to double-code, as well as removing identifying information. Later decisions, such as determining which factors to measure and how to code them, quickly proved to be a never-ending challenge, showing there can never be too much communication or too many meetings. Creating a database to store the data proved less straightforward than imagined. Originally, the project goal was to input and process all transcript data using Microsoft Access. After the data were collected, we found we were unable to analyze the data in Access due to lack of expertise. The final data analyses were completed by importing and processing the data in Excel.

Given the volume of transcripts analyzed, the DRC needed reference staff volunteers to assist with the initial round of coding. Training volunteers to code and analyze transcripts took more time than we had anticipated. Analyzing qualitative data proved difficult due to its subjective nature; it was particularly difficult to decide on the correctness and completeness of answers using the behavioral guidelines. Librarians have differing standards of ideal service levels, leading to some disagreement in judging correctness and completeness of answers. Van Duinkerken, Stephens, and MacDonald,14 in a recent study, concluded much the same when they suggested that librarians let the behaviors of the users determine when a reference interview is required and focus training on the RUSA guidelines that are viable in a chat environment, such as remaining cordial and nonjudgmental and using referrals. Interestingly, the study undertaken by Van Duinkerken and others mirrors an earlier observation by Bernie Sloan,15 who argued that many of the skills prized in a reference interview may seem contrived or artificial in a textual environment. Sloan speculated that complaints about a librarian's "attitude" in a VR environment are likely to "stem from the impersonal nature of the chat medium itself" and may well be "endemic to virtual librarianship."

Based on the literature analyzing IM reference use in libraries, we expected that our service would be used frequently by patrons asking quick questions regarding library services and policies. Instead, we discovered that a wide variety of questions were asked, including many in-depth research questions. In addition, the absence of vendor chat features such as cobrowsing or split screens did not prevent effective instructional assistance from being provided through IM. These analyses indicate that virtual reference services within the libraries are now a core reference service for many patrons and may be the primary service a patron uses to contact the reference desk. Library policies, reference staffing, and the purchasing of electronic reference materials and books need to reflect this change to meet the needs of all users.

Notes

1. Elizabeth Brown, Sarah Maximiek, and Erin Rushton, "Connecting to the Students: Launching Instant Messaging Reference at Binghamton University," College & Undergraduate Libraries 13, no. 4 (2006): 31–42.
2. Peter Hernon and Charles R. McClure, Unobtrusive Testing and Library Reference Services (Norwood, N.J.: Ablex, 1987).
3. JoAnn Sears, "Chat Reference Service: An Analysis of One Semester's Data," Issues in Science and Technology Librarianship 32 (2001): 200–06.
4. Ian J. Lee, "Do Virtual Reference Librarians Dream of Digital Reference Questions? A Qualitative and Quantitative Analysis of E-mail and Chat Reference," Australian Academic and Research Libraries 35, no. 2 (2004): 95–109.
5. Julie Arnold and Neil Kaske, "Evaluating the Quality of a Chat Service," portal: Libraries & the Academy 5, no. 2 (2005): 177–93.
6. Jenna Ryan, Alice L. Daughtery, and Emily C. Mauldin, "Exploring the LSU Libraries' Virtual Reference Transcripts: An Analysis," Electronic Journal of Academic & Special Librarianship 7, no. 3 (2006): 1.
7. Pnina Shachaf and Sarah M. Horowitz, "Virtual Reference Service Evaluation: Adherence to RUSA Behavioral Guidelines and IFLA Digital Reference Guidelines," Library and Information Science Research 30, no. 2 (2008): 122–37.
8. Christina Desai and Stephanie J. Graves, "Instruction via Instant Messaging Reference: What's Happening?" The Electronic Library 24, no. 2 (2006): 174–89.
9. Daniel G. Kipnis and Gary E. Kaplan, "Analysis and Lessons Learned Instituting an Instant Messaging Reference Service at an Academic Health Sciences Library: The First Year," Medical Reference Services Quarterly 27, no. 1 (2008): 33–51.
10. William A. Katz, Introduction to Reference Work, 7th ed. (New York: McGraw-Hill, 1992).
11. Arnold and Kaske, "Evaluating the Quality of a Chat Service."
12. RUSA RSS Management of Reference Committee, Guidelines for the Behavioral Performance of Reference and Information Service Providers, June 2004. Available online at www.ala.org/Template.cfm?Section=Home&template=/ContentManagement/ContentDisplay.cfm&ContentID=26937. [Accessed 18 March 2009].
13. Peter Hernon and Charles R. McClure, "Unobtrusive Testing: The 55% Rule," Library Journal 111, no. 7 (1986): 37–41.
14. Wyoma Van Duinkerken, Jane Stephens, and Karen I. MacDonald, "The Chat Reference Interview: Seeking Evidence Based on RUSA's Guidelines: A Case Study at Texas A&M University Libraries," New Library World 110, nos. 3/4 (2009): 107–21.
15. Bernie Sloan, e-mail to Library Reference Issues electronic mailing list (Re: How to help frustrated searchers), May 10, 2005.

Appendix. Transcript Coding Key

Day: Select from drop-down box.

Date: Enter the date in the same format as it appears on the transcript: e.g., 04_May_06 OR 20_Jan_05.

Session start and session end: Indicate in 24-hour time.

Account: DEFAULT is BuMain. Change if BuSci.

Repeat user: DEFAULT is No. A repeat user is someone who has used the service on more than one occasion. Do not count users who reopen a session to ask additional or follow-up questions. The transcripts have been organized by user name, so repeat users should be grouped together.

Delay in response: DEFAULT is No. Indicate NO if the question was responded to in less than a minute; indicate YES if it took a librarian over a minute to respond.

Time lapse (minutes): Indicate the number of minutes it took for the librarian to respond, OR if there was no response.

No response: Check if there was no response from a librarian OR if the user failed to respond after asking a question.

User demographic: Enter as "unknown" UNLESS the user identifies him- or herself (e.g., "Hi; I'm an undergraduate student") or it is evident from reading the transcript.
Staff demographic: Leave as "unknown" UNLESS the librarian identifies her- or himself (e.g., "I'm the nursing librarian") or it is evident from reading the transcript.

How many questions did the user ask? Count only distinct reference questions. For example, "Can you tell me how much photocopying costs AND where do I find a peer-reviewed article?" contains two distinct questions. Do not count related questions.

What was the reference question? Quote or paraphrase the user's question(s) using the user's terminology. If possible, identify the topic: e.g., "looking for articles related to the portrayal of women in advertising."

How would you characterize the reference question? Select as many as apply:
• Bibliographic
  » Relates to catalogue look-ups OR any aspect of authorship or publication of a work. Use for citation verifications, names of authors, information about works, edition information, copyright information, etc.
• Computer/mechanical/technical help
  » e.g., problems connecting off campus, Getit@BU not working, database issues.
• Directional
  » e.g., where is the photocopier, where are the PS books located?
• Instructional
  » Use for questions where the user asked for assistance in using library resources (e.g., how do I search EconLit) or where the librarian provided instruction (regardless of whether the user asked for it).
• Library Web site navigation
  » Use for questions where the user wanted to know where something was located on the Web site (e.g., where are e-reserves?) or where the librarian explained how to find something on the Web site.
• Other
  » Use when the question does not fit into any other category.
• Out of scope
  » Use for questions that fall outside the reference service's purview and need to be referred to another service in the library (e.g., Special Collections) or to an outside service (e.g., computing services).
• Policy, procedural, or service
  » Use for questions about library services: e.g., circulation, laptop lending, reserves, interlibrary loan, the Annex, instruction, research assistance.
• Ready reference
  » Use for questions that have uncomplicated, straightforward answers. Answers are usually found in standard reference sources such as almanacs (e.g., what is the capital of Nova Scotia, what are the dates of National Cat Week?).
• Research or subject request
  » Use for questions where the user wanted an article, book, or information on a topic: e.g., "where can I find information about poverty in South America?"

What resources were used to answer the question:
• Books/printed material
  » The librarian indicated that they found the answer in a book or printed item (e.g., a reference book).
• BU only
  » A subscription database or resource was used to answer the question (e.g., Biosis).
• Internet sources
  » The librarian referred to a Web site to answer the question. Do NOT use for Binghamton University Web sites.
• Other
  » Use when the source does not fit into the other categories. Please reference the source in the box below.
• Library Web site
  » A page from the library Web site was used, OR the library Web site was used as a gateway to a resource (e.g., a government Web site or another library Web site).

Please list sources used to answer the question: List any sources mentioned by name (e.g., LexisNexis, APA Style Guide, Wikipedia).

*Was user aware of the time needed for research?
• Leave as n/a UNLESS the user commented on the amount of time needed to complete research OR it is evident from reading the transcript.
*Did user have trouble settling on the topic?
• Leave as n/a UNLESS:
  » user switched topics depending on available resources
  » user had a topic that was too specific or too general
  » user could not define a topic

*Did user use unreliable Internet sources?
• Leave as n/a UNLESS the user indicates that he or she has been using unreliable OR inappropriate resources, OR it is evident from reading the transcript.

*Did user use an appropriate number of resources?
• Leave as n/a UNLESS the user indicates how many resources she or he needs for an assignment, OR it is evident from reading the transcript.

*Did user use effective search strategies?
• Leave as n/a UNLESS:
  » user has incorrectly searched a resource (e.g., tried to use infoLINK to find an article)
  » user has correctly searched a resource (e.g., used CINAHL to find a nursing article)
  » user does not demonstrate effective search strategies in infoLINK or library databases
  » user demonstrated effective search strategies in infoLINK or library databases

Question was…:
• Correct and complete
  » Use for questions that were answered correctly and completely.
• Correct but not complete
  » Use for questions that were answered correctly BUT where a complete reference interview was not conducted OR where a referral/follow-up should have been offered.
• Not correct but complete
  » Use for questions that were answered incorrectly OR where wrong information was provided, BUT where a reference interview/follow-up was given as appropriate.
• Not correct and not complete
  » Use for questions that were answered incorrectly OR where wrong information was provided, AND where a complete reference interview was not provided, OR where the librarian ended the session prematurely, OR where a referral or follow-up should have been offered.

Was the librarian approachable?
• Librarian acknowledges user through the use of a friendly greeting to initiate conversation.
• Librarian communicates in a receptive, cordial, and encouraging manner.
• Librarian uses a tone of voice and/or written language appropriate to the nature of the transaction.

Did the librarian show interest?
• Librarian maintains or re-establishes "word contact" with the patron in text-based environments (e.g., "I see") by sending written or prepared prompts, etc., to convey interest in the patron's question.

Did the librarian "listen" to the question?
• Librarian allows the patrons to state fully their information need in their own words before responding.
• Librarian identifies the goals or objectives of the user's research, when appropriate.
• Librarian rephrases the question or request and asks for confirmation to ensure that it is understood.
• Librarian seeks to clarify confusing terminology and avoids excessive jargon.
• Librarian uses open-ended questioning techniques to encourage patrons to expand on the request or present additional information.

Did the librarian find out what the patron had already tried?
• Librarian finds out what patrons have already tried and encourages patrons to contribute ideas.

Did the librarian explain the search strategy?
• Librarian constructs a competent and complete search strategy.
• Librarian explains the search strategy and sequence to the user, as well as the sources to be used.
• Librarian attempts to conduct the search within the patrons' allotted time frame.
• Librarian explains how to use sources when appropriate.
• Librarian works with the patrons to narrow or broaden the topic when too little or too much information is identified.
• Librarian asks the patrons if additional information is needed after an initial result is found.
• Librarian recognizes when to refer patrons to a more appropriate guide, database, library, librarian, or other resource.
• Librarian offers pointers, detailed search paths (including complete URLs), and names of resources used to find the answer, so that patrons can learn to answer similar questions on their own.

Flag transcript
Use for transcripts that…
• have incomplete or incorrect answers
• exemplify outstanding reference service
• should be further reviewed by the DRC
• Note: please indicate in the comment section why you have flagged the transcript.

*Authors' Note: The data gathered from these questions were not included in the final analysis, since there were not enough data gathered to be useful.
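For readers who wish to adapt this coding key, the sketch below suggests one way its fields might be represented as a structured record for analysis. The field names and the condensed category list are illustrative assumptions drawn from the key above, not the DRC's actual Access form.

```python
from dataclasses import dataclass, field

# Question categories condensed from the coding key above.
CATEGORIES = {
    "bibliographic", "computer/technical", "directional", "instructional",
    "web site navigation", "other", "out of scope", "policy",
    "ready reference", "research or subject",
}

@dataclass
class CodedSession:
    """One coded IM transcript; fields mirror a subset of the coding key."""
    transcript_id: int
    day: str
    account: str = "BuMain"            # default per the coding key
    repeat_user: bool = False          # default per the coding key
    user_demographic: str = "unknown"
    staff_demographic: str = "unknown"
    categories: set[str] = field(default_factory=set)
    correctness: str = "correct and complete"
    flagged: bool = False

    def add_category(self, category: str) -> None:
        """Multiple categories may be assigned to one transcript."""
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.categories.add(category)

# Example: a session that asked for an item and then where it was shelved.
record = CodedSession(transcript_id=42, day="Monday")
record.add_category("research or subject")
record.add_category("directional")
```

Validating category names at entry time, as add_category does here, is one way to enforce the consistency the coding key was designed to provide across multiple volunteer coders.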