Using the ACRL performance manual: The LSU Libraries experience

By Barbara Wittkopf, Reference Librarian, Louisiana State University, and Patricia Cruse, Reference Librarian, Louisiana State University

Using Measuring Academic Library Performance makes it easier to plan, implement, and analyze user surveys.

User surveys are occasionally touted as being self-serving. In addition, surveys can be demanding and expensive to administer, their results can be difficult to interpret, and the data collected frequently are never used. However, the reality is that surveys are often the only way a library can determine if its services are meeting patrons' needs. With the publication of the ACRL-commissioned manual Measuring Academic Library Performance: A Practical Approach, it is easier to plan, implement, and analyze a survey.1

Planning and implementation

In 1989 the Louisiana State University (LSU) Libraries conducted a user survey which revealed that patrons were "very satisfied" with service at Middleton Library. The Reference Services Department decided to conduct a similar survey in 1990 using the procedures and forms provided in the ACRL manual. The goal was to maintain or exceed the perceived level of service reported in the previous year. In using the manual the following objectives were addressed: (1) selecting appropriate surveys, (2) establishing a time frame, (3) adapting the manual, and (4) training the staff.

(1) Survey selections. Two surveys from the manual were selected to be conducted simultaneously: the Reference Satisfaction Survey and the Reference Transaction Survey. The Reference Satisfaction Survey (Form 14-1) was designed to qualitatively report the user's response regarding:

• the outcome of the reference transaction (i.e., relevance, amount, and completeness of information)
• the service experience (i.e., the perceived helpfulness of the staff)
• the overall satisfaction with the transaction.

The second survey quantitatively measured the number of reference transactions that transpired by hour during the survey periods, using the Reference Service Statistics form (Form 13-1). It was anticipated that data from this survey could be used to ascertain whether there was adequate staffing during peak hours of service.

(2) Time frame. A decision was made to conduct a pilot survey in the fall of 1990 as a "dress rehearsal" for another survey to be conducted in the spring of 1991, prior to the 1992-94 University Accreditation Review. A representative week was chosen in the middle of each semester, avoiding midterms and holidays such as Thanksgiving, Mardi Gras, and spring break.

(3) Adapting the manual. The surveys in the manual were easily adapted. There were minor changes to the forms and a procedural change in the way the forms were distributed.

The "Reference Transaction." Both the Reference Satisfaction Survey (Form 14-1) and the Reference Service Statistics form (Form 13-1) are based on the premise that a "reference transaction" is taking place. The manual suggests that everyone who approaches the reference desk during the survey period be handed a form. Patrons who feel they have received "reference assistance" are asked to complete the form assessing the service received.
However, at LSU, instead of letting the patron decide whether they had "asked a reference question" (versus a directional question), the staff member involved in the transaction decided whether a reference transaction had transpired and was responsible for distributing the surveys to the patrons receiving assistance. This guaranteed that only patrons who had actually received reference assistance were surveyed. It was felt that this was more appropriate than letting the patron decide because the staff shared a common understanding of the term "reference transaction" as defined by the Integrated Postsecondary Education Data System (IPEDS). IPEDS defines a reference transaction as an information contact that involves the knowledge, use, recommendation, interpretation, or instruction in the use of one or more information sources by a member of the library staff.2

Distribution of forms. In the pilot survey the reference staff distributed the Reference Satisfaction Survey forms (Form 14-1) to patrons at the end of the reference transactions. The manual recommends that third-party individuals such as students distribute the forms, to separate the reference transaction activity from the request to complete the survey. Although constraints in the student budget for the fall semester prevented third-party individuals from distributing the forms, the staff made a commitment to follow the manual's procedure for the spring 1991 survey. Therefore, during the spring 1991 survey, once a staff member had completed the reference transaction, s/he signaled non-verbally to the student worker on the other side of the desk to hand the form to the patron. The staff member then noted the transaction on the Reference Service Statistics form, which was taped to the desk.

Changes in forms. During the surveys, statistics were taken on an hourly basis. This is a change from the Reference Service Statistics form (Form 13-1) in the manual, which uses a combination of one- and two-hour time slots.

The changes to the Reference Satisfaction Survey (Form 14-1) were more numerous and were based on recommendations made in the ACRL manual and comments made at the 1990 ACRL program on performance measures. As a result of these recommendations, and in keeping with other statistical surveys that have been taken at the LSU Libraries, the categories for respondents were adjusted to accommodate both LSU and non-LSU patrons. A category for elementary and secondary school students was added, and "personal use" was included as a reason for using the library. The basic change to Form 14-1, however, as noted above, was to distribute this survey form only to users who had received reference assistance as determined by the staff. As a result, the line at the top of the form reading "If you were NOT asking a reference question today, please check here and stop" was omitted.

Data analysis. The ACRL manual is divided into two sections. The first section, titled "Measurement," is a general overview of survey implementation. The second section, "The Measures," is an in-depth description of the specific surveys. Both of these sections were extremely helpful in analyzing the collected data. The data for both surveys were loaded into LOTUS 1-2-3. Following the manual's suggestions and directions, it was extremely easy to create spreadsheets and graphs of the collected data.
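For readers repeating this kind of analysis with current tools, the following is a minimal sketch, not the department's actual workflow: it uses Python with the pandas library as a stand-in for the LOTUS 1-2-3/SPSS-PC+ software discussed here, with invented column names and a handful of made-up records purely for illustration. It shows a Form 13-1 style hourly tally of transactions and the kind of cross-tabulation of satisfaction by patron group that, as the next paragraph notes, could not be produced in LOTUS.

```python
import pandas as pd

# Illustrative records only -- not the LSU survey data. Each row stands for one
# completed Reference Satisfaction Survey form, keyed with the hour of the
# transaction and the (hypothetical) patron category added to Form 14-1.
responses = pd.DataFrame({
    "patron_group": ["LSU undergraduate", "LSU graduate", "Non-LSU",
                     "LSU undergraduate", "Non-LSU", "LSU faculty/staff"],
    "hour": [10, 10, 11, 14, 14, 15],            # hour of day, 24-hour clock
    "overall_satisfaction": [5, 4, 5, 3, 4, 5],  # 1 = not satisfied, 5 = very satisfied
})

# Form 13-1 style tally: reference transactions by hour, the figure used to judge
# whether the desks are adequately staffed during peak hours of service.
print(responses.groupby("hour").size())

# The cross-tabulation LOTUS could not produce: counts of each overall
# satisfaction rating, broken down by patron group.
print(pd.crosstab(responses["patron_group"], responses["overall_satisfaction"]))

# Mean rating per group gives a quick comparison, e.g., how non-LSU users
# rated Reference Services.
print(responses.groupby("patron_group")["overall_satisfaction"].mean())
```

With real responses keyed in from the forms, the same few lines would answer both the staffing question and the non-LSU question directly.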
However, because of limitations in the LOTUS program, the data cannot be cross-tabulated, an analysis that would yield more meaningful results. For example, it would be very useful to know how non-LSU students rated Reference Services. In the future, SPSS-PC+ will be used (as suggested in the manual) in place of LOTUS in order to manipulate the data more efficiently. Although the data collected at every institution are unique and should be treated as such, it would have been helpful if the manual had included a more in-depth discussion of data analysis, describing what the numbers mean and how to interpret them.

(4) Training staff. Reference desk service is provided by a team of librarians, associates, and library school graduate assistants (GAs) who share desk hours at three desks. Approximately two weeks before the pilot survey began, a memo announcing the surveys was distributed to the reference staff and the library administration. The surveys had previously been announced as an upcoming activity in the Department's Annual Report. The memo explained the purpose of the surveys, emphasized that "service is being evaluated, not individuals," and explained the components of each survey, i.e., the five questions regarding user satisfaction and the simultaneous recording of transactions by hour. The dates were announced and copies of the forms were distributed.

A meeting of the staff members who would be involved with the surveys was called. The procedures for both surveys were discussed in depth. This allowed the staff the opportunity to ask questions about the content of the surveys and the procedures, as well as to offer comments and suggestions. The IPEDS definition was explained by way of examples. Before the spring survey, statistical graphs of the users' responses from the fall pilot survey were shared with the staff. By allowing staff to be an integral part of the survey implementation and sharing data from the fall pilot survey, it was hoped that they would develop a vested interest in its execution and outcome.

Problems in implementation

(1) Staff burnout. Despite staff training before each survey, the number of surveys distributed and the number of transactions recorded daily decreased dramatically by the end of the week as staff participation in the process waned. It may have been helpful to keep the staff informed as the week progressed with a chart indicating the number of surveys returned each day along with the desired response rate.

(2) Patron burnout. The user satisfaction survey requires a response each time a patron receives assistance during the week-long survey period. Frequent users of reference services verbally (and perhaps silently) commented to the reference staff that they had already completed a survey form and were not enthusiastic about completing multiple forms throughout the week. The manual emphasizes that the person distributing the forms must be quite aggressive in these circumstances.

(3) Survey comprehension. In the process of data analysis, two observations were noted regarding patron comprehension of the user surveys. First, the terms were occasionally perceived as generic and somewhat vague; for example, what does "completeness of the answer that you received" signify?
Second, "overall satisfaction" was often interpreted as an opportunity to comment broadly on any area of the library, not just Reference Services; for example, "You need more serials" or "Why don't you have more terminals on the third floor?" If the LSU Libraries conduct this survey again in the future, a recommendation would be to qualify this question to read: "Overall, how satisfied are you with Reference Services today?"

Survey data and accreditation

In light of dwindling resources, accreditation review teams are now assessing qualitative as well as quantitative data. Attention is being given to service outputs at an institution as well as to budgets, staff size, and volume holdings.

Reference is considered a public service area that can be assessed along with other educational and research units within the university. The Southern Association of Colleges and Schools' (SACS) Resource Manual for Institutional Effectiveness suggests that a statement of purpose for these units might be that "the university maintains a major commitment to public service" and that the expected results would be that "client satisfaction with service provided would be consistently high."3

In anticipation of the 1992-1994 SACS review at LSU, reference staff were pleased that the data from both the fall and spring surveys indicated that patrons were still "very satisfied" with the reference services they received in Middleton Library.

Results

In addition to the positive feedback from users indicating a high degree of satisfaction with the service, the quantitative data reinforced the staff's perception that the reference desks were adequately staffed during peak hours of service. The hourly data, by day, will also be useful should it become necessary or desirable to reduce hours of service. The manual made it easy to plan, implement, and analyze two reference surveys simultaneously.

1. Nancy Van House, Beth Weil, and Charles McClure, Measuring Academic Library Performance: A Practical Approach (Chicago: ALA, 1990).
2. Ibid., p. 96.
3. Resource Manual on Institutional Effectiveness, 2nd ed. (Atlanta: The Commission on Colleges of the Southern Association of Colleges and Schools, 1989), pp. 28-29.

Ed. note: Measuring Academic Library Performance: A Practical Approach by Nancy Van House, Beth Weil, and Charles McClure is available as a book alone or as a book-and-software package. The software package is designed for entering and analyzing performance measures data collected in surveys as suggested in Measuring Academic Library Performance. The software, developed on a runtime version of the popular database program Paradox, is meant to be easy to use by those without extensive computer experience. The software requires: an IBM-compatible computer with at least 640K of RAM (an 80286 or faster model is desirable for adequate speed); DOS 3.1 or higher; a high-density 5 1/4" disk drive; and a hard disk with at least 3 megabytes of free space. Although the software development was somewhat delayed, the package is completed and ready for shipment. Measuring Academic Library Performance is available from ALA Publishing Services, Order Dept., 50 E. Huron St., Chicago, IL 60611; phone: (800) 545-2433, press 7; or fax: (312) 944-2641. Book only: ISBN 0-8389-0529-3, $29.00; book and software package: ISBN 0-8389-0542-0, $70.00.