Explaining User Satisfaction with Academic Libraries: Strategic Implications

Syed Saad Andaleeb and Patience L. Simmonds

Syed Saad Andaleeb is an Associate Professor in The Behrend College School of Business at Pennsylvania State University; e-mail: ssa4@psu.edu. Patience L. Simmonds is an Assistant Librarian in The Behrend College at Pennsylvania State University; e-mail: pls@psulias.psu.edu.

Competitive pressures, information availability, rising costs, and an increasingly aware and selective student population mandate that academic libraries become more user-focused. This calls for a better understanding of the specific needs of library users in order to provide the appropriate type and level of service that meets those needs. This study proposes and tests a five-factor model to explain user satisfaction with academic libraries. Students using the services of three academic libraries in Erie, Pennsylvania, were surveyed over a period of three semesters. The model explained 64 percent of the variation in the dependent variable. Strategic implications of the proposed model are discussed.

Today's academic libraries are confronted with challenges on several fronts: megabookstores, online information providers, multimedia products, document delivery services, and other competitive sources of information are apparently threatening their role and even their very survival.1,2 With evolving technological innovations and the variety and abundance of information that is becoming available to information users, competitive pressures will continue to intensify for academic libraries.

Rising college costs and a student population that is becoming increasingly selective in choosing academic institutions also represent indirect threats to academic libraries. For example, various aspects of a college's offerings are factored into a student's decision to attend a particular institution. Although the authors did not find any supporting evidence, it is quite likely that when selecting a college, some students are influenced partially by the college's academic library and the quality of service the library provides. Consequently, academic libraries may have to adopt a more strategic orientation in which the creation and delivery of service satisfactions for their users play an important role. By doing so, academic libraries also can help their colleges meet their enrollment and student retention goals.

In addition, each year new students enter the academic environment with varying library usage and information-gathering skills. Student perceptions and expectations of service from academic libraries also vary, making it imperative to better understand and define specific student needs and to provide the type and level of service that meets them. Thus, Christopher Millson-Martula and Vanaja Menon assert that one element of high-quality service is "the incorporation of users' personal needs and expectations into the development of programs and service."3 According to them, the continued success of a service organization such as an academic library depends on the organization's ability to adjust its products and services to correspond to user needs. Similarly, Peter Hernon and Phillip Calvert suggest that only customers justify the existence of a library.4
Danuta A. Nitecki also claims that "the assessment of how well a library succeeds depends on the user as a judge of quality."5 As these views gain greater acceptance, academic librarians must orient themselves and their programs to become better customer advocates and to address users' problem-solving needs.

A review of the literature indicates that such a process already may have begun. In reassessing what role academic libraries should be playing, the need to improve and deliver better services based on user needs is emerging as an important theme. At the same time, providing access to information is being advocated as a more desirable measure of the quality of academic libraries. Thus, traditional definitions of quality as reflected in the size and diversity of a library's holdings are being increasingly questioned.6

To harness and implement the concept of service quality within the context of academic libraries, several authors have turned to the marketing literature. This literature has provided initial guidance in defining library service quality and clarifying the conceptual framework for service delivery improvements. Recent research on academic libraries, based on this literature, has been conducted by Vicki Coleman, Yi Xiao, Linda Blair, and Bill Chollett,7 Susan Edwards and M. Browne,8 Hernon and Calvert,9 Nitecki,10 and Marilyn D. White and Eileen G. Abels.11 In these studies, the five-dimensional SERVQUAL model and the twenty-two-item scale proposed by A. Parasuraman, Leonard Berry, and Valarie Zeithaml12 are prominent.

Although this vein of research has been pursued with some enthusiasm, empirical support for the suggested framework and the desirability of the measurement instrument has not been very encouraging. This is not surprising given the criticisms that have been leveled at various aspects of the SERVQUAL model and the related measures by marketing academics. For example, Tom J. Brown, Gilbert A. Churchill Jr., and J. Paul Peter have suggested measurement problems in the use of difference scores.13 J. Joseph Cronin Jr. and Steven A. Taylor also have suggested that service quality can be predicted adequately by using perceptions alone.14 In addition, James M. Carman has suggested that in specific service situations it may be necessary to delete or modify some of the SERVQUAL dimensions—or even introduce new ones.15 Emin Babakus and Gregory W. Boller have shown several shortcomings in the original scale.16 Moreover, in cross-sectional studies, measuring the gap between expectations and performance can be problematic.
Because data generally are collected subsequent to the service encounter, questions about service expectations may be based on memory or biased by actual services received.17 Questions also have been raised as to whether higher scores on the SERVQUAL dimensions are sufficient to indicate higher quality and greater customer satisfaction.18 In the context of academic libraries, a detailed study by Nitecki also highlighted several problems with the SERVQUAL measures.19 Hernon and Calvert have thus concluded that "it is not possible to develop a generic instrument applicable to all libraries in all circumstances."20

Consequently, rather than limiting this study to the theoretical structure and measures suggested by the SERVQUAL perspective, the authors introduce an alternate framework, with its attendant measures, that attempts to explain overall library user satisfaction. Although the authors believe that overall satisfaction can be influenced by several of the SERVQUAL dimensions, other factors also were deemed pertinent to their framework. These factors were deduced from in-depth interviews with library users. The conceptual framework and its related propositions are outlined below, followed first by an explanation of the research method and then the analyses, results, and discussions.

Conceptual Framework and Propositions

A library's resources are critical to user satisfaction. However, no library can satisfy all its users all the time. Some libraries have very limited resources and clearly are unable to satisfy their users, whereas others are large in size, have substantial holdings, and can provide a variety of services. Obviously, those libraries that are able to provide users with whatever they want will achieve higher levels of user satisfaction. Thus, the availability of resources can have a significant influence on user satisfaction. It is important to note, however, that the quality of the resources may be judged from an overall perception as to whether the library can provide access to materials (e.g., through interlibrary loans or other document delivery services) when and where needed. It is this overall perception of a library's resources that contributes to user satisfaction. Thus, the authors propose that:

P1: The higher the perceived quality of the library's resources, the greater the level of user satisfaction.

The SERVQUAL literature identifies responsiveness as an important element of service quality. It is defined as the willingness of the staff to be helpful and to provide prompt services. At academic libraries, users expect that the library staff will attend to their needs quickly and efficiently. Promptness, therefore, can be critical to users' perceptions of responsiveness. Helpfulness, identified in the literature as a component of responsiveness, finds its place in another factor the authors have termed demeanor (which is subsequently addressed). The authors' measures of responsiveness also are somewhat different from those in the original SERVQUAL scale and focus on promptness. These measures address waiting time, availability of the staff to help users when needed, and whether the staff quickly make sure that users have what they need.
The authors also propose that:

P2: The greater the responsiveness of the library staff, the greater the level of satisfaction among academic library users.

Another expectation among library users is that of competent services. In the context of academic libraries, as in other libraries, users want the staff to be knowledgeable and to be able to assist them in locating needed materials and information quickly and efficiently. When users perceive that the library staff are competent, they will feel assured that problems will be easily resolved, leading to greater satisfaction with the services.

Although competence was proposed as a separate service quality dimension in the original conceptualization by Parasuraman, Zeithaml, and Berry,21 subsequent empirical results suggested that the measures should depict reliability. Thus, competence was not considered a separate dimension in the SERVQUAL framework. At the same time, several items in the proposed reliability scale, such as accuracy in billing, keeping records correctly, etc., as proposed by Parasuraman, Berry, and Zeithaml,22 did not, in the authors' opinion, fit the academic library setting because library users generally would be unable to assess these elements. Hence, these items were not included in the authors' study. Moreover, because competent services was identified as important to customer satisfaction, the authors chose to retain this construct, using four alternative items to measure it. Competent services focused on the library staff being good at explaining how materials are arranged, their knowledge, their ability to answer questions appropriately, and their ensuring that all questions are answered. The authors propose that:

P3: The greater the perceived competence of the library staff, the greater the level of user satisfaction.

The general demeanor of library staff, as perceived by library users, also can have a significant impact on user satisfaction. Users look for staff who are friendly and approachable, but not unnecessarily intrusive. Several aspects of staff demeanor seemed to overlap two factors—assurance and empathy—proposed in the SERVQUAL framework. However, some modifications were necessary to represent the demeanor construct clearly. For example, two items of the assurance dimension in the original scale did not seem to be critical to the library setting. These items included feeling safe in the transaction with library staff and the extent to which library staff instilled confidence in customers. Another item—employees have the knowledge to answer your questions—seemed to belong more appropriately to the group of items representing competence. Nitecki also showed how these and other SERVQUAL items were inappropriate for the academic library setting.23 Consequently, the authors introduced the construct "demeanor," which included items from empathy and assurance, in addition to several new items. The construct is depicted by staff sensitivity to user needs, willingness to listen to user problems, being polite, being courteous, and being sympathetic and reassuring. The authors also propose that:

P4: The more positive the demeanor of the library staff, the greater the level of user satisfaction.

Physical or "tangible" evidence that the library will be able to provide satisfactory services has been shown to be a component of service quality.
The authors' investigations suggested that this factor can influence user satisfaction judgments. The items used to delineate this construct included overall cleanliness of the facilities, visually appealing environment, and appearance of the staff. The authors propose that:

P5: The better the perceived overall physical appearance of the library facilities, the greater the level of user satisfaction.

Research Method

Research Design

The authors first explored secondary sources to assess the type of research conducted on library service quality and related issues. The next stage involved gathering information directly from users of academic libraries. This was accomplished in two steps. The first step involved exploratory in-depth research. Interviews were conducted with a small, but representative, sample of conveniently chosen library users. Participants responded to open-ended questions. The in-depth nature of the interviews permitted exploring the diverse issues while narrowing the factors down to several important ones that seemed to best explain user satisfaction with library services. The next step involved designing and pretesting a questionnaire that was administered to ten respondents, again chosen conveniently, from a cross section of college students familiar with their academic libraries. The pretest was instrumental in assessing the strengths and weaknesses of the questionnaire and in ensuring that all pertinent variables were included. At this stage, several modifications were made to the instrument to remove ambiguities and to improve the flow of the questions. The final version was administered to a representative sample of college students.

Measurement

The questionnaire included perceptual measures that were rated on seven-point Likert scale items. This design is consistent with prior studies on service quality. Each scale item was anchored at the numeral 1 with the verbal statement "strongly disagree" and at the numeral 7 with the verbal statement "strongly agree." Multiple items were used to measure each construct so that their measurement properties (i.e., reliability and validity) could be evaluated. The scale items measuring the dependent variable were selected to reflect people's overall satisfaction with services received. Demographic data also were obtained from the respondents.

It is important to note that the authors did not use the gap score approach (used in many service quality studies), which takes the difference between expectations and performance, because of the problems discussed earlier. In particular, because expectations from excellent libraries will be high, these scores generally will be high and will demonstrate little variation. Hence, most of the variation in the data will be introduced by the performance scores. Thus, the authors focused on performance measures alone. This also helped the authors to keep the instrument simple. Moreover, this approach is consistent with other studies.24,25
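For a present-day reader who wishes to reproduce this type of scoring, the sketch below shows one way performance-only construct scores and their internal-consistency (coefficient alpha) estimates might be computed. It is a minimal illustration, not the authors' actual procedure; the item labels, sample size, and randomly generated responses are hypothetical.

```python
import numpy as np
import pandas as pd

def coefficient_alpha(items: pd.DataFrame) -> float:
    """Cronbach's coefficient alpha for a set of scale items (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical 7-point Likert responses (1 = strongly disagree, 7 = strongly agree)
rng = np.random.default_rng(seed=0)
sat_items = ["SAT-1", "SAT-2", "SAT-3", "SAT-4"]   # illustrative item labels
responses = pd.DataFrame(rng.integers(1, 8, size=(188, 4)), columns=sat_items)

# Performance-only scoring: each construct score is simply the mean of its items
responses["satisfaction"] = responses[sat_items].mean(axis=1)

print(round(coefficient_alpha(responses[sat_items]), 2))
```

With the real survey data, the same two steps (average the items of each construct, then check alpha) would be repeated for responsiveness, competence, demeanor, resources, and tangibles.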
Sampling

The sample was drawn from three academic libraries in Erie, Pennsylvania. These included the Behrend College Library, the Nash Library of Gannon University, and the Hammermill Library of Mercyhurst College. The population was defined as students from these three colleges only. Questionnaires were delivered personally to respondents as they entered the library. The purpose of the study was briefly, but adequately, explained at that time. Respondent anonymity was ensured by asking them not to identify themselves anywhere in the survey. In addition, respondents were asked to return the completed surveys to a box located at the circulation desk of each library.

The sample was chosen using systematic sampling. A total of 210 questionnaires were distributed. Of these, 188 questionnaires were completed and returned, resulting in a response rate of 89.5 percent. The survey was conducted over a period of one year and included data from three semesters—spring, summer, and fall of 1996. A one-year time frame allowed researchers to include the views of a large cross section of library users regarding the services they received on a range of needs. Moreover, survey instruments were distributed at different times each semester so that user views would reflect different demand conditions. The sample demographics indicated that a broad cross section of the population responded.

Analyses

Table 1 contains summary statistics, as well as the matrix of zero-order correlations for the variables included in this study. The reliability of each multiple-item scale was assessed by coefficient alpha. These values are indicated in the diagonal of table 1. Multiple-regression analysis was used to test the propositions.

TABLE 1
Descriptive Statistics, Zero-Order Correlations, and Reliability Coefficients

Variables            1     2     3     4     5     6     7    Mean    s
Satisfaction (4)    .91                                        5.12   1.25
Responsiveness (3)  .33   .75                                  5.05   1.31
Competence (4)      .54   .51   .91                            5.64   1.07
Demeanor (5)        .58   .56   .78   .92                      5.30   1.20
Resources (4)       .71   .15*  .33   .33   .76                3.73   1.23
Tangibles (2)       .36   .38   .41   .44   .19   .62          5.80   1.11
Assurance (9)†      .60   .57               .35   .45   .89    5.50   1.07

Diagonal figures represent reliability coefficients. Figures in parentheses indicate the number of items measuring each construct. * p < .05; all other coefficients p < .01. † Assurance reflects a combination of competence and demeanor items.

Reliability

Reliability analyses indicated that the internal consistency of the six constructs in the study was reasonable. The dependent variable exceeded Jum C. Nunnally's recommended value of .70.26 With the exception of the measure of "tangibles" (alpha = .62), the reliability values of the other constructs were relatively high and considered to be very good.

Validity

Several methods were used to assess validity. The results in table 1 provide support for discriminant validity because the correlation between one scale and another is not as high as each scale's coefficient alpha.27,28 Factor analysis with varimax rotation also was conducted to examine whether the measures loaded as expected on the selected constructs. When no constraints were imposed on the extraction of factors, only four factors were recovered (see table 2).
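As a rough indication of how such an extraction might be reproduced today, the sketch below applies the eigenvalue-greater-than-one (latent root) rule and a varimax-rotated factor solution to the independent-variable items. It is only a sketch under assumptions: the file name and column layout are hypothetical, and the authors' exact extraction software and method are not specified in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Assumed: a file holding the independent-variable scale items as columns
# (e.g., DEM-1..5, COMP-1..4, RES-1..4, RESP-1..3, TAN-1..2).
items = pd.read_csv("survey_items.csv")

# Latent-root criterion: count eigenvalues of the item correlation matrix above 1
eigenvalues = np.linalg.eigvalsh(items.corr().to_numpy())[::-1]
print("factors with eigenvalue > 1:", int((eigenvalues > 1.0).sum()))

# Varimax-rotated factor solution, constrained here to five factors as in table 3
fa = FactorAnalysis(n_components=5, rotation="varimax").fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=[f"Factor {i + 1}" for i in range(5)])
print(loadings.round(2))
```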
In this four-dimensional structure, the measures of demeanor and competence loaded together on the first factor; the other measures loaded, as expected, on three additional factors represented by responsiveness, resources, and tangibles. The authors also note that the scale item regarding the appearance of the staff loaded with the first factor and not, as expected, with measures of tangibles. Attributing this to measurement error, the authors eliminated the item. This resulted in a cleaner factor structure, with items loading on the expected factors.

TABLE 2
Unconstrained Factor Analysis of Independent Variables with Varimax Rotation

Items         Factor 1   Factor 2   Factor 3   Factor 4
ASRNC-3         .874       .10        .10        .14
ASRNC-9         .842       .08        .24        .14
ASRNC-4         .827       .17        .13        .09
ASRNC-8         .818       .19        .12        .21
ASRNC-2         .811       .09        .23        .09
ASRNC-7         .771       .05        .14        .07
ASRNC-1         .748       .16        .13        .05
ASRNC-6         .705       .16        .24        .27
ASRNC-5         .704       .08        .29        .24
RES-1           .23        .818       .09        .07
RES-2           .28        .738      -.09        .03
RES-4          -.08        .703       .20       -.31
RES-3           .09        .684      -.01        .39
RESP-2          .34       -.04        .799       .11
RESP-3          .02        .22        .794       .15
RESP-1          .51       -.04        .535       .01
TAN-2           .11        .15        .19        .83
TAN-1           .37       -.11        .06        .68
Eigenvalues    7.93       2.06       1.25       1.19
Variation      44.1%      11.5%      7.0%       6.6%
Cumulative     44.1%      55.6%      62.5%      69.1%

When the factor analysis procedure was constrained to extracting five factors, the scale items loaded, as expected, on demeanor, competence, resources, responsiveness, and tangibles (see table 3). However, the latent root criterion indicated that the "tangibles" factor had an eigenvalue of less than one. (In factor analysis, only factors having latent roots [eigenvalues] greater than one are considered significant. However, when the eigenvalue for a factor is close to one, the factor might be retained for inclusion in the model.29) After carefully assessing the measures that loaded on the first unconstrained factor (i.e., the items measuring demeanor and competence), the authors termed the factor "assurance" because when positive attitudes (such as demeanor) are combined with competent assistance, it gives users a sense of assurance that their needs will be addressed and met appropriately.

TABLE 3
Factor Analysis of Independent Variables with Varimax Rotation
(extraction constrained to five factors)

Items         Factor 1   Factor 2     Factor 3    Factor 4         Factor 5
              Demeanor   Competence   Resources   Responsiveness   Tangibles
DEM-4           .788        .38          .19          .12             .02
DEM-3           .757        .39          .11          .23             .03
DEM-5           .753        .46          .10          .25             .08
DEM-2           .684        .32          .10          .29             .19
DEM-1           .639        .37          .17          .25             .22
COMP-3          .42         .810         .09          .15             .12
COMP-2          .29         .788         .04          .20             .07
COMP-1          .31         .734         .16          .18             .05
COMP-4          .45         .707         .18          .16             .19
RES-1           .07         .24          .812         .11             .10
RES-2           .26         .13          .747        -.09             .03
RES-3           .03        -.19          .724         .18            -.29
RES-4          -.01         .17          .661         .01             .43
RESP-3         -.09        -.17         -.21         -.808           -.16
RESP-2         -.30        -.15          .04         -.802           -.09
RESP-1         -.35        -.34          .04         -.553            .01
TAN-2           .08         .13          .11          .20             .843
TAN-1           .55         .03         -.11          .04             .622
Eigenvalues    7.93        2.06         1.25         1.19             .81
Variation      44.1%       11.5%        7.0%         6.6%             4.5%
Cumulative     44.1%       55.6%        62.5%        69.1%            73.6%

Results

Multiple-regression analysis was conducted using both the four-factor and five-factor structures to test the propositions. The results are presented in tables 4 and 5. The full model was significant for both the four- and five-factor models as indicated by the overall F statistic (p < .001).

TABLE 4
Regression Results with Four Factors (Dependent Variable: Satisfaction)

Independent Variables      b      s.e.    Beta    p <
Assurance*               .428     .070    .366    .001
Resources                .584     .048    .567    .001
Responsiveness          -.003     .054   -.003    .95 n.s.
Tangibles                .099     .056    .089    .10
Constant                 .022
R² = .65; adj. R² = .64; F(4,181) = 85.01, p < .001
* Combination of demeanor and competence.
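A regression of this kind could be reproduced along the following lines. The sketch assumes a data frame of construct scores with hypothetical column and file names, and it is meant only to illustrate how the reported b coefficients, standardized betas, adjusted R², and F statistic relate to one another, not to reproduce the authors' exact software run.

```python
import pandas as pd
import statsmodels.api as sm

# Assumed: one row per respondent, one column per construct score
scores = pd.read_csv("construct_scores.csv")
predictors = ["assurance", "resources", "responsiveness", "tangibles"]

# Unstandardized model: b coefficients, standard errors, adjusted R-squared, F statistic
ols = sm.OLS(scores["satisfaction"], sm.add_constant(scores[predictors])).fit()
print(ols.summary())

# Standardized betas: refit the same model on z-scored variables (no intercept needed)
z = (scores - scores.mean()) / scores.std(ddof=1)
betas = sm.OLS(z["satisfaction"], z[predictors]).fit().params
print(betas.round(3))
```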
Both regression models explained about 64 percent of the variation in the dependent variable as indicated by the adjusted R² values. It may be noted from table 4 that three of the four factors had a significant effect on user satisfaction. The significant explanatory factors included: assurance (b = .428; p < .001); resources (b = .584; p < .001); and tangibles (b = .099; p < .10). Responsiveness had no effect on user satisfaction.

The results based on the five-factor model are reported in table 5. Here again, three factors were significant: demeanor (b = .287; p < .001); resources (b = .588; p < .001); and tangibles (b = .098; p < .10). Competence, when isolated as a separate factor, would be regarded as significant only if the possibility of making a Type-I error 15 percent of the time is allowed. In the research community, such an allowance generally is not made; consequently, the nonsignificance of the variable may be attributed to multicollinearity, as suggested by the high correlation between demeanor and competence (r = .78; p < .001). Alternatively, because library users actually may see competence and demeanor as elements of one large factor (i.e., assurance), perhaps the two constructs should be combined (as in tables 2 and 4).

TABLE 5
Regression Results with Five Factors (Dependent Variable: Satisfaction)

Independent Variables      b      s.e.    Beta    p <
Demeanor                 .287     .078    .278    .001
Competence               .125     .084    .107    .15 n.s.
Resources                .588     .048    .570    .001
Responsiveness           .004     .054    .004    .95 n.s.
Tangibles                .098     .055    .088    .10
Constant                 .144
R² = .65; adj. R² = .64; F(5,180) = 67.80, p < .001

An examination of the parameter estimates (especially the standardized beta values) suggests that the availability of resources and the assurance provided by the library staff have the greatest impact on user satisfaction. Although the third significant variable, tangibles, also had an impact on the dependent variable, its magnitude was relatively small.
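The multicollinearity explanation offered above could be probed directly. A brief sketch, reusing the hypothetical scores data frame from the earlier regression example, of how the pairwise correlations and variance inflation factors might be inspected:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

scores = pd.read_csv("construct_scores.csv")   # hypothetical file, as before
predictors = ["demeanor", "competence", "resources", "responsiveness", "tangibles"]

# Pairwise correlations; demeanor vs. competence approaching .8 signals overlap
print(scores[predictors].corr().round(2))

# Variance inflation factors, computed on the predictors plus an intercept;
# values well above roughly 5-10 are commonly read as problematic
X = sm.add_constant(scores[predictors])
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
                index=predictors)
print(vif.round(2))
```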
Discussion

This study tested an alternative model of customer satisfaction with academic libraries. Although no attempt was made at replication, the authors borrowed from earlier studies, relying to a great extent on the service quality literature. Departures from the original SERVQUAL framework were predicated on studies that suggested the inapplicability of the framework to all service situations.

The results of this study suggest that academic librarians focus on two major elements (based on the standardized betas)—resources and demeanor—if providing customer satisfaction is to be underscored in their strategic vision. Resource strategy is important because academic library users frequent their libraries to find solutions to their academic problems and needs. In today's dynamic environment of information availability, "resources" does not mean only the size of a library's collections but, rather, also includes a variety of other resources that, to the users, make access to information the key to judging resource adequacy. Consequently, academic librarians must continuously monitor the academic environment to provide customer-focused services. That means remaining connected to their academic institution's curriculum, teachers' resource needs and research agendas, student preferences for how the needed information is packaged (i.e., CD-ROMs, journals, microfiche, audiovisuals, Internet, etc.), and related administrative uses of information (e.g., career planning and development, etc.). This is not to suggest that the role of the librarian should be passive, reacting only to the demands placed on it. Librarians actually can play a proactive role by forging partnership relationships with their academic communities and developing a variety of information access options for them, selecting jointly those options that meet cost and efficacy criteria.

The findings of this study also suggest that library users, especially students, accord significant importance to the demeanor of the library staff, a multiattribute construct that must be instilled and inculcated, much like an attitude, among the library staff. Although easy to suggest, instilling the qualities of demeanor among the service providers and gaining their commitment to these qualities can be challenging. This conclusion is supported in table 1, where the staff earned a mean score of 5.3 (on a scale of 7), with a standard deviation of 1.2. As the ratings suggest, there is room for improvement. If academic librarians want to ensure customer satisfaction, they could periodically track how the staff performs on the five scale items that measure demeanor. By tracking these ratings, supervisors can direct the staff so that the appropriate demeanor is instilled.

Although demeanor is important, a larger construct, assurance, also has a significant impact on customer satisfaction. This construct combines the elements of demeanor and competence to influence the dependent variable. Although conveying an image of competence appeared to be important in the authors' preliminary research, the weak link between this variable and satisfaction may be attributed to the possibility that competence works through some intervening variable to influence user satisfaction. This conjecture should be tested in future research. Alternatively, competence may be important, but only through a gestalt effect in which the elements of competence are subsumed under assurance—a bigger and more comprehensive construct. If this is the case and assurance is found to be important in future studies, library management also must rely on the experience properties of competence. This means that users generally will form an impression of the level of staff competence as they experience various services during their library encounter. Consequently, efforts must be devoted to making every user experience a positive one so that an enduring impression about the competence of the staff is reinforced. In addition, training programs for library staff must continuously stress the need to provide competent library services.

A second explanation regarding the marginal effect of competence is found in the SERVQUAL literature.
For example, Parasuraman, Berry, and Zeithaml suggest that competence is important to some, but not all, services.30 They show that being competent is a principal expectation among customers of automobile repair and equipment repair but is not as important to customers of automobile insurance, hotels, and rental services. The authors believe that receiving competent service is important to library users. However, when it is provided, it may not be noticed. On the other hand, when sloppy and incompetent services prevail, customers will notice and their satisfaction will be attenuated.

Tangibles also play a role in explaining user satisfaction, but their impact is considerably lower than the impact of the other significant variables. However, the physical condition of the library and the facilities within it that meet the eye must be managed so that negative impressions are not conveyed.

Curiously, although in-depth interviews suggested otherwise, staff responsiveness did not have a significant effect on customer satisfaction, as shown by the regression coefficients and the probabilities of rejecting a true null hypothesis. Perhaps responsiveness also works through some mediating variable to explain customer satisfaction; this should be explored in future research. Alternatively, the authors' measures of responsiveness may be tapping the domain of some other theoretical construct that is not directly related to user satisfaction, a possibility that also should be explored in future studies.

The independent variables explained 64 percent of the variation in the criterion variable, which suggests that the authors' proposed model has considerable value; this also underscores that library service providers should be cognizant of two very important variables—demeanor or assurance, and resources—that must be considered in their strategic vision to enhance library services.

On the measurement of the constructs, the authors note that the coefficient alpha values were reasonably high; the items also demonstrated discriminant validity. The correlations between the independent variables indicate, possibly, the existence of multicollinearity, although R. S. Billings and P. S. Wroten suggest that correlation coefficients lower than .8 do not indicate serious multicollinearity problems.31

The response rate (89.5%) also was reassuring, reducing the problems introduced by nonresponse bias. This was achieved by personally delivering the surveys to be self-administered and assuring respondents of confidentiality. In addition, the letterhead of a very credible educational institution was used to convey that the study represented institutional research. Respondents also were informed that key results were expected to be made public. These considerations may explain the reasonably high percentage of responses without follow-up.

Future research may attempt to replicate the findings, which, if corroborated, would suggest that the constructs and measures developed in this study are useful in the academic library setting. The applicability of the measures to public and special libraries also could be investigated and the importance of the explanatory variables examined. Repeated over time, such research should help identify the key factors that explain user satisfaction across different types of libraries. Strategic measures should follow to serve user needs better.
The user-based model developed and presented in this paper supports and strengthens the need to provide high-quality services to academic library users. The need to provide such services is based not just on what the customers want but also on the experience of many library professionals who have long known about these needs. In fact, many library and information professionals have considerable experience with which to judge what customers are able to say about what they want.32 What is needed is a delicate balance between what the users need and the tried experience of the academic librarians, who can weigh the different issues and come up with solutions to ensure the provision of high-quality service that leads to user satisfaction.

The authors would like to thank Marion F. Gallivan and Susan Thompson for their assistance and support during data collection. The comments on an earlier draft from Lorna Peterson, Richard Hart, and Melvin E. Westerman also are gratefully acknowledged.

Notes

1. Marilyn D. White and Eileen G. Abels, "Measuring Service Quality in Special Libraries: Lessons Learned from Marketing," Special Libraries 86, no. 4 (1995): 36–45.
2. Peter Hernon and Ellen Altman, Service Quality in Academic Libraries (Norwood, N.J.: Ablex Publishing Co., 1996).
3. Christopher Millson-Martula and Vanaja Menon, "Customer Expectations: Concepts and Reality for Academic Library Services," C&RL 56 (Jan. 1995): 33–47.
4. Peter Hernon and Phillip Calvert, "Methods for Measuring Service Quality in University Libraries in New Zealand," Journal of Academic Librarianship 22, no. 1 (Sept. 1996): 387–91.
5. Danuta A. Nitecki, "Changing the Concept and Measure of Service Quality in Academic Libraries," Journal of Academic Librarianship 22, no. 3 (May 1996): 181–90.
6. Ibid.
7. Vicki Coleman, Yi (Daniel) Xiao, Linda Blair, and Bill Chollet, "Toward a TQM Paradigm: Using SERVQUAL to Measure Library Service Quality," C&RL 58 (May 1997): 237–51.
8. Susan Edwards and M. Browne, "Quality in Information Services: Do Users and Librarians Differ in Their Expectations?" Library and Information Science Research 17, no. 2 (spring 1995): 163–82.
9. Hernon and Calvert, "Methods for Measuring Service Quality," 387–91.
10. Nitecki, "Changing the Concept and Measure of Service Quality," 181–90.
11. White and Abels, "Measuring Service Quality in Special Libraries," 36–45.
12. A. Parasuraman, Leonard Berry, and Valarie Zeithaml, "Refinement and Reassessment of the SERVQUAL Scale," Journal of Retailing 67, no. 4 (winter 1991): 420–50.
13. Tom J. Brown, Gilbert A. Churchill Jr., and J. Paul Peter, "Improving the Measurement of Service Quality," Journal of Retailing 66, no. 1 (spring 1993): 127–39.
14. J. Joseph Cronin Jr. and Steven A. Taylor, "Measuring Service Quality: A Reexamination and Extension," Journal of Marketing 56, no. 3 (July 1992): 55–68.
15. James M. Carman, "Consumer Perceptions of Service Quality: An Assessment of SERVQUAL Dimensions," Journal of Retailing 66, no. 1 (spring 1990): 33–55.
16. Emin Babakus and Gregory W. Boller, "An Empirical Assessment of the SERVQUAL Scale," Journal of Business Research 24, no. 3 (May 1992): 253–68.
17. Syed Saad Andaleeb and Amiya K. Basu, "Technical Complexity and Consumer Knowledge as Moderators of Service Quality Evaluation in the Automobile Industry," Journal of Retailing 70, no. 4 (winter 1994): 367–81.
18. Kenneth R. Teas, "Expectations, Performance Evaluation, and Customers' Perceptions of Quality," Journal of Marketing 57, no. 4 (Oct. 1993): 18–34.
19. Nitecki, "Changing the Concept and Measure of Service Quality," 181–90.
20. Hernon and Calvert, "Methods for Measuring Service Quality," 388.
21. A. Parasuraman, Valarie A. Zeithaml, and Leonard L. Berry, "Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing 49, no. 4 (fall 1985): 41–50.
22. A. Parasuraman, Leonard L. Berry, and Valarie A. Zeithaml, "Understanding Customer Expectations of Service," Sloan Management Review 32, no. 3 (spring 1991): 39–48.
23. Nitecki, "Changing the Concept and Measure of Service Quality," 181–90.
24. Cronin and Taylor, "Measuring Service Quality," 55–68.
25. Andaleeb and Basu, "Technical Complexity and Consumer Knowledge," 367–81.
26. Jum C. Nunnally, Psychometric Theory, 2nd ed. (New York: McGraw-Hill, 1978).
27. Gilbert A. Churchill Jr., Marketing Research: Methodological Foundations (Hinsdale, Ill.: The Dryden Pr., 1976).
28. John F. Gaski and John R. Nevin, "The Differential Effects of Exercised and Unexercised Power Sources in a Marketing Channel," Journal of Marketing Research 22, no. 2 (May 1985): 130–42.
29. Joseph F. Hair Jr., Rolph E. Anderson, Ronald L. Tatham, and William C. Black, Multivariate Data Analysis with Readings, 6th ed. (New York: Macmillan, 1992).
30. Parasuraman, Berry, and Zeithaml, "Refinement and Reassessment of the SERVQUAL Scale," 420–50.
31. Robert S. Billings and Steve P. Wroten, "Use of Path Analysis in Industrial/Organizational Psychology: Criticisms and Suggestions," Journal of Applied Psychology 63 (1978): 677–88.
32. William A. Katz, Introduction to Reference Work, vol. 2, Reference Services and Processes, 6th ed. (New York: McGraw-Hill, 1992).