Undergraduate Use of Federated Searching: A Survey of Preferences and Perceptions of Value-added Functionality

C. Jeffrey Belliston, Jared L. Howland, and Brian C. Roberts

C. Jeffrey Belliston is Scholarly Communications Librarian, Jared L. Howland is Electronic Resources Librarian, and Brian C. Roberts is Process Improvement Specialist in the Harold B. Lee Library at Brigham Young University; e-mail: jeffrey_belliston@byu.edu, jared_howland@byu.edu, and brian_roberts@byu.edu, respectively.

Randomly selected undergraduates at Brigham Young University, Brigham Young University-Idaho, and Brigham Young University-Hawaii, all private universities sponsored by The Church of Jesus Christ of Latter-day Saints, participated in a study that investigated four questions regarding federated searching: (1) Does it save time? (2) Do undergraduates prefer it? (3) Are undergraduates satisfied with the results they get from it? (4) Does it yield higher-quality results than nonfederated searching? Federated searching was, on average, 11 percent faster than nonfederated searching. Undergraduates rated their satisfaction with the citations gathered by federated searching 17 percent higher than their satisfaction using nonfederated search methods. A majority of undergraduates, 70 percent, preferred federated searching to the alternative. This study could not ultimately determine which of the two search methods yielded higher citation quality. The study does shed light on assumptions about federated searching and will interest librarians in different types of academic institutions, given the diversity of the three institutions studied.

Library research remains a complex, convoluted process for many undergraduates, in spite of the advances promised by the digital age. In their final report, the Bibliographic Services Task Force from the University of California Libraries states, "We offer a fragmented set of systems to search for published information (catalogs, A&I databases, full text journal sites, institutional repositories, etc) each with very different tools for identifying and obtaining materials. For the user, these distinctions are arbitrary."1 Federated searching attempts to collocate the information found in these fragmented systems and to provide one location to perform all library research. In this study, we investigated the assumptions that have been made about federated searching and studied undergraduates to determine if federated searching resolves some of the issues discussed by the Bibliographic Services Task Force final report.

In 2004, the Directors Council of the Consortium of Church Libraries and Archives (CCLA), consisting of four academic libraries and four special libraries sponsored by the Church of Jesus Christ of Latter-day Saints, licensed WebFeat's federated search product for three years for all member institutions that wished to implement federated searching. About sixteen months prior to the expiration of the contract, the CCLA Directors Council requested data to assist in their decision concerning license renewal. We undertook this study to provide that data. CCLA's eight member libraries include four academic libraries serving undergraduates.
These four libraries, at Brigham Young University (BYU), Brigham Young University-Idaho (BYUI), Brigham Young University-Hawaii (BYUH), and LDS Business College (LDSBC), have been the primary users of the licensed federated search technology. The study intended to gather data from all four institutions but, due to a poor response rate, LDSBC was dropped from the study. Although all participating universities have similar names and serve undergraduates, the environments are quite diverse (Table 1).

TABLE 1
Institutional Information

Library                  Institution                         Abbreviation   Degrees Granted                   Student Population (FTE)
Harold B. Lee Library    Brigham Young University            BYU            Bachelor's, Master's, Doctorate   31,225
Joseph F. Smith Library  Brigham Young University – Hawaii   BYUH           Bachelor's                        2,467
David O. McKay Library   Brigham Young University – Idaho    BYUI           Associate's, Bachelor's           12,209

For this study, we asked randomly selected undergraduates to undertake two hypothetical research assignments using a different search method for each—one using federated searching and the other performed with nonfederated searching. They were then asked to complete a questionnaire about their experience. This study was designed to answer the following questions for undergraduates:

1. Does federated searching save time?
2. Does federated searching satisfy students' information needs?
3. Do students prefer federated searching to the alternative of searching databases individually?
4. Does federated searching yield quality results?

Because all of the CCLA institutions implemented federated searching differently, we designed the study to be implementation-neutral, thereby providing data on federated searching itself rather than on the WebFeat software.2

After compiling the results of this study, we presented these data as a paper at the Association of College and Research Libraries' (ACRL) 13th National Conference in March 2007. Prior to presenting our findings, we polled our audience concerning their assumptions about federated searching. This study tests the assumptions presented in the literature and, as a matter of interest, compares the assumptions of the ACRL audience to our findings in this study.

Literature Review

End-user federated searching (sometimes known as broadcast searching, distributed searching, cross-search, metasearching, or parallel searching) of multiple databases stored by different companies in multiple locations is a relatively recent development. The concept of a single search of multiple databases goes back to at least 1966, when the Dialog service made possible the simultaneous searching of multiple discrete, proprietary databases. However, in contrast to the databases searched by current federated search products, the Dialog databases were (1) stored by a single company in a single location and (2) usually searched for an end-user by a librarian due to both the fee structure and the proprietary, command-driven nature of the search interface.
Roger K. Summit's 1971 article on Dialog's user interface and Stanley Elman's various articles on the cost-benefit of Dialog examined this forerunner to federated searching.3

The majority of articles about today's federated search technology tend to fall into four categories: (1) discussions of the desirability and/or difficulty of creating a robust federated search tool,4 (2) reports on one or more specific federated search implementations,5 (3) comparisons of federated search products currently on the market to each other and/or to Google Scholar,6 or (4) views on how to implement a subject-specific federated searching tool.7 Because these articles are theoretical, anecdotal, or comparative, they contain little data based on quantitative research.

The literature includes many explicit, and reasonable, assumptions about federated searching. The Serials Review column, "The One-Box Challenge: Providing a Federated Search That Benefits the Research Process," edited by Allan Scherlen with contributions from five academic librarians, provides a recent example of assumptions made about federated searching. The editorial introduction to the column states, "Federated searching will certainly make some aspects of research easier, but will it make it better?"8 For contributor Marian Hampton, "[t]he benefit of metasearching is obvious—one simple interface for several sources …"9 Penny Pugh quotes the "minimal instruction" on West Virginia University's federated search: "E-ZSearch provides a quick and easy way to search multiple databases at once."10 Frank Cervone writes, "the point of federated searching is to make searching as simple as possible …"11 Federated searching, then, is assumed to make research easier, provide a simple interface, and require minimal training.

Others have pointed to the inherent problems with federated searching as it is currently implemented. These problems include waiting for the slowest database to return results before all citations can be viewed, the impossibility of true de-duplication, and the unavailability of true relevancy ranking. Rochkind (2007) comments: "Current library metasearch typically relies on searching multiple source repositories at once, in parallel, at the point of request and then merging the results."12 The weaknesses of federated searching all stem from the choice vendors made to do federated searching in this manner. If libraries compiled and indexed the metadata from all the third-party database subscriptions and made the data searchable, true de-duplication and relevancy ranking would be possible. Additionally, the results could be returned much faster because the system would not have to wait on the individual database vendors to return the results.
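The contrast between these two architectures can be made concrete with a minimal sketch. The following Python is an illustration of the pattern Rochkind describes, not any vendor's actual code; all names (search_source, SOURCES, indexed_search) are hypothetical. It shows why request-time merging inherits the latency of the slowest source, while a locally compiled index answers from data harvested ahead of time.

```python
# Illustration only: the two architectural patterns discussed above.
# All function and variable names are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

SOURCES = ["db_a", "db_b", "db_c"]  # stand-ins for licensed vendor databases

def search_source(source: str, query: str) -> list[str]:
    # Placeholder for a network call to one vendor's database.
    return [f"{source}: result for {query!r}"]

def federated_search(query: str) -> list[str]:
    """Request-time merging: query every source in parallel, then merge.
    The merged list cannot be complete until the slowest source replies,
    and de-duplication/relevancy ranking can use only whatever partial
    metadata each source chose to return."""
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(lambda s: search_source(s, query), SOURCES)
    return [hit for hits in result_lists for hit in hits]

def indexed_search(local_index: dict[str, list[str]], query: str) -> list[str]:
    """The alternative described above: metadata compiled and indexed in
    advance, so lookup is local and fast, and true de-duplication and
    relevancy ranking over full records become possible."""
    return local_index.get(query, [])
```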
Imperfect as it is, federated searching still has the significant potential benefit of saving time by requiring less searching. It also has the benefit of serendipitous discovery. Students may not know which databases to search for a particular topic, so a federated search engine that automatically selects appropriate databases helps students find materials they would not likely have found otherwise. Our study tested these assumptions by determining how much time is saved using a federated search, if undergraduates preferred it to traditional searching, if it satisfied their information needs, and if federated searching yielded higher or lower quality results than nonfederated searching.

Methodology

Research participants and data gathering. A random sample of currently enrolled undergraduate students at BYU, BYUI, and BYUH received e-mail invitations to participate in a research project. To ensure a consistent delivery of expectations for the study, participants received written, rather than oral, directions (Appendix A). Each student was randomly assigned to one of two biology-related topics for a hypothetical research assignment. The written directions indicated which topic and search method (federated or nonfederated) they were to use first to locate citations of journal articles that they felt best addressed the topic. Then, using the same user interface and the same set of seven databases, each student compiled a set of citations, copying and pasting them into the Google Desktop scratch pad displayed to the right of the Internet browser on the screen. A proctor noted the time a participant began researching the first topic. When the participant indicated he or she had completed the research for the assigned topic, the proctor recorded the ending time, captured the collected citations into a Microsoft Word document with a filename indicating participant, topic, and method, and cleared the scratch pad. The process was then repeated for the other research topic, using the other search method, so each student created two citation sets for analysis (Appendix B). Finally, participants completed a questionnaire that asked about their satisfaction with the citations gathered by each method, along with the method they preferred and why (Appendix C). A total of ninety-five undergraduates from the three schools participated (Table 2).

TABLE 2
Summary of Participants (n = 95)

                     Question 1 First   Question 2 First
Federated First      26                 24
Nonfederated First   24                 21

Neutral interface. For both topics, and both of the search methods, the students were presented with the same set of seven databases. We selected databases that (1) were available at all three institutions participating in the study and (2) would include biology information. Additionally, we noted that, on their subject pages, subject librarians at BYU included, on average, just over six databases to be searched using a federated search. Assuming that this number likely represents close to the optimal number of databases to be searched simultaneously based on the subject librarians' experience, we included seven databases in our research protocol. These were Academic Search Premier (EBSCO), BIOSIS Previews (ISI), CINAHL (EBSCO), Health Source: Nursing/Academic Edition (EBSCO), MEDLINE (EBSCO), Research Library (ProQuest), and Web of Science (ISI).

For the nonfederated searches, the students were simply given a bulleted list of links to databases that included the name of the database to be searched. For the federated searches, the list of resources being searched appeared beneath the search box. The default settings for the federated search, as well as the defaults for the individual databases, were the same for all participants so that all would use the same interface and receive the same results for an identical search.

Citation set handling. We created a master spreadsheet in Microsoft Excel in which we recorded all data on participant, topic-method combinations, start/stop times, and questionnaire data. The citation sets were in a variety of different formats due to having been copied from the federated search results pages, the native interface brief results pages, or the native interface detailed display pages. We normalized the citation sets by entering each of them into its own folder in the RefWorks bibliographic manager program. As we did so, we removed duplicate citations and citations to resources other than articles.

To facilitate grading, we exported the citation sets from RefWorks back to Microsoft Word, formatted according to a custom RefWorks format created for that purpose. In addition to printing the Word files for the grader, we also used macros to create a master list of journals used in the citations and to parse the citation sets completely into a comma-delimited text file. We imported the master journal list into one Excel file and the parsed citation sets into another.
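As a small illustration of this step, the sketch below performs the equivalent of those macros in Python rather than the Word/Excel macros actually used; the record fields and file names are hypothetical, and it assumes the citations have already been normalized.

```python
# Sketch only: a stand-in for the Word/Excel macros described above.
# Assumes citations were already normalized (e.g., via the RefWorks export).
import csv

# Hypothetical normalized records: (participant, topic, method, journal, year)
citations = [
    ("S01", "stem cells", "federated", "Diabetes", 2006),
    ("S01", "stem cells", "federated", "Nature Medicine", 2005),
    ("S01", "overweight", "nonfederated", "Obesity Research", 2004),
]

# Parse the citation sets into a comma-delimited text file.
with open("citation_sets.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["participant", "topic", "method", "journal", "year"])
    writer.writerows(citations)

# Build the master list of journals cited, for later impact-factor and
# peer-review lookups against each journal.
master_journals = sorted({journal for *_, journal, _ in citations})
with open("master_journals.csv", "w", newline="") as f:
    csv.writer(f).writerows([j] for j in master_journals)
```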
Analysis of citations. To gather different perspectives on quality, each citation set was judged using two rubrics: one created by librarians consisting of quantitative measures and a more qualitative one approved by a faculty member in BYU's Physiology and Developmental Biology department (Appendices D and E). The quantitative criteria in the librarian-created rubric included the journal impact factor, the proportion of citations from peer-reviewed journals to total citations, and the timeliness of the articles. While timeliness is not critical for all subject areas, it was deemed important to writing an adequate research paper on the two biology-related topics used in the study.

The impact factor, as reported by ISI's Journal Citation Reports, and peer-review status, as determined by consulting Ulrichsweb, were recorded in the master journal list spreadsheet. We used Excel macros and formulas to calculate the average impact factor, the proportion of peer-reviewed to total citations, and the average timeliness of each citation set. Each of the three criteria was weighted equally by normalizing the data for each criterion to a maximum value of ten. Each citation set received a final score by summing the points assigned to each criterion to reach a composite quantitative quality score that was then transferred to the master spreadsheet.

The qualitative, faculty-approved rubric was designed to follow more closely the practices used by faculty members in a college or university setting. The three criteria used in this rubric included relevance to the topic, quality of the individual citations, and quantity of citations. Using the rubric, one undergraduate, a senior majoring in biology, assigned points to each of the 190 citation sets for each of the three criteria and summed them to create a composite quality score that we input into the master spreadsheet.

Statistical analysis. After gathering the data, we analyzed it using analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA) tests. We selected these tests to permit accurate observation of the variance due to the different variables under study. To be consistent, the factors under study included school (BYU, BYUH, BYUI), method (federated versus nonfederated), order (the order in which a given student was asked to use federated and nonfederated searching), and type of question (to ascertain if the topic itself—though both were biological in nature—made a difference in the responses). After controlling for those factors, the analyzed data included the amount of time to complete the hypothetical research assignment, participant satisfaction rating of citations found, preference for search method, and the two composite quality scores.
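The study does not name the statistical package used. As a minimal sketch of the model just described, the factors and responses could be expressed as follows in Python with statsmodels; the CSV export of the master spreadsheet and all column names are hypothetical.

```python
# A sketch of the ANOVA/MANOVA design described above, with made-up
# column names standing in for the master spreadsheet's fields.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("master_spreadsheet.csv")  # hypothetical export

# Univariate ANOVA: time to complete, explained by the four factors.
model = ols(
    "minutes ~ C(school) + C(method) + C(order) + C(question)", data=df
).fit()
print(anova_lm(model, typ=2))

# MANOVA: the response variables considered jointly against the same factors.
mv = MANOVA.from_formula(
    "minutes + satisfaction + librarian_score + faculty_score ~"
    " C(school) + C(method) + C(order) + C(question)",
    data=df,
)
print(mv.mv_test())
```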
Results

Time savings. Statistically significant differences exist between BYU and the other two schools in the time required to complete the hypothetical assignments by the two search methods. While all schools recorded time savings in research by using federated searching, the results were widely dispersed. BYUI students saved an average of only 11 seconds and BYUH students saved an average of 26 seconds. BYU students, on the other hand, saved an average of 4 minutes, 11 seconds. Only the BYU results showed a statistically significant difference between time required for research and the search method used (Table 3).

TABLE 3
Comparison of Results Between Schools
(Fed. = federated; Nonfed. = nonfederated)

        Number of     % Preferred   Average Time to Complete     Satisfaction of Results—      Librarian-created Rubric—      Faculty-created Rubric—
        Participants  Federated     Research (in minutes)(5)     Average Rating (Scale of      Average Quality Scores (Scale  Average Quality Scores (Scale
                                                                 1–7; 7 being highest)(5)      of 0–30; 30 being highest)(5)  of 0–9; 9 being highest)(5)
                                    Fed.        Nonfed.          Fed.        Nonfed.           Fed.        Nonfed.            Fed.        Nonfed.
BYUH    27            81%           21.17       22.14            5.57        4.13(1)(2)        17.71       19.35(4)           5.59(1)     5.74(3)
BYUI    21            52%           23.10       23.54            5.41(1)     5.48              17.67       18.08              6.38        5.79
BYU     47            72%           16.76(1)    21.14(2)         5.77        4.78(2)           18.10       19.20(4)           6.15        6.31
ALL     95            70%           20.34       22.72            5.59        4.80(2)           17.83       18.88(2)           6.04        5.59(1)

(1) Statistically significant difference between schools (α = .05)
(2) Statistically significant difference between methods (α = .05)
(3) Marginally significant difference between schools (α = .10)
(4) Marginally significant difference between methods (α = .10)
(5) These are adjusted means, not pure means. A least-squares mean was utilized to create more robust results due to differing sample sizes between the schools.

All undergraduates, on average, completed their hypothetical assignments 11 percent more quickly using a federated search rather than searching databases individually. Comments about reasons for a choice of preferred method clearly indicate that time savings influenced some, but not all, students' preferences. One BYU student who preferred federated search stated, "[Federated search] definitely saved time and was more convenient to use than the [nonfederated search]." However, another BYU student who saved time with federated search but preferred nonfederated search commented, "While [federated search] did go faster (which to many will be a plus and will sway them to choose [federated search]), I think if I did lean to one or the other, I'd actually pick [nonfederated search]" (emphasis in original).

Satisfaction level of meeting information needs. Only BYU and BYUH showed a statistically significant difference in the satisfaction with citations found using the different search methods. Even including data from BYUI, where no statistically significant difference existed, participants were, on average, 17 percent more satisfied with the results found through federated searching.
When asked to explain the stated preference for a particular method, one BYUH student wrote, "I found that both were not very user friendly… I was frustrated and very tempted to just go back to good old 'Google'!" Another BYUH student stated, "[Federated search] was much more understanding of the search terms I entered in. Instead of running into continuous blocks while searching[,] all of the results were posted from several search engines and I therefore did not feel nearly as frustrated… Having to only use one search engine at a time is annoying, simultaneously is definitely much better."

Preferences. All three schools showed a preference for federated searching over nonfederated searching, though BYUI showed only a marginal preference (52%). Overall, 70 percent of study participants preferred federated searching to nonfederated searching.

There was a statistically significant (α = .05), but insignificant in practice, negative correlation (–0.18) between time to complete research and preference for search method. Although this is the expected correlation, it is interesting that the correlation was not stronger. One would expect that the less time it takes a student to find citations, the more likely the student would be to prefer the method that took less time, but the correlation is actually very small.

Reasons given by study participants who preferred federated search routinely included that it is faster, easier, simpler, and more efficient.13 One participant's reason for preferring federated search begins with "Save time." For this participant, it must have only seemed faster because the time spent using each search method was actually the same.

Extended comments included the following differing viewpoints. A BYUI undergraduate wrote, "… [Federated search] got right to the point. I found more useful information. [With the nonfederated search] I had to do a longer search." A BYU student who preferred nonfederated search stated, "I felt like I had more options to choose from. Also [nonfederated search] lent itself to more abstracts so you could see what the article was about without having to read it. With [federated search] I was relying more on the title which can sometimes be misleading."

Quality of citations. Analysis of citation set quality using the librarian-created rubric revealed that, on average, citation sets gathered by using federated search scored a statistically significant 6 percent lower than those gathered by searching databases individually. Analysis using the faculty-approved rubric revealed no significant difference, statistically or in practice, in the quality of citation sets generated by the two methods.

More than one participant expressed a view that will surely resonate with librarians. When invited to provide additional comments, a BYUH student (who preferred nonfederated search) wrote, "It was weird not being able to use the normal [search engines] I use such as Altavista, Google or Ask. Seems as if these web sites had more relevant info for my topic…." A fellow BYUH student (who preferred federated search) also answered the additional comments question by writing, "I love Google, but this certainly helps to narrow your information down to 'good' resources."

ACRL 13th National Conference Presentation. We presented the results of this study at the ACRL 13th National Conference in Baltimore, both at a face-to-face session and at a virtual conference session.
The face-to-face audience was polled using an i-Clicker personal response system. The sample respondents included about one-third of the audience. The virtual conference audience was polled using the built-in polling features of the LearningTimes software. The attendees were asked about their assumptions related to federated searching saving time, meeting undergraduates' information needs, undergraduate preferences for searching, and the quality of citations found using federated searching compared to nonfederated searching. The results of the polling in the two sessions were combined into one data set to formulate an informal picture of librarian assumptions about federated searching and compare those assumptions to our findings in this study.

The ACRL conference audience agreed with the literature's assumption that federated searching is "quick." When asked if they believed federated searching saves time, 59 percent of the audience answered "Yes." Data from our study indicated that students completed their research, on average, 11 percent more quickly doing a federated search than a nonfederated search.

We asked the ACRL conference audience to predict the undergraduate satisfaction ratings using the same seven-point scale, where one means "Unsatisfied" and seven means "Very satisfied." Figures 1 and 2 show the differences between what we found in the study and the assumptions made by the audience.

FIGURE 1
Actual Ratings Given by Undergraduates
(percentage of undergraduates giving each rating, on a scale of 1–7 where 7 is very satisfied)

Rating         1     2     3     4     5     6     7
Federated      0%    4%    4%    9%    16%   38%   29%
Non-federated  1%    6%    13%   15%   33%   20%   11%

FIGURE 2
Predicted Ratings Given by Librarians
(percentage of librarians predicting each rating, on a scale of 1–7 where 7 is very satisfied)

Predicted Rating  <3    Between 3 and 4   Between 4 and 5   Between 5 and 6   Between 6 and 7
Federated         0%    4%                16%               48%               32%
Non-federated     9%    23%               33%               28%               7%

The audience correctly anticipated that undergraduates prefer federated searching over the alternative. They may well have expected the preference to be even stronger than the 70 percent we found, given that 97 percent of the audience assumed this preference.

When it comes to quality of citations generated, 50 percent of the ACRL audience indicated that they expected the two search methods to be comparable. This expectation seems quite reasonable given that the same databases were available through both search methods. Only 11 percent expected federated search to yield higher quality results, while 39 percent expected better results from searching in the native database interfaces.

Generally, the assumptions made by librarians and the literature seem to correspond closely with the findings of this study. Despite the weaknesses of federated searching, the strengths appear to outweigh the weaknesses in the minds of undergraduates.

Discussion

Overall, undergraduates appear to strongly prefer federated searching, to be more satisfied with the results found via federated searching, and to save time by using federated searching. In the final analysis, the quality of the citations found using the different search methods can be considered ambiguous.
The librarian-created rubric showed that searching databases individually yields higher quality citations than does federated searching. However, that finding depends entirely on the definition of quality used in the rubric. Although the quantitative criteria themselves were meant to be objective, the selection of the criteria was not. In the end, quality is in the eye of the beholder.14 Because real-world educators are more likely to make a subjective judgment of quality—like the faculty-created qualitative rubric—than they are to check impact factors of the journals cited by students, it seems reasonable to give greater credence to the finding that both search methods produce citation sets of similar quality.

Future Studies

The statistical models employed in the analysis of data reported here can only be extrapolated to the populations at the participating schools. However, we speculate that our results will hold when applied to the general population.

This study controlled for, but did not address, the effect of implementation of a federated search engine on time savings, satisfaction, preferences, or citation quality. It is plausible that specific implementations could affect the results and either help or hinder a student's experience. A study examining the effect of various possible implementations of federated searching is needed to determine an optimal implementation. We would suggest that the presence of an abstract with federated search results be an aspect of such a study, since 12 percent of the participants specifically mentioned the usefulness of abstracts in more efficiently selecting better resources.

Finally, this study addressed undergraduate students only. More research is needed to determine the value of federated searching to graduate students and faculty. It is also probable that the results would vary depending on the discipline chosen for the hypothetical research topics, as some disciplines may lend themselves more readily to federated search capabilities than other disciplines.

Conclusion

It is clear that students prefer federated searching over traditional searching, are more satisfied with the results they get from federated searching, and save time when doing a federated search. Also, the quality of results seems comparable to what they would get by searching databases individually. However, federated searching is not the panacea that many were looking toward to resolve all research hurdles placed in front of undergraduate students. Hopefully, metasearching technologies will continue to improve and will solve the problems with the current systems. Then maybe our undergraduates will not always feel like they have to get back to "good old Google" to find what they are looking for.

Notes

1. John Riemer and others, "Rethinking How We Provide Bibliographic Services for the University of California" (Bibliographic Services Task Force Final Report, December 2005), 2. Available online at http://libraries.universityofcalifornia.edu/sopag/BSTF/Final.pdf. [Accessed 2 May 2007].
2. We gratefully acknowledge the assistance of WebFeat in setting up the implementation-neutral interface used in the data gathering.
3. Roger K. Summit, "Dialog and the User: An Evaluation of the User Interface with a Major Online Retrieval System," Interactive Bibliographic Search: The User/Computer Interface, ed. Donald E. Walker (Montvale, N.J.: AFIPS Press, 1971), 83–94;
Stanley A. Elman, "Cost-Benefit Experience with Dialog Full-Text Retrieval," Proceedings of the American Society for Information Science, Volume 10, 36th Annual Meeting, Los Angeles, California, October 21–25, 1973, eds. Helen J. Waldron and F. Raymond Long (Westport, Conn.: Greenwood Press, 1973), 54–55; Stanley A. Elman, "Cost Comparison of Manual and On-Line Computerized Literature Searching," Special Libraries 66, no. 1 (1975): 12–18.
4. For example: Donna Fryer, "Federated Search Engines," Online 28, no. 2 (2004): 16–19; Roy Tennant, "Cross-Database Search: One-Stop Shopping," Library Journal 126, no. 17 (2001): 29–30; Rachel L. Wadham, "Federated Searching," Library Mosaics 15, no. 1 (2004): 20.
5. For example: Frank Cervone, "What We've Learned from Doing Usability Testing on OpenURL Resolvers and Federated Search Engines," Computers in Libraries 25, no. 9 (2005): 10–14; Doris Small Helfer and Jina Choi Wakimoto, "Metasearching: The Good, the Bad, and the Ugly of Making it Work in Your Library," Searcher 13, no. 2 (2005): 40–41; Anne L. Highsmith and Bennett Claire Ponsford, "Notes on Metalib® Implementation at Texas A&M University," Serials Review 32, no. 3 (2006): 190–94.
6. For example: Xiaotian Chen, "MetaLib, WebFeat, and Google: The Strengths and Weaknesses of Federated Search Engines Compared with Google," Online Information Review 30, no. 4 (2006): 413–27.
7. For example: Debbie Campbell, "Federating Access to Digital Objects: PictureAustralia," Program: Electronic Library and Information Systems 36, no. 3 (2002): 182–87; Geoff Daily, "A Case of Clustered Clarity," EContent 28, no. 10 (2005): 44–45.
8. John Boyd and others, "The One-Box Challenge: Providing a Federated Search That Benefits the Research Process," Serials Review 32, no. 4 (2006): 247 (emphasis ours).
9. Ibid., 249 (emphasis ours).
10. Ibid., 251 (emphasis ours).
11. Ibid., 253 (emphasis ours).
12. Jonathan Rochkind, "(Meta)search Like Google," Library Journal (Feb. 15, 2007). Available online at www.libraryjournal.com/article/CA6413442.html. [Accessed 28 February 2007].
13. Boyd, 252. Our participants' terms closely parallel those of West Virginia University students as reported by Penny Pugh in her contribution to the cited column.
14. A report on the Quality Metrics project at Emory University states, "There was a categorical rejection of the value—and, of the very possibility—of substantive quality indicators presented in the ratings system, in particular as these applied to books and journals. One philosophical objection was to the notion of the quantification of quality in such a reductive manner." The "quantification" referred to the creation of a rating of search results "hypothetically conceptualized as computed through numerically weighing various factors such as academic peer comments, non-academic comments, number of times cited, and the like." Rohit Chopra and Aaron Krowne, "Disciplining Search/Searching Disciplines: Perspectives from Academic Communities on Metasearch Quality Indicators," First Monday 11, no. 8 (2006): under "A. Thematic explication of key findings, 4. Quality as User Empowerment to Make Judgments about Quality." Available online at www.firstmonday.org/issues/issue11_8/chopra/index.html. [Accessed 2 January 2007].
Appendix A: Participant Directions

Directions – Forms 1-F and 2-F

• During this study you will be asked to conduct the research necessary to complete the research portion of 2 hypothetical research paper assignments.
  o You will conduct the research for the first assignment by searching multiple databases simultaneously. You will not be able to change the selection of databases.
  o You will conduct the research for the second assignment by searching databases individually. You may search as few or as many of the available databases as you choose.
• Use only the tools that will be provided to you on the computer screen to conduct the research necessary to complete these assignments.
  o Do not consult Google or any other outside research service or aid such as the library catalog or a database not included on the list of resources provided for the study.
  o For your information, the "Scratch Pad" of Google Desktop appears on the screen to the right of the Web browser. You will copy citations to the "Scratch Pad" as instructed below.
• Take as much time as you need to compile a list of enough citations to journal articles to complete each hypothetical assignment to write a 10-page research paper. The citations will be copied as you see them on screen. They do not have to be in a particular format such as APA, MLA, Turabian, etc.
  o Do not include citations to books, videos, websites, etc.
  o A typical journal article citation looks something like this. (Some citations include an abstract or short summary such as this example. Others do not.)

      The criminology of genocide: The death and rape of Darfur
      Hagan, J.; Rymond-Richmond, W.
      Criminology, vol. 43, no. 3, pp. 525-561, 2005
      This study examines Sudanese government involvement in the racially motivated murders of nearly 400,000 Africans from the Darfur region of Sudan. Data were obtained from a victimization survey of Darfurian survivors living in refugee camps in Chad …

  o There is no set number of citations you need to gather. You alone determine what "enough citations" means. Simply gather a sufficient number of usable citations that you feel confident you would be able to complete each hypothetical assignment.
• When you find a citation you want to use, copy all of the citation information available on the screen for the journal article of interest.
  o Highlight text with your mouse as shown in picture 1 below.
  o Press CTRL+C as shown in picture 2 below to copy the highlighted text.
  o Click in the "Scratch Pad" to the right of your screen.
  o Press CTRL+V as shown in picture 3 below to paste the copied text into the "Scratch Pad" as shown in picture 4.

[Pictures 1–4: screenshots showing highlighting text, copying it, and pasting it into the Scratch Pad.]

Appendix B: Hypothetical Assignments

Form 1-F

INTERNAL USE ONLY
Start Time 1: _________  End Time 1: _________
Start Time 2: _________  End Time 2: _________
Net ID: _____________

Hypothetical Research Assignment #1

You've been given an assignment to write a 10-page research paper on the topic outlined below:

Ignoring any ethical issues involved, what is the current status of stem cell research for the treatment of diabetes?

Using only the resources available to you on screen when the study proctor tells you to begin, find enough citations to journal articles to enable you to complete this assignment.
Copy the citations to the "Scratch Pad" on the right-hand side of your screen as shown on the "Directions" sheet.

If you lose your place:
• press the Web browser's home button
• click on the "Form 1-F" link
• click on the "Hypothetical Research Assignment #1" link

After you have compiled a list of enough citations to complete this hypothetical assignment, stop your work and notify the study proctor.

Hypothetical Research Assignment #2

You've been given an assignment to write a 10-page research paper on the topic outlined below:

According to recent research, what are the health risks associated with being overweight?

Using only the resources available to you on screen when the study proctor tells you to begin, find enough citations to journal articles to enable you to complete this assignment. Copy the citations to the "Scratch Pad" on the right-hand side of your screen as shown on the "Directions" sheet.

If you lose your place:
• press the Web browser's home button
• click on the "Form 1-F" link
• click on the "Hypothetical Research Assignment #2" link

After you have compiled a list of enough citations to complete this hypothetical assignment, stop your work and notify the study proctor.

Appendix C: Participant Questionnaire

Questionnaire

1. How satisfied were you with the citations you were able to discover using the first research method (Hypothetical Assignment #1)? (Circle One: 1=Unsatisfied to 7=Very satisfied)
   1  2  3  4  5  6  7
2. How satisfied were you with the citations you were able to discover using the second research method (Hypothetical Assignment #2)? (Circle One: 1=Unsatisfied to 7=Very satisfied)
   1  2  3  4  5  6  7
3. Which method did you prefer? (First)____ (Second)____
   Why?
4. What other comments do you have about your searching experiences?

Appendix D: Librarian-Created Quality Rubric

                          Average Impact Factor   Proportion of Peer Reviewed   Average Timeliness   TOTAL
Student #1 Federated
Student #1 Nonfederated
Student #2 Federated
Etc.

1. Average Impact Factor: The impact factor of the journal from each citation was gathered from the Institute for Scientific Information's Journal Citation Reports database. The impact factors for the set of citations the student submitted were averaged. Any citation without an impact factor was assigned a value of zero and included in the average. The data was then normalized to a maximum value of 10.

2. Proportion of Peer Reviewed: Whether the journal from each citation is peer reviewed was determined by checking Ulrich's Periodicals Directory. The proportion of peer-reviewed articles cited by the student differs qualitatively from the impact factor because not all journals with impact factors are peer reviewed. The data was then normalized to a maximum value of 10.

3. Average Timeliness: The average timeliness of the articles in the citations submitted by each student was recorded.
   a. 0–1 years old = 10 points
   b. 2 years old = 9 points
   c. 3 years old = 8 points
   d. 4 years old = 7 points
   e. 5 years old = 6 points
   f. 6 years old = 5 points
   g. 7 years old = 4 points
   h. 8 years old = 3 points
   i. 9 years old = 2 points
   j. ≥ 10 years old = 1 point
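The composite score from this rubric can be sketched in code. The following minimal Python illustration stands in for the Excel macros and formulas actually used; the Citation fields are hypothetical, and scaling the average impact factor against the highest average observed across all citation sets (max_avg_impact) is one plausible reading of "normalized to a maximum value of 10."

```python
# Sketch only: the librarian-created rubric's composite score (0-30),
# with each of the three criteria normalized to a maximum of 10.
from dataclasses import dataclass

@dataclass
class Citation:
    impact_factor: float   # 0.0 if the journal has no impact factor
    peer_reviewed: bool
    age_years: int         # years since publication

def timeliness_points(age_years: int) -> int:
    # 0-1 years -> 10, 2 years -> 9, ..., 9 years -> 2, >= 10 years -> 1
    return max(1, 10 - max(age_years - 1, 0))

def composite_score(citations: list[Citation], max_avg_impact: float) -> float:
    n = len(citations)  # assumes a non-empty citation set
    avg_impact = sum(c.impact_factor for c in citations) / n
    peer_share = sum(c.peer_reviewed for c in citations) / n
    avg_timeliness = sum(timeliness_points(c.age_years) for c in citations) / n
    # Impact scaled against the best average observed (assumption); the
    # peer-review proportion maps [0, 1] onto [0, 10]; timeliness points
    # already fall on a 1-10 scale.
    return 10 * avg_impact / max_avg_impact + 10 * peer_share + avg_timeliness

# Example: composite_score([Citation(2.1, True, 1), Citation(0.0, False, 4)],
#                          max_avg_impact=3.0)
```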
Appendix E: Faculty-Approved Quality Rubric

Relevance                                                          SCORE
  All citations are related to the topic                           3
  Over half of the citations are related to the topic              2
  1 or more citations are related to the topic                     1
  No citations are related to the topic                            0

Quality*
  All citations are of good quality                                3
  Over half of the citations are of good quality                   2
  1 or more citations are of good quality                          1
  No citations are of good quality                                 0

Quantity
  There are enough citations to write a 10-page research paper     3
  There are enough citations to write a 5–9-page research paper    2
  There are enough citations to write a 1–4-page research paper    1
  There are not enough citations to write a research paper         0

TOTAL:

*Good Quality: Citations reporting primary research results would be considered of higher quality than review articles or other types of articles. Citations from "scholarly" or peer-reviewed sources would be considered of higher quality than citations from "popular" or non–peer-reviewed sources.