Autocomplete as a Research Tool: A Study on Providing Search Suggestions

David Ward, Jim Hahn, and Kirsten Feist

David Ward (dh-ward@illinois.edu) is Reference Services Librarian and Jim Hahn (jimhahn@illinois.edu) is Orientation Services and Environments Librarian, Undergraduate Library, University of Illinois at Urbana-Champaign. Kirsten Feist (kmfeist@uh.edu) is Library Instruction Fellow, M.D. Anderson Library, University of Houston.

ABSTRACT

As the library website and its online searching tools become the primary "branch" many users visit for their research, methods for providing automated, context-sensitive research assistance need to be developed to guide unmediated searching toward the most relevant results. This study examines one such method, the use of autocompletion in search interfaces, through usability tests of the feature in typical academic research scenarios. The study reports notable findings on user preference for autocomplete features and suggests best practices for their implementation.

INTRODUCTION

Autocompletion, a searching feature that offers suggestions for search terms as a user types text in a search box (see figure 1), has become ubiquitous on large search engines and smaller individual sites alike. Debuting as the "Google Suggest" feature in 2004,1 autocomplete has made inroads into the library realm through inclusion in vendor search interfaces, including the most recent ProQuest interface and EBSCO products. As the feature expands its presence in the library realm, it is important to understand how patrons include it in their workflows and what it implies for library site design as well as for reference, instruction, and other library services.

Figure 1. Autocomplete Implementation

An analysis of search logs from our library's federated searching tool reveals both common errors in how search queries are entered and patterns in the use of library search tools. For example, spelling suggestions are offered for more than 29 percent of all searches, and more than half (51 percent) of all searches appear to be for known items.2 Additionally, punctuation such as commas, along with a variety of correct and incorrect uses of Boolean operators, is prevalent. These patterns suggest that providing some form of guidance in keyword selection at the point of search-term entry could improve the accuracy of composed searches and, subsequently, the relevance of search results.

This study investigates student use of an autocompletion implementation on the initial search entry box for a library's primary federated searching feature. Through usability studies, the authors analyzed how and when students use autocompletion as part of typical library research, asked the students to assess the value and role of autocompletion in the research process, and noted any drawbacks of implementing the feature. Additionally, the study sought to analyze how implementing autocompletion on the front end of a search affected providing search suggestions on the back end (search result pages).

LITERATURE REVIEW

Autocomplete as a plug-in has become ubiquitous on site searches large and small. Research on autocomplete uses a variety of technical terms to refer to systems built on this architecture.
Examples include Real Time Query Expansion (RTQE), interactive query expansion, Search-as-you-Type (SayT), query completion, type-ahead search, auto-suggest, and suggestive searching/search suggestions. The principal research concerns for autocomplete include back-end architecture as well as assessments of user satisfaction with specific implementations.

Nandi and Jagadish present a detailed system architecture model for their implementation of autocomplete, which highlights many of the concerns and desirable features of constructing an index for autocomplete to query against.3 They note in particular that the quality of suggestions presented to the user must be high to compensate for the user-interface distraction of having suggestions appear as a user types. This concern is echoed by Jung et al. in their analysis of how the results offered by their autocomplete implementation met user expectations.4 Their findings emphasize configuring systems to display only keywords that bring about successful searches, noting that "precision [of suggested terms] is closely related with satisfaction." An additional analysis of their implementation noted that suggesting search facets (or "entity types") is a way to enhance autocomplete implementations and aid users in selecting suitable keywords for their search.5 Wu also suggests using facets to group suggestions by type, which improves comprehension of a list of possible keyword combinations.6

In defining important design characteristics for autocomplete implementations, Wu advocates building in a tolerance for misplaced keywords as a critical component. Chaudhuri and Kaushik examine possible algorithms for building this type of tolerance into search systems. Misplaced keywords include terms typed in the wrong field (e.g., an author name in a title field), as well as spelling and word-order errors.7 Systems that are tolerant in this manner "should enumerate all the possible interpretations and then sort them according to their possibilities," a specification Wu refers to as "interpret-as-you-type."8 Additionally, both Wu and Nandi and Jagadish specify fast response time (or synchronization speed) as a key usability feature in autocomplete interfaces, with Nandi and Jagadish indicating 100 ms as a maximum.9,10 Speed is also a concern in mobile applications, which is part of the reason Paek et al. recommend autocomplete for mobile search interfaces, where reducing keystrokes is a key usability feature.11

On the usability end, White and Marchionini assess best practices for implementing search-term-suggestion systems and users' perceptions of the quality of suggestions and search results retrieved.12 They found that offering keyword suggestions before the first set of results was displayed generated more use of the suggestions than displaying them as part of a results page, even though the same terms were displayed in both cases. Providing suggestions at this initial stage also led to better-quality initial queries, particularly where users had little knowledge of the topic for which they were searching. The researchers also warn that, while presenting "query expansion terms before searchers have seen any search results has the potential to speed up their searching . . . it can also lead them down incorrect search paths."13
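The error tolerance described by Wu and by Chaudhuri and Kaushik can be illustrated with a short sketch. The code below is a minimal illustration, not the algorithm from either paper: it ranks candidate completions by the edit distance between the typed text and each candidate's prefix, so small spelling errors still surface the intended term. The candidate list and the error threshold are assumptions made for demonstration.

```javascript
// Standard dynamic-programming Levenshtein edit distance.
function editDistance(a, b) {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                  // deletion
        d[i][j - 1] + 1,                                  // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return d[a.length][b.length];
}

// Rank candidate completions by how closely each one's prefix matches the
// text typed so far, tolerating a small number of spelling errors.
function tolerantComplete(typed, candidates, maxErrors = 2) {
  const t = typed.toLowerCase();
  return candidates
    .map(c => ({ term: c, cost: editDistance(t, c.toLowerCase().slice(0, t.length)) }))
    .filter(s => s.cost <= maxErrors)
    .sort((a, b) => a.cost - b.cost)
    .map(s => s.term);
}

// A misspelled prefix still surfaces the correctly spelled title:
// prints ["journal of chromatography"]
console.log(tolerantComplete("journal of chorma",
  ["journal of chromatography", "journal of chemistry"]));
```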
METHOD

Usability Study

We conducted two rounds of usability testing on a version of the University of Illinois at Urbana-Champaign Undergraduate Library website that contained a search box, with autocomplete built in, for the library's federated/broadcast search tool. The testing followed Nielsen's guidelines, using a minimum of five students for each round, with iterative changes to the interface made between rounds based on feedback from the first group.14 We conducted the initial round in summer 2011 with five undergraduate student workers from the library. The second round was conducted in September 2011 and included eight current undergraduate students with no affiliation to the library.

By design, this method does not allow us to state definitive trends for all autocomplete implementations. It is not a statistically significant method by quantitative standards; rather, it gives us a rich set of qualitative data about the particular implementation (Easy Search) and specific interface (the Undergraduate Library homepage) being studied. The study's questions were approved by the campus institutional review board (IRB), and each participant signed an IRB waiver before participating. Students for the September round were recruited via advertisements on the website and flyers in the library. Gift certificates to a local coffee shop provided the incentive for the study.

The procedure for each interview focused on two steps (see appendix). First, each participant was asked to use the search tool to perform a series of common research tasks: three known-item searches (locating a specific book, journal, and movie) and two searches that asked the student to recall and describe a current or previous semester's subject-based search, then use the search interface to find materials on that topic. Participants were asked to follow a speak-aloud protocol, dictating the decision-making process they went through as they conducted their searches, including why they made each choice along the way. Researchers observed and took notes, transcribing user comments and recording mouse movements, clicks, and other choices made during the searches. Because part of the hypothesis of the study was that the autocomplete feature would be used as an aid for spelling search queries correctly, titles with potentially challenging spelling were chosen for the known-item searches. Participants were not told about or instructed in the use of autocomplete; rather, it was left to each of them to discover it and individually decide whether to use it during each of the searches conducted as part of the study.

In the second part of the interview, researchers asked students about their use (or lack thereof) of the autocomplete feature during the initial set of task-based questions. These questions focused on identifying when students felt the autocomplete feature was helpful as part of the search process, why they used it when they did, and why they did not use it in other cases. Students were also asked more general questions about ways to improve the implementation of the feature. In the second round of testing (with students from the general campus populace), an additional set of questions was asked to gather student demographic information and to have the participants assess the quality of the choices the autocomplete feature presented to them.
These questions were based in part on the work of White and Marchionini, who had study participants conduct a similar quality analysis.15

Autocomplete Implementation

The autocomplete feature was written in JavaScript and based on the jQuery autocomplete plugin (http://code.google.com/p/jquery-autocomplete/). Autocomplete plugins generally pull results either from a set of previous searches on a site or from a set of known products and pages within a site. For this study, the initial dataset used was a list of thousands of previous searches conducted with the library's Easy Search federated search tool. However, this data proved to be extremely messy and slow to search. In particular, the data contained a high number of problematic searches, including entire citations pasted in, misspelled words, and long natural-language strings. Constructing an algorithm to clean up and make sense of these difficult queries would have required too much time and overhead, so we investigated other sources.

Researchers looked at autocomplete APIs for both Bing (http://api.bing.com/osjson.aspx?query=test) and Google (the Suggest toolbar API: http://google.com/complete/search?output=toolbar&q=test). Both worked well and produced similarly relevant results for the test searches. Significantly, the search algorithms behind each of these APIs were able to process the search query into far more meaningful and relevant results than what was achieved through the test implementation using local data. These algorithms also corrected misspelled words entered by users by presenting correctly spelled results in the dropdown list. We ultimately chose the Google API on the basis of its XML output.
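For readers interested in the mechanics, the sketch below shows roughly how such a search box can be wired together. It is a hedged illustration rather than our production code: it assumes jQuery with the jQuery UI autocomplete widget (a descendant of the plugin used in the study) and a hypothetical /suggest endpoint standing in for the Google or Bing services, assumed here to return an OpenSearch-style JSON payload like the Bing osjson API.

```javascript
// Minimal wiring sketch: a search box backed by a remote suggestion service.
// Assumes jQuery + jQuery UI; the /suggest endpoint is hypothetical.
$(function () {
  $("#easy-search").autocomplete({
    minLength: 2, // wait for enough context before suggesting
    delay: 100,   // keep perceived latency near the 100 ms guideline
    source: function (request, response) {
      $.getJSON("/suggest", { q: request.term }, function (data) {
        // OpenSearch-style payload: ["typed text", ["suggestion", ...]]
        response(data[1]);
      });
    },
    select: function (event, ui) {
      // Run the search as soon as a suggestion is chosen.
      $(this).val(ui.item.value).closest("form").submit();
    }
  });
});
```

In a configuration like the study's, the source callback would instead fetch the Google toolbar XML and extract the suggestion strings before calling response().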
FINDINGS

The study's findings were consistent across both rounds of usability testing. Notable themes include using autocomplete to correct spelling on known-item searches (specific titles, authors, etc.), to build student confidence with an unfamiliar topic, to speed up the search process, to focus broad searches, and to augment search-term vocabulary. The study also details important student perceptions about autocomplete that can guide the implementation process in both library systems and instructional scenarios. These perceptions include autocomplete's popularity, a desire for local resource suggestions, various cosmetic page changes, and the perceived value of autocomplete to peers.

Spelling

"It definitely helps with spelling," said one student, responding to a prompt asking how they would explain the autocomplete feature to friends. Correcting search-term spelling was a key way in which students chose to make use of the autocomplete feature. For known-item searches, all eight students in the second round of testing selected suggestions from autocomplete at least two times out of the three searches conducted. Of those eight students, four (50 percent) used autocomplete every time (three out of three opportunities), and four (50 percent) used it on two out of three opportunities. Of this latter group, three did in fact refer to the dropdown selections when typing their queries but did not actively select a suggestion from the dropdown all three times.

In choosing to use autocomplete for spelling correction, one student noted that autocomplete was helpful "if you have an idea of a word but not how it's spelled." It is interesting to note, with regard to clicking on the correct spellings, that students do not always realize they are choosing a different spelling than what they had started typing. An example is the search for Journal of Chromatography, which some students started spelling as "Journal of Chormo," then picked the correct spelling (starting "Chroma") from the list, without apparently realizing it was different. This is an important theme: if a student does not have an accurate spelling from which to begin, the search might fail, or the student will assume the library does not have any information on the chosen topic. This is particularly true in many current library catalog interfaces, which do not provide spelling suggestions on their search result pages.

Locating Known Items

Another significant use of the autocomplete feature was in cases where students were looking for a specific item but had only a partial citation. In one case, a student used autocomplete to find a specific course text by typing in the general topic (e.g., "Africa") and then an author's name that the course instructor had recommended. The Google implementation did an excellent job of combining these pieces of information into a list of actual book titles from which to choose. This finding echoes those of White and Marchionini, who note that autocomplete "improved the quality of initial queries for both known item and exploratory tasks."16 This is an important finding because, overall, students are looking for valid starting points in their research (see "Confidence" below), and autocomplete was found to be one way to support finding instructor-approved items in the library. This echoes findings from Project Information Literacy, which show that students typically turn to instructor-sanctioned materials first when beginning research.17 This use case typically arises when an instructor suggests an author or seminal text on a research topic to a student, often with an incomplete or inaccurate title. One participant also mentioned wanting the autocomplete feature to suggest primary or respected authors based on the topic entered.

Confidence

"[Autocomplete is] an assurance that it [the research topic] is out there . . . you're not the first person to look for it."—student participant

The study uncovered multiple themes related to user confidence. First, some participants noted that seeing the suggestions provided by autocomplete verifies that what they are searching for is "real," validating their research idea and giving them the sense that others have previously searched their topic successfully. When students were asked about the source of the autocomplete suggestions, most thought the results were generated from previous user searches. Their responses to this question highlighted the notion of "popularity ranking": many were confident that the suggestions presented resulted from popular local queries. In addition, one participant thought the results were based on synonyms of the word they typed, while another believed results were included only if the typed text matched descriptions of materials or topics currently present in the library's databases.
Some students did note the similarity of the search results to Google's suggestions, but they did not make an exact connection between the two. This assumption that the terms are vetted seems to lend authority to the suggestions themselves and parallels the research of Jung et al., who investigated satisfaction based on the connection between user expectations on selecting an autocomplete keyword and the results retrieved.18 The benefit of autocomplete-provided suggestions in this context was noted even in cases when participants did not explicitly select items from the autocomplete list.

Students' confidence in their own knowledge of a topic also factored into when they used autocomplete. Participants reported that if they knew a topic well (particularly if it was one they had previously written a paper on), it was faster to just type it in without choosing a suggestion from the autocomplete list. One participant also noted that common topics (e.g., "someone's name and biography") would be cases in which they would not use the suggestions.

After the first round of usability testing, a question was added to the post-test assessment asking students to rate their confidence as researchers on a five-point scale. All participants in the second round rated themselves as a four or five out of five. While this confirms findings on student confidence from studies like Project Information Literacy, this assessment question ultimately had no correlation to actual use of autocomplete suggestions during the subject-based research phase of the study. Rather, confidence in the topic itself seemed to be the defining factor in use.

Speed

The study also showed that speed is a factor in deciding when to use autocomplete functionality. Specifically, autocomplete suggestions should be implemented so that they are not perceived as slowing down the search process. This includes displaying results in a way that is easily ignored if students want to type in an entire search phrase themselves, and presenting suggestions so that they are easy to read and quick to select. Autocomplete is perceived as a time-saver when clicking on an item shortens the amount of typing students need to do. However, some students will ignore autocomplete altogether when they already know what they want to search and feel that stopping to look at the suggestions would slow them down. In the study, different participants would often cite speed as a reason both for selecting and for not selecting an item on the same question, particularly with the known-item searches. This finding indicates that a successful implementation should include both a speedy response (as noted above in Nandi and Jagadish's research on delivering suggestions within 100 ms, Paek et al.'s research on reducing keystrokes, and White and Marchionini's finding that providing suggested words was "a real time-saver")19 and an interface that does not force users to select an item to proceed or obscure the typing of a search query.
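In interface terms, the behavior participants asked for maps onto a familiar pattern: debounce keystrokes so the suggestion list trails the user's typing without ever blocking it, and discard stale responses that arrive out of order. The sketch below is a minimal, framework-free illustration of that pattern; fetchSuggestions() is a hypothetical stand-in for whichever suggestion API is in use, assumed to return a promise of an array of strings.

```javascript
// Debounced, non-blocking suggestion fetching for a plain text input.
function attachSuggestions(input, listEl, fetchSuggestions) {
  let timer = null;
  let latest = 0; // sequence number used to drop out-of-order responses

  input.addEventListener("input", function () {
    clearTimeout(timer);
    const text = input.value;
    if (text.length < 2) { // too little context to suggest anything
      listEl.innerHTML = "";
      return;
    }
    timer = setTimeout(async function () {
      const seq = ++latest;
      const suggestions = await fetchSuggestions(text);
      if (seq !== latest) return; // a newer keystroke superseded this request
      listEl.innerHTML = suggestions.map(s => "<li>" + s + "</li>").join("");
    }, 100); // ~100 ms of typing inactivity before asking for suggestions
  });
}
```

Because the list is rendered outside the input and nothing intercepts keystrokes, a student who already knows the search phrase can simply keep typing and press enter, satisfying the "easy to ignore" requirement noted above.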
Focusing Topics

"It helps to complete a thought." "[Autocomplete is] extra brainstorming, but from the computer."—participant responses

The above quotes indicate the use of autocomplete as a tool for query formulation and search-term identification, a function closely related to the Association of College and Research Libraries (ACRL) Information Literacy Standard Two, which includes competencies for selecting appropriate search keywords and controlled vocabulary related to a topic.20 These quotes also parallel a similar finding from White and Marchionini,21 whose study included a user comment that autocomplete "offered words (paths) to go down that I might not have thought of on my own."

The use of autocomplete for scoping and refining a topic also parallels elements of the reference interview, specifically the open and closed questions typically asked to help a student define which aspects of a topic they are interested in researching. This finding has many exciting implications for how elements and best practices from both classroom instruction and reference methodologies can be injected directly into search interfaces, to aid students who may not consult with a librarian directly during the course of their research.

Autocomplete was used at a lower rate, and in different ways, for subject searching compared to known-item searching. Three out of eight participants (38 percent) from the second round of testing did not use autocomplete at all for subject-based searching (zero of two opportunities). Five out of eight participants (62 percent) used autocomplete on one of their two search opportunities (50 percent). No participants used autocomplete on both search opportunities.

The stage of research a student was in helped to indicate where and how autocomplete could be useful in topic formulation and search-term selection for subject searches. Participants indicated that they would use autocomplete for narrowing ideas at a later stage of a paper, when they knew more about what they wanted or needed specifics on their topic. Early in a paper, however, some participants indicated they just wanted broad information and did not want to narrow possible results too soon. This finding also supports previous research from Project Information Literacy, which describes the student desire to learn the "big-picture context" as a key function in the early part of the research process.22 At this topic-focusing stage, some participants told us that the search suggestions reminded them of topics that had been discussed in class. Further, the study showed that autocomplete suggests aspects of topics that students had not previously considered, and one participant indicated that she might change her topic if she saw something interesting in the list of suggestions, particularly something she had not yet thought of.

Interface Implementation

Though students who opted to use the autocomplete feature were generally satisfied with the results generated, some recommended increasing the number of autocomplete suggestions in the dropdown menu, both to increase the probability of finding their desired topic or known item and to potentially surface related topics for narrowing their search. In addition, students recommended increasing the width of the autocomplete text box, as its present proportions are insufficient for displaying longer suggestions without text wrapping.
Some students also noted that increasing the height of the dropdown menu containing the autocomplete suggestions might reduce the need to scroll through the results and might help draw attention to all results for users who elect not to use the scroll bar. Beyond these suggested improvements to the functionality of the autocomplete feature, students also noted a few cosmetic changes they would like to see implemented. In particular, students would prefer larger text and a better use of fonts and font colors. One student noted that if different fonts and colors were used, the results generated might stand out more and better draw users' attention to the recommended search terms.

Perceived Value to Peers

Most students who participated in the study stated that they would recommend that their classmates use the autocomplete feature for two primary purposes: known-item searches and locating alternative options for research topics. One student noted that she would recommend using this feature to search keywords "easily and efficiently," while another indicated that the feature helps to link to other related keywords. This finding also revealed that users were not intimidated by the feature and did not see it as a distraction from the search process, an initial researcher concern.

CONCLUSION AND FUTURE DIRECTIONS

Implementation Implications

Implementing autocomplete functionality that accounts for the observed research tendencies and preferences of users makes for a compelling search experience. Participant selection of autocomplete suggestions varied between the types of searches studied. Spelling correction was the one universally acknowledged use. For subject-based searching, confidence in the topic searched and the stage of research emerged as indicators of the likelihood that autocomplete suggestions would be taken. The use and effectiveness of providing subject suggestions requires further study, however. Students expect suggestions to produce usable results within a library's collections, so the source of the suggestions should incorporate known, viable subject taxonomies to maximize benefits and avoid leading students down false search paths. There is an ongoing need to investigate possible search-term dictionaries outside of Google, such as lists of library holdings, journal titles, article titles, and controlled vocabulary from key library databases. The "brainstorming" aspect of autocomplete for subject searching is an intriguing benefit that should be more fully explored and supported. In combination with these findings, participants' positive responses to some of the assessment questions (including first impressions of autocomplete and willingness to recommend it to friends) indicate that autocomplete is a viable tool to incorporate site-wide into library search interfaces.

Instruction Implications

Traditional academic library instruction tends to focus on thinking of all possible search terms, synonyms, and alternative phrasings before the onset of actual searching and engagement with research interfaces. This process is later refined in the classroom by examining controlled vocabulary within a set of search results.
However, observations from this study (as well as researcher experience with users at the reference desk) indicate that students in real-world situations often skip this step and rely on a more trial-and-error method for choosing search terms, beginning with one concept or phrasing rather than creating a list of options to try sequentially. The implication for classroom practice is that instruction on search-term formulation should include a review of autocomplete suggestions as well as practical methods for integrating these suggestions into the research process. This is particularly important as vendor databases move toward making autocomplete a default feature. Proper instruction in its use can help advance ACRL Information Literacy goals and provide a practical, context-sensitive way to explain how a varied vocabulary is important for achieving relevant results in a research setting.23

Reference Implications

As with classroom instruction, traditional reference practice emphasizes a prescriptive path for research that involves analyzing which aspects of a topic or alternate vocabulary will be most relevant to a search before search-term entry. Open and closed questioning techniques encourage users to think about different facets of their topic, such as time period, location, and type of information (e.g., statistics) that might be relevant. An enhanced implementation of autocomplete could incorporate these best practices from the reference interview into the list of suggestions to aid unmediated searching. One way to do this is to present faceted results that change on the basis of the user's selection of the type and nature of information they are looking for, such as a time period, format, or subject (a brief sketch of this idea follows below). For broadcast and federated searching interfaces, this could extend into the results users are then presented with, specifically selecting items or databases on the basis of suggestions made during the search-entry phase, rather than presenting users with a multitude of options to make sense of, some of which may be irrelevant to the actual information need.
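To make the faceted-suggestion idea concrete, the sketch below groups suggestions by an attached facet label before display. The suggestion objects and facet names are illustrative assumptions, not output from any existing suggestion API.

```javascript
// Group suggestion objects by their facet label for display under
// headings such as "Journals" or "Topics."
function groupByFacet(suggestions) {
  const groups = new Map();
  for (const s of suggestions) {
    if (!groups.has(s.facet)) groups.set(s.facet, []);
    groups.get(s.facet).push(s.term);
  }
  return groups;
}

// Hypothetical suggestions for a query beginning "civil war":
const grouped = groupByFacet([
  { term: "civil war newspapers", facet: "Primary Sources" },
  { term: "civil war reconstruction", facet: "Topics" },
  { term: "Journal of the Civil War Era", facet: "Journals" },
]);

for (const [facet, terms] of grouped) {
  console.log(facet + ": " + terms.join(", "));
}
```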
Finally, the findings on the use of autocomplete also have implications for search-results pages. Many of the common uses (e.g., spelling suggestions and additional search-term suggestions) should also be standard on results pages. This, too, is a common feature of commercial interfaces. Bing, for example, includes a Related Searches feature (on the left of a standard results page) that suggests context-specific search terms based on the query. This feature is also part of its API (http://www.bing.com/developers/s/APIBasics.html). Providing these reference-without-a-librarian features is essential both in establishing user confidence in library research tools and in developing the research skills and understanding of information literacy concepts necessary to becoming better researchers.

Our findings on autocomplete use draw attention to user needs and library support across search processes; specifically, autocomplete functionality offers support while forming search queries and can improve the results of user searching. For this reason, we recommend that autocomplete functionality be investigated for implementation across all library interfaces and websites to provide unified support for user searches. The benefits of autocomplete can be maximized by consulting with reference and instruction personnel on the benefits noted above and collaboratively devising best practices for integrating autocomplete results into search-strategy formulation and classroom-teaching workflows.

REFERENCES

1. "Autocomplete—Web Search Help," Google, http://support.google.com/websearch/bin/answer.py?hl=en&answer=106230 (accessed February 7, 2012).

2. William Mischo, internal use study, unpublished, 2011.

3. Arnab Nandi and H. V. Jagadish, "Assisted Querying Using Instant-Response Interfaces," in Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data (New York: ACM, 2007), 1156–58, doi: 10.1145/1247480.1247640.

4. Hanmin Jung et al., "Comparative Evaluation of Reliabilities on Semantic Search Functions: Auto-complete and Entity-centric Unified Search," in Proceedings of the 5th International Conference on Active Media Technology (Berlin, Heidelberg: Springer-Verlag, 2009), 104–13, doi: 10.1007/978-3-642-04875-3_15.

5. Hanmin Jung et al., "Auto-complete for Improving Reliability on Semantic Web Service Framework," in Proceedings of the Symposium on Human Interface 2009 on Human Interface and the Management of Information, Information and Interaction, Part II: Held as Part of HCI International 2009 (Berlin, Heidelberg: Springer-Verlag, 2009), 36–44, doi: 10.1007/978-3-642-02559-4_5.

6. Hao Wu, "Search-As-You-Type in Forms: Leveraging the Usability and the Functionality of Search Paradigm in Relational Databases," VLDB 2010, 36th International Conference on Very Large Data Bases, September 13–17, 2010, Singapore, 36–41, http://www.vldb2010.org/proceedings/files/vldb_2010_workshop/PhD_Workshop_2010/PhD%20Workshop/Content/p7.pdf (accessed February 7, 2012).

7. Surajit Chaudhuri and Raghav Kaushik, "Extending Autocompletion to Tolerate Errors," in Proceedings of the 35th SIGMOD International Conference on Management of Data (New York: ACM, 2009), 707–18, doi: 10.1145/1559845.1559919.

8. Wu, "Search-As-You-Type in Forms," 38.

9. Wu, "Search-As-You-Type in Forms."

10. Ibid.

11. Tim Paek, Bongshin Lee, and Bo Thiesson, "Designing Phrase Builder: A Mobile Real-Time Query Expansion Interface," in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services (New York: ACM, 2009), 7:1–7:10, doi: 10.1145/1613858.1613868.

12. Ryen W. White and Gary Marchionini, "Examining the Effectiveness of Real-Time Query Expansion," Information Processing and Management 43, no. 3 (2007): 685–704, doi: 10.1016/j.ipm.2006.06.005.

13. White and Marchionini, "Examining the Effectiveness of Real-Time Query Expansion," 701.

14. Jakob Nielsen, "Why You Only Need to Test with 5 Users," Jakob Nielsen's Alertbox (blog), March 19, 2000, http://www.useit.com/alertbox/20000319.html (accessed February 7, 2012).
See also Walter Apai, "Interview with Web Usability Guru, Jakob Nielsen," Webdesigner Depot (blog), September 28, 2009, http://www.webdesignerdepot.com/2009/09/interview-with-web-usability-guru-jakob-nielsen/ (accessed February 7, 2012).

15. White and Marchionini, "Examining the Effectiveness of Real-Time Query Expansion."

16. Ibid.

17. Alison J. Head and Michael B. Eisenberg, "Lessons Learned: How College Students Seek Information in the Digital Age," Project Information Literacy Progress Report, December 1, 2009, http://projectinfolit.org/pdfs/PIL_Fall2009_finalv_YR1_12_2009v2.pdf (accessed February 7, 2012).

18. Jung et al., "Comparative Evaluation of Reliabilities on Semantic Search Functions."

19. Jung et al., "Comparative Evaluation of Reliabilities on Semantic Search Functions"; Paek, Lee, and Thiesson, "Designing Phrase Builder"; White and Marchionini, "Examining the Effectiveness of Real-Time Query Expansion."

20. Association of College and Research Libraries (ACRL), "Information Literacy Competency Standards for Higher Education," http://www.ala.org/acrl/standards/informationliteracycompetency (accessed February 7, 2012).

21. White and Marchionini, "Examining the Effectiveness of Real-Time Query Expansion."

22. Head and Eisenberg, "Lessons Learned."

23. Association of College and Research Libraries (ACRL), "Information Literacy Competency Standards for Higher Education."

APPENDIX. Questions

Task-Based Questions

1. Does the library have a copy of "The Epic of Gilgamesh"?
2. Does the library own the movie "Battleship Potemkin"?
3. Does the library own the journal/article "Journal of Chromatography"?
4. For this part, we would like you to imagine you are doing research for a recent paper, either one you have already completed or one you are currently working on.
   a. What is this paper about? (What is your research question?)
   b. What class is it for?
   c. Search for an article on YYY
5. Same as 4, but for a different class/topic, and search for a book on YYY

Autocomplete-Specific Questions

1. What is your first impression of the autocomplete feature?
2. Have you seen this feature before?
   a. If so, where have you used it?
3. Why did you/did you not use the suggested words (the words in the dropdown)?
4. Where do you think the suggestions are coming from? Or, how are they being chosen?
5. When would you use this?
6. When would you not use it?
7. How can it be improved?
8. Overall, what do you like/not like about this option?
9. Would you suggest this feature to a friend?
10. If you were to explain this feature to a friend, how might you explain it?

Assessment and Demographic Questions

Autocomplete Feature

1. [KNOWN ITEM] Rate the quality/appropriateness of each of the first five autocomplete dropdown suggestions for your search (5-point scale):
   1—Poor Quality/Not Appropriate
   2—Low Quality
   3—Acceptable
   4—Good Quality
   5—High Quality/Very Appropriate
2. [SUBJECT/TOPIC SEARCH] Rate the quality/appropriateness of each of the first five autocomplete dropdown suggestions for your search (5-point scale):
   1—Poor Quality/Not Appropriate
   2—Low Quality
   3—Acceptable
   4—Good Quality
   5—High Quality/Very Appropriate

3. Please indicate how strongly you agree or disagree with the following statement: "The autocomplete feature is useful for narrowing down a research topic." (5-point scale):
   1—Strongly Disagree
   2—Disagree
   3—Undecided
   4—Agree
   5—Strongly Agree

Demographics

1. Please indicate your current class status:
   a. Freshman
   b. Sophomore
   c. Junior
   d. Senior
2. What is your declared or anticipated major?
3. Have you had a librarian come talk to one of your classes or give an instruction session in one of your classes? If yes, which class(es)?
4. Please rate your overall confidence level when beginning research for classes that require library resources for a paper or assignment (5-point scale):
   1—No Confidence
   2—Low Confidence
   3—Reasonable Confidence
   4—High Confidence
   5—Very High Confidence
5. What factors influence your confidence level when beginning research for classes that require library resources for a paper or assignment?