Instruction for Web Searching: An Empirical Study

Susan M. Colaric

Susan M. Colaric is an Assistant Professor in the Department of Librarianship, Educational Technology, and Distance Instruction at East Carolina University; e-mail: colarics@mail.ecu.edu.

Users searching the Web have difficulty using search engines and developing queries. Searches tend to be simple, and Boolean operators are used infrequently and incorrectly. Users also are unaware that search engines operate differently from other information retrieval systems. Yet, there is little research on effective instructional methods for teaching users how to search the Web. Research has looked at instructional methods for other types of information retrieval, but those systems differ a great deal from the Web. The purpose of this study was to determine what undergraduate students know about search engines and to examine instructional treatments to aid searchers in using a search engine.

Research has shown that users looking for information on the World Wide Web have a difficult time developing search queries and using a search engine.1–6 Searches tend to be simple, and Boolean operators are used infrequently and incorrectly.7,8 Users also appear to be unaware that search engines operate differently from other information retrieval systems they may use, such as a library online catalog, and this appears to contribute to inappropriate search queries.9–11

How to use a search engine has been taught primarily through examples and short procedural descriptions. In instruction by example, a learner is given a series of worked-out problems and then asked to solve a new problem on his or her own.12 A review of the help sections of six search engines (AltaVista, Excite, Go, Google, Hotbot, and Northern Lights; December 2000) showed that instruction by example is used to explain how to use the engine. This method focuses on two types of knowledge: declarative and syntactic. Declarative knowledge refers to understanding facts, in this case, facts about search engines.13 Syntactic knowledge refers to knowledge of the language units and rules for working with a computer system, in this case, how to structure a search query using terminology the search engine can interpret correctly.14

When users understand the appropriate declarative and syntactic knowledge by studying the example and procedural description, they then can develop a query to fit their information need. This may involve incorporating elements described in the help paragraph that were not included in the example or transferring the example to a completely different domain. Instruction by example presumes the learner will be able to match a new problem situation to a formerly encountered situation, retrieve the solution to the previously solved problem, and map the retrieved information onto the new problem.15

To date, no research has investigated whether this method is effective in teaching users to search the Web. Recent studies examining user interactions with the Web have identified factors associated with successful searching, including declarative knowledge and syntactic knowledge.
But semantic knowledge also may play a role in successfully retrieving information.16–21 Semantic knowledge refers to the user's understanding of the major locations, objects, and actions inside a computer system.22,23 Sometimes referred to as system knowledge, semantic knowledge represents how learners choose to use system features based on an awareness of their functions and capabilities.24 Although earlier findings have demonstrated the importance of semantic knowledge when using other information retrieval systems, research into its role in using search engines is lacking.25–29

Instruction to increase semantic knowledge has been used successfully in other domains, such as computer programming and automobile brakes, to increase understanding and efficient use of those systems.30,31 The focus is on explaining how the system works so that users will better understand how it reacts to input and why particular output occurs.32,33 This has been done with conceptual models: depictions of a system that help learners mentally represent its elements while facilitating the construction of associative links between cause-and-effect relationships. As the learner builds a more complete mental image of how a system works, his or her prediction and inference skills develop and strengthen.34–36

Richard E. Mayer described a conceptual model as words and/or diagrams of a system that highlight the major objects and actions, as well as the causal relations among them, to assist learners in building a mental model.37 Illustrations are often used to represent the interactions among elements of the system.38–39 The use of illustrations allows the user to picture the critical elements of the system while reading explanations of how those elements interact. In these studies, instruction to increase semantic knowledge resulted in better inferencing skills and reasoning about how the system operates. However, the studies were done with closed systems that were not difficult to depict visually.

Hypotheses
The goal of this study was to investigate three instructional methods to determine differences in knowledge acquisition related to three types of knowledge associated with using a search engine. The three instructional methods were instruction by example, conceptual models without illustrations, and conceptual models with illustrations. The three types of knowledge were declarative knowledge, syntactic knowledge, and semantic knowledge. Based on the literature review, three hypotheses were developed:

• Hypothesis #1: There will be significant differences in semantic knowledge acquisition among participants receiving different instructional treatments. Participants who receive conceptual models with illustrations should have the highest scores on the posttest, those who receive conceptual models without illustrations should have the next highest scores, and participants who receive instruction by example should have the lowest scores.

• Hypothesis #2: Semantic knowledge will correlate with syntactic knowledge.

• Hypothesis #3: There will be significant differences in syntactic knowledge acquisition among participants receiving different instructional treatments.
Participants who receive conceptual models with illustrations should have the highest scores on the posttest, those who receive conceptual models without illustrations should have the next highest scores, and participants who receive instruction by example should have the lowest scores.

Methodology
This study was a pretest/treatment/posttest study using print-based materials, with the pretest administered during one class period and the treatment and posttest administered during the next class period. Participants were undergraduate students at a major research university. A cluster sample of ten classes was identified based on whether the curriculum for the course included learning to search the Web. A total of 195 students completed the pretest and were randomly assigned to one of the three instructional groups. Class groups were kept intact, and random assignment to treatments was within each class. Nineteen students were not present for the instructional materials and posttest portion of the study, so their scores were removed from all analyses. This resulted in an unequal number of participants in each group: instruction by example had fifty-nine participants, conceptual models without illustrations had sixty-one, and conceptual models with illustrations had fifty-six.

All the materials were developed using published sources of information on how search engines operate.40–43 The researcher administered the materials to all groups by attending each class during its normal time and day in the second or third week of the spring 2001 semester. Participation in the study was voluntary; informed consent was obtained, and no extra credit was given to the students for participating. The pre- and posttest scoring was done by the researcher; no second scorer was used. However, scoring was dichotomous, leaving no room for disagreement.

The independent variable was the instructional method with three levels (instruction by example, instruction by conceptual models without illustrations, and instruction by conceptual models with illustrations). The dependent variable was posttest scores divided into three sections: (1) declarative knowledge of search engines, as measured by questions testing the participant's factual knowledge of search engines; (2) syntactic knowledge of search engines, as measured by the elements of a search query written for a provided search problem; and (3) semantic knowledge of search engines, as defined by the participant's explanation of how a search engine works. The pretest served as a baseline measure of prior knowledge. Analysis of covariance (ANCOVA) was used to analyze posttest scores for each type of knowledge across the different instructional materials with the pretest as the covariate. This allowed an examination of treatment effects without the participants' prior knowledge affecting the analysis.

Demographic data for participants were analyzed to determine whether the three instructional groups were similar in terms of gender, age, major area of study, semesters completed, computer ownership, hours per day spent searching the Web, and hours per day spent using e-mail. Chi-square analyses were performed to test for dependence, but no significant differences were found. The homogeneity of these characteristics supported the assumption that differences in posttest scores would most likely be related to the instructional materials.

When beginning the data analysis, Levene's test of equality of error variances showed unequal variance. Even after standardizing to z-scores, three participants remained extreme: they reported an average of 8 hours per day on e-mail (compared to the group mean of 1.0 hour) and 2.6 hours per day searching the Web (compared to the group mean of 1.7 hours). These values were considered different enough to distinguish them from the sample on a critical factor. To address the problem, these extreme cases were removed from all analyses, allowing the equal-variance assumption to be met. The final number of participants used for data analysis was 173.
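To make the analytic sequence concrete, the sketch below shows the kind of screening and ANCOVA steps described above in Python. It is a minimal illustration only: the data file and column names (treatment, gender, pretest, posttest) are hypothetical stand-ins, not the study's actual data or code.

```python
# Sketch of the analyses described above; file and column names are
# hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("scores.csv")  # one row per participant

# Chi-square test of independence between treatment group and a
# demographic variable (repeated for each characteristic).
print(stats.chi2_contingency(pd.crosstab(df["treatment"], df["gender"])))

# Levene's test of equality of error variances across treatment groups.
print(stats.levene(*[g["posttest"] for _, g in df.groupby("treatment")]))

# ANCOVA: posttest scores by treatment with the pretest as covariate;
# the Type II table gives the F test for treatment adjusted for pretest.
model = smf.ols("posttest ~ pretest + C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```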
Results and Analyses

What Did They Already Know?
The study participants appeared to have some prior knowledge of search engines. As shown in table 1, most (67%) understood that each search engine operates differently, although almost 10 percent of participants thought search engines were all the same and 23 percent did not know whether they were all the same. Eighteen percent believed that search engines peruse all sites on the Web, and another 18 percent were unsure whether they do. Only 58 percent of participants knew that terms typed into a search engine need to match the indexed sites of that engine in order to be returned. The majority of respondents (62%) understood that the Boolean operator OR retrieves more results than the operator AND. Unfortunately, this means that more than a third of the participants answered incorrectly or did not know.

TABLE 1
Pretest: Declarative Knowledge

Question                           Correct n (%)   Incorrect n (%)   Don't know n (%)
All engines work the same way      116 (67.1)      17 (9.8)          40 (23.1)
Engines look at all sites          110 (63.6)      32 (18.5)         31 (17.9)
Term needs to match index          99 (57.6)       32 (18.6)         42 (23.8)
Gathers sites by using a ____      25 (14.6)       16 (9.4)          131 (76.0)
Or retrieves ____ than and         108 (62.4)      31 (17.9)         34 (19.7)
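Two of the facts probed in table 1 (that a term retrieves a site only if it matches the engine's index, and that OR retrieves more results than AND) can be made concrete with a toy sketch. The three-entry index below is purely illustrative and is not how any production engine is implemented.

```python
# Toy inverted index mapping terms to the "sites" that contain them.
index = {
    "whale":   {"site1", "site3"},
    "dolphin": {"site2", "site3"},
    "shark":   {"site4"},
}

def search(term_a, op, term_b):
    """Evaluate a two-term Boolean query against the toy index."""
    a = index.get(term_a, set())  # a term not in the index retrieves nothing
    b = index.get(term_b, set())
    return a & b if op == "AND" else a | b  # intersection vs. union

print(search("whale", "AND", "dolphin"))  # only the site with both terms
print(search("whale", "OR", "dolphin"))   # every site with either term
```

Because OR takes the union of the two result sets, it can never return fewer sites than AND, which takes their intersection.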
Two questions asked the participants to describe what a search engine would do when given a particular search query. These questions assessed the participants' semantic knowledge of a search engine by asking them to describe, in their own words, what goes on inside the system when a command is executed.44 On the pretest, participants generally were unsuccessful in describing their semantic knowledge, scoring a group mean of 2.87 points out of a possible 12 (standard deviation = 2.93; range = 1, 10). Sixty participants (35%) received no points for this section. Participants who did respond were slightly more likely to include a description of AND as an intersect (45%) than to include OR as a join (41%). Most participants (61%) did not include all of the terms in the question, opting instead to describe what the engine would do with just one or two terms.

Syntactically, participants tended to construct very simple queries, with a mean of three terms per query. Boolean operators were used by 31 percent of the participants, with AND used more often than OR. The majority of participants (87%) failed to include any variable terms (terms not included in the question) in their queries.

Pretest/Posttest Comparison
A series of paired t-tests was run to compare pre- and posttest scores for declarative knowledge, syntactic knowledge, and semantic knowledge across all instructional materials. Scores for declarative knowledge could range from 0 to 5; scores averaged 2.63 on the pretest and 3.98 on the posttest. The difference between the means was statistically significant (t = 12.675, df = 172, p < .05). Scores for syntactic knowledge could range from 0 to 18; scores averaged 5.54 on the pretest and 8.88 on the posttest. The difference between the means was statistically significant (t = 13.751, df = 172, p < .05). Scores for semantic knowledge could range from 0 to 12; scores averaged 2.87 on the pretest and 5.54 on the posttest. The difference between the means was statistically significant (t = 11.043, df = 172, p < .05). These findings lead to the conclusion that the instruction, regardless of type, served to increase participants' scores on declarative, syntactic, and semantic knowledge. (See table 2.)

TABLE 2
Pretest and Posttest Scores by Type of Knowledge

Type of Knowledge       n     Pretest Mean (S.D.)   Posttest Mean (S.D.)   Difference Mean (S.D.)   t
Declarative knowledge   173   2.63 (1.34)           3.98 (1.05)            1.36 (1.41)              12.675*
Syntactic knowledge     173   5.54 (2.48)           8.88 (2.87)            3.34 (3.19)              13.751*
Semantic knowledge      173   2.87 (2.93)           5.54 (2.91)            2.67 (3.18)              11.043*

S.D. = standard deviation
*p < .05
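A paired t-test of this kind is straightforward to reproduce; a minimal sketch follows, again with hypothetical column names rather than the study's actual data.

```python
# Paired t-tests comparing pre- and posttest scores for each knowledge type.
import pandas as pd
from scipy import stats

df = pd.read_csv("scores.csv")  # hypothetical file: one row per participant
for kind in ("declarative", "syntactic", "semantic"):
    result = stats.ttest_rel(df[f"posttest_{kind}"], df[f"pretest_{kind}"])
    print(kind, result)
```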
Hypothesis #1: Semantic Knowledge Acquisition
To test the first hypothesis, "There will be significant differences in semantic knowledge acquisition among participants receiving different instructional treatments," two questions on the pretest and two questions on the posttest asked the participants to describe what a search engine would do when given a particular search query. Analysis of covariance (ANCOVA) was used to analyze posttest scores for semantic knowledge across the different instructional materials using prior semantic knowledge as a covariate (Model F = 7.69, df = 5/172, p < .05). The results failed to support Hypothesis #1; there were no significant differences in semantic knowledge acquisition among participants receiving different instructional treatments when adjusting for pretest scores. (See table 3.)

TABLE 3
ANCOVA for Semantic Knowledge

Group                                      n    Pretest Mean (S.D.)   Unadjusted Posttest Mean (S.D.)   Adjusted Posttest Mean
Example                                    59   1.52 (1.36)           5.14 (2.65)                       5.07
Conceptual models without illustrations    61   1.16 (1.48)           5.69 (2.86)                       5.88
Conceptual models with illustrations       53   1.72 (1.55)           5.81 (3.25)                       5.63

S.D. = standard deviation

Hypothesis #2: Correlations among Types of Knowledge
For the pretest, there was a positive correlation (r = .335) between declarative scores and syntactic scores, a positive correlation (r = .278) between declarative scores and semantic scores, and a positive correlation (r = .298) between syntactic scores and semantic scores. The correlations are considered moderate to low, and all are significant (p < .05).45

For the posttest, there was a positive correlation (r = .310) between declarative scores and semantic scores and a positive correlation (r = .279) between syntactic scores and semantic scores. The correlations are considered moderate to low, and all are significant (p < .05).46 No statistically significant correlation was found between posttest declarative scores and posttest syntactic scores.

The positive correlations between syntactic knowledge and semantic knowledge on both the pretest and the posttest support the second hypothesis, "Syntactic knowledge will correlate with semantic knowledge."

Hypothesis #3: Syntactic Knowledge Acquisition
To test the third hypothesis, "There will be significant differences in syntactic knowledge acquisition among participants receiving different instructional treatments," three questions on the posttest asked the participants to write down what they would type into a search engine given a particular topic. Each query was assessed 0–2 points in three categories: accuracy of concepts identified, inclusion of variable concepts, and accuracy of Boolean expression. Analysis of covariance (ANCOVA) was used to analyze posttest scores for syntactic knowledge across the different instructional materials using prior syntactic knowledge as a covariate (Model F = 5.00, df = 5/172, p < .05). The results supported Hypothesis #3; there were significant differences in syntactic knowledge acquisition among participants receiving different instructional treatments when adjusting for pretest differences.

Post hoc analysis using Scheffé pairwise comparisons showed significant differences between the Examples group and the Conceptual Models with Illustrations group (F[2, 167] = 14.19, p < .05) when testing for syntactic knowledge. The Examples group had an adjusted mean of 9.51 on the posttest; the Conceptual Models with Illustrations group had an adjusted mean of 8.45; and the Conceptual Models without Illustrations group had an adjusted mean of 8.60. These results failed to support the proposition that participants receiving conceptual models with illustrations would have the highest posttest scores, those receiving conceptual models without illustrations the next highest, and those receiving instruction by example the lowest. (See table 4.)

TABLE 4
ANCOVA for Syntactic Knowledge

Group                                      n    Pretest Mean (S.D.)   Unadjusted Posttest Mean (S.D.)   Adjusted Posttest Mean
Example                                    59   5.54 (2.34)           9.51 (2.82)                       9.51
Conceptual models without illustrations    61   5.41 (2.25)           8.57 (2.73)                       8.60
Conceptual models with illustrations       53   5.70 (2.90)           8.53 (3.00)                       8.45

Source of Variation    df    Sum of Squares    Mean Square    F
Pretest (covariate)    1     104.58            104.58         14.19*
Treatment              2     39.55             19.78          2.68
Residual               168   1230.41           7.37

S.D. = standard deviation
*p < .05
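For readers unfamiliar with the post hoc procedure, the sketch below implements a basic Scheffé pairwise comparison. It is an approximation: for ANCOVA-adjusted means the standard error also depends on the covariate, which this simplified version ignores, and the group means, MSE, and error df shown are hypothetical placeholders, not the study's values.

```python
# Minimal Scheffé pairwise comparison for a k-group design.
from scipy.stats import f as f_dist

def scheffe_pair(mean_i, mean_j, n_i, n_j, mse, k, df_error, alpha=0.05):
    """Return the contrast F statistic and whether it exceeds the
    Scheffé critical value (k - 1) * F(alpha; k - 1, df_error)."""
    f_stat = (mean_i - mean_j) ** 2 / (mse * (1.0 / n_i + 1.0 / n_j))
    critical = (k - 1) * f_dist.ppf(1 - alpha, k - 1, df_error)
    return f_stat, f_stat > critical

# Hypothetical usage: compare two of three treatment groups.
print(scheffe_pair(9.5, 8.5, n_i=59, n_j=53, mse=7.4, k=3, df_error=168))
```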
Declarative Knowledge in Relation to Instructional Materials
Although declarative knowledge was not hypothesized to affect syntactic knowledge, it may serve as an indicator of general system knowledge; therefore, analyses were run to determine what participants knew about search engines. Answers to each declarative knowledge question on the pre- and posttest were analyzed to evaluate the participants' knowledge of search engines before and after the instruction. The total number of correct answers to every question increased between the pre- and posttest, and the number of participants responding "I don't know" decreased. One caution is that use of the pretest may have conditioned the participants to look for particular pieces of information, and this may be reflected in the declarative knowledge posttest scores. (See table 5.)

TABLE 5
Comparison of Pretest/Posttest Answers for Declarative Knowledge

                                 Pretest n (%)                               Posttest n (%)
Question                         Correct      Incorrect   Don't know        Correct      Incorrect   Don't know
All engines work the same way    116 (67.1)   17 (9.8)    40 (23.1)         158 (91.3)   15 (8.7)    0 (0.0)
Engines look at all sites        110 (63.6)   32 (18.5)   31 (17.9)         123 (71.1)   38 (22.0)   12 (6.9)
Term needs to match index        99 (57.6)    32 (18.6)   42 (23.8)         130 (75.6)   30 (17.4)   12 (6.9)
Gathers sites by using a ____    25 (14.6)    16 (9.4)    131 (76.0)        124 (71.7)   21 (12.1)   28 (16.2)
Or retrieves ____ than and       108 (62.4)   31 (17.9)   34 (19.7)         157 (90.8)   16 (9.2)    0 (0.0)

ANCOVA was used to analyze posttest scores for declarative knowledge across the different instructional materials using prior declarative knowledge as a covariate (Model F = 14.82, df = 5/172, p < .05). Post hoc analysis using Scheffé pairwise comparisons showed significant differences between the Examples group (adjusted mean of 3.36) and both the Conceptual Models without Illustrations group (adjusted mean of 4.33) and the Conceptual Models with Illustrations group (adjusted mean of 4.27) when testing for declarative knowledge (F[2, 168] = 25.81, p < .05). These results are not surprising because the information needed to answer two of the questions was not included in the instruction by example: that terms typed into a search engine need to match the indexed sites of that engine in order to be returned, and the name commonly given to the program that gathers Web sites and returns them to the search engine. (See table 6.)

TABLE 6
ANCOVA for Declarative Knowledge

Group                                      n    Pretest Mean (S.D.)   Unadjusted Posttest Mean (S.D.)   Adjusted Posttest Mean
Example                                    59   2.66 (1.37)           3.37 (1.11)                       3.36
Conceptual models without illustrations    61   2.54 (1.37)           4.31 (0.85)                       4.33
Conceptual models with illustrations       53   2.61 (1.31)           4.28 (0.89)                       4.27

Source of Variation    df    Sum of Squares    Mean Square    F
Pretest (covariate)    1     20.22             20.22          25.81*
Treatment              2     18.99             9.49           12.11
Residual               168   130.88            0.78

S.D. = standard deviation
*p < .05
Discussion
The purpose of this study was to compare three instructional methods to assist undergraduate students in learning to search the Web. Current methods of Web-searching instruction focus on the use of examples and short procedural descriptions. In instruction by example, a learner is given a series of worked-out problems and then asked to solve a new problem on his or her own.47 Most existing search engine instruction is structured similarly. This provided the first instructional method to be tested: instruction by example. Research based on observations of users searching the Web showed that users who understood how a search engine worked (semantic knowledge) made better use of it and used more appropriate syntax than those who did not have this knowledge.48,49 This led to an examination of the literature in other domains for ways to increase semantic knowledge and to the identification of conceptual models in instruction, the second instructional method tested.50–53 Participants in some of these studies were found to benefit most from conceptual models when illustrations of the system were incorporated into the model.54 This provided the final instructional treatment: conceptual models with illustrations.

The results obtained failed to support the first hypothesis that there would be significant differences in semantic knowledge acquisition among participants receiving different instructional treatments. There are at least three possible explanations for the lack of difference. First, all instructional methods may have contributed to an understanding of how the system works; second, the method for assessing semantic knowledge may not have been sensitive enough; and third, semantic knowledge for a search engine may need to be acquired through interaction with the system, not just by reading about it.

Although it is clear that the conceptual model instruction (both with and without illustrations) included information on how the system works, it also is possible that instruction by example included enough information for the participants to infer how the system works. The instruction by example went into detail about Boolean searching and the differences between AND and OR. This description alone may provide enough information for participants to describe the system in a rudimentary manner, which is supported by the low adjusted-mean posttest score of 5.07 out of 12 for the Examples group (the Conceptual Models without Illustrations group had an adjusted-mean posttest score of 5.88, and the Conceptual Models with Illustrations group had an adjusted-mean posttest score of 5.66). It is more likely, however, that the scoring method for semantic knowledge was too heavily weighted toward an understanding of Boolean operators. Two of the six points contributing to the total semantic score for each question were awarded for describing Boolean operation, and these accounted for the majority of points scored on the posttest. This is not to imply that an understanding of Boolean operators is unimportant for searchers; most retrieval systems, including the Web, use Boolean logic during a search, so understanding these concepts is critical to search success.55,56 Rather, it may be that the scoring method is not sensitive enough.

The low adjusted-mean scores for semantic knowledge mentioned above suggest another explanation for the lack of difference. It may be that semantic knowledge for search engines is best acquired by using the system rather than simply reading about it. Mayer's series of experiments on acquiring system knowledge generally focused on systems other than computers (for example, radar, cameras, density, brakes, and the nitrogen cycle).57 In those studies, paper-based materials were used to describe the systems, and this was the basis for the present study. To date, there are no studies that compare print-based instruction with practice and print-based instruction without practice. However, several researchers in the field of information science have suggested that system knowledge is best acquired during use of the system and that prolonged use increases proficiency.58–61 In particular, Cecilia Katzeff found that practice in retrieving information from a database system resulted in increased proficiency in using the system and that participants reported increased comfort levels with the system.62 Marvin Wiggins also has reported that librarians who develop information retrieval instruction recommend at least one individualized search session at the computer after an introduction to searching fundamentals.63 More research is needed in this area.
The findings of this study do not provide evidence that a conceptual model of a search engine is more effective than instruction by example in contributing to semantic knowledge, although a conceptual model may be effective in increasing understanding of data sets. Further research is needed on whether semantic knowledge of a search engine is best acquired through practice with the system and whether this scoring rubric is an accurate way to determine semantic knowledge.

The fundamental goal of the study was to investigate ways for undergraduates to more easily retrieve information from a search engine. This ultimately comes down to being able to interact with the system in an effective manner: to formulate a syntactically appropriate search query to enter into an engine. After instruction, participants in the study increased the number of search terms used and the number of Boolean operators used. The number of search terms increased to a mean of 3.19 terms (s.d. = .71) for the first question, 4.05 (s.d. = 1.25) for the second question, and 4.55 (s.d. = 1.28) for the third question. Boolean operators were used in the posttest by 79 percent of the participants (n = 137), with AND (n = 130) used more often than OR (n = 72). Of these, only 4 percent were used incorrectly. This indicates that across all instructional materials, participants increased the number of appropriate terms included in the query and were more likely to include Boolean operators. Both of these tactics would lead to a more precise search query and more relevant sites returned.

In this study, it appears that instruction by example was the most effective method for increasing syntactic knowledge. There are three possible explanations for this finding. First, participants in the conceptual model treatments may not have attended to all of the relevant information; second, participants in the conceptual model treatments may have been presented with too much novel information, resulting in cognitive overload; and third, instruction using worked examples may be sufficient for syntactic knowledge acquisition. Because the design of the study involved no direct contact between researcher and participants other than through the written materials, it is impossible to ascertain what parts of the instruction the users attended to. It is possible that participants in the conceptual model treatments perceived information describing how the search engine handled the search string as extraneous and chose to ignore it. However, two factors make this unlikely. First, participants who received the conceptual model materials spent more time on task than did those who received instruction by example. The Conceptual Models without Illustrations group spent a mean of 14.9 minutes on the instruction and posttest (s.d. = 3.20), and the Conceptual Models with Illustrations group spent a mean of 15.9 minutes (s.d. = 3.64), whereas the Instruction by Example group spent 13.21 minutes (s.d. = 3.21).
A second possible explanation is that information included in the conceptual model treatments on how a search engine works interfered with the acquisition of the syntactic information from the ex- ample provided. In describing his cogni- tive load theory, John Sweller referred to this as dividing the learner �s attention between the acquisition of two separate schemas that would result in working memory overload.64 The participants may have split their attention between what needed to be entered into the engine and what was going to happen after that. One then would expect the semantic knowl- edge posttest scores for the participants in the conceptual model treatments to be higher; this was, in fact, the case. For se- mantic knowledge, the Conceptual Mod- els without Illustrations group had an adjusted-mean posttest score of 5.88 and the Conceptual Models with Illustrations group had an adjusted-mean posttest score of 5.66 whereas the Examples group had an adjusted-mean posttest score of 5.07. This may indicate that the concep- tual model groups were working to ac- quire two different types of knowledge that are not closely related. This is sup- ported by the low correlation between semantic and syntactic knowledge found in this study. Another possible explanation for the differences between the groups is that in- struction by example may be sufficient for acquiring syntactic knowledge. There is a long history of research involving worked examples. 65 Worked examples include a problem statement and a pro- cedure for solving the problem, which are meant to show how other similar prob- lems might be solved. Most of this re- search has been done with mathematics and physics instruction. In general, worked examples are associated with early stages of skill development.66 A pre- liminary exploration of this was done in the present study. Participants with low syntactic knowledge on the pretest (those who scored in the lowest 10% of partici- pants) were broken out, and ANCOVA was used to analyze posttest scores across the different instructional materials using the pretest as a covariate. Significant dif- ferences were found (F[2,24] = 3.696, p <.05). Post hoc analysis using Scheffe pair- wise comparisons revealed that partici- pants in the Examples group had a higher syntactic score (adjusted mean = 8.81) than participants in the Conceptual Mod- els with Illustrations group (adjusted mean = 5.57). It may be that instruction by example may be most effective for low prior knowledge learners who are at the beginning of their learning and that later emphasis on semantic knowledge may be beneficial. This area needs further re- search. Conclusions This study is a critical first step in re- searching effective instructional strategies for Web searching. The number of users accessing the Web is increasing, as is the amount of information on the Web. Al- though efforts are being made to increase the usability of the search engine itself, progress has been slow.67,68 Effective in- struction may be the key to increasing the return of relevant results and decreasing user frustration while searching the Web. The dearth of knowledge on effective methods of instruction for Web searching led to this study. Although the results are not conclusive, suggestions can be made as research continues in this area. First, partnering with instructional designers who have a broader understanding of learning theories and instructional meth- ods may result in innovative instructional methods. 
In pairing educational theorists with librarians and information specialists, each party can bring strengths to the table that the other does not have. Second, research in the area should continue. Only through effective testing and sharing of results will the field be able to move toward identifying best practices.

Notes
1. Hsinchun Chen, Andrea L. Houston, Robin R. Sewell, and Bruce R. Schatz, "Internet Browsing and Searching: User Evaluations of Category Map and Concept Space Techniques," Journal of the American Society for Information Science 49 (1998): 582–603.
2. Janette Hill, "The World Wide Web as a Tool for Information Retrieval: An Exploratory Study of Users' Strategies in an Open-ended System," School Library Media Quarterly 25 (1997): 229–36.
3. Bernard J. Jansen, Amanda Spink, and Tefko Saracevic, "Real Life, Real Users, and Real Needs: A Study and Analysis of User Queries on the Web," Information Processing and Management 36 (2001): 207–27.
4. Susan M. Land and Barbara A. Greene, "Project-based Learning with the World Wide Web: A Qualitative Study of Resource Integration," Educational Technology, Research and Development 48 (2000): 45–67.
5. Ard W. Lazonder, Harm J. A. Biemans, and Iwan G. Wopereis, "Differences between Novice and Experienced Users in Searching Information on the World Wide Web," Journal of the American Society for Information Science 51 (2000): 576–81.
6. Amanda Spink, Judy Bateman, and Bernard J. Jansen, "Searching the Web: A Survey of EXCITE Users," Internet Research: Electronic Networking Applications and Policy 9 (1999): 117–28.
7. Ibid.
8. Jansen, Spink, and Saracevic, "Real Life, Real Users, and Real Needs."
9. Chen, Houston, Sewell, and Schatz, "Internet Browsing and Searching."
10. Hill, "The World Wide Web as a Tool for Information Retrieval."
11. Peiling Wang, William B. Hawk, and Carol Tenopir, "Users' Interaction with World Wide Web Resources: An Exploratory Study Using a Holistic Approach," Information Processing and Management 36 (2000): 229–51.
12. Richard E. Mayer, Thinking, Problem Solving, Cognition, 2nd ed. (New York: W. H. Freeman and Company, 1992).
13. John R. Anderson, "Acquisition of Cognitive Skill," Psychological Review 89 (1982): 369–406.
14. Mayer, Thinking, Problem Solving, Cognition.
15. Lauretta Reeves and Robert W. Weisberg, "The Role of Content and Abstract Information in Analogical Transfer," Psychological Bulletin 115 (1994): 381–400.
16. Chen, Houston, Sewell, and Schatz, "Internet Browsing and Searching," 582–603.
17. Hill, "The World Wide Web as a Tool for Information Retrieval."
18. Janette Hill and Michael J. Hannafin, "Cognitive Strategies and Learning from the World Wide Web," Educational Technology, Research and Development 45, no. 4 (1997): 37–64.
19. Land and Greene, "Project-based Learning with the World Wide Web."
20. Lazonder, Biemans, and Wopereis, "Differences between Novice and Experienced Users in Searching Information on the World Wide Web."
21. Wang, Hawk, and Tenopir, "Users' Interaction with World Wide Web Resources."
22. Piraye Bayman and Richard E. Mayer, "Using Conceptual Models to Teach BASIC Computer Programming," Journal of Educational Psychology 80 (1988): 291–98.
23. Richard E. Mayer, "Models for Understanding," Review of Educational Research 59 (1989): 43–46.
24. Hill and Hannafin, "Cognitive Strategies and Learning from the World Wide Web."
25. F. R. Campagnoni and Kate Ehrlich, "Information Retrieval Using Hypertext-based Help System," Proceedings of the 12th Annual International ACM SIGIR Conference (1989): 212–20.
26. Alexandra Dimitroff, "Mental Models Theory and Search Outcome in a Bibliographic Retrieval System," Library and Information Science Research 14 (1992): 141–56.
27. Ingrid Hsieh-Yee, "Effects of Search Experience and Subject Knowledge on the Search Tactics of Novice and Experienced Searchers," Journal of the American Society for Information Science 44 (1993): 161–74.
28. Thomas L. Jacobson and David Fusani, "Computer, System, and Subject Knowledge in Novice Searching of a Full-text, Multifile Database," Library and Information Science Research 14 (1992): 97–106.
29. Gary Marchionini, "Information-seeking Strategies of Novices Using a Full-text Electronic Encyclopedia," Journal of the American Society for Information Science 40 (1989): 54–66.
30. Bayman and Mayer, "Using Conceptual Models to Teach BASIC Computer Programming."
31. Richard E. Mayer and Joan Gallini, "When Is an Illustration Worth Ten Thousand Words?" Journal of Educational Psychology 81 (1990): 452–56.
32. Diane Nahl-Jakobovits, "CD-ROM Point-of-Use Instructions for Novice Searchers: A Comparison of User-centered Affectively Elaborated and System-centered Unelaborated Text," Dissertation Abstracts International 2368A (University Microfilms No. 9334930).
33. Donald A. Norman, "Cognitive Engineering," in User-centered System Design: New Perspectives on Human–Computer Interaction, comp. Donald A. Norman and Stephen W. Draper (Hillsdale, N.J.: Erlbaum Pr., 1986).
34. Phil N. Johnson-Laird, Mental Models (Cambridge, Mass.: Harvard University Pr., 1983).
35. Donald A. Norman, "Some Observations on Mental Models," in Mental Models, comp. D. Gentner and A. L. Stevens (Hillsdale, N.J.: Erlbaum Pr., 1983), 131–53.
36. Norman, "Cognitive Engineering."
37. Mayer, "Models for Understanding."
38. Ibid.
39. Mayer and Gallini, "When Is an Illustration Worth Ten Thousand Words?"
40. Randolph Hock, The Extreme Searcher's Guide to Web Search Engines: A Handbook for the Serious Searcher (Medford, N.J.: CyberAge Books, 2000).
41. Preston Gralla, How the Internet Works, Millennium Edition (Indianapolis: Que, 1999).
42. David Green, "The Evolution of Web Searching," Online Information Review 24 (2000): 124–37.
43. Tham Yoke Chun, "World Wide Web Robots: An Overview," Online & CD-ROM Review 23 (1999): 135–42.
44. Mayer, Thinking, Problem Solving, Cognition.
45. James A. Davis, Elementary Survey Analysis (Englewood Cliffs, N.J.: Prentice-Hall, 1971).
46. Ibid.
47. Mayer, Thinking, Problem Solving, Cognition.
48. Chen, Houston, Sewell, and Schatz, "Internet Browsing and Searching."
49. Wang, Hawk, and Tenopir, "Users' Interaction with World Wide Web Resources."
50. Bayman and Mayer, "Using Conceptual Models to Teach BASIC Computer Programming."
51. Mayer, "Models for Understanding."
52. ———, "Teaching for Thinking: Research on the Teachability of Thinking Skills," volume 9 in The G. Stanley Hall Lecture Series, comp. S. Cohen (Washington, D.C.: American Psychological Association, 1989).
53. Mayer and Gallini, "When Is an Illustration Worth Ten Thousand Words?"
54. Ibid.
55. Valerie Frants, Jacob Shapiro, Isak Taksa, and Vladimir G. Voiskunskii, "Boolean Search: Current State and Perspective," Journal of the American Society for Information Science 50 (1999): 86–95.
56. Karen Sparck Jones and Peter Willett, Readings in Information Retrieval (San Francisco: Morgan Kaufmann, 1998).
57. Mayer, "Models for Understanding."
58. Campagnoni and Ehrlich, "Information Retrieval Using Hypertext-based Help System."
59. Hsieh-Yee, "Effects of Search Experience and Subject Knowledge on the Search Tactics of Novice and Experienced Searchers."
60. Jacobson and Fusani, "Computer, System, and Subject Knowledge in Novice Searching of a Full-text, Multifile Database."
61. Cecilia Katzeff, "System Demands on Mental Models for a Full-text Database," International Journal of Man–Machine Studies 32 (1990): 483–509.
62. Ibid.
63. Marvin Wiggins, "Information Literacy at Universities: Challenges and Solutions," in Information Literacy: Developing Students as Independent Learners, comp. D. W. Farmer and T. F. Mech (San Francisco: Jossey-Bass, 1992), 73–81.
64. John Sweller, Instructional Design in Technical Areas (Victoria, Australia: Australian Education Review, 1999).
65. Robert K. Atkinson, Sharon J. Derry, Alexander Renkl, and Donald Wortham, "Learning from Examples: Instructional Principles from the Worked Examples Research," Review of Educational Research 70 (2001): 181–214.
66. Ibid.
67. Kate Ehrlich, "Applied Mental Models in Human–Computer Interaction," in Mental Models in Cognitive Science: Essays in Honour of Phil Johnson-Laird, comp. Jane Oakhill and Alan Garnham (Hove, East Sussex, U.K.: Psychology Pr., 1996).
68. Candy Schwartz, "Web Search Engines," Journal of the American Society for Information Science 49 (1998): 973–82.