The Librarian, the Machine, or a Little of Both: A Comparative Study of Three Information Literacy Pedagogies at Oakland University

Elizabeth W. Kraemer, Shawn V. Lombardo, and Frank J. Lepkowski

Each year, Oakland library faculty provide information literacy instruction for Rhetoric 160, a first-year writing course, through a combination of WebCT-based online tutorials and in-class teaching. For this study, twelve sections of RHT 160 during the winter 2005 term were selected to compare three instructional methods: online instruction only, live instruction, and the current “hybrid” combination of live instruction and online tutorials. The sections were assigned to one of the instructional methods and, to assess student learning, all students (n=224) completed identical pre- and post-tests. Results of the study, including differences in student performance in relation to pedagogy, are discussed.

Elizabeth W. Kraemer is Assistant Professor, Shawn V. Lombardo is Associate Professor, and Frank J. Lepkowski is Associate Professor in the Kresge Library at Oakland University; e-mail: kraemer@oakland.edu, lombardo@oakland.edu, and lepkowsk@oakland.edu, respectively. The authors gratefully acknowledge Patrick Howell, MLIS, for his assistance with the statistical analysis in this study.

At Oakland University (OU), a doctoral institution of 17,000 students in Rochester, Michigan, course-integrated library instruction for freshman composition is a long-standing joint commitment between the faculty from the Department of Rhetoric, Communication and Journalism and those at Kresge Library. The goal of this partnership is to introduce students to the library research process early in their academic careers. In a busy instruction program that has reached approximately one-third of the total student population in recent years, information literacy instruction for Rhetoric 160: Composition II (RHT 160) comprises about half of the library’s total teaching load.

Oakland University’s student population has grown by more than 40 percent over the past 20 years.1 Unfortunately, while the library’s instructional load has had to increase to keep pace with enrollment, the number of library faculty actually has diminished over the same time period, leaving fewer librarians to teach more instruction sections. As a result, librarians at OU found it difficult to maintain the traditional full week of instruction for RHT 160 students, even as the increasingly complex process of finding and retrieving sources called for more, not less, instructional time. By 2000, the library had reached a breaking point.

At about this time, Oakland University administrators were promoting to faculty the use of the WebCT online course management system. In 2001, the library decided to supplement information literacy instruction with an online library instruction module that students could access by logging into their RHT 160 course in WebCT. By presenting some material online in this fashion, librarians were able to reduce the number of contact hours with each section of RHT 160. A series of tutorials and quizzes in this online module covered the physical layout of the library, essential policies and procedures, and the basics of searching the Voyager online catalog; after completing these tutorials, the RHT 160 classes received two contact hours of instruction from a librarian.
A pre-test and post-test in the WebCT module allowed for some assessment of student learning.

The transition to this hybrid instructional model was accompanied by a great deal of apprehension among the library faculty. Some librarians felt that its rigid tutorials and multiple-choice questions would be no substitute for personal instruction; others felt that the basics of library information and procedures could be taught successfully online. Still others thought that library instruction for RHT 160 could be presented completely via WebCT, thus freeing librarian time for other purposes.

From this internal debate, grounded in librarians’ deeply held commitment to teaching research skills to students at Oakland University, the present study was conceived and conducted. Specifically, the investigators wished to answer the following questions:

Research Question 1: Does library instruction, regardless of format, improve student learning outcomes?
Research Question 2: Does the format of library instruction impact student learning outcomes?
Research Question 3: Does instruction method affect students’ satisfaction with their library instruction?
Research Question 4: Does instruction method influence students’ assessment of their own research abilities?

The following study reports on the first year of an ongoing project to address these areas of inquiry.

Literature Review

Advantages and Drawbacks of Computer-Assisted Instruction

Computer-assisted instruction (CAI) describes the use of computers to teach students without the intercession of a human instructor. A review of the literature reveals again and again that libraries often turn to some form of CAI to manage one inescapable problem: the growing need for library instruction outpaces the capabilities of overstretched staff at many institutions.2 Beyond that, CAI can also be used to allay librarian burnout, as noted by Dixon and colleagues at the University of Tennessee, Knoxville: “Even with changes over the years, maintaining the enthusiasm necessary to make the classes interesting became increasingly difficult.”3 Computer-assisted instruction has also been used to standardize the library instruction given to students.4

In addition to the benefits that CAI offers libraries and librarians, it holds a number of advantages for students as well, all of which can improve understanding and retention of the material presented in electronic tutorials.
First, computer-assisted instruction “offers an alternative learning approach that may be appealing to students who have experienced only the lecture method.”5 Several authors have also praised the opportunity to integrate “bells and whistles” into CAI tutorials to keep students’ interest;6 similarly, the interactivity that can be worked into electronic tutorials helps maintain focus and attention by allowing students to individualize their learning experiences.7 Instruction developers also laud the fact that in a CAI environment, users can work through the lessons at their own speed.8 Author Don Tapscott endorses this feature from the student perspective, commenting on his own experience in an online class: “…unlike traditional courses, I could stop and review something I didn’t understand or fast-forward through material I felt I had grasped.”9 Another significant benefit to the student is the opportunity for contact with technology; in fact, in outlining the ways in which CAI lends itself to library instruction, Ann Turner notes, “Using the computer to teach computer skills is an efficient approach.”10

Offering online library instruction in a courseware environment can add yet another layer of usability to an instructional program. Major advantages of course management tools such as WebCT or Blackboard are their data storage and analysis features, which allow researchers to track completion rates and grades in one place.11 Additionally, courseware allows for built-in student evaluation and immediate feedback on quizzes and tests, letting students check their progress.12

While CAI provides a number of benefits, libraries have repeatedly reported on problems arising from the addition of computer-assisted instruction to their teaching programs. One concern of librarians and teaching faculty alike is that CAI tends to reduce personal contact time between student and librarian.13 Students may also lose the chance to connect with “the physical and social space of the library.”14 These losses may leave students feeling uncomfortable coming to the library or approaching librarians for help when needed.

Finally, research shows that the success of computer-assisted instruction depends in large part on the motivation of students: many libraries have found that if students do not receive some form of external motivation—such as a class requirement or an assignment grade—to complete the electronic component of a library instruction program, the majority of students simply will not do it.15 One study further suggests that students’ isolation while working through an electronic library instruction tutorial may negatively impact their motivation and attitude.16

How Does CAI Fare Against Other Teaching Methods?

For years, institutions have conducted studies comparing the effectiveness of computer-assisted instruction to more traditional modes of delivery. Most studies comparing computer-assisted instruction to librarian-led sessions find few significant differences in post-test performance between instructional formats.17

For example, Germain, Jacobson, and Kaczor compared the effectiveness of a Web-based module to that of live instruction for students in the first-year experience program at the University at Albany. Using an identical pre- and post-test, the researchers compared outcomes for each teaching method with a sample size of 284 students.
They found no significant difference in post-test mean scores between Web-based and live instruction, indicating that both delivery methods were equally effective. On the basis of these results, the authors decided to use the Web-based tutorial for library instruction, but to have students complete it in class under the guidance of their classroom teacher.18 In a similar study, conducted at SUNY Oswego, Nichols, Shaffer, and Shockey found no significant difference in either student satisfaction or performance regardless of instruction format.19

Dissenting results were found in the study conducted by Marion Churkovich and Christine Oughtred at Deakin University. In this study, which included 174 first-year sociology students, the researchers sought to determine whether an online tutorial could be used to successfully train students without the presence of a librarian. One distinct difference in this institution’s approach was the purchase of a customized online instruction tool rather than in-house tutorial design. Students were randomly assigned to one of three instruction methods: independent online tutorial completion, librarian-mediated online tutorial completion, or traditional face-to-face instruction with a librarian. The researchers were surprised to find that the face-to-face group showed significantly greater improvement from pre-test to post-test than the other two methods of instruction. Additionally, they found that students completing the tutorial independently were less confident after library instruction than students in the other two groups. Churkovich and Oughtred concluded that “contact with and instruction by a librarian is desirable for the best learning outcomes and confidence in development of information literacy skills.”20 Based on the findings of this study, the librarians at Deakin University planned to continue to use face-to-face instruction for subject-specific classes, while relying on an updated version of the online tutorial only to teach basic catalog skills to First Year Introduction Program students.21

Very few studies have examined the effectiveness of library instruction delivered specifically via course management software. One such study was conducted in 1998 at Western Kentucky University, where online library instruction delivered via WebCT was compared to classroom instruction for a one-credit library research skills course.22 Study participants self-selected their groups: 45 students registered for the traditional group and 43 enrolled in the Web group, for a total sample size of 88. Web students were not required to come to campus to complete the work, but had access to their instructor via e-mail if needed. The researchers, Linda B. Alexander and Robert C. Smith, chose a causal-comparative model for this study, using only a post-test to measure student performance. T-test comparisons for the two groups showed no significant difference on post-test scores, suggesting that the quality of the two instructional formats was very similar. In a comparison of student ratings of activities and resources in each group, Web students consistently indicated more positive attitudes about how beneficial they found the course to be.
Further analysis of the data revealed a higher preference for Web-based courses among nontraditional students than among traditional students.

Methodology

Study Design

The target population for this study consisted of students enrolled in Rhetoric 160: Composition II (RHT 160) at Oakland University for the 2005 winter semester. The researchers opted to conduct a three-way comparative study of pedagogical methods based upon the library’s current RHT 160 instructional model.23 To facilitate this model, four faculty members from the Department of Rhetoric, Communication and Journalism, each teaching three sections that term, agreed to have their classes participate in the study. Each instructor’s three RHT 160 sections were randomly assigned to receive library instruction using one of three delivery formats: all-online using WebCT; a hybrid of face-to-face and online; and all-live instruction by a librarian. Numerous studies have shown that library instruction, regardless of format, improves student performance;24 therefore, the authors elected not to employ a noninstructional control group in this study. To avoid variation in student performance due to differing instructional styles among the three librarians conducting the study, one librarian worked with all sections taught by a particular Rhetoric instructor. Random assignments of librarian to instructor were made early in the winter semester.

The authors received approval from Oakland University’s Institutional Review Board (IRB) in late 2004 to undertake the project. In February of 2005, the authors visited the classes to which they were assigned in order to outline the study and distribute the study guidelines and consent forms. Students who did not turn in signed consent forms were still required to complete the course of bibliographic instruction, with the understanding that their scores would not be included in the study.

TABLE 1
Content in WebCT for Each Instructional Delivery Method

Content                  Nonexperimental   Hybrid   Face-to-face   Online
Pre-Test                        X             X          X            X
Library Virtual Tour            X             X                       X
Top Ten Facts                   X             X                       X
Voyager Basics                  X             X                       X
Keyword Searching                                                     X
Evaluating Periodicals                                                X
FirstSearch                                                           X
InfoTrac                                                              X
LexisNexis                                                            X
Post-Test                       X             X          X            X

The study began with 247 subjects in the twelve experimental sections. Subjects were required to complete all assigned quizzes and to attend all assigned instruction sessions; students who did not complete all of the requirements of their library instruction model were eliminated from the study. The final sample consisted of 224 students, distributed approximately equally into the three instructional groups—live, hybrid, and online. In addition, data were available for comparative purposes from 963 students in all other sections of RHT 160 who completed their library instruction during the winter 2005 semester. All subjects in this group, labeled the nonexperimental group in table 1, received their library instruction via the typical hybrid instruction model.

Procedure

Table 1 illustrates the content that was conveyed via WebCT for the three experimental versions of RHT 160, as well as the nonexperimental sections; note that the mixed instructional model for the hybrid experimental sections was the same as that of all the nonexperimental sections for the semester.
Instructional content was delivered to students either in the classroom or online through Web-based tutorials accessed via WebCT. The tutorials were designed to deliver, as closely as possible, both the information and the hands-on opportunities experienced by students working with a librarian. For example, the FirstSearch, InfoTrac, and LexisNexis tutorials were constructed in a split-screen format, allowing students to conduct live searches in the database while reading instructions and tips in a separate frame, as illustrated in figure 1.

FIGURE 1
Screen Grab of Kresge Library’s Split-screen Tutorial Format

The students in the experimental groups, regardless of instructional method, met in a library computer classroom over the course of a week. Subjects assigned to the live group attended a total of three hours of library instruction, which combined librarian-led demonstrations and lectures with active learning exercises that students completed in class. Subjects in the hybrid group first completed their Web-based instruction via WebCT, then attended two hours of instruction by a librarian. Finally, the online group completed all of their instruction—consisting of eight online tutorials and accompanying quizzes—in WebCT. Students in the online group completed all of their work in the computer classroom, and a librarian was available to address technological issues; however, neither the librarian nor the Rhetoric instructor answered questions about the content of the tutorials. For the hybrid and online groups, quizzes followed each tutorial in WebCT; these quizzes measured student understanding of the tutorial and also permitted access to the next quiz within WebCT.

All study subjects completed an identical 15-point pre-test and final exam (post-test) in WebCT. Success of instruction was measured by matching pre-test and final exam scores for each student.

Results

The data were analyzed using the SPSS statistical package, and a significance level of .05 was chosen because of its widespread acceptance in data analysis. The mean scores on the pre-test and final exam for each of the experimental samples, as well as the nonexperimental group, are included in table 2; as illustrated in the table, the live group scored highest on both the pre-test and the final exam, although final exam performance for the live and hybrid groups appears almost indistinguishable. A one-way analysis of variance (ANOVA) of pre-test scores, conducted to identify baseline disparities among the instructional groups prior to instruction, did not reveal any significant initial difference among the group means (F=1.888, p=.154). In addition, the researchers performed an independent-samples t-test to compare the pre-test and final exam results of the entire experimental sample (N=224) with those of the nonexperimental group (N=963). For both the pre-test (t=.845, p=.399) and the final exam (t=1.529, p=.127), no significant differences in the means were found, suggesting that performance on the pre-test and final exam was not impacted by subject involvement in the study.
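For readers who wish to replicate these baseline checks outside SPSS, the sketch below runs the same two tests (a one-way ANOVA on pre-test scores and an independent-samples t-test) in Python with SciPy. The score arrays are simulated stand-ins with the study’s group sizes, not the study data, so the printed statistics will not match the values reported above; only the procedure mirrors the analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-ins for pre-test scores (out of 15) with the study's group sizes.
pre_live = rng.integers(6, 16, size=74)
pre_hybrid = rng.integers(6, 16, size=72)
pre_online = rng.integers(6, 16, size=78)

# One-way ANOVA: were the three groups comparable before instruction?
# (The study reported F=1.888, p=.154, i.e., no baseline difference.)
F, p = stats.f_oneway(pre_live, pre_hybrid, pre_online)
print(f"baseline ANOVA: F={F:.3f}, p={p:.3f}")

# Independent-samples t-test: pre-test scores of the experimental sample
# (N=224) against the nonexperimental sections (N=963).
# (The study reported t=.845, p=.399.)
pre_experimental = np.concatenate([pre_live, pre_hybrid, pre_online])
pre_nonexperimental = rng.integers(6, 16, size=963)
t, p = stats.ttest_ind(pre_experimental, pre_nonexperimental)
print(f"experimental vs. nonexperimental: t={t:.3f}, p={p:.3f}")
```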
Research Question One: Does Library Instruction, Regardless of Format, Improve Student Learning Outcomes?

For the entire experimental group, 76.8 percent of the subjects showed improvement from their pre-test to their final exam scores. Another 15.2 percent scored the same on each test, and the remaining 8.0 percent of students actually scored lower on the final exam following library instruction than they had on the pre-test. A paired-samples t-test performed on the entire experimental group revealed that, overall, students in the experimental group showed significant improvement (t=15.403, p=.000) in their test performance.

The researchers also conducted paired-samples t-tests on the three instructional groups to determine whether significant improvement in test performance occurred for each sample following library instruction. Across the entire experimental group, the average improvement in test scores after students completed their library instruction was 1.906 points; the hybrid group showed the greatest improvement, with a mean difference of 2.278 points. For each of the samples, table 2 contains the difference in means between the pre-test and final exam scores as well as the t-values generated from the paired-samples t-tests; as indicated in the table, the t-values for all of the experimental groups were significant at the .000 level. Therefore, the researchers conclude with a high degree of confidence that significant improvement in test performance occurred for all subjects following library instruction, regardless of the format of that instruction.

TABLE 2
Paired-Samples t-Tests: Exam Results

Group                       n    Pre-test Mean   Final Exam Mean   Difference of Means   r        t
All experimental sections   224  10.795          12.700            1.906                 .391*    15.403*
Live                        74   11.108          12.960            1.851                 .596*    10.395*
Hybrid                      72   10.639          12.917            2.278                 .114**   9.184*
Online                      78   10.641          12.256            1.615                 .412*    7.727*
Nonexperimental sections    963  10.685          12.507            1.821                 .446*    29.451*

*p<.000  **n.s.

A correlation analysis was conducted as well, to investigate the relationship between pre-test and final exam scores. Table 2 provides the Pearson’s correlation coefficient (r) for each of the samples. With the exception of the hybrid group, all groups displayed a significant positive correlation between pre-test and final exam scores; that is, students who scored high on the pre-test tended to score high on the final exam. Data from the hybrid group were explored more closely to identify possible causes for the lack of significant positive correlation between pre-test and final exam scores for these subjects. A scatterplot of the pre-test and final exam results for the hybrid group reveals a number of students whose scores improved drastically following library instruction: these students scored well below the mean on the pre-test and well above the mean on the final exam, adding support to the finding above that the hybrid group showed the greatest improvement in test scores. Although subjects did not know their scores on the exams until after the final exam was completed, the lack of correlation of test scores for the hybrid group may indicate a higher level of motivation or engagement during their library instruction, or it may indicate that some of these students were less familiar than their peers with the library and information literacy topics before participating in the study. Clearly, the dramatic improvement in these students’ scores demonstrates the value of library instruction in teaching information literacy skills to undergraduates.
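The per-group results in table 2 rest on two routine computations, illustrated in the following sketch: a paired-samples t-test on matched pre-test and final exam scores, and a Pearson correlation between the two. The paired arrays below are simulated for illustration with one group’s sample size; the study data are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated matched scores for one group of 72 students on the 15-point tests.
pre = rng.integers(6, 16, size=72).astype(float)
final = np.clip(pre + rng.normal(2.0, 2.0, size=72), 0, 15)

# Paired-samples t-test: did the same students improve after instruction?
t, p = stats.ttest_rel(final, pre)
print(f"paired t={t:.3f}, p={p:.4f}, mean gain={(final - pre).mean():.3f}")

# Pearson correlation between pre-test and final exam scores (the r column
# of table 2); a high r means high pre-test scorers also scored high later.
r, p_r = stats.pearsonr(pre, final)
print(f"Pearson r={r:.3f}, p={p_r:.4f}")
```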
Research Question Two: Does the Format of Library Instruction Impact Student Learning Outcomes?

As noted above and shown in figure 2, the hybrid group demonstrated the greatest improvement in test performance following instruction. And, in fact, both groups that had contact with a librarian—that is, the live and hybrid groups—scored higher on the final exam than the online group, which received no substantial librarian interaction. To determine whether these differences among the experimental groups were significant, however, the researchers conducted a number of additional statistical analyses.

FIGURE 2
Pre-test and Final Exam Mean Scores, by Instruction Type

Because the study design incorporates three levels of the independent variable (instruction type), a one-way ANOVA was chosen to compare the means of the experimental groups on final exam scores. The analysis revealed a significant difference (F=4.477, df=2, p=.012) in the final exam means of the samples. The post-hoc Scheffé multiple comparisons procedure was then conducted to determine where the differences existed. The Scheffé comparison found significant differences between the live and online groups (p=.031) and between the hybrid and online groups (p=.048). The online group, then, performed significantly lower on the final exam than the other two groups.

The researchers also wanted to determine whether the inclusion of pre-test scores in the analysis of variance would affect these findings. To accomplish this, the researchers conducted an analysis of covariance (ANCOVA), with final exam scores serving as the dependent variable, instruction method as the between-subjects independent variable, and pre-test scores serving as the covariate. According to Cribbie and Jamieson, the use of ANCOVA in a pre-test/post-test design adjusts for variance that occurs within individual subjects, making it more powerful in detecting differences that occur as a result of the treatment being studied.25 In addition, ANCOVA is particularly useful in post-test analysis where a “ceiling effect” occurs,26 as in the case of this study, in which students could not score higher on the final exam than a perfect score of fifteen points.

The ANCOVA measure of between-subjects effects in the current study (F=4.103, df=2, p=.018) indicates that, indeed, significant differences still exist in final exam scores when pre-test scores are taken into account. Post-hoc pairwise comparisons conducted through SPSS, however, reveal a significant difference only between the hybrid and online groups (p=.023); after adjusting for disparities in pre-test performance, the hybrid group scored significantly higher than the online group on the final exam. Interestingly, the post-hoc tests did not detect a significant distinction between the live and online groups, although the live group posted the highest mean of the three samples and performed almost identically to the hybrid group. It may be that a larger sample size would reveal a difference between these groups, but it may also be that the ANCOVA test simply reflects the observation that the hybrid group displayed the largest improvement in test performance of the three samples. The results of these analyses ultimately suggest that the differences in means displayed by the three instructional groups cannot be attributed to chance but to the pedagogical model by which they received their library instruction.
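The analyses of variance and covariance reported here can also be expressed compactly outside SPSS. The sketch below is illustrative only, using simulated scores with the study’s group sizes: it runs the one-way ANOVA with SciPy, applies the Scheffé criterion for pairwise comparisons computed directly from the ANOVA quantities, and fits the ANCOVA as an ordinary least-squares model in statsmodels (final exam score as dependent variable, instruction type as the factor, pre-test score as the covariate).

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
sizes = {"live": 74, "hybrid": 72, "online": 78}
gains = {"live": 1.9, "hybrid": 2.3, "online": 1.6}  # rough gains, per table 2
rows = []
for name, n in sizes.items():
    pre = rng.integers(6, 16, size=n).astype(float)  # simulated 15-point pre-tests
    final = np.clip(pre + rng.normal(gains[name], 1.8, size=n), 0, 15)
    rows += [{"group": name, "pre": a, "final": b} for a, b in zip(pre, final)]
df = pd.DataFrame(rows)

# One-way ANOVA on final exam scores across the three instruction types
# (the study reported F=4.477, df=2, p=.012).
F, p = stats.f_oneway(*(g["final"].to_numpy() for _, g in df.groupby("group")))
print(f"ANOVA: F={F:.3f}, p={p:.3f}")

# Scheffe post-hoc criterion, from the ANOVA quantities: a pairwise mean
# difference is significant when its square exceeds
# (k-1) * F_crit(k-1, N-k) * MSE * (1/n_i + 1/n_j).
k, N = 3, len(df)
means = df.groupby("group")["final"].mean()
ns = df.groupby("group")["final"].size()
mse = ((df["final"] - df["group"].map(means)) ** 2).sum() / (N - k)
f_crit = stats.f.ppf(0.95, k - 1, N - k)
for a, b in [("live", "online"), ("hybrid", "online"), ("live", "hybrid")]:
    bound = (k - 1) * f_crit * mse * (1 / ns[a] + 1 / ns[b])
    verdict = "significant" if (means[a] - means[b]) ** 2 > bound else "n.s."
    print(f"{a} vs. {b}: {verdict}")

# ANCOVA: final exam as dependent variable, instruction type as the
# between-subjects factor, pre-test score as the covariate
# (the study reported F=4.103, df=2, p=.018 for the group effect).
ancova = smf.ols("final ~ C(group) + pre", data=df).fit()
print(anova_lm(ancova, typ=2))
```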
Research Questions Three and Four: Does Instruction Method Affect Students’ Satisfaction with Their Library Instruction? Does Instruction Method Influence Students’ Assessment of Their Own Research Abilities?

The pre-test and final exam included additional qualitative questions designed to assess student satisfaction with their library instruction and to measure changes in students’ perceptions of their own research skills after they had completed library instruction. To measure student satisfaction with the library instruction experience, the researchers asked students to rate their level of satisfaction on a five-point scale (from “very unsatisfied” to “very satisfied”). A chi-square analysis of student responses to the satisfaction question in relation to their instruction type reveals that a significantly lower number of students in the online group reported being “very satisfied” with their library instruction than students in the other two samples. Here again, the hybrid and live groups performed similarly on this question, with 41.9 percent of the live group and 40.3 percent of the hybrid group reporting being “very satisfied” with their library instruction, compared with only 16.7 percent of the online group answering in the same manner. However, when responses are grouped into broader categories of satisfaction (“very unsatisfied or unsatisfied”; “neutral”; and “satisfied or very satisfied”), the chi-square analysis does not reveal any significant variations in satisfaction levels among the three groups; the online group responded positively to their library instruction format (“satisfied or very satisfied”) at approximately the same rate as the other two groups. Therefore, the researchers conclude that student satisfaction with each of the instructional models was approximately equal, although students in the online group were, perhaps, not quite as enthusiastic as students in the other two samples.

The researchers also asked students, on both the pre-test and final exam, to evaluate their own research skills on a four-point scale (from “I need a lot of help getting started” to “I am thoroughly able to do research in the library”) to determine whether their self-assessment would improve following library instruction. As shown in table 3, prior to instruction the majority of all experimental subjects (64.3 percent) reported being “somewhat able to do research in the library,” with only 14.7 percent reporting that they were “thoroughly able to do research in the library.” After receiving library instruction, students’ assessment of their own research abilities improved markedly, with 50.4 percent of subjects reporting that they felt “thoroughly able to do research in the library” and only .9 percent of students responding that they still “needed a lot of help getting started.” Overall, 52.7 percent of subjects scored themselves higher on the self-assessment of their research skills following library instruction, with 43.3 percent rating themselves the same on both the pre-test and final exam.

TABLE 3
Self-Assessment of Research Skills, All Experimental Groups

Response                                             Pre-test N (%)   Final Exam N (%)   Percent Change
I need a lot of help getting started.                11 (4.9%)        2 (.9%)            –4%
I need some help getting started.                    36 (16.1%)       4 (1.8%)           –14.3%
I am somewhat able to do research in the library.    144 (64.3%)      105 (46.9%)        –17.3%
I am thoroughly able to do research in the library.  33 (14.7%)       113 (50.4%)        +35.7%

Chi-square analyses of students’ self-assessment responses on the final exam revealed no significant differences by instruction type, nor was there any real distinction in the number of students who rated their self-assessment higher following instruction for any of the groups (R=9.025, p=.06). Overall, then, the three experimental groups expressed comparable levels of satisfaction with their library instruction experience as well as similar assessments of their own research skills after receiving library instruction.
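The satisfaction comparison above is a chi-square test of independence on a contingency table of satisfaction category by instruction type. The sketch below shows the collapsed three-category version in Python with SciPy; the cell counts are invented for illustration (only the column totals match the study’s group sizes) and do not reproduce the study’s responses.

```python
import numpy as np
from scipy import stats

# Invented counts for illustration only: satisfaction category (rows) by
# instruction type (columns: live, hybrid, online), collapsed to the three
# broader categories described above.
observed = np.array([
    [3, 4, 6],     # very unsatisfied or unsatisfied
    [14, 15, 24],  # neutral
    [57, 53, 48],  # satisfied or very satisfied
])
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square={chi2:.3f}, dof={dof}, p={p:.3f}")
```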
Discussion

From this preliminary study of RHT 160 library instruction, the researchers conclude that contact with a librarian is an important component of student learning. The hybrid group showed the greatest improvement in performance, perhaps due to the combination of instructional methods, which likely appeals to diverse learning styles. However, each of the three experimental groups—the live, hybrid, and online pedagogical models—did show definite improvement from their pre-test to final exam scores; in fact, the difference in mean final exam scores between the highest-performing (live) and lowest-performing (online) groups was less than a single point, or one question. This suggests that online instruction can be effective. Because RHT 160 library instruction focuses on introductory information literacy skills and the mechanics of searching library databases, it may be that many of these topics lend themselves well to both live and online instruction.

A limitation of this study was the relative easiness of the pre-test and final exam. Subjects scored well on the pre-test, answering, on average, almost 70 percent of the questions correctly even before beginning any form of library instruction. The ease of the tests, combined with the ceiling effect on the pre-test and final exam, resulted in relatively low levels of variance among the groups, as well as a skewed data distribution. Attempts to normalize the data by transforming variables yielded a more normal distribution but did not produce statistical results different from those described in this paper. It should be noted, too, that ANOVA is robust in the face of minor departures from normality,27 and the ANCOVA test mitigates the skewness caused by the ceiling effect present in this study. Nonparametric tests run on the data also revealed the same results. In addition, the results of this study support what is both revealed graphically (see figure 2) and theorized by librarians at Oakland University—that librarian contact is a valuable part of information literacy instruction. Therefore, the researchers are confident in the validity of these preliminary results.

Benefits of Course Management Software

The library has gained many advantages from using the WebCT course management package for library instruction. Of primary value is the software’s data storage capability through its integrated grade book, which allows librarians to download and analyze student results to assess learning. Another benefit of offering the instruction modules in WebCT is that, conceptually at least, library instruction is more fully integrated with students’ other RHT 160 assignments; the presence of the library instruction icon alongside class materials in their online course environment may encourage students to approach library instruction in a more serious manner.
And because many Oakland students are required to use WebCT in other classes, there is often no software learning curve involved in completing the library instruction modules and tests. Finally, although the librarians at Oakland have had to adapt library instruction materials for use in WebCT, this has resulted in a more flexible online instruction program. The library tutorials are housed on the library’s server, rather than in WebCT, to facilitate updates of content.28 Because of this, each tutorial is portable and can be easily adapted for other courses. For example, one of the researchers has adapted the basic RHT 160 FirstSearch tutorial for use in an online nursing class to teach students how to use CINAHL, a health sciences database available through FirstSearch.

Future Plans

During the course of this study, subjects assigned to the online group provided informal feedback about their experiences in completing the WebCT-based tutorials. Overall, students found the experience positive, although they wanted the tutorials to be “flashier,” incorporating more interactivity and visual interest to enhance their online experience. Oakland will, in the near future, move away from WebCT courseware as a result of the merger of Blackboard and WebCT. This transition will provide an excellent opportunity to overhaul the online instructional modules for RHT 160 in response to student comments.

In brief discussions with the Rhetoric faculty participating in the study, the instructors expressed reservations about the all-online instruction format. Lack of student motivation plays a large role in their concerns, as does lack of contact with a librarian. Because progress through the online tutorials cannot be tracked, students may not actually be reading the tutorials before attempting the quizzes; unfortunately, WebCT offers no effective way to make quizzes available to students based on tutorial completion. As a result, it is impossible to determine whether students are actively working through the tutorials. However, it is also difficult to ensure that students are actively engaged when they are receiving live instruction. It should be noted that this study did not attempt to measure the intangible benefits of classroom contact with a librarian, such as a greater comfort level in visiting the library and willingness to ask for assistance. The researchers will take RHT faculty concerns into account in any changes made to the RHT 160 library instruction program.

This study currently is being replicated. For this second phase, the study incorporates a larger sample size, and the pre-test/final exam instrument has been revised to increase difficulty and better reflect what is being taught both online and in the classroom. Although data have not yet been analyzed, the researchers believe that these changes will increase variance and make an analysis of variance more robust. Other planned enhancements to the study include an exploration of students’ attitudes toward the three library instruction pedagogical models and a review of subjects’ final paper bibliographies to explore whether instruction method impacts the quality of sources selected by students.
In addition, the researchers will conduct a question-by-question analysis of both pre-test and final exam results to investigate whether some information literacy topics are better suited to online instruction; with the transfer of additional content into the online environment, more in-class time could be allotted to advanced topics in information literacy instruction. The collective results of these endeavors will guide the revision of instructional content—whether taught online or in the classroom—for Oakland University’s RHT 160 library instruction program.

Concluding Remarks

In the first iteration of the library’s WebCT-based hybrid model in 2001, some fairly advanced content—such as a module on keyword searching techniques—was offered online. From the observations of librarians over the course of an academic year, however, it did not appear that students grasped these difficult concepts when they were delivered electronically; therefore, in the following year this content was once again taught face-to-face by librarians. The empirical results of this study support the informal observations of classroom faculty and librarians that in-person contact between librarians and students significantly enhances student understanding of complex information literacy topics. These results also strengthen the findings of other studies noted above that argue for the intangible benefits students receive from face-to-face instruction.

Unfortunately, like most institutions, Kresge Library faces an increasing demand for instruction each year without an equivalent increase in library faculty; as a result, resource constraints may force the library faculty to move even more instructional content online. Further, the university administration is strongly encouraging the development of more online courses, which will create a need for additional online library instruction modules. Consequently, the researchers expect that Kresge Library’s online presence will have to expand rapidly in the next few years. Because of this, librarians must devise new ways of connecting students with the physical space of the library in this increasingly anonymous, technology-driven world.

The findings of this preliminary study suggest that online instruction should be but one component of a comprehensive information literacy program, which must also include librarian-student interaction. Using a hybrid instructional model, librarians can address multiple learning styles, engage students with the latest technology, respond to external pressures to move into an online teaching environment, and still maintain the personal contact that is vital to student learning.

Notes

1. Oakland University Office of Institutional Research and Assessment, “Fall Headcount & FYES 1959–2005,” Oakland University. Available online from https://www2.oakland.edu/secure/oira/FYES1.htm. [Accessed 28 March 2006].

2. For examples of libraries using CAI to help manage teaching demand, see Elaine Anderson Jayne, Judith M. Arnold, and Patricia Fravel Vander Meer, “Casting a Broad Net: The Use of Web-based Tutorials for Library Instruction,” in The Eighth Off-Campus Library Services Conference Proceedings Held in Providence, Rhode Island 22–24 April 1998, edited by P. Steven Thomas and Maryhelen Jones (Mount Pleasant, Mich.: Central Michigan University, 1998), 197–205; Elizabeth W.
Kraemer, “Developing the Online Learning Environment: The Pros and Cons of Using WebCT for Library Instruction,” Information Technology and Libraries 22 (June 2003): 87–92; Stephanie Michel, “What Do They Really Think? Assessing Student and Faculty Perspectives of a Web-based Tutorial to Library Research,” College & Research Libraries 62 (July 2001): 317–32.

3. Lana Dixon, Marie Garrett, Rita Smith, and Alan Wallace, “Building Library Skills: Computer-Assisted Instruction for Undergraduates,” Research Strategies 13, no. 4 (1995): 198.

4. For examples, see Carol Anne Germain, Trudi E. Jacobson, and Sue A. Kaczor, “A Comparison of the Effectiveness of Presentation Formats for Instruction: Teaching First-Year Students,” College & Research Libraries 61 (Jan. 2000): 65–72; Lucy Holman, “A Comparison of Computer-Assisted Instruction and Classroom Bibliographic Instruction,” Reference & User Services Quarterly 40 (Fall 2000): 53–60; Neosha Mackey and others, “Teaching with HyperCard in Place of a Textbook,” Computers in Libraries 12 (Oct. 1992): 22–25.

5. Joan Kaplowitz and Janice Contini, “Computer-Assisted Instruction: Is It an Option for Bibliographic Instruction in Large Undergraduate Survey Classes?” College & Research Libraries 59 (Jan. 1998): 20.

6. Jayne, Arnold, and Vander Meer, “Casting a Broad Net”; Michel, “What Do They Really Think?”; Don Tapscott, Growing Up Digital: The Rise of the Net Generation (New York: McGraw-Hill, 1998).

7. Judith M. Pask, “Computer-Assisted Instruction for Basic Library Skills,” Library Software Review 7 (Feb. 1988): 6–11.

8. Holman, “A Comparison of Computer-Assisted Instruction”; Kaplowitz and Contini, “Computer-Assisted Instruction”; Pierina Parise, “Information Power Goes Online: Teaching Information Literacy to Distance Learners,” Reference Services Review 26 (Fall/Winter 1998): 51–52, 60; Pask, “Computer-Assisted Instruction for Basic Library Skills”; Jeffrey M. Wilhite, “Internet Versus Live: Assessment of Government Documents Bibliographic Instruction,” Journal of Government Information 30, no. 5/6 (2004): 561–74.

9. Tapscott, Growing Up Digital, 139.

10. Ann Turner, “Computer-Assisted Instruction in Academic Libraries,” The Journal of Academic Librarianship 15 (Jan. 1990): 353.

11. Kraemer, “Developing the Online Learning Environment.”

12. Holman, “A Comparison of Computer-Assisted Instruction.”

13. Germain, Jacobson, and Kaczor, “A Comparison of the Effectiveness of Presentation Formats for Instruction”; Pask, “Computer-Assisted Instruction for Basic Library Skills.”

14. James Nichols, Barbara Shaffer, and Karen Shockey, “Changing the Face of Instruction: Is Online or In-Class More Effective?” College & Research Libraries 64 (Sept. 2003): 385.

15. For examples, see Getty and others, “Using Courseware to Deliver Library Instruction via the Web”; Holman, “A Comparison of Computer-Assisted Instruction”; Kraemer, “Developing the Online Learning Environment”; Michel, “What Do They Really Think?”; Jo Parker and others, “Is a Standalone IL Course Useful?” Library & Information Update 4 (Jan./Feb. 2005): 34–35.

16. Marion Churkovich and Christine Oughtred, “Can an Online Tutorial Pass the Test for Library Instruction? An Evaluation and Comparison of Library Skills Instruction Methods for First Year Students at Deakin University,” Australian Academic & Research Libraries 33 (March 2002): 25–38.

17. Kaplowitz and Contini, “Computer-Assisted Instruction”; Denise Madland and Marian A.
Smith, “Computer-Assisted Instruction for Teaching Conceptual Library Skills to Remedial Students,” Research Strategies 6, no. 2 (1988): 52–64.

18. Germain, Jacobson, and Kaczor, “A Comparison of the Effectiveness of Presentation Formats for Instruction.”

19. Nichols, Shaffer, and Shockey, “Changing the Face of Instruction.”

20. Churkovich and Oughtred, “Can an Online Tutorial Pass the Test for Library Instruction?” 34–35.

21. Ibid.

22. Linda B. Alexander and Robert C. Smith, “Research Findings of a Library Skills Instruction Web Course,” portal: Libraries and the Academy 1, no. 3 (2001): 309–28.

23. Many past studies have relied on a two-group design to measure the effectiveness of different instructional methods. Notable exceptions include: Holman, “A Comparison of Computer-Assisted Instruction,” which employed a three-group design to measure student performance: CAI, in-class instruction, and a noninstructional control group; Wilhite, “Internet Versus Live,” in which a study of three student groups—online, live, and a noninstructional control—was used to assess the effectiveness of online government documents library instruction; and William A. Orme, “A Study of the Residual Impact of the Texas Information Literacy Tutorial on the Information-Seeking Ability of First Year College Students,” College & Research Libraries 65 (May 2004): 202–15, in which four cohort groups were used to compare student achievement: noninstructional control, Web-based instruction, classroom instruction, and Web-based plus classroom instruction.

24. For examples of these results, see Alexander and Smith, “Research Findings of a Library Skills Instruction Web Course”; Churkovich and Oughtred, “Can an Online Tutorial Pass the Test for Library Instruction?”; Dorothy F. Davis, “A Comparison of Instruction Methods on CD-ROM Databases,” Research Strategies 11, no. 3 (1993): 156–63; Germain, Jacobson, and Kaczor, “A Comparison of the Effectiveness of Presentation Formats for Instruction”; Mackey and others, “Teaching with HyperCard in Place of a Textbook”; Orme, “A Study of the Residual Impact of the Texas Information Literacy Tutorial.”

25. Robert A. Cribbie and John Jamieson, “Decreases in Posttest Variance and the Measurement of Change,” Methods of Psychological Research Online 9, no. 1 (2004): 37–55. Available online from www.mpr-online.de. [Accessed 1 March 2006].

26. Ibid.

27. Geoffrey Keppel and William H. Saufley, Jr., Introduction to Design and Analysis (New York: W.H. Freeman and Company, 1980), 97.

28. For an explanation of the complexity of making changes in WebCT, see Kraemer, “Developing the Online Learning Environment,” 89.