What Are They Learning? Pre- and Post-Assessment Surveys for LIBR 1100, Introduction to Library Research

Jon R. Hufford

Jon R. Hufford is an Information Services Librarian in the University Library at Texas Tech University; e-mail: Jon.hufford@ttu.edu. ©Jon R. Hufford

Articles reporting the experiences of librarians in assessing what students are learning in information literacy classes are not yet as well represented in the professional literature as they should be. This is especially the case for library skills courses that are for-credit. Librarians who have experience assessing student learning should share what they have learned with colleagues who, in turn, need to know what methods are working and how the assessment process can be used to improve teaching and learning. This article reports on the experience gained by librarians at Texas Tech University Libraries while developing and implementing pre- and post-assessment surveys that were administered in eleven sections of a library research course taught in the fall of 2008.

In recent years, higher education in the United States has been endeavoring to prove through empirical evidence that it is committed to improving student learning. Much of this effort is directed at learning-outcomes assessment. Margaret Spellings, United States Secretary of Education, created the Commission on the Future of Higher Education in late 2005 and charged it "with developing a strategy for higher education to meet the needs of America's population and address the economic and workforce needs of the future."1 In 2006, the Commission issued its final report, A Test of Leadership: Charting the Future of United States Higher Education. The report makes several recommendations for reform, grouped into five categories. "Transparency and Accountability" is one of the categories. The Commission noted in its summary of this category that "improved accountability is vital to ensuring the success of all the other reforms we propose. Colleges and universities must become more transparent about cost, price, and student [learning] success outcomes, and must willingly share this information with students and families."2 Though the Commission's report was not necessarily an early wake-up call for learning-outcomes assessment, since articles on the need for and importance of outcomes assessment in higher education had been published in the professional literature of higher education for at least six years before 2006, it was nevertheless an important document on the topic from a political and administrative perspective and has had a significant impact on campuses across the nation.3

Responding to the Department of Education's report, the regional accreditation organizations made changes in their standards, and these changes have been primarily responsible for the trend toward outcomes assessment. Not unexpectedly, some of the standards of several of these regional organizations relate to academic libraries and have changed the way libraries are assessed, especially their information literacy programs. Because of this new emphasis on student learning-outcomes assessment and the inclusion of information literacy in the efforts of many colleges and universities to assess their programs and courses, librarians are now using outcomes assessment methods in their information literacy classes, whether these are credit courses or traditional one-shot sessions that support a course taught in an academic department.
The resulting data collected from these assess- ment efforts is being used to improve the content of information literacy courses and sessions and the teaching skills of li- brarians. Unfortunately, articles reporting the experiences of librarians in assessing what students are learning in informa- tion literacy classes are as of yet not well represented in the professional literature. This is especially the case for library skills courses that are for-credit. Librarians with this experience should share their assess- ment findings with colleagues who need to know what methods are working and how the assessment process can be used to improve teaching and learning. This article reports on the experience librarians at Texas Tech University Libraries gained while developing and implementing pre- and post-assessment tests that were ad- ministered in eleven sections of a library research course taught in the fall of 2008. Literature Review Searches in several bibliographic re- sources yield a large number of articles, books, documents, and other materials on the assessment of information literacy skills. Many of these materials are guides, manuals, or action plans; articles on the need to integrate information literacy assessment into general education; or reports of accreditation trends in higher education. Also, some of these materials discuss strategies used to gain support for or to develop information literacy assess- ment programs, or report on statewide assessment programs of higher educa- tion curricula without the details of any particular assessment projects. However, the author found a number of articles that, to one degree or another, report the experiences of librarians in actually as- sessing student learning of information literacy competencies either in academic department courses or in the information literacy programs of their libraries. Only one of the articles assesses learning in a for-credit course dedicated to teaching library skills. A majority of these articles report the results of pre- and/or post-assessment surveys. Pamela Jackson reports in an article published in 2006 on pre- and post- test results that revealed students’ under- standing of plagiarism. The 2,829 students studied were Computer Science majors at San Jose State University. The results of the study indicated that the students had “difficulty grasping concepts related to paraphrasing.”4 Also, the post-test results showed a 6 percent improvement in stu- dent scores over the pre-test results. Karl Woodworth and Linda Markwell report in a 2005 article on the results of an Ovid MEDLINE pre-test given to incoming residents at Emory University’s School of Medicine.5 Several of the students received low scores, and the librarians administering the pre-test hoped that these low scores would motivate the students to learn more about library research. Similarly, Jonathan Heimke and Brad Matthies discuss their library’s attempt at pre-instruction assessment of incoming Butler University (Indianapolis) freshmen in an article published in 2004.6 Librarians developed a survey that was administered in all sections of English Pre- and Post-Assessment Surveys for LIBR 1100 141 102, an English composition course. It did not assess what the students would learn in the course, but what they had learned about library research before enrolling in the course. The librarians used the results as a baseline measurement of the students’ basic library skills and attitudes. 
The findings led to the development of instructional goals for an online tutorial that would introduce students to basic library skills and services. James Nichols, Barbara Shaffer, and Karen Shockey report in their article published in 2003 on pre- and post-tests that compared student learning in an online tutorial with learning in a tradi- tional lecture and demonstration class. The students were enrolled in a freshman English composition course that offered basic information literacy instruction.7 The results of the tests in terms of both student learning and student satisfaction were comparable for both methods of instruction. In her 2002 article, Elizabeth Carter discusses the experiences of librar- ians at the Citadel, a military institution of higher education in South Carolina. The librarians developed pre- and post-tests that measured the information-seeking skills of first-year students.8 The tests were administered in a required one-hour freshman-experience course that included sessions where information literacy skills were taught. Earlier studies used similar methods for assessing student library skills. Heidi Julien and Stuart Boon report in an ar- ticle published in 2001 on data collected during an ongoing study of information literacy instruction in Canadian academic libraries.9 In phase one of the study, in- structional librarians and library adminis- trators were interviewed at two universi- ties and one college in different regions of Canada, and instructional documentation was collected and analyzed. Phase two included pre- and post-tests of students’ information literacy skills. The tests were administered at the three institutions to groups of students who had attended relatively short “one-shot” information literacy classes, ranging from 50 minutes to three hours long. Julie Rabine and Catherine Cardwell, the authors of an arti- cle published in 2000, describe two assess- ment tools used at Bowling Green State University for several years as a means of fulfilling the university’s requirements for program and classroom-level outcomes assessment. The tools were a brief survey given to a large number of students and an in-depth, multipart tool used with a limited number of students.10 The survey, referred to in the article as a five-minute mini quiz, was administered in English 112, the freshman composition course. Though librarians did not teach any sec- tions of this course, they developed ma- terials that were included in the students’ course packet. In another article published in 2000, Carol Anne Germain, Trudi E. Jacobson, and Sue A. Kaczor report the results of pre- and post-tests taken by students en- rolled in “First-Year Experience” classes at SUNY, Albany. In an attempt to find an effective way to meet the demands placed on SUNY Albany’s librarians to teach li- brary skills to large numbers of students enrolled in these classes, the library devel- oped a Web-based instructional module and used it in one section of the course taught by a faculty member. Another sec- tion received a lecture on library instruc- tion from a librarian. The test results of the students in these two sections were comparable. Analysis of the scores also showed that library instruction, regard- less of format, made a big difference.11 In a 1998 article, Barbara Bren, Beth Hille- mann, and Victoria Topp discuss research that focused on the effectiveness of using hands-on instruction in a workstation laboratory. 
Two groups of students—one receiving hands-on instruction and the other a lecture and demonstration—were tested at the conclusion of the class.12 The researchers found that students receiving hands-on instruction did better on the test. In an article published in 1986, Joan Kaplowitz reports the results of pre- and post-tests taken by students enrolled in 142 College & Research Libraries March 2010 the English-3 Library Instruction Pro- gram at the University of California at Los Angeles.13 Changes in library usage, attitudes toward libraries and librarians, and understanding of basic library skills were studied. Analysis revealed that the students scored significantly higher on the post-test than on the pre-test. Pre- and post-assessment testing has also been used to assess learning in distance courses. Lana Ivanitskaya and others review the results of tests that assessed the information literacy skills of off-campus students in their article published in 2008.14 A “Research Readi- ness Self-Assessment” survey was used as a pre- and post-test in an off-campus master’s degree class at Central Michigan University. In particular, the authors of the survey investigated the impact that pre-tests have on the effectiveness of li- brary instruction when students are given feedback on their pre-test performance. Similarly, Elizabeth Mulherrin and others review the results of pre- and post-tests taken by distance students in their article published in 2005.15 Unlike all the tests re- ported in the previous articles, the tests re- viewed in Mulherrin’s article were taken by distance students enrolled in LIBS 150, a one-hour credit, elective library skills course offered at the University of Mary- land. The tests were administered as one phase in the development of the course and proved to be an important factor in its eventual success. Ten of the pre- and/ or post-assessment tests discussed in the preceding paragraphs measured the library skills students learned in a course. Only one was administered in a for-credit library research course. Articles reporting on the use of rather unique tools or methods for assessing information literacy have also been published. Donald Gilstrap and Jason Dupree report in their 2008 article on the use of a critical incident questionnaire at the Southwest Oklahoma State Univer- sity Libraries as a qualitative instrument for assessing information literacy skills throughout the institution’s curriculum. The questionnaire was administered to a sample population of 348 students enrolled in English Composition II. The results of their study showed that the questionnaire was an effective instrument for assessing critical reflection.16 In an article published in 2007, Lynn Cameron, Steven Wise, and Susan Lottridge report on the Information Literacy Test (ILT) developed at James Madison University Library. The test was developed to “meet the need for a standardized instrument to measure student proficiency regarding the ACRL Information Literacy Compe- tency Standards for Higher Education.”17 The authors of the test expected that it would eventually be adapted for use at other institutions. It made frequent use of graphics, documents, and Web page images. 
“The One-Minute Paper and the One-Hour Class,” published in 2006 and authored by Elizabeth Choinski and Michelle Emanuel, investigates the use of a technique that had students write brief answers to specific questions, “thus providing instant feedback from students on the lesson of the day.”18 The authors maintain that this technique is an effec- tive way to assess outcomes in one-shot library instruction sessions. In her article published in 2005, Margy MacMillan reports on an assignment tool that asked students to create individual resumes that listed their information skills.19 The purpose of this resume project was to en- courage students to reflect on and assess their library research skills. Additionally, the use of portfolios and rubrics in assessment is reported in the lit- erature. In their 2008 article, Karen Diller and Sue Phelps investigate the use of port- folios with rubrics for evaluation in the beginning phase of an outcomes assess- ment program undertaken at Washington State University at Vancouver.20 Librarians participated with faculty in assessing the university’s General Education Program, which is based on the institution’s learn- ing goals, including information literacy goals. Similarly, Davida Scharf and others investigate writing portfolio assessment Pre- and Post-Assessment Surveys for LIBR 1100 143 of student information literacy skills in an article published in 2007.21 In this study, graduating seniors taking a capstone seminar in the Humanities at the New Jersey Institute of Technology were re- quired to create a writing portfolio. These portfolios included term papers that were assessed for this study. In addition, Lorrie Knight discusses, in an article published in 2005, the use of a scoring rubric based on course learning objectives and the ACRL Information Literacy Competency Standards in Higher Education.22 The rubric was used to score students’ course bibliographies. As with the great majority of studies discussed in this article, none of the unique assessment tools and methods discussed above was used in a library skills course. However, most did assess, at least to some extent, the library skills students learned in a particular class. Aim and Scope For several years now, Texas Tech Uni- versity has offered a one-hour credit course titled “Introduction to Library Research” (LIBR 1100) to undergraduates. The course teaches the basics of library research and targets freshmen, though sophomores, juniors, seniors, and even an occasional graduate student enroll in the course. Teaching the course is voluntary, and most of the Information Services librarians participate. Several sections are offered each fall semester, and two or three sections in the spring. Early on, each section was taught by two librar- ians; but, starting in the fall of 2008, the number of sections offered was increased, and one librarian was assigned to teach each section. Every year since LIBR 1100 was first offered, each section has been evaluated by its students in terms of the course con- tent and instructor. However, the student evaluations have always been subjective, and what students were learning in the course was never objectively assessed. The librarian instructors of LIBR 1100 decided to begin measuring student learning outcomes with pre- and post- assessment surveys in the fall of 2008. 
The purpose of the pre- and post- assessment surveys was to determine as objectively as possible whether students enrolled in LIBR 1100 were learning what the instructors teaching the course intended for them to learn. Though there were several hands-on practicums in the course that required the performance of skills, and students had to compile an extensive annotated bibliography on a topic of their choice, the assessment surveys focused on determining what students had learned or, more precisely, what they knew. Background In addition to the practicums and anno- tated bibliography, LIBR 1100 had several assignments that involved reading docu- ments available on the course’s WebCT site. The documents were titled “Campus Libraries and an Introduction to the Re- search Process,” “Writing a Thesis State- ment,” “Search Strategies,” “Controlled Vocabulary,” “Proper Citing,” “Ethical Use of Information,” “Introduction to the Information Cycle,” “Newspaper Ar- ticles,” “Popular Magazines and Scholarly Journals,” “Documents and Books,” “En- cyclopedias,” and “Critical Evaluation of Sources.” The “Information Cycle” docu- ment was used as a means of providing a structure for the three readings that followed it. These readings provided information and instruction on how to search databases. The students used these databases to find sources on the topic they chose for their annotated bibliography. The LIBR 1100 instructors authored all of these reading assignments.23 Short quizzes following the required readings were used not only to assess comprehen- sion but to reinforce course content. The questions in the pre- and post-assessment surveys also addressed the content of the readings. (The questions in the survey are available in Appendix 1 at the end of this article.) The goals and outcome objectives of LIBR 1100 addressed the ACRL Informa- 144 College & Research Libraries March 2010 tion Literacy Competency Standards for Higher Education. The course’s four goals served as a general framework for the out- come objectives.24 Each of these objectives more specifically addressed one or more of the Standards. (See Appendix 2 for the ACRL Information Literacy Competency Standards for Higher Education and their performance indicators.) Objective 1, “Students will understand the general principles and procedures associated with library research and the proper use of information…,” was large in scope and covered all five Standards, including most, if not all, of the performance indica- tors listed under the Standards. Objective 2, “Students will apply effective search strategies and techniques and cite infor- mation sources properly…,” was meant to respond to all of the performance indica- tors in Standard Two, and performance indicator 5.3 in Standard Five. Objective 3, “Students will effectively use both print and online resources to find appropriate materials for their assignments…,” ad- dressed Standard Two, performance indi- cator 2.3. Objectives 4, “Students will use critical thinking skills and will effectively employ evaluative criteria in the selection of sources…,” and 5, “Students will show evidence of interpreting information and revising queries…,” were meant to respond to all the performance indicators of Standard Three. 
Finally, Objective 6, "Students will understand and practice the ethics of information use, including copyright and intellectual property rights, and the need for proper citation of sources to avoid plagiarism…," addressed all of the performance indicators of Standard Five.25

Similarly, each pre- and post-assessment survey question addressed particular ACRL Information Literacy Competency Standards, their performance indicators, and course outcome objectives. Questions 1 and 5 addressed Standard Two, performance indicator 2.2 (course outcome objectives 1 and 2). (See table 1 for the relationships between the survey questions, course outcome objectives, and the Standards, and for the measurements of success in teaching the course content, as delimited by the outcome objectives and based on what the student answers to the survey questions indicated they had learned.) Questions 2, 8, 9, 10, and 11 were meant to respond to Standard Two, performance indicator 2.1 (course outcome objectives 1 and 2). Questions 3 and 7 addressed Standard Three, performance indicator 3.2 (course outcome objectives 1, 4, and 5). Questions 4 and 15 responded to Standard Two, performance indicator 2.3 (course outcome objectives 1, 2, and 3). Questions 6 and 13 addressed Standard One, performance indicator 1.2 (course outcome objective 1). In addition, questions 12 and 14 addressed Standard Two, performance indicator 2.5 (course outcome objectives 1 and 2).

Table 1 shows the relationships of the assessment survey questions to the course outcome objectives and the ACRL Information Literacy Competency Standards for Higher Education, along with their performance indicators. Each pair of pre- and post-assessment scores (the pre-assessment score before the slash, followed by the post-assessment score) corresponding to the question number in that row is meant to serve as a rough measure of how well the students knew or had learned a particular outcome objective and standard. A higher score on the post-assessment survey question than on the pre-assessment question would indicate that the students had learned this outcome objective and standard performance indicator through classroom teaching.

The course instructors determined by consensus what questions to include in the survey and what course outcome objective and ACRL Standard each question would be matched to. They did not develop questions for Standards Four and Five. In the future, the instructors will determine a more pedagogically sound way to develop the questions and will select questions that address all the standards.

Table 1. Relationships of ACRL Information Literacy Competency Standards, Course Objectives, and Survey Questions with Students' Pre-Test and Post-Test Scores

Standards and performance indicators addressed by each LIBR 1100 outcome objective:
Objective 1 – Standards One through Five, all performance indicators (1.1–1.4, 2.1–2.5, 3.1–3.7, 4.1–4.3, 5.1–5.3)
Objective 2 – Standard Two, indicators 2.1–2.5; Standard Five, indicator 5.3
Objective 3 – Standard Two, indicator 2.3
Objective 4 – Standard Three, indicators 3.1–3.7
Objective 5 – Standard Three, indicators 3.1–3.7
Objective 6 – Standard Five, indicators 5.1–5.3

Survey questions, with the performance indicator each addresses and the percentage of students answering correctly (pre-assessment/post-assessment):
Question 1 (2.2): 25/76
Question 2 (2.1): 6/16
Question 3 (3.2): 84/95
Question 4 (2.3): 7/12
Question 5 (2.2): 95/97
Question 6 (1.2): 23/27
Question 7 (3.2): 24/57
Question 8 (2.1): 4/5
Question 9 (2.1): 18/13
Question 10 (2.1): 47/93
Question 11 (2.1): 74/82
Question 12 (2.5): 78/74
Question 13 (1.2): 52/79
Question 14 (2.5): 90/88
Question 15 (2.3): 18/23

Methodology

The findings of this study are based on analysis of the input of all those students who took both the pre- and post-assessment surveys.
Eleven sections of LIBR 1100 were taught in the fall of 2008. A total of 176 of the 310 students enrolled in the course’s eleven sections at the beginning of the semester took both the pre- and post-assessment surveys. All of these stu- dents were treated as a single group, with the resulting frequencies and percentages of correct and incorrect answers pertain- ing to the entire group of participating students. The students’ answers on both surveys were downloaded from each course section to a Microsoft Excel spread- sheet, and formulae available on the Excel software were used to tabulate all the data and determine the averages. The instructors involved in teaching the course developed the survey questions as a team and agreed to have their students take the pre-assessment survey on the first day of class and the post-assessment survey on the last day of class. The surveys were not graded and therefore were not a factor in the determination of final grades. However, the students in all sections were told that they had to take both surveys and answer the questions in a conscien- tious manner to complete the course. The instructors felt that the possibility of an incomplete grade would be sufficient motivation to take the surveys. Whether the questions were answered conscien- tiously is problematic. Better and more astute participation may have occurred if the surveys had been graded. The fact that they were not graded may possibly be considered a weakness of the study. Both the pre- and post-assessment surveys contained the same questions. The instructors felt that the fourteen weeks between taking the surveys was a sufficient length of time for their students to forget the questions answered in the survey at the beginning of the semester. They plan to update the survey regularly and use it every semester. The order of the questions will also be regularly changed. Each question in the survey has one cor- rect answer. The instructors agreed upon the correct answers before the survey was implemented. Because the study’s findings are based on comparisons of pre- and post- assessment answers, both individually and in the aggregate, and since no cross-tabu- lation tables are used to test relationships between variables, no statistical analysis other than the determination of totals and averages is necessary for this study. The level of participation varied from one course section to another. Several students took the pre-assessment survey but not the post-assessment survey, and there were a few students who took the post-assessment survey but not the pre- assessment survey. The data of these students had to be deleted from the Excel spreadsheets containing the data of all the students who participated in the surveys. Reports from the WebCT-based surveys enabled the author to identify and delete the data of those students. Though pre- and post-assessment of student learning outcomes, like other testing instruments, has been criticized by some for not being a dependable method for measuring learning, it has more generally been recognized as a le- gitimate way to measure what students are learning in class.26 Often this method has been administered as surveys with questions, whether multiple choice, true or false, short-answer, or open-ended, and with the purpose of testing students’ skills or what they know. Such surveys have often been referred to as tests in the literature. 
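To make the tabulation just described concrete, the sketch below shows one way the per-question percentages and the group averages could be computed. It is an illustration only, not the study's actual procedure: the authors relied on WebCT reports and Microsoft Excel formulas, and the answer key and student records in the sketch are invented placeholders rather than the course data.

```python
# Minimal sketch of the scoring described in the Methodology section.
# ANSWER_KEY and the answer records below are hypothetical stand-ins.

ANSWER_KEY = {1: {"c", "e", "f"}, 2: {"a"}, 3: {"d"}}  # question number -> correct choice(s)

# Each record: student id -> {question number: set of choices the student selected}
pre_answers = {
    "s001": {1: {"c", "e", "f"}, 2: {"b"}, 3: {"d"}},
    "s002": {1: {"e", "f"}, 2: {"a"}, 3: {"a"}},
}
post_answers = {
    "s001": {1: {"c", "e", "f"}, 2: {"a"}, 3: {"d"}},
    "s002": {1: {"c", "e", "f"}, 2: {"b"}, 3: {"d"}},
}

def percent_correct(answers: dict, question: int) -> float:
    """Share of students whose selections exactly match the key for one question."""
    students = list(answers)
    correct = sum(1 for s in students if answers[s].get(question) == ANSWER_KEY[question])
    return 100.0 * correct / len(students)

def group_average(answers: dict) -> float:
    """Average of the per-question percentages, as reported in the Findings section."""
    per_question = [percent_correct(answers, q) for q in ANSWER_KEY]
    return sum(per_question) / len(per_question)

# Only students present in both surveys are counted, mirroring the study design.
both = set(pre_answers) & set(post_answers)
pre = {s: pre_answers[s] for s in both}
post = {s: post_answers[s] for s in both}

print(f"pre-assessment group average:  {group_average(pre):.0f}%")
print(f"post-assessment group average: {group_average(post):.0f}%")
```

Applied to the fifteen per-question percentages reported in Appendix 1, this averaging yields the 43 percent and 56 percent figures given in the Findings below.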
Also, some studies employing this method have used the same set of questions in both pre- and post-tests to evaluate a single class, and the instructors made the effort to administer the pre-test before the course content was taught and the post-test at the very end of the course. As with all testing instruments, the reliability and validity of the pre- and post-test method for determining ac- curate measurements of what students have learned is entirely dependent on the test itself. Such things as the integrity Pre- and Post-Assessment Surveys for LIBR 1100 147 of the questions, the research design for the survey study, and its methodology affect the reliability and validity of a testing instrument. In his 1993 article “Evaluating Library Instruction: Doing the Best You Can with What You Have,” Donald Barclay provides an interesting examination of pre- and post-surveys used as tests and the kinds of questions that instructional librarians could include in such surveys.27 He concludes his article with the observation that, though library instruction evaluations may not always meet the highest standards of scientific rigor, this should not deter librarians from implementing evaluative studies. Early attempts at evaluation should serve as a spur to begin the process of continuous improvement in the quality of the evalu- ation method. Findings The average score of the group of students who took both surveys was determined by adding the percentages of the 176 stu- dents who answered each question cor- rectly and then determining the average of the total. The average score on the pre- assessment survey was 43 percent, and the average score on the post-assessment survey was 56 percent. Thus the group of students increased their average score by 13 points from pre- to post-assessment. If the averages are interpreted as scores, they are not very encouraging. Forty- three (43) percent and 56 percent are both low scores. The librarians who taught the course had hoped for higher scores, especially on the post-assessment survey. The instructors did much soul-search- ing to find an explanation for the low scores. Though the same course content was taught in all the sections, the instruc- tors were encouraged to work indepen- dently. In addition, they were encouraged to try several different teaching strategies, including, but not limited to, lecturing, assignments that encouraged class partici- pation and interest, the use of appropriate examples to teach difficult concepts, and active learning. All the instructors eagerly accepted this approach as a way to engage their students. Texas Tech University Libraries’ In- formation Services Department had ex- perienced a rather large turnover among librarians in the years before 2008. Several vacant positions had recently been filled with librarians who had little or no teach- ing experience. Despite this, the decision was made to increase the number of LIBR 1100 sections in the fall of 2008 and to permit all the Information Services librar- ians who volunteered to teach a section to do so. The expectation was that, with the accumulation of experience and with good mentoring from more experienced librarians, eventually all the instructors involved in teaching LIBR 1100 would significantly improve their teaching skills. The emphasis on independent experimen- tation with new teaching and learning strategies added an exciting and positive approach to this endeavor to improve through experience. 
However, the lack of teaching experience on the part of some of the instructors who taught in the fall of 2008 may have had some effect on student scores on the post-assessment survey.

Considerable improvement was made on the first question in the post-assessment survey. The students were asked to identify the three Boolean operators. (See Appendix 1 for the frequencies and percentages of correct and incorrect answers, along with all possible answers, for each question.) Twenty-five (25) percent of the group answered the question correctly in the pre-assessment survey, and 76 percent answered it correctly in the post-assessment survey. Considerably less improvement was made on the second question, which asked the students to identify the least likely resource for finding citations to articles. The answer was the Texas Tech University Libraries' online catalog. Only 6 percent of the group answered this question correctly in the pre-assessment survey, and only 16 percent did so in the post-assessment survey.

Question 3 asked the students what to look for in determining the authority of an Internet site. The group did well on this question in both surveys. Eighty-four (84) percent answered the question correctly in the pre-assessment survey and 95 percent in the post-assessment survey. However, question 4, like question 2, was another matter. When asked to identify the correct statements in a list of possible answers that included examples of a call number, an ISBN number, a citation to a book, a citation to an article, and a URL address, 7 percent of the students answered the question correctly by identifying the correct examples on the list in the pre-assessment survey and 12 percent answered it correctly on the post-assessment survey. Some of the content of question 4 was not covered by the course's assignments.

In question 5, the students were asked to identify the "word search" that would give them books most directly related to gang violence. Ninety-five (95) percent of the students correctly identified "gangs AND violence" as the correct answer in the pre-assessment survey and 97 percent selected the correct answer in the post-assessment survey. The results of this and the first question in the survey would seem to show that the majority of the students in the group understood what Boolean operators were and how they worked.

Most of the students were not able to identify primary research sources in question 6. Twenty-three (23) percent of them identified the primary sources in the list of possible answers to the question in the pre-assessment survey and 27 percent did so in the post-assessment survey. The students' answers to question 7 were more encouraging. They were asked to identify "typical scholarly research sources" from a list. Twenty-four (24) percent of the students selected the correct answer in the pre-assessment survey and 57 percent selected the correct answer in the post-assessment survey. This reflects some improvement. However, the course's reading assignments and one of the practicums covered this topic well, so a higher score for this question was expected in the post-assessment survey.

The low scores on questions 8 and 9 indicate that most of the students did not have a good understanding of what can be found in the online catalog.
For question 8, only 4 percent in the pre-assessment survey and 5 percent in the post-assessment survey correctly identified the kinds of information that can be found in the Texas Tech University Libraries' online catalog. Eighteen (18) percent answered question 9 correctly in the pre-assessment survey, indicating that they were aware that full-text magazine articles cannot be found in the catalog. Surprisingly, only 13 percent answered this question correctly in the post-assessment survey.

Question 10 asked the students which of two databases (ABI/Inform or Lexis-Nexis Academic Universe) contained full-text newspaper articles. Forty-seven (47) percent identified the correct answer (Lexis-Nexis Academic Universe) in the pre-assessment survey, and 93 percent did so in the post-assessment survey. Much improvement between pre- and post-assessments took place here, most likely because one of the course's practicums required the students to find full-text articles in Lexis-Nexis Academic Universe.

Most of the students did well on question 11 in both the pre- and post-assessment surveys. This question required knowledge of the difference between PDF and HTML full-text documents, and this difference was explained in class lectures. Seventy-four (74) percent of the students answered the question correctly in the pre-assessment survey and 82 percent did so in the post-assessment survey, an improvement of 8 points.

Question 12 asked the students to examine a citation to a journal article and identify its citation style. MLA was the correct answer. Seventy-eight (78) percent of the students answered the question correctly in the pre-assessment survey, but only 74 percent did so in the post-assessment survey. Style manuals, in particular the MLA and APA style manuals, were discussed extensively in class, and the students had to compile an annotated bibliography on a topic of their choice and to cite their sources using either the MLA or the APA style manuals.

Most of the students who took both the pre- and post-assessment surveys did well on question 13 and could identify the features of an annotated bibliography. Fifty-two (52) percent of the students answered this question correctly in the pre-assessment survey, and 79 percent did so in the post-assessment survey.

Question 14 asked "What information is contained in a bibliographic citation?" Ninety (90) percent answered question 14 correctly in the pre-assessment survey, but only 88 percent did so in the post-assessment survey. Question 15 asked which statements were correct in a list that included two citations, an ISBN number, a URL address, and a call number. The students did not do well on this question in either the pre- or the post-assessment surveys. Eighteen (18) percent identified the correct statements in the pre-assessment survey, and 23 percent did so in the post-assessment survey. Some of the content of this question was not covered by the written assignments, and it may be that some of the librarians teaching the course did not cover this information carefully in their lectures.

Conclusions

Though the students, as a group, increased their average score by 13 points on the post-assessment survey, the overall percentage of correct answers on both the pre- and the post-assessment surveys was quite low. This indicates that the students taking LIBR 1100 did not learn as much as the librarians teaching the course wanted them to learn. The students did well on questions 1, 3, 5, 10, 11, 12, 13, and 14. This finding represents about half of the questions. However, they did poorly on questions 2, 4, 6, 7, 8, 9, and 15. The poor performance on these particular questions indicates that many of the students need to learn more about the Texas Tech University Libraries' online catalog. Also, they need to learn how to read and understand citations, call numbers, and other kinds of bibliographic numbers that they will encounter during their research. Finally, they have yet to learn how to identify primary and scholarly research sources.

At the conclusion of the study, the course instructors met to go over the findings. During the meeting, there was discussion about why students got lower scores on questions 9, 12, and 14 in the post-assessment survey than in the pre-assessment survey. The post-assessment scores of these three questions ranged from 2 to 5 percentage points lower than the scores for the same questions in the pre-assessment survey. One discussant suggested that inexperience on the part of some of the instructors may have resulted in inadequate teaching, that this inadequacy may have caused confusion, and that the confusion led to some students forming misconceptions. For this reason, some answers that were answered correctly on the pre-assessment survey were answered incorrectly on the post-assessment survey. Though this may explain why the scores of these three questions were lower on the post-assessment survey, the suggestion was only a conjecture. None of the instructors felt confident in accepting any definite reason that would explain the lower scores.

What must the librarian instructors who teach LIBR 1100 do to improve the learning that takes place in their course? Clearly a proactive plan is required. In the first chapter of her book Tools for Teaching, Barbara Davis maintains that, "in designing or revising a course, faculty must consider what material to teach, how best to teach it, and how to ensure that students are learning what is being taught."28 Starting with this introductory statement, she then offers advice and strategies meant to help college and university faculty "make decisions about the content of their course, the structure and sequence of activities and assignments, the identification of learning outcomes, and the selection of instructional resources."29

The instructors of LIBR 1100 can do no better than to use Dr. Davis' advice and strategies as a blueprint for developing their course. First, they should continue the process they started in 2008. Each summer, in preparation for teaching in the fall and spring terms, they need to agree on what is important for the students who enroll in LIBR 1100 to learn. Once they have agreed on what is important, the instructors need to improve the course and bring it up to date. The course's development will include revising all course goals, outcome objectives, the course syllabus and schedule, reading assignments, practicums and quizzes, and writing new materials for added content.
After the course is revised, the instructors will need to develop a valid testing instru- ment that will gauge how well the students are learning what the librarians want them to learn.30 The instructors believe that the pre- and post-assessment surveys worked well this year. However, there are other ways to assess, including, but not limited to, a final examination, a portfolio assign- ment, or use of a standard test. If the decision is made to continue us- ing pre- and post-assessment surveys, val- id survey questions should be determined using a pedagogically sound method, and the instructors need to make sure that the teaching points addressed by all the questions are covered in the course’s read- ing assignments and practicums.31 In an effort to incorporate active learning into the course, the practicums used in the fall of 2008 required the students to use data- bases, Web sites, and other mainly online resources to fulfill the requirements of the assignment.32 These practicums proved effective in teaching students content. Some of the questions in the survey that were answered correctly by more students in the post-assessment survey than in the pre-assessment survey tested specific teaching points the students had learned by doing the practicums. The librarians had previously been concerned about having too many practicums for a one-hour credit course. Perhaps, instead of adding more of them, existing practi- cums could be expanded to include two or more teaching points addressed in the questions. Also, when revising the reading assignments, the teaching points should be treated in relatively prominent locations within the assignments. Finally, the librarians teaching LIBR 1100 must “teach” the teaching points cov- ered by the survey questions at appropri- ate times in their classes.33 One way to do this is through carefully prepared scripts explaining each teaching point addressed in a survey question. The scripts could become part of a librarian’s classroom lectures, thereby reinforcing the learn- ing.34 Above all, great emphasis should be placed on reviewing the course and its learning-outcome goals every year, and improvements made when appropriate. Further Research Needed The greater emphasis on student learn- ing-outcomes assessment at the national level and the inclusion of information literacy in the efforts of many col- leges and universities to assess their programs and courses have led to the publication of several articles mostly providing a generic treatment of assess- ment applied to Information Literacy. These articles are readily available in the library profession’s information re- sources. One message that comes from this published literature is that student learning-outcomes assessment will most likely continue to be important for several years. However, reports of research on the experiences of individual library programs in assessing what stu- dents are learning in their information literacy classes are as of yet not so well represented in the literature. This is un- fortunate because librarians who want to improve their information literacy program through assessment can benefit immensely from the experiences of their colleagues at other institutions. Through this literature they can find out what methods are working and how the as- sessment process can be used to improve teaching and learning. What works well at one library may also work well at other libraries. 
Pre- and Post-Assessment Surveys for LIBR 1100 151 aPPeNDIX 1 Survey Questions and the Pre- and Post-assessment Results Question 1. What are the 3 boolean operators? PR = Pre-Assessment PO = Post-Assessment a – Add b – If c – Not d – Then e – And f – Or g – Sum PR-1 Frequency Percent PO-1 Frequency Percent Correct (c,e,f) 44 25 Correct (c,e,f) 134 76 Incorrect 132 75 Incorrect 42 24 Question 2. What is the least likely resource to use to find citations to articles? PR = Pre-Assessment PO = Post-Assessment a – Library’s online catalog b – Electronic databases c – Internet d – Search engines e – Periodical indexes PR-2 Frequency Percent PO-2 Frequency Percent Correct (a) 10 6 Correct (a) 28 16 Incorrect 165 94 Incorrect 148 84 Question 3. In determining the authority of an Internet site, you should look for: PR = Pre-Assessment PO = Post-Assessment a – Author’s credentials b – Preferred URL (like .edu, .gov, .org) c – Sentence or paragraph stating that the information is authoritative d – a & b PR-3 Frequency Percent PO-3 Frequency Percent Correct (d) 147 84 Correct (d) 167 95 Incorrect 29 16 Incorrect 9 5 Question 4. Which of the following statements are correct? PR = Pre-Assessment PO = Post-Assessment a – 0-415-01987-7 is a call number. b – C100.B34 1991 is a World Wide Web (URL) address. c – Kerbel, Mathew, “Remote and Controlled Media Politics in a Cynical Age,” Westview, 1995 is a citation to a book. d – Kerbel, Mathew, “Remote and Controlled Media Politics in a Cynical Age,” Westview, 1995 is a citation to a journal article. e – http://www.bgsu.edu/colleges/library is a World Wide Web (URL) address. PR-4 Frequency Percent PO-4 Frequency Percent Correct (c,e) 12 7 Correct (c,e) 21 12 Incorrect 164 93 Incorrect 155 88 152 College & Research Libraries March 2010 Question 5. Which of the following word searches would give you books most directly related to gang violence? PR = Pre-Assessment PO = Post-Assessment a – Gangs AND violence b – Gangs OR violence c – Gangs AND NOT violence PR-5 Frequency Percent PO-5 Frequency Percent Correct (a) 168 95 Correct (a) 171 97 Incorrect 8 5 Incorrect 5 3 Question 6. Which of the following are primary research sources? PR = Pre-Assessment PO = Post-Assessment a – Book on constitutional law b – Copy of the United States Constitution c – Book of Toni Morrison’s poems d – Book that interprets Toni Morrison’s poems e – Book of letters written by Abraham Lincoln f – Biography of Abraham Lincoln PR-6 Frequency Percent PO-6 Frequency Percent Correct (b,c,e) 40 23 Correct (b,c,e) 47 27 Incorrect 136 77 Incorrect 129 73 Question 7. Identify which of the following are typical of scholarly research sources. PR = Pre-Assessment PO = Post-Assessment a – Articles geared to a general audience b – Articles with bibliographies and footnotes c – Articles geared to researchers d – Articles meant to inform and entertain e – People Magazine f – Educational Psychology Review PR-7 Frequency Percent PO-7 Frequency Percent Correct (b,c,f) 42 24 Correct (b,c,f) 100 57 Incorrect 134 76 Incorrect 76 43 Question 8. Which of the following kinds of information can be found in the library’s online catalog? PR = Pre-Assessment PO = Post-Assessment a – Journal articles on a certain topic b – Journals the library owns c – Books on a certain topic d – Book reviews e – Sound recordings the library owns PR-8 Frequency Percent PO-8 Frequency Percent Correct (b,c,e) 7 4 Correct (b,c,e) 8 5 Incorrect 169 96 Incorrect 168 95 Pre- and Post-Assessment Surveys for LIBR 1100 153 Question 9. 
It is possible to find full-text magazine articles in the library’s online catalog: PR = Pre-Assessment PO = Post-Assessment a – True b – False PR-9 Frequency Percent PO-9 Frequency Percent Correct (b) 31 18 Correct (b) 22 13 Incorrect 145 82 Incorrect 154 87 Question 10. Which research database contains full-text newspaper articles? PR = Pre-Assessment PO = Post-Assessment a – ABI/Inform b – Lexis-Nexis Academic Universe PR-10 Frequency Percent PO-10 Frequency Percent Correct (b) 83 47 Correct (b) 163 93 Incorrect 93 53 Incorrect 13 7 Question 11. When you retrieve an article in academic Search Complete, which format leads you to a display that looks like a photocopy? PR = Pre-Assessment PO = Post-Assessment a – PDF full-text b – HTML full-text PR-11 Frequency Percent PO-11 Frequency Percent Correct (a) 131 74 Correct (a) 145 82 Incorrect 45 26 Incorrect 31 18 Question 12. The following citation is in which citation style? – Wilcox, Rhonda V. “Shifting Roles and Synthetic Women in Star Trek: the Next Generation.” Studies in Popular Culture 13.2 (1991): 53-65. PR = Pre-Assessment PO = Post-Assessment a – MLA b – APA c – Turabian d – Chicago PR-12 Frequency Percent PO-12 Frequency Percent Correct (a) 138 78 Correct (a) 130 74 Incorrect 38 22 Incorrect 46 26 Question 13. What is an annotated bibliography? PR = Pre-Assessment PO = Post-Assessment a – List of reference sources on animation b – List of references with summaries of each c – Book with handwritten notes in the margins of pages d – List of citations to books with the author’s signatures included in the citations PR-13 Frequency Percent PO-13 Frequency Percent Correct (b) 92 52 Correct (b) 139 79 Incorrect 84 48 Incorrect 37 21 154 College & Research Libraries March 2010 Question 14. What information is contained in a bibliographic citation? PR = Pre-Assessment PO = Post-Assessment a – Credentials, revisions, date of publication b – Author, title, publication information c – Revisions, title, publisher PR-14 Frequency Percent PO-14 Frequency Percent Correct (b) 158 90 Correct (b) 155 88 Incorrect 18 10 Incorrect 21 12 Question 15. Which of the following statements are correct? PR = Pre-Assessment PO = Post-Assessment a – Saunders, Laverna, “The Virtual Library Today,” Library Administration and Manage- ment 6, no. 2 (Spring 1992): 245-254 is a citation for a journal article. b – Saunders, Laverna, “The Virtual Library Today,” Library Administration and Manage- ment 6, no. 2 (Spring 1992): 245-254 is a citation for a book. c – 0-415-01987-7 is a call number. d – http://www.bgsu.edu/colleges/library is a citation for a book. e – Stacks HD340.B11 1992 is a call number. PR-15 Frequency Percent PO-15 Frequency Percent Correct (a,e) 31 18 Correct (a,e) 41 23 Incorrect 145 82 Incorrect 135 77 Pre- and Post-Assessment Surveys for LIBR 1100 155 aPPeNDIX 2 aCRl Information literacy Competency Standards for Higher education with Their Performance Indicators Standard One The information-literate student determines the nature and extent of the information needed. Performance Indicators 1.1 The information-literate student defines and articulates the need for information. 1.2 The information-literate student identifies a variety of types and formats of potential sources for information. 1.3 The information-literate student considers the costs and benefits of acquiring the needed information. 1.4 The information-literate student reevaluates the nature and extent of the information need. 
Standard Two
The information-literate student accesses needed information effectively and efficiently.
Performance Indicators
2.1 The information-literate student selects the most appropriate investigative methods or information retrieval systems for accessing the needed information.
2.2 The information-literate student constructs and implements effectively designed search strategies.
2.3 The information-literate student retrieves information online or in person using a variety of methods.
2.4 The information-literate student refines the search strategy if necessary.
2.5 The information-literate student extracts, records, and manages the information and its sources.

Standard Three
The information-literate student evaluates information and its sources critically and incorporates selected information into his or her knowledge base and value system.
Performance Indicators
3.1 The information-literate student summarizes the main ideas to be extracted from the information gathered.
3.2 The information-literate student articulates and applies initial criteria for evaluating both the information and its sources.
3.3 The information-literate student synthesizes main ideas to construct new concepts.
3.4 The information-literate student compares new knowledge with prior knowledge to determine the value added, contradictions, or other unique characteristics of the information.
3.5 The information-literate student determines whether the new knowledge has an impact on the individual's value system and takes steps to reconcile differences.
3.6 The information-literate student validates understanding and interpretation of the information through discourse with other individuals, subject-area experts, and/or practitioners.
3.7 The information-literate student determines whether the initial query should be revised.

Standard Four
The information-literate student, individually or as a member of a group, uses information effectively to accomplish a specific purpose.
Performance Indicators
4.1 The information-literate student applies new and prior information to the planning and creation of a particular product or performance.
4.2 The information-literate student revises the development process for the product or performance.
4.3 The information-literate student communicates the product or performance effectively to others.

Standard Five
The information-literate student understands many of the economic, legal, and social issues surrounding the use of information and accesses and uses information ethically and legally.
Performance Indicators
5.1 The information-literate student understands many of the ethical, legal, and socio-economic issues surrounding information and information technology.
5.2 The information-literate student follows laws, regulations, institutional policies, and etiquette related to the access and use of information resources.
5.3 The information-literate student acknowledges the use of information sources in communicating the product or performance.

Notes
1. United States Department of Education, A Test of Leadership: Charting the Future of U.S.
Higher Education: A Report of the Commission Appointed by Secretary of Education Margaret Spellings (Washington, D.C.: U.S. Department of Education, 2006) 4, 14–15. 2. Ibid.; Association of College and Research Libraries, “Environmental Scan 2007,” As- sociation of College and Research Libraries, available online at www.ala.org/ala/acrl/acrlpubs/ whitepapers/Environmental_Scan_2.pdf [accessed 11 August 2008]. 3. Robert E. Dugan and Peter Hernon’s article “Outcomes Assessment: Not Synonymous with Inputs and Outputs,” published in the Journal of Academic Librarianship in 2002, had been one of the early calls to librarians involved in information literacy that they must begin using outcomes assessment methods in evaluating their classes and programs. Robert E. Dugan and Peter Her- non, “Outcomes Assessment: Not Synonymous With Inputs and Outputs,” Journal of Academic Librarianship 28, no. 6 (Nov. 2002): 376–80. The American Library Association and the Association of College and Research Libraries became involved in 2007 when they posted documents on their Web sites that addressed assessment in higher education as it relates to information literacy and discussed its importance in learning. The American Library Association and Association of Col- lege and Research Libraries, Instruction Section, “Assessment Issues,” Association of College and Research Libraries, available online at www.ala.org/ala/acrl/acrlissues/acrlinfolit/infolitresources/ infolitassess/assessmentissues.cfm [accessed 6 August 2008]; Association of College and Research Libraries, Instruction Section, “Assessment: We Know We Should Do It But Does It Have To Be So Difficult?” Association of College and Research Libraries, available online at www.ala.org/ala/ acrl/acrlissues/acrlinfolit/infolitresources/infolitassess/assessmentissues.cfm [accessed 8 August 2008]. 4. Pamela A. Jackson, “Plagiarism Instruction Online: Assessing Undergraduate Students’ Ability to Avoid Plagiarism,” College & Research Libraries 67 no. 5 (2006): 418–28. 5. Karl Woodworth and Linda Garr Markwell, “Bored, Yawning Residents Falling Asleep During Orientation? Wake ’Em Up with a Test!” Medical Reference Services Quarterly 24, no. 1 (2005): 77–91. 6. Jonathan Heimke and Brad S. Matthies, “Assessing Freshmen Library Skills and Attitudes before Program Development: One Library’s Experience,” College & Undergraduate Libraries 11, no. 2 (2004): 29–50. 7. James Nichols, Barbara Shaffer, and Karen Shockey, “Changing the Face of Instruction: Is Online or In-class More Effective?” College & Research Libraries 64, no. 5 (2003): 378–88. 8. Elizabeth W. Carter, “Doing the Best You Can with What You Have: Lessons Learned from Outcomes Assessment,” The Journal of Academic Librarianship 28, nos. 1 and 2 (2002): 36–41. 9. Heidi Julien and Stuart Boon, “But Are They Learning Anything? Outcomes Assessment of Information Literacy Instruction,” Canadian Journal of Information and Library Science 26, no. 4 (Dec. 2001): 95–96. 10. Julie Rabine and Catherine Cardwell, “Start Making Sense: Practical Approaches to Out- comes Assessment for Libraries,” Research Strategies 17, no. 4 (2000): 319–35. 11. Carol Anne Germain, Trudi E. Jacobson, and Sue A. Kaczor, “A Comparison of the Effec- tiveness of Presentation Formats for Instruction: Teaching First-Year Students,” College & Research Libraries 61, no. 1 (2000): 65–72. 12. Barbara Bren, Beth Hillemann, and Victoria Topp, “Effectiveness of Hands-on Instruction of Electronic Resources,” Research Strategies 16, no. 1 (1998): 41–51. 
13. Joan Kaplowitz, “A Pre- and Post-Test Evaluation of the English 3-Library Instruction Program at UCLA,” Research Strategies 4 (Winter 1986): 11–17.
14. Lana Ivanitskaya and others, “How Does a Pre-assessment of Off-campus Students’ Information Literacy Affect the Effectiveness of Library Instruction?” Journal of Library Administration 48, nos. 3 and 4 (2008): 509–25.
15. Elizabeth Mulherrin and others, “Information Literacy and the Distant Student: One University’s Experience Developing, Delivering, and Maintaining an Online, Required Information Literacy Course,” Internet Reference Services Quarterly 9, nos. 1 and 2 (2004): 21–36.
16. Donald Gilstrap and Jason Dupree, “Assessing Learning, Critical Reflection, and Quality Educational Outcomes: The Critical Incident Questionnaire,” College & Research Libraries 69, no. 5 (2008): 407–26.
17. Lynn Cameron, Steven L. Wise, and Susan M. Lottridge, “The Development and Validation of the Information Literacy Test,” College & Research Libraries 68, no. 3 (May 2007): 229–36.
18. Elizabeth Choinski and Michelle Emanuel, “The One-minute Paper and the One-hour Class: Outcomes Assessment for One-shot Library Instruction,” Reference Services Review 34, no. 1 (2006): 148–55.
19. Margy MacMillan, “Open Resume: Magic Words for Assessment,” College & Research Libraries News 66, no. 7 (July/Aug. 2005): 516–20.
20. Karen R. Diller and Sue F. Phelps, “Learning-outcomes, Portfolios, and Rubrics, Oh My! Authentic Assessment of an Information Literacy Program,” Portal: Libraries and the Academy 8, no. 1 (Jan. 2008): 75–89, available online at http://muse.jhu.edu/journals/portal_libraries_and_the_academy/toc/pla8.1.html [accessed 12 February 2009].
21. Davida Scharf and others, “Direct Assessment of Information Literacy Using Writing Portfolios,” Journal of Academic Librarianship 33, no. 4 (2007): 462–77.
22. Lorrie A. Knight, “Using Rubrics To Assess Information Literacy,” Reference Services Review 34, no. 1 (2006): 43–55.
23. Each instructor had taken on the responsibilities of personal professional development over the years. This had included regularly searching in bibliographic resources to find and read articles that might serve to improve or keep the course up-to-date. The instructors had frequently discussed their readings at meetings, and several new ideas derived from these readings were used to revise the course.
24. LIBR 1100’s goals were “to provide an introduction to information resources and search strategies,” “to present a procedure for the critical evaluation and use of these resources,” “to promote the effective use of information to accomplish research goals and to encourage life-long learning,” and “to present the collections and services of the Texas Tech University Libraries.”
25. The course had two additional outcome objectives that did not address the ACRL Information Literacy Competency Standards for Higher Education. They were “The students who take both assessment surveys will, when treated as a group with an average score determined for the group, experience at least a 20 percent higher average group score on the post-assessment survey than on the pre-assessment survey” and “Students will appreciate the relationship between the purpose and goals of LIBR 1100 and being a successful student, employee, and life-long learner.”
26. Louise H. Kidder provides insight into the potential threats to the validity of pre- and post-tests. Louise H. Kidder, Selltiz, Wrightsman and Cook’s Research Methods in Social Relations, 4th ed. (New York: Holt, Rinehart and Winston, 1981), 45; Peter Hernon, Robert E. Dugan, and Candy Schwartz, eds., Revisiting Outcomes Assessment in Higher Education (Westport, Conn.: Libraries Unlimited, 2006), 137; Robert M. Diamond, Designing and Assessing Courses and Curricula: A Practical Guide, 3rd ed., Jossey-Bass Higher Education and Adult Education Series (San Francisco: Jossey-Bass, 2008), 163; Paul Black and Dylan Wiliam, “The Reliability of Assessments,” in Assessment and Learning, ed. John Gardner, 119–31 (London: Sage, 2006); James H. McMillan, Classroom Assessment: Principles and Practice for Effective Instruction, 2nd ed. (Boston: Allyn and Bacon, 2001), 56–89.
27. Donald Barclay, “Evaluating Library Instruction: Doing the Best You Can With What You Have,” RQ 33, no. 2 (1993): 197–98, 201.
28. Barbara Gross Davis, Tools for Teaching, 2nd ed., Jossey-Bass Higher and Adult Education Series (San Francisco: Jossey-Bass, 2009), 3–18.
29. Ibid.
30. McMillan, Classroom Assessment, 56–75.
31. Davis, Tools for Teaching, 362–72.
32. Dara H. Wexler and Patricia P. Tinto, “Active Learning Inside and Outside the Classroom: Creating Multiple Learning Spaces with Technology,” in University Teaching: A Reference Guide for Graduate Students and Faculty, 2nd ed., ed. Stacey Lane Tice and others, 57–75 (Syracuse: Syracuse University Press, 2005); James M. Lang, On Course: A Week-by-Week Guide to Your First Semester of College Teaching (Cambridge, Mass.: Harvard University Press, 2008), 43–61.
33. Bette Lasere Erickson, Calvin B. Peters, and Diane Weltner Strommer, Teaching First-Year College Students, rev. and expanded ed. (San Francisco: Jossey-Bass, 2006), 87–100.
34. Ibid.