Evidence Summary
Relevancy Trumps Format When Teaching Information Literacy
A Review of:
Tewell, E. C. (2014). Tying television comedies to information literacy: A mixed-methods investigation. The Journal of Academic Librarianship, 40(2), 134-141. doi:10.1016/j.acalib.2014.02.004
Reviewed by:
Giovanna Badia
Liaison Librarian, Schulich Library of Science and Engineering
McGill University
Montréal, Québec, Canada
Email: giovanna.badia@mcgill.ca
Received: 8 Sep. 2014 Accepted: 18 Nov. 2014
© 2014 Badia.
This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial-Share Alike 4.0 International License (http://creativecommons.org/licenses/by-nc-sa/4.0/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
Abstract
Objective – This study assessed the effects of showing television comedy
clips to demonstrate information literacy concepts when teaching one-shot
instruction sessions. More specifically, it examined whether the students’
retention and understanding increased when television comedy clips were used
and whether students preferred instruction that included popular culture
examples.
Design – A mixed-methods
investigation that employed multiple-choice questionnaires and focus group
interviews.
Setting – A small liberal arts
college in the United States of America.
Subjects – A total of 211 freshmen enrolled in a First-Year Studies course. The students were divided
into 16 class sections. The author collected a total of 193 valid responses to
the pretests and posttests in his study.
Methods – Half of the class sections
(103 respondents) were taught selected information literacy concepts using
television comedy clips and a group discussion led by the instructor. The other
half (90 respondents) were taught using only an instructor-led discussion. The class sections were randomly assigned to either the experimental group (with TV comedy clips) or the control group (without TV comedy clips). An online pretest
questionnaire, consisting of 10 multiple-choice questions, was administered at
the beginning of the 90-minute library instruction session for both groups. An
online posttest questionnaire, consisting of the same questions as the pretest
but in a randomized order, was completed by the students at the end of the session.
About a month later, one-hour focus group interviews were conducted with a small subset of the study’s subjects who volunteered to participate. The experimental focus group consisted of five participants who had attended a library instruction session that included the television comedy clips, and the control focus group consisted of six participants who had attended a session that did not.
Main Results – The experimental group scored higher than the
control group on the posttest with an average “increase of 1.07 points from
pre- to posttest compared to a 0.13 mean increase in the control group” (p.
139), meaning that, on average, the experimental group answered roughly one more question correctly on the posttest. Four of the five participants in the experimental focus group also discussed the television comedy clips even though they were not asked about them. Conversely, when asked what they enjoyed in the class, the
majority of participants from both focus groups discussed the content covered
in the session rather than any teaching methods employed. “The quantitative
results suggest that student test results either increased, as in the
experimental group, or remained relatively level, as in the control group, due
to the type of instruction received” (p. 137).
Conclusion – The author states that the questionnaire results and the focus group responses indicate that using television comedy clips may be a successful way of improving students’
retention of course content. However, the study’s results could not demonstrate
that students liked classes with popular culture examples more than classes
without them, since the majority of focus group participants found the course
content more interesting than the manner in which the content was taught. The
relevancy of the content presented in an information literacy session appears
to make more of an impact on the students than the format in which it is
presented.
Commentary
This well-conducted study supplements the small body of existing literature on the use of popular culture to support information literacy instruction, posing research questions that build on that earlier work.
Glynn’s (2006) EBL Critical Appraisal Checklist was applied to this study, which ranks highly on the checklist’s data collection, study design, and results questions. The author clearly describes the study’s methods and results, allowing readers to interpret the data for themselves and assess the logic of his conclusions. In addition to the description in the methods section,
the appendices contain the questions used in the pretest, posttest, and focus
groups, making it easy for readers to reproduce the study. The only missing
detail is how the groups were randomized. In this reviewer’s opinion, this
article can serve as one possible example of how an assessment study should be
written and conducted.
This study does not
rank as highly for the population section of questions in the critical
appraisal checklist, due to the author’s use of a small “nonrandom convenience
sample” (p. 140). The author acknowledges the sample size as a limitation of the study, since the results cannot be generalized to the entire undergraduate population. He cautions readers to take into account their own student
population and organizational characteristics when deciding how to apply the
study’s findings.
Librarians teaching information literacy sessions will
be able to apply the study’s major finding, “that the fundamental difference
that encourages student learning appears to lay not in the specific format but
in making information literacy more relevant and accessible to students’ lives”
(p. 140), to their own teaching methods. The choice of whether to employ popular culture in information literacy sessions remains up to the instructor, since the experimental group’s scores rose by an average of only 1.07 points from pretest to posttest. While this increase is statistically significant, it may not be a large enough difference to be practically significant, and it may not be enough to convince librarians who are not already incorporating television comedy clips into their information literacy sessions to start doing so.
References
Glynn, L. (2006). A critical appraisal tool for library and information research. Library Hi Tech, 24(3), 387-399. doi:10.1108/07378830610692145