Evidence Summary
Exploring the Disconnect Between Information Literacy Skills and
Self-Estimates of Ability in First-Year Community College Students
A Review of:
Gross, M., & Latham, D. (2012). What’s skill got to do with it?: Information literacy skills and self-views of ability among first-year college students. Journal of the American Society for Information Science and Technology, 63(3), 574-583. doi: 10.1002/asi.21681
Reviewed by:
Heather Coates
Digital Scholarship & Data Management Librarian
IUPUI University Library
Indianapolis, Indiana, United States of America
Email: hcoates@iupui.edu
Received: 27 Feb. 2013 Accepted: 25 May 2013
© 2013 Coates.
This is an Open Access article distributed under the terms of the Creative
Commons-Attribution-Noncommercial-Share Alike License 2.5 Canada (http://creativecommons.org/licenses/by-nc-sa/2.5/ca/),
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly attributed, not used for commercial
purposes, and, if transformed, the resulting work is redistributed under the
same or similar license to this one.
Abstract
Objective – To
explore the relationships between information literacy (IL) test scores and
self-estimated ability both prior to and after completing the test.
Design – Information
Literacy Test (ILT) with pre- and post-test surveys of self-estimated ability.
Setting – Two community colleges: a small institution in a
rural area and a large institution in an urban area.
Subjects – First-year community college students enrolled in
entry-level English courses.
Methods – The authors conducted a replication study of their
earlier work using a larger sample from two community colleges. Information
literacy (IL) skills were assessed using the Information Literacy Test (ILT) developed
and validated by researchers at James Madison University. During the spring and
fall semesters of 2009 and 2011, the authors administered the ILT and the pre- and post-test survey instruments in a single session to 580 participants.
Participants self-selected via a sign-up sheet. The first hundred students to
sign up per enrollment period were scheduled. Participants received incentives
for participation, with an additional incentive offered for scoring in the top
15%.
Main Results – The majority of students at both schools (95% at school 1, 80% at school 2) scored in the below-proficient range on the ILT; the remainder (5% at school 1, 20% at school 2) scored in the proficient range, and no students scored in the advanced range. The mean of the proficient-range scores was closer to the below-proficient range (≤65%) than to the advanced range (≥90%). For students at both schools, significant differences were found between their self-estimated and actual test scores. While students
at both schools adjusted their self-estimated scores downward after completing
the ILT, post-test self-estimates remained significantly inflated in relation
to their test performance. In particular, students scoring in the
below-proficient range demonstrated a large and significant gap. The difference between students’ self-estimated standing relative to their peers and their actual scores was also significant for below-proficient students at both schools. Only the proficient students at school 1 were able to accurately estimate their IL skill level; most students completed the ILT still unaware of their poor performance.
Conclusion – The study revealed a significant disconnect between
students’ perceptions of their information literacy skills and their actual
performance. Students scoring in the proficient range demonstrated a stronger
post-test correction response than students scoring at below-proficient levels.
Generally, the authors find that the results support the Dunning-Kruger effect, the theory that people lacking skills in a particular domain demonstrate a miscalibration between self-estimated and actual skill. Specifically, the study confirms that this effect occurs in the domain of information literacy.
There is a need for tools to diagnose information literacy competence. Because most students are unable to self-assess accurately, competence should not be assumed. Meeting the needs of this population will be challenging, given that these students do not recognize their need for instruction or assistance.
Commentary
Student self-perception of skill level is a relatively
unstudied aspect of information literacy. This expanded replication study
contributes to the body of evidence suggesting that students are entering
college with inadequate information literacy skills. The comparison of actual
performance with students’ self-estimates is a useful contribution to our
understanding of information literacy.
In general, the study was executed well and addressed
the initial research questions. One concern with using the ILT to estimate information literacy is its exclusion of ACRL Standard 4, which addresses the use of information; this omission may result in inaccurate measurement of student skill level. Despite that limitation, the ILT
is generally accepted as a valid tool for assessing information literacy.
The procedures section is generally strong, though some details are missing. A more complete description of the pre- and post-test survey
development, data characterization and screening, and rationale for the
statistical methods used would be helpful in assessing the validity of the
data. Some of these details are provided in the Findings section rather than in
the Methods section. The authors do not state whether an Institutional Review Board reviewed the study or whether informed consent was obtained. Overall,
the results are presented logically and correspond to the initial research
questions. The sample size was sufficient for the reported analyses and level
of precision. The authors clearly state the primary limitations of the study:
1) a non-random sample of self-selected participants at two community colleges;
and 2) a lack of data on students scoring in the proficient range. Thus, the
sample may not be representative of community college or university students in
general.
This study describes an interesting approach to understanding the role of perceived ability in information literacy
instruction. This area of research is exploratory, so immediate implications
for practice are few. We can conclude that librarians and faculty should not
rely on students’ self-reported ability to guide IL instruction. Diagnostic
tools for identifying students with deficient IL skills are necessary so that
appropriate instruction can be provided. Such diagnostic tools should attempt to address all five ACRL standards, particularly Standard 4 (“…uses information effectively…”), which is not assessed by the ILT.
Additional replication studies, carried out at other types of institutions and using random samples of students, are needed. If this disconnect is present in the general student
population, it speaks to the need for integrating information literacy
instruction into program curricula, rather than expecting students to
self-select for optional IL instruction. One approach to engaging students in IL learning opportunities might be a certificate or badge program. Some
institutions provide certification for skills in particular software
applications or programming languages. Libraries could provide certification or
badges for application of information literacy skills to relevant tasks; these
could be included in student portfolios to demonstrate real-world skills.