Research Article

First-Year Students’ Research Challenges: Does Watching Videos on Common Struggles Affect Students’ Research Self-Efficacy?

Savannah L. Kelly
Research & Instruction Librarian
J. D. Williams Library
University of Mississippi
Oxford, Mississippi, United States of America
Email: slkelly@olemiss.edu

Received: 28 July 2017    Accepted: 2 Nov. 2017

© 2017 Kelly. This is an Open Access article distributed under the terms of the Creative Commons‐Attribution‐Noncommercial‐Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.
Abstract

Objective – The purpose of this quantitative study was to measure the impact of providing research struggle videos on first-year students’ research self-efficacy. The three-part video series explicated and briefly addressed common first-year roadblocks related to searching, evaluating, and caring about sources. The null hypothesis tested was that students would have similar research self-efficacy scores, regardless of exposure to the video series.
Methods – The study was a quasi-experimental, nonequivalent control group design. The population included all 22 sections (N = 359) of First-Year Writing affiliated with the FASTrack Learning Community at the University of Mississippi. Of the 22 sections, 12 (N = 212) served as the intervention group exposed to the videos, while the other 10 (N = 147) served as the control group. A research self-efficacy pretest-posttest measure was administered to all students. In addition, all 22 sections, regardless of control or intervention status, received a face-to-face one-shot library instruction session.
Results – As a whole, this study failed to reject the null hypothesis. Students exposed to the research struggle videos reported similar research self-efficacy scores as students who were not exposed to the videos. A significant difference, however, did exist between all students’ pretest and posttest scores, suggesting that something else, possibly the in-person library session, did have an impact on students’ research self-efficacy.
Conclusion – Although students’ research self-efficacy may have increased due to the presence of an in-person library session, this current research was most interested in evaluating the effect of providing supplemental instruction via struggle videos for first-year students. As this was not substantiated, it is recommended that researchers review the findings and limitations of this current study in order to identify more effective approaches to providing instructional support for first-year students’ research struggles.
Introduction
Academic
faculty who teach a three-unit lecture course spend 45 hours per semester with
students in the classroom. This in-person delivery is in addition to the 90
hours of accompanying homework per semester expected of students for a
three-unit lecture course. In contrast, an academic librarian working alongside
that faculty member is allotted approximately one hour of in-person class time
with students during the semester, with no expectation that students will
complete homework in preparation for that period. Librarians are asked to use
the designated hour to introduce students to the breadth and depth of academic
research, often within the context of a particular assignment and with the
expectation that students will engage in active learning (e.g., hands-on
searching and evaluating activities). This type of library instruction is
referred to in the literature as the one-shot.
Although academic librarians have objected to this arrangement, it continues to
be the de facto assumption between classroom faculty and librarians at many
institutions, including ours.
At
the University of Mississippi, each librarian in the Research & Instruction
Department teaches approximately 100 one-shot library instruction sessions
annually. As requests for in-person course instruction continue to increase,
especially within first-year curriculum, librarians struggle to balance faculty
demand while also adequately supporting students’ needs. The perspective among
our librarians is
that
the current one-shot model mocks our best efforts in providing valuable and
impactful pedagogy for undergraduate students. As tenure-track professionals we
understand first-hand the difficulties of confronting irrelevant search
results, evaluating whether an article aligns with one’s research question, and
generating enough energy to follow-through on a difficult topic. Yet as
librarians in the classroom we often set aside the complexities of the research
process due to the inherent limitations of the one-shot. The vast majority of
instructional time, especially with first-year students, is spent establishing
foundational concepts (e.g., What are keywords? What sources exist?) with
limited opportunity to address where students struggle most (e.g., Why is this
search not working? Does this source agree with my research argument? Why
should I care about sources?). Consequently, when students encounter research
struggles—failed searches, roadblocks, or dead ends—they tend to do so on their
own.
Our
department wanted to fully support students who encounter such difficulties,
but it was not feasible to double the workload by asking librarians to address
research struggles via a second in-person session for every first-year course.
Therefore, in order to offer additional instructional support to students,
while also maintaining current levels of in-person instruction, we decided that
video tutorials might serve as a viable option for providing supplemental
instruction aimed at addressing students’ research challenges.
The
majority of one-shot library sessions correspond to first-year and second-year
writing courses, and our department had a positive relationship established
with the teaching faculty in the Department of Writing & Rhetoric.
First-semester, first-year students often have the most difficulty acclimating
to academic research expectations and therefore our department decided to pilot
the research struggle videos in the First-Year Writing (WRIT 101) course. In
order to ensure that all students enrolled in WRIT 101 were first-semester,
first-time students in college, the sample was limited to sections affiliated
with the FASTrack Learning Community. Otherwise, the sample might have been an
amalgamation of first-year students along with juniors or seniors who had
delayed taking WRIT 101.
The
purpose of this applied research was to measure the impact of providing
supplemental video tutorials for first-year students in addition to the
one-shot library session. The video series addressed common research roadblocks
related to searching, evaluating, and caring about sources. To measure the
effectiveness of the videos, a pretest-posttest quasi-experimental research
study with control and intervention groups was designed. This research article
outlines the development, execution, and effectiveness of this approach on
first-year students’ research self-efficacy.
Literature Review
As
early as the mid-90s, academic librarians were creating controlled studies to
compare computer-assisted instruction (CAI) with traditional face-to-face
delivery. In 1998, College & Research
Libraries published a study conducted by UCLA librarians Kaplowitz and
Contini using pretest-posttest data comparing CAI and in-person instruction
with biology students during the 1994-1995 academic year. The authors concluded
that CAI, although time-consuming and expensive, was a worthwhile endeavor and
likely alternative to conducting face-to-face library instruction. This
research was supported a few years later by Germain, Jacobson, and Kaczor
(2000) who found that their web-based instructional model improved students’
library skills as effectively as in-person library instruction. While academic
librarians continued to embrace online methods of instruction, Australian
librarians at Deakin University published an article comparing face-to-face
with standalone and mediated tutorials (Churkovich & Oughtred, 2002). Their findings, in contrast to previous studies, indicated that students’ library skills increased more with in-person instruction.
In 2007, Zhang, Watson, and Banfield conducted a systematic review of all library literature from 1990 to 2005 that compared CAI with face-to-face instruction.
They limited their study designs to rigorous randomized controlled trials,
controlled trials, cohort studies, and case studies that used both pretest and
posttest measures. Of 728 potential studies, only 10 were included in the final
analysis. Even so, Zhang et al. asserted that those ten studies lacked
methodological rigor, notably internal and external validity. Despite that
admonition, research studies comparing face-to-face and online instruction
continued to advance the literature without addressing some of the
methodological concerns pointed out by Zhang et al. Anderson and May (2010)
compared three forms of instruction—in-person, blended, and online—and
concluded from pretest-posttest data that the method did not affect students’
retention of information literacy skills. Shortly thereafter, Archambault
(2011) analyzed student artifacts created from different methods of instruction
and found that students performed better with CAI alone than with combined
in-person and CAI instruction. Continuing the trend, Walton and Hepworth (2013) published a study analyzing U.K. students’ source evaluation comments resulting from three different interventions. Their research shared similarities with both Archambault’s artifact analysis and Anderson and May’s research design. However, Walton and Hepworth’s quantitative study diverged from Anderson and May’s (N = 103) in that their sample size was considerably smaller (N = 35). The following year, Hess (2014) compared in-person, online, and combined instruction with upper-division sociology students, with a sample size (N = 36) comparable to that of the Walton and Hepworth study. Controlled studies published in the
library literature during the past few years have had significantly fewer
participants than some of the studies evaluated in Zhang et al.’s systematic
review, including Kaplowitz and Contini (1998) with 423 students and Germain et
al. (2000) with 303 students. These smaller sample sizes are of concern,
especially when comparing multiple interventions, as their size often affects
the study’s power to detect statistical significance as well as external
validity. Most recently in the literature, Bordignon et al. (2016) conducted a
controlled study comparing students’ information literacy skills in response to
online IL learning objects and face-to-face workshops. Their sample (N = 110) comprised 75 students during the spring semester and 35 students the subsequent fall term. Results indicated that statistically significant differences existed between participants’ pre- and post-responses in relation to finding articles.
The
aforementioned studies compared CAI or online methods with face-to-face instruction.
In each of these cases, the dependent variables were skill-based outcomes
(Anderson & May, 2010; Bordignon et al., 2016; Germain et al., 2000; Hess,
2014) or a combination of skill-based and affective measures (Churkovich &
Oughtred, 2002; Kaplowitz & Contini, 1998). Other studies have more
thoroughly investigated students’ affective approaches, though not within the
context of comparing online and in-person interventions. Kracker’s (2002)
pretest-posttest mixed methods study measured the “awareness of the affective
aspects of the research process” (p. 284), as well as anxiety and satisfaction
between students who were presented with information on Kuhlthau’s Information
Search Process (ISP), and those who were not. Kracker and Wang (2002) then used
qualitative data from the same study to categorize students’ research
experiences into three affective dimensions: emotional states, perceptions of
the process, and affinity to research. Such empirical studies helped set the
stage for addressing students’ affective approaches to research.
In
recent years, librarians have argued that standards of information literacy are
incomplete without particular attention to students’ affective dimensions
(Fourie & Julien, 2013; Schroeder & Cahoy, 2010). These concerns were
publically addressed when the Association and College and Research Libraries
(ACRL) created, revised, and officially adopted the Framework for Information Literacy for Higher Education (2015). The
Framework addressed both cognitive
(Knowledge Practices) and affective (Dispositions) engagement within the
context of information literacy.
It
is of interest that librarians continue to explore the interactions between
information literacy and students’ affective dimensions. This current empirical
study measured the impact of research struggle videos on students’ research
self-efficacy. Bandura’s (1977; 1982; 1984; 1986; 1997) foundational and
prolific work on self-efficacy undergirds this study, as well as almost all
studies (Kurbanoglu, 2003; Mi & Riley-Doucet, 2016) investigating people’s
“beliefs in [their] capabilities to organize and execute the courses of action
required to produce given attainments” (Bandura, 1997, p. 3). The construct of
self-efficacy is closely related to, although not the same as, measures of
self-confidence. Both self-confidence and self-efficacy scales measure
confidence levels, but only a confidence scale assumes that an action has taken
place (Stankov, 2013). Perceived self-efficacy, on the other hand, is “not a
measure of the skills one has but a belief
about what one can do under different sets of conditions with whatever skills
one possesses” (Bandura, 1997, p. 37). Such self-efficacy beliefs “affect
thought processes, the level and persistency of motivation, and affective
states...People who have strong beliefs in their capabilities approach
difficult tasks as
challenges
to be mastered rather than as threats to be avoided” (Bandura, 1997, p. 39).
There
is no single instrument for measuring self-efficacy (Bandura, 2006); this
construct is notably contextual and framed in relation to particular domains of
functioning (Byrne, Flood, & Griffin, 2014; Kurbanoglu, 2003). In the
current study, research self-efficacy
was operationalized as first-year students’ academic research skills in
relation to searching, evaluating, and caring about sources. According to
Bandura (1997), levels of self-efficacy are affected by four inputs: mastery
experiences, vicarious experiences, verbal persuasions, and psychological and
affective states. The most important, mastery experiences, reflect an
individual’s prior successes or failures in relation to a particular task. Most
first-semester, first-year students lack successful mastery experiences in
relation to academic research skills. Vicarious experiences, on the other hand,
are those successes or failures as modeled by someone else. In this current
study, video tutorials were employed as a form of modeling successful
approaches in overcoming common research struggles in relation to searching, evaluating,
and caring about sources.
As
a whole, quasi-experimental studies are largely underrepresented within the
library and information science field. This study complements, but does not
replicate, the extant literature comparing online and face-to-face
interventions. Rather than examining two different formats, this study sought
to measure the impact of providing supplemental video tutorials in addition to,
but not in lieu of, in-person library instruction. This study also diverges
from the literature in that these particular videos directly addressed common
research struggles with the expectation that modeling these experiences via
video tutorials would have a positive impact on students’ confidence and
reported research self-efficacy.
Aims
The
purpose of this study was to examine the impact of providing research struggle
videos to first-year students enrolled in First-Year Writing (WRIT 101)
courses. Specifically, does watching research struggle videos prior to
in-person library instruction affect students’ research self-efficacy? The null
hypothesis tested was that first-year students enrolled in FASTrack WRIT 101
courses who watched research struggle videos prior to in-person library
instruction would report the same levels of research self-efficacy as students
who did not watch the videos prior to in-person library instruction.
Methods
This
research study was a nonequivalent control group design (Design 10: Campbell
& Stanley, 1966). The quasi-experimental approach diverges from a
traditional experimental pretest-posttest control group design due to the
inability to randomize individual participants. Although random assignment was
used to determine which class sections would serve as control and intervention
groups, random sampling was not possible given that individuals signed up for
sections based on course schedule preferences. Twenty-two FASTrack First-Year
Writing sections were offered during fall 2016 (N = 359); of those, 10 sections
served as the control group (N = 147), while the other 12 sections served as
the intervention group (N = 212). Eleven sections were originally scheduled for each group; however, one of the control sections was unintentionally designated as intervention and treated accordingly. The control and intervention groups
were randomly assigned among the seven FASTrack writing instructors after
ensuring that each faculty member had at least one intervention group during
the semester. An open channel of communication was established between the
author and writing instructors regarding the development of this project to
ensure buy-in and full collaboration prior to submitting the proposal to the
Institutional Review Board (IRB) in August 2016.
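The constrained random assignment described above, randomly splitting sections into conditions while guaranteeing each instructor at least one intervention section, can be illustrated with a short sketch. This is a minimal, hypothetical reconstruction rather than the procedure actually used in the study; the section identifiers, instructor mapping, random seed, and the assign_groups helper are invented for illustration.

```python
import random

# Hypothetical roster: 22 WRIT 101 sections taught by 7 instructors. The real
# FASTrack section/instructor mapping is not published in the article.
sections = {f"WRIT101-{i:02d}": f"Instructor {(i % 7) + 1}" for i in range(1, 23)}

def assign_groups(section_to_instructor, n_intervention=11, seed=2016):
    """Randomly split sections into control and intervention groups while
    ensuring every instructor teaches at least one intervention section."""
    rng = random.Random(seed)
    section_ids = sorted(section_to_instructor)
    all_instructors = set(section_to_instructor.values())
    while True:  # rejection sampling: redraw until the constraint is satisfied
        intervention = set(rng.sample(section_ids, n_intervention))
        if {section_to_instructor[s] for s in intervention} == all_instructors:
            control = set(section_ids) - intervention
            return control, intervention

control, intervention = assign_groups(sections)
print(f"{len(control)} control sections, {len(intervention)} intervention sections")
```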
The
struggle video tutorials were created to explicate and briefly address common
first-year students’ research struggles as they related to searching, evaluating,
and caring about sources. The content was developed based on the author’s 10
years of academic library experience working with first-year students. The
first video established a research claim and demonstrated ways to search
iteratively when encountering poor results; the second video evaluated an
academic source that both agreed and disagreed with the hypothetical research
claim; and the final video discussed the value of expending time and energy in
caring about one’s sources. The purpose of these video tutorials was to provide
a vicarious experience (i.e., modeling) through which students would learn how
to overcome common research struggles. This modeling via video tutorials was
intended to increase students’ confidence in their academic research skills
(i.e., their research self-efficacy).
The
length, pacing, and approach of each video followed recommendations set forth
by van der Meij and van der Meij (2013) for instructional content. After
writing scripts, the videos were recorded via ScreenFlow, and a personalized
introductory video was added using a green screen in the library’s recording
studio. The final video series would take students approximately seven minutes
to watch. Videos were played consecutively in the following order:
1. Introduction to Videos, 0:35
2. The Struggle with Searching, 2:36
3. The Struggle with Evaluating, 2:15
4. The Struggle to Care, 1:11
In
this study, research self-efficacy was operationalized as first-year students’
academic research skills—searching, evaluating, and caring about sources. After
the development of the video series, the self-efficacy scale was created to
measure students’ confidence with respect to the domains of functioning that
were addressed in the video series. This alignment between the independent and
dependent variables sought to capture the impact of the struggle videos on
students’ research self-efficacy.
The
construction of the research self-efficacy scale followed examples Bandura set
forth in his chapter entitled “Guide for Constructing Self-Efficacy Scales” (2006). Examples used three anchors
(cannot do at all, moderately can do, highly certain can do) with a confidence
range of 0-100, coupled with imperative statements (e.g., Stop yourself from
worrying about things; Get students to work well together). In order to
replicate Bandura’s recommended structure, the same anchors and confidence
range were employed, while the imperative statements were revised to reflect
academic research skills for first-year students. Several items on the scale
were designed to present “types of dissuading conditions” (Bandura, 2006, p.
311) that first-year students would encounter (i.e., research struggles) when
locating and evaluating sources. These “gradations of challenge” (Bandura,
2006, p. 311) are intrinsic to measurements of self-efficacy and notably
contingent on context.
In
the current study, it was not feasible to establish construct validity through
rigorous factor analysis prior to administration. However, content validity was
addressed by ensuring that the scale represented all three components of
research self-efficacy as operationalized in this study as the ability to
search, evaluate, and care about sources. In addition, face validity was
established among undergraduates after piloting the survey with eight
lower-division students who participated in cognitive interviews while
responding to the scale. The current scale went through two revisions based on
student feedback prior to administration. Both the pretest (α = .827) and posttest (α = .869) scores in this current study indicated strong internal consistency reliability (Cronbach’s alpha). The
full scale as provided to the students is included in Appendix A.
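Cronbach’s alpha, reported above for the pretest (.827) and posttest (.869), can be computed directly from an item-level response matrix. The sketch below uses simulated ratings because the study’s raw data are not published; the cronbach_alpha helper and the simulated values are illustrative only.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated 0-100 confidence ratings for ten scale items (not real study data):
# a shared respondent effect plus item-specific noise yields correlated items.
rng = np.random.default_rng(0)
ability = rng.normal(75, 15, size=(359, 1))
noise = rng.normal(0, 10, size=(359, 10))
responses = np.clip(ability + noise, 0, 100)
print(round(cronbach_alpha(responses), 3))
```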
Table 1
Means and Standard Deviations for Pretest and Posttest Responses by Group

| Variable | Control Pretest (N = 147) M (SD) | Control Posttest (N = 126) M (SD) | Intervention Pretest (N = 212) M (SD) | Intervention Posttest (N = 187) M (SD) |
| Use Google.com | 88.53 (15.64) | 90.44 (14.48) | 86.44 (18.99) | 86.45 (18.95) |
| Use UM Library website | 61.39 (29.03) | 76.55 (22.92) | 60.30 (27.40) | 77.41 (21.38) |
| Adjust search terms | 82.16 (17.81) | 84.95 (15.89) | 76.04 (22.84) | 79.96 (19.72) |
| Evaluate agree | 83.34 (17.12) | 84.28 (17.25) | 81.33 (18.31) | 85.29 (15.81) |
| Evaluate disagree | 82.84 (18.35) | 85.24 (15.80) | 79.58 (20.01) | 84.59 (16.92) |
| Continue looking | 76.76 (21.04) | 80.87 (19.28) | 72.49 (21.49) | 78.04 (20.22) |
| Care about quality | 80.81 (18.85) | 83.21 (17.18) | 77.21 (21.09) | 80.67 (17.04) |
| Keep from being frustrated | 55.48 (27.21) | 65.92 (24.44) | 55.26 (26.19) | 63.94 (26.40) |
| Not care about topic | 68.50 (24.74) | 74.70 (21.80) | 63.81 (24.82) | 71.14 (22.83) |
| Care due tomorrow | 76.17 (25.70) | 81.19 (22.01) | 75.81 (24.80) | 80.66 (20.26) |
Once
the struggle videos and research self-efficacy scale were created, it was
essential to coordinate the exact timing of these variables across all 22 WRIT
101 sections. To ensure as much internal validity as possible and to offset
potential intervening variables, the researcher provided a regimented timeline
to all seven writing instructors. One-shots were scheduled for all 22 classes
during a two-week period in October. Faculty administered the research
self-efficacy scale (pretest) on paper to their sections during class the day
prior to each section’s scheduled one-shot. The 12 sections designated as the
intervention group then watched the research struggle videos collectively
during class immediately after taking
the pretest measure. The following class period, regardless of control or
intervention status, students participated in an active, one-shot session. The
faculty then administered the research self-efficacy scale again (posttest) to
all sections the same day students turned in their research assignment,
approximately two weeks after the one-shot. Students who were absent during the
pretest measure were asked by their writing instructors to refrain from taking
the posttest measure. The difference between the control and intervention
groups was the presence of the video series. Everything else, including the
teaching librarian and the content of the one-shot, was kept the same for all
22 class sections.
Results
Measures
of central tendency and variability are provided in Table 1 for both the
control and intervention groups’ pretest and posttest scores. All students’
initial pretest scores were higher than the researcher anticipated, with
several means in the mid-80s. Although it is possible that these reported
levels were a reflection of the anchors adopted during the creation of the
scale (i.e., would responses have differed if the anchor near 90-100 range
stated “absolutely certain can do” rather than “highly certain can do”?), the
responses are more likely a manifestation of the Dunning-Kruger effect as noted
throughout the literature on self-assessments (Guillory & Blankson, 2017;
Kruger & Dunning, 1999; Miller & Geraci, 2011). However high the
pretest means, it was still possible for upward movement in the posttest
measure.
In
the pretest for both the control (M = 88.53) and intervention (M = 86.44)
groups, students were most confident in their ability to use Google to locate sources, and solidly
confident that they could evaluate whether a source agrees (M = 83.34, M =
81.33) or disagrees (M = 82.84, M = 79.58) with a research argument. Both
groups reported the least confidence that they could keep from being frustrated
when unable to locate relevant sources on a topic (M = 55.48, M = 55.26), and
using the UM Library’s website to locate relevant sources for WRIT 101 class
assignments (M = 61.39, M = 60.30). The two aforementioned items also
represented the largest variability of responses in the pretest (i.e., standard
deviations were much higher; frustrated: SD = 27.21, SD = 26.19; library: SD =
29.03, SD = 27.40). In other words, although the means were
lower, students overall reported a wider range of confidence for these two
measures. A final observation from the pretest data was that students in
the control group had slightly higher means than students in the intervention
group on several items, most likely due to differences in sampling. An
independent samples t-test between
the control and intervention groups on the pretest responses indicated no
significant differences existed between the groups, except for one variable:
Adjust your search terms if the results from a search are not relevant or
useful. Here, the control and intervention groups were significantly different
at the outset, t(352.084)
= 2.848, p = .005, equal variances
not assumed.
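The baseline-equivalence check reported above, an independent samples t-test with equal variances not assumed (Welch’s test), corresponds to the following sketch. The data are simulated from the published group means and standard deviations for the “adjust search terms” item, not the raw responses, so the output will only approximate the reported t(352.084) = 2.848, p = .005.

```python
import numpy as np
from scipy import stats

# Simulated pretest ratings for "Adjust search terms", drawn from the published
# group means and SDs (Table 1); the study's raw item responses are not available.
rng = np.random.default_rng(1)
control_pre = np.clip(rng.normal(82.16, 17.81, size=147), 0, 100)
intervention_pre = np.clip(rng.normal(76.04, 22.84, size=212), 0, 100)

# Welch's t-test: equal_var=False mirrors the "equal variances not assumed" analysis.
t_stat, p_value = stats.ttest_ind(control_pre, intervention_pre, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```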
Table 2
Independent Samples t-Test between Control and Intervention Posttest Responses

| Variable | t | df | Sig. (2-tailed) | Cohen’s d |
| Use Google.com | 2.11 | 307.93^ | .035* | .24 (small) |
| Use UM Library website | -.30 | 312 | .764 | .03 |
| Adjust search terms | 2.49 | 303.32^ | .013* | .28 (small) |
| Evaluate agree | -.47 | 312 | .636 | .05 |
| Evaluate disagree | .40 | 312 | .686 | .05 |
| Continue looking | 1.27 | 312 | .205 | .15 |
| Care about quality | 1.28 | 312 | .201 | .15 |
| Keep from being frustrated | .71 | 312 | .477 | .08 |
| Not care about topic | 1.40 | 312 | .163 | .16 |
| Care due tomorrow | .22 | 311 | .826 | .03 |
| Total score (Σ items 1-10) | 1.24 | 311 | .217 | .14 |

Note: * p < .05. ^ Equal variances not assumed.
It
is clear from Table 1 that students’ responses in both groups increased from
the pretest to posttest measure, but it is not clear whether the control and
intervention groups’ posttest responses were markedly different from one
another. In order to test the null hypothesis of no difference between the
control and intervention groups, an independent t-test (α = .05, two-tailed) was computed using the posttest scores. A paired t-test was not possible because students completed the pretest and posttest measures anonymously. The independent samples t-test between the control and intervention posttest responses is
provided in Table 2. As a whole, the study failed to reject the null hypothesis
(p > .05). The total score between
students who watched the videos and students who did not watch the videos was
not significant, t(311) = 1.24, p = .217, d = .14.
However, a statistically significant difference did exist between the groups on
two independent scale items: using Google, t(307.93) = 2.11, p = .035, d = .24, and adjusting search terms, t(303.32) = 2.49, p = .013, d = .28. The
difference between the control (M = 90.44) and intervention (M = 86.45) groups
regarding Google was unexpected given that the struggle videos did not
discourage viewers from using Google. As evident in Table 1, while students in the
control group reported increased confidence in their ability to use Google
between the pretest and the posttest (M = 88.53, M = 90.44), those in the
intervention group remained stable across both measures (M = 86.44, M = 86.45).
The second item—adjusting search terms—in which the control and intervention
groups were significantly different on the posttest, is difficult to interpret
without acknowledging that the control and intervention groups were at the
outset significantly different on this item in the pretest.
As
evinced in Table 1, students’ research self-efficacy levels increased from the
pretest to the posttest in all measures, regardless of control or intervention
groups (total scores: pretest, N = 359, M = 739.64, SD = 140.67; posttest, N =
313, M = 795.88, SD = 134.57). Thus a final independent t-test was computed to determine if a statistically significant
difference existed between all groups’ pretest and posttest responses. As
provided in Table 3, nine of the ten items, as well as the total score, were
statistically significant (p <
.05). This significance, however, should be considered alongside the
corresponding effect sizes (Cohen’s d),
which ranged from negligible to small to moderate.
When working with large sample sizes, such as in this current study (N = 359), it is often the effect sizes, rather than the presence of statistical significance, that convey the true magnitude of the difference between groups.
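Cohen’s d, the effect size reported alongside each test in Tables 2 and 3, is the standardized mean difference computed with the pooled standard deviation. The sketch below is illustrative only; the simulated totals merely approximate the published pretest and posttest summary statistics, and the cohens_d helper is not part of the original analysis.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Simulated total scores approximating the published summaries:
# pretest M = 739.64, SD = 140.67 (N = 359); posttest M = 795.88, SD = 134.57 (N = 313).
rng = np.random.default_rng(2)
pretest_totals = rng.normal(739.64, 140.67, size=359)
posttest_totals = rng.normal(795.88, 134.57, size=313)
print(round(abs(cohens_d(pretest_totals, posttest_totals)), 2))  # roughly .41
```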
Table 3
Independent Samples t-Test between All Groups’ Pretest and Posttest Responses

| Variable | t | df | Sig. (2-tailed) | Cohen’s d |
| Use Google.com | -.57 | 671 | .572 | .04 |
| Use UM Library website | -8.47 | 663.02^ | .001* | .65 (moderate) |
| Adjust search terms | -2.26 | 670.99^ | .024* | .17 |
| Evaluate agree | -2.10 | 671 | .037* | .16 |
| Evaluate disagree | -2.85 | 671 | .005* | .22 (small) |
| Continue looking | -3.13 | 668.63^ | .002* | .24 (small) |
| Care about quality | -2.09 | 670.15^ | .037* | .16 |
| Keep from being frustrated | -4.67 | 671 | .001* | .36 (small) |
| Not care about topic | -3.76 | 670.35^ | .001* | .29 (small) |
| Care due tomorrow | -2.76 | 668.68^ | .006* | .21 (small) |
| Total score (Σ items 1-10) | -5.28 | 670 | .001* | .41 (small) |

Note: * p < .05. ^ Equal variances not assumed.
Discussion
This
current research tested the null hypothesis that first-year students enrolled
in FASTrack WRIT 101 courses who watched research struggle videos prior to
in-person library instruction would report the same levels of research
self-efficacy as students who did not watch the videos prior to in-person
library instruction. As indicated in the results, the researcher failed to
reject the null hypothesis. The probability was greater than .05 that the
observed difference between means in the total score would have occurred by
chance if the null hypothesis were true. Although it is not uncommon for
controlled studies to yield nonsignificant results after comparing instructional approaches (Germain et al., 2000; Hess, 2014; Kaplowitz & Contini, 1998; Yong, Levy, & Lape, 2015), such outcomes should always be considered alongside effect sizes (e.g., Walton & Hepworth, 2013), as well as within
the larger research framework. In this current study, the results can be
evaluated within the specific context of a quasi-experimental design: namely,
did the approach itself limit the impact of the video series on students’
research self-efficacy?
In
order to preserve the integrity of the intervention, it was critical that
students in the control group were not exposed to the videos. Therefore, the videos were not posted to YouTube or the Learning Management System (LMS), nor emailed directly to the intervention group. The only way to maintain complete control over which students were exposed to the videos was to have the faculty member play the videos during class time. This, however, was not ideal, and did not allow students the opportunity to engage individually with the videos. Students could not adjust the speed of the videos or view them again outside of that class period. Although the videos followed best practices in terms of
pacing, content, and “look and feel” (Bowles-Terry, Hensley, & Hinchliffe,
2010, p. 26), there was limited authentic engagement between students and the
video tutorials in the classroom. In addition, it is recommended that future
research studies employ undergraduate students, rather than librarians, to
serve as narrators in the video tutorials. This important distinction is based
on the theoretical consideration that vicarious experiences are most effective
when the model is similar to, rather than different from, the viewer (Bandura,
1997).
The
timing of the video series in this current study is worth reconsideration. The
intent behind providing video tutorials prior to the one-shot was that students
who viewed the videos would be aware of potential roadblocks before they
attended the one-shot. However, this approach most likely provided the video
tutorials too early during the semester, during a time in which students had
little to no context for understanding academic roadblocks. The videos may have
also affected the level of engagement during the one-shot (i.e., did the videos prime students to pay attention, or dissuade them from doing so?). A more effective approach
might have been to expose the intervention group to the videos after students
gained hands-on academic experience during the one-shot library session.
A
third limitation was the creation and administration of the research
self-efficacy scale. Although it is relatively common within library and
information science literature (Mahmood, 2017) to develop in-house
self-assessment instruments, attempts to establish psychometric properties
should be made prior to administration. Although reliability and content and
face validity were established for this current scale, it is highly recommended
that future studies establish the instrument’s construct validity prior to
administration with additional populations. A related consideration was the
timing of the research self-efficacy scale. As noted previously, students
reported surprisingly high levels of confidence during the pretest. This
phenomenon was most likely a manifestation of the Dunning-Kruger effect, which
is essentially that “the skills that engender competence in a particular domain
are often the very same skills necessary to evaluate competence in that domain”
(Kruger & Dunning, 1999, p. 1121). Thus, first-year students’ lack of
experience with academic research skills also made them unable to accurately
assess their own competence in that domain. This overconfidence effect is most
prominent among low-performers (Guillory & Blankson, 2017; Kruger &
Dunning, 1999; Miller & Geraci, 2011), although low-performing students who
overestimate their abilities also have less confidence in reported
self-assessments than high-performing students (Guillory & Blankson, 2017;
Miller & Geraci, 2011). On the other hand, it is also important to
recognize that students’ posttest scores were higher than their initial pretest
scores. This is an interesting observation given that self-assessments,
including academic self-efficacy beliefs, tend to be more accurate when
administered at the end of the semester rather than the beginning (Gore, 2006;
Guillory & Blankson, 2017). In this particular study, it is possible that
the posttest measure was a more reliable instrument of students’ research
self-efficacy beliefs since it was administered later in the semester than the
pretest, and after students had the opportunity to engage in academic research.
Although
valuable to recognize the potential limitations of the current research design,
it is equally sensible to acknowledge the outcome of this study: the struggle
videos did not have a significant effect on students’ research self-efficacy.
Notwithstanding that, something did influence students’ research self-efficacy
as evidenced in the upward trend between the pretest and posttest results
(Tables 1 and 3). Given the effect sizes and that a statistically significant
difference was observed across all students, regardless of control or
intervention group, it is likely that all participants were exposed to the same
experience. A plausible explanation, although not necessarily the only one
(i.e., maturation), is the impact of the one-shot instruction session that all
students received. It is likely that the in-person instruction, not the
presence or absence of the struggle videos, affected students’ research
self-efficacy during the semester.
Conclusion
The
purpose of this research was to measure the impact of providing supplemental
instructional content via struggle videos for first-year students. The outcome
was that students who were exposed to the video series reported similar
research self-efficacy as students who were not exposed to the video series.
Although students’ research self-efficacy scores increased overall from the
pretest to the posttest, this current study was not designed to investigate the impact of the in-person library instruction.
The
quasi-experimental approach, although rigorous, presented particular
challenges, especially in providing an authentic environment for students to
engage with the video tutorials. It is recommended that subsequent research
examine the impact of providing self-paced, or interactive, struggle videos
outside of the classroom environment. It is also important to recognize that
the self-efficacy scale used in this study was created in-house. Future
researchers are encouraged to evaluate the scale’s construct validity prior to
additional administrations. The limitations of this current study have been
clearly delineated in the discussion for the benefit of researchers who, like
our instruction librarians, are interested in more fully supporting students’
research struggles.
References
Anderson, K.
& May, F. A. (2010). Does the method of instruction matter? An experimental
examination of information literacy instruction in the online, blended, and
face-to-face classrooms. Journal of Academic Librarianship, 36(6),
495-500. https://doi.org/10.1016/j.acalib.2010.08.005
Archambault, S. G. (2011). Library instruction for freshman English: A multi-year assessment of
student learning. Evidence Based Library and Information Practice, 6(4),
88-106. https://doi.org/10.18438/B8Q04S
Association of
College and Research Libraries. (2015). Framework for information literacy for
higher education. Retrieved from http://www.ala.org/acrl/standards/ilframework
Bandura, A. (1977).
Self-efficacy: Toward a unifying theory of behavioral change. Psychological
Review, 84(2), 191-215. https://doi.org/10.1037/0033-295X.84.2.191
Bandura, A.
(1982). Self-efficacy mechanism in human agency. American Psychologist, 37(2),
122-147. https://doi.org/10.1037/0003-066X.37.2.122
Bandura, A.
(1984). Recycling misconceptions of perceived self-efficacy. Cognitive
Therapy and Research, 8(3), 231-255. https://doi.org/10.1007/BF01172995
Bandura, A.
(1986). The explanatory and predictive scope of self-efficacy theory. Journal
of Social and Clinical Psychology, 4(3), 359-373. https://doi.org/10.1521/jscp.1986.4.3.359
Bandura, A.
(1997). Self-efficacy: The exercise of
control. New York: Freeman.
Bandura, A.
(2006). Guide for constructing self-efficacy scales. In F. Pajares & T. C.
Urdan (Eds.), Self-efficacy beliefs of
adolescents (pp. 307-337). Greenwich, CT: Information Age.
Bordignon, M.,
Strachan, G., Peters, J., Muller, J., Otis, A., Georgievski, A., & Tamim,
R. (2016). Assessment of online information literacy learning objects for first
year community college English composition. Evidence Based Library and
Information Practice, 11(3), 50-67. https://doi.org/10.18438/B8T922
Bowles-Terry,
M., Hensley, M. K., & Hinchliffe, L. J. (2010). Best practices for online
video tutorials in academic libraries. Communications in Information
Literacy, 4(1), 17-28. Retrieved from http://www.comminfolit.org/index.php?journal=cil&page=article&op=view&path%5B%5D=Vol4-2010AR1
Byrne,
M., Flood, B., & Griffin, J. (2014). Measuring the academic
self-efficacy of first-year accounting students. Accounting Education, 23(5),
407-423. https://doi.org/10.1080/09639284.2014.931240
Campbell, D. T. & Stanley, J. C. (1966). Experimental
and quasi-experimental designs for research. Boston: Houghton Mifflin.
Churkovich, M.
& Oughtred, C. (2002). Can an online tutorial pass the test for library
instruction? An evaluation and comparison of library skills instruction methods
for first year students at Deakin University. Australian Academic & Research Libraries, 33(1), 25-38.
Retrieved from http://dro.deakin.edu.au/view/DU:30001851
Germain, C.
A., Jacobson, T. E., & Kaczor, S. A. (2000). A comparison of the
effectiveness of presentation formats for instruction: Teaching first-year
students. College & Research
Libraries, 61(1), 65-72. https://doi.org/10.5860/crl.61.1.65
Gore, P. A.
(2006). Academic self-efficacy as a predictor of college outcomes: Two
incremental validity studies. Journal of Career Assessment, 14(1),
92-115. https://doi.org/10.1177/1069072705281367
Guillory, J.
J., & Blankson, A. N. (2017). Using recently acquired knowledge to
self-assess understanding in the classroom. Scholarship of Teaching and
Learning in Psychology, 3(2), 77-89. https://doi.org/10.1037/stl0000079
Hess, A. N.
(2014). Online and face-to-face library instruction: Assessing the impact on
upper-level sociology undergraduates. Behavioral & Social Sciences
Librarian, 33(3), 132-147. https://doi.org/10.1080/01639269.2014.934122
Kaplowitz, J.
& Contini, J. (1998). Computer-assisted instruction: Is it an option for
bibliographic instruction in large undergraduate survey classes? College & Research Libraries, 59(1),
19-27. https://doi.org/10.5860/crl.59.1.19
Kracker, J.
(2002). Research anxiety and students’ perceptions of research: An experiment.
Part I. Effect of teaching Kuhlthau’s ISP model. Journal of the American Society for Information Science and Technology, 53(4), 282-294.
Kracker, J.
& Wang, P. (2002). Research anxiety and students’ perceptions of research:
An experiment. Part II. Content analysis of their writings on two experiences. Journal
of the American Society for Information Science and Technology, 53(4),
295-307.
Kruger, J.
& Dunning, D. (1999). Unskilled and unaware of it: How difficulties in
recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social
Psychology, 77(6), 1121-1134. https://doi.org/10.1037/0022-3514.77.6.1121
Kurbanoglu, S.
(2003). Self‐efficacy: A concept closely linked to information literacy and
lifelong learning. Journal of Documentation, 59(6), 635-646. https://doi.org/10.1108/00220410310506295
Mahmood, K.
(2017). Reliability and validity of self-efficacy scales assessing students’
information literacy skills: A systematic review. The Electronic Library,
35(5), 1035-1051. https://doi.org/10.1108/EL-03-2016-0056
Mi, M., & Riley-Doucet, C. (2016). Health professions students’ lifelong
learning orientation: Associations with information skills and self efficacy. Evidence
Based Library and Information Practice, 11(2), 121-135. Retrieved
from https://journals.library.ualberta.ca/eblip/index.php/EBLIP/article/view/26088/20416
Miller, T. M. & Geraci, L. (2011). Unskilled but aware: Reinterpreting
overconfidence in low-performing students. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 37(2), 502-506. https://doi.org/10.1037/a0021802
Schroeder, R. & Cahoy, E. S. (2010). Valuing information literacy:
Affective learning and the ACRL standards. Portal: Libraries and the Academy,
10(2), 127-146. https://doi.org/10.1353/pla.0.0096
van der Meij, H. & van der Meij, J. (2013). Eight
guidelines for the design of instructional videos for software training. Technical Communication, 60(4),
205-228.
Walton, G.
& Hepworth, M. (2013). Using assignment data to analyse a blended
information literacy intervention: A quantitative approach. Journal of
Librarianship and Information Science, 45(1), 53-63. https://doi.org/10.1177/0961000611434999
Yong, D.,
Levy, R., & Lape, N. (2015). Why no difference? A controlled flipped
classroom study for an introductory differential equations course. PRIMUS,
25(9-10), 907-921.
Zhang, L.,
Watson, E. M., & Banfield, L. (2007). The efficacy of computer-assisted
instruction versus face-to-face instruction in academic libraries: A systematic
review. The Journal of Academic Librarianship, 33(4), 478-484. https://doi.org/10.1016/j.acalib.2007.03.006
Appendix A
Pretest-Posttest Research Self-Efficacy Scale

This scale is designed to help us get a better understanding of the kinds of things that are difficult for students. Please rate how certain you are that you can do each of the things described below. Consider only what you think you can do at this time (not at some point in the future). Your answers are anonymous and confidential.

By continuing with this scale, you agree that you are 18 years of age or older.

Rate your degree of confidence by recording a number from 0 to 100 using the scale given below:

0   10   20   30   40   50   60   70   80   90   100
0 = Cannot do at all; 50 = Moderately can do; 100 = Highly certain can do

Confidence (0-100)
1. Use Google.com to locate relevant sources for WRIT 101 class assignments ______
2. Use the UM Library’s website to locate relevant sources for WRIT 101 class assignments ______
3. Adjust your search terms if the results from a search are not relevant or useful ______
4. Evaluate whether a source agrees with your research argument ______
5. Evaluate whether a source disagrees with your research argument ______
6. Get yourself to continue looking for relevant sources when you can’t seem to find what you need ______
7. Get yourself to care about the quality of sources you use ______
8. Keep from being frustrated when you can’t find any sources related to your topic ______
9. Get yourself to care about locating sources when you do not care about your assignment topic ______
10. Get yourself to care about source quality when your assignment is due tomorrow ______

Today’s Date _______ & Time _________