Evidence Summary
Quality of Student Paper Sources Improves after Individual Consultation with Librarians
A Review of:
Reinsfelder, T. L. (2012). Citation analysis as a tool to measure the impact of individual research consultations. College & Research Libraries, 73(3), 263-277. Retrieved from http://crl.acrl.org/content/73/3/263.abstract
Reviewed by:
Laura Newton Miller
Collections Assessment Librarian
Carleton University
Ottawa, Ontario, Canada
Email: laura_newtonmiller@carleton.ca
Received: 27 Nov. 2012 Accepted: 8 Feb. 2013
© 2013 Newton Miller. This is an Open Access article distributed under the terms of the Creative Commons‐Attribution‐Noncommercial‐Share Alike License 2.5 Canada (http://creativecommons.org/licenses/by‐nc‐sa/2.5/ca/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.
Abstract
Objective – To determine whether the quality of sources used for a research paper improves after a student receives one-on-one instruction from a librarian, and to test citation analysis and a rating scale as means of measuring the effectiveness of one-on-one consultations.
Design – Citation analysis.
Setting – Academic library of a large American university.
Subjects – Papers from 10 courses were evaluated. In total, 76 students were asked to meet with librarians; 61 actually participated. Another 36 students served as a control group and were not asked to meet with a librarian (although one did take part in a consultation).
Methods – Librarians invited faculty to participate in a new service intended to improve the quality of student research papers. Eligible courses were those with a required research paper that could be evaluated at different points in the project. Faculty instructed students to meet with the librarian after writing a first draft of the paper. Students from seven courses were asked to meet with a librarian: English Composition (2), Geography (1), Child Development (1), Occupational Therapy (1), Marketing (1), and Women Writers (1). Three courses acted as control groups (all English Composition). After meeting with students to make recommendations, librarians used a rating scale (measuring relevance, authority, appropriate dates, and scope) to review the quality of sources in both drafts and final papers.
Main Results – One-on-one consultations with a librarian resulted in higher quality sources in the final paper. With the exception of authority, the differences between draft and final papers were statistically significant on all measures (overall quality, relevance, dates, and scope). Students in the control group showed no improvement in the quality of sources between draft and final papers.
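The evidence summary does not name the statistical test behind these results, so the following is an illustrative sketch only: a paired comparison of invented draft and final rating-scale scores (say, 1 to 5 for overall quality) for the same set of papers, using a paired t-test as one plausible choice.

    # Illustrative sketch only; the study's actual test and data are not
    # reported here, and all scores below are invented for demonstration.
    from scipy import stats

    # Hypothetical overall-quality ratings for the same ten papers,
    # scored once at the draft stage and once in the final version.
    draft_scores = [2, 3, 2, 4, 3, 2, 3, 3, 2, 4]
    final_scores = [3, 4, 3, 4, 4, 3, 4, 3, 3, 5]

    # Paired t-test: did the same papers score higher after consultation?
    t_stat, p_value = stats.ttest_rel(final_scores, draft_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

A significant p-value under this kind of paired design would indicate that the draft-to-final improvement is unlikely to be due to chance, which is the pattern the study reports for every measure except authority.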
Conclusion – The quality of sources in the final paper improves after one-on-one consultations with librarians. A rating scale is helpful for measuring the quality of sources objectively, although there is potential for subjective interpretation.
Commentary
Although citation analysis is commonly used to study library resources, this study gives the design a unique twist by quantitatively examining the effects of individual research consultations. Studies of one-on-one instruction usually rely on more subjective tools such as satisfaction surveys and anecdotal evidence, so using citation analysis and a new rating scale offers a fresh take on evaluating library impact. This paper tests not only whether librarian consultation is effective, but also whether citation analysis is a useful tool for testing that hypothesis. The answer is yes to both, with some caveats.
The EBL Critical Appraisal Checklist (Glynn, 2006) was used to determine the various strengths and weaknesses of the study. The researcher’s rating scale was tested for inter-rater reliability, and the author admits that the tool is far from perfect. However, he is to be commended for clearly explaining how the tool could be improved with better instructions and more descriptive categories and criteria, so that future researchers are fully aware of its potential drawbacks. A copy of the rating scale, with descriptions, is included for other researchers’ future use.
This is a very readable article. However, there are concerns regarding the population and methodology. The researcher explains some of the issues regarding the diversity of assignments and how faculty instructed students on acceptable sources, but it is difficult to know whether there are simply too many variables affecting the results. Although one-on-one instruction has benefits (such as tailoring to individual student needs), it lacks standardization. For instance, it is not entirely clear whether the librarians were telling students which specific resources to use or recommending places where they could find useful resources.
Two librarians provided recommendations to students regarding resources and then subsequently scored the student papers. There is potential for bias in how they scored the sources, since they knew which students and papers they had assisted. The librarians might have looked more favourably on the papers and sources of the students they had helped than on those of the students they had not. Having different people score the papers would have helped eliminate this bias.
We do not know the librarians’ individual styles or their subject expertise. The researcher compares resources across English Composition, Geography, Child Development, Occupational Therapy, Marketing, and Women Writers courses. These are very different subjects, and what one librarian recommends could differ considerably from what another recommends, depending on their knowledge of the subject. Limiting both groups to English Composition papers would have made for stronger comparisons.
The use of tools such as rating scales and rubrics to measure student learning has received much attention in recent years. Despite some drawbacks, this paper supports academic librarians looking for measures of library impact that are more substantive than methods such as satisfaction surveys. However, the time invested in individual consultations and in evaluating and scoring citations for large classes is a potential challenge. Future researchers should follow the author’s recommendations to make the rating scale a more reliable tool and to limit comparisons to similar or equivalent courses.
References
Glynn, L. (2006). A critical appraisal tool for library and information research. Library Hi Tech, 24(3), 387-399. doi:10.1108/07378830610692154