Evidence Summary
Web Search Engines - Not Yet a Reliable Replacement for Bibliographic Databases
A Review of:
Bates, J., Best, P., McQuilkin, J., & Taylor, B. (2017). Will web search engines replace bibliographic databases in the systematic identification of research? The Journal of Academic Librarianship, 43(1), 8-17. https://doi.org/10.1016/j.acalib.2016.11.003
Reviewed by:
Emma Hughes
Freelance Information Professional
Norwich, United Kingdom
Email: emma.e.hughes@outlook.com
Received: 29 Nov. 2017 Accepted: 29 Apr. 2018
© 2018 Hughes. This is an Open Access article distributed under the terms of the Creative Commons-Attribution-Noncommercial-Share Alike License 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly attributed, not used for commercial purposes, and, if transformed, the resulting work is redistributed under the same or similar license to this one.
DOI: 10.18438/eblip29378
Abstract
Objective - To explore whether web search engines could replace
bibliographic databases in retrieving research.
Design - Systematic review.
Setting - English-language articles in health and social care comparing bibliographic databases and web search engines for retrieving research, published between January 2005 and August 2015 in peer-reviewed journals and available in full text.
Subjects - Eight bibliographic databases: ASSIA (Applied Social
Sciences Index and Abstracts), CINAHL Plus (Cumulative Index to Nursing and
Allied Health Literature), LISA (Library and Information Science Abstracts),
Medline, PsycInfo, Scopus, SSA (Social Services Abstracts), and SSCI (Social
Sciences Citation Index) and five web search engines: Ask, Bing, Google, Google
Scholar, and Yahoo.
Methods - A literature search was conducted via the above bibliographic databases and web search engines. The retrieved results were independently appraised by two researchers using a combination of tools and checklists, including the PRESS checklist (McGowan et al., 2016), with guidance on developing search strategies taken from the Centre for Reviews and Dissemination (2009).
Main Results - Sixteen papers met the appraisal requirements. Each paper compared at least one bibliographic database against one web search engine. The authors also discuss findings from their own search process.
Precision and sensitivity scores from each paper were compared. The results highlighted that web search engines do not necessarily support Boolean logic and in general offer limited functionality compared to bibliographic databases. Precision scores were calculated differently across papers, but when based on the first 100 results, web search engines performed similarly to some databases. However, their sensitivity scores were much weaker.
Conclusion - Whilst precision scores were strong for web search engines, sensitivity was lacking; therefore, web search engines cannot be seen as a replacement for bibliographic databases at this time. The authors recommend improving the quality of reporting in studies of literature searching in academia so that reliable comparisons can be made.
Commentary
Given the deluge of research information on the internet, web search engines could be seen as a viable, free alternative to searching bibliographic databases. This paper was reviewed using AMSTAR 2, a critical appraisal tool for systematic reviews (Shea et al., 2017).
The systematic review’s methods and search strategy were clearly explained and provided, giving the research strong validity. Unlike the studies included in the review, the authors performed their search via more than one web search engine and provided clear reasons for their choice of search engines. In the methodology the authors state that the review followed the PRISMA guidelines (Moher et al., 2009); however, they have not included a figure or list of the excluded articles, as the PRISMA flow diagram would suggest.
All findings are provided, including those of the authors’ search in both bibliographic databases and web search engines. Limitations are discussed, including the age of some of the papers retrieved and a discrepancy in the ways different papers define “precision”. One paper used a scoring system (Tober, 2011) based on its own definitions of recall, precision, and importance, whilst others calculated precision from a selected number of hits (the first 10, 50, or 100) rather than from the total number retrieved (see the worked example below). This meant the authors struggled to analyse precision scores across studies. The findings of this paper
are consistent with previous studies in suggesting that web search engines, in
particular Google Scholar, could be used in conjunction with bibliographic
databases when searching for information. Interestingly, two papers suggested
that Google Scholar could offer better precision scores than some bibliographic
databases (McFadden et al., 2012; Walters, 2009). However, web search engines
should not at this stage be used as a reliable replacement.
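For readers less familiar with these measures, the standard information retrieval definitions make the distinction concrete. The figures below are illustrative only and are not drawn from the reviewed papers:

precision = (relevant records retrieved) / (total records retrieved)
sensitivity (recall) = (relevant records retrieved) / (total relevant records available)

For example, if a web search engine’s first 100 hits contain 20 relevant records, out of 80 relevant records in existence, its precision over those 100 hits is 20/100 = 0.20 and its sensitivity is 20/80 = 0.25. A database retrieving 400 records of which 60 are relevant scores a comparable precision of 60/400 = 0.15, but a far higher sensitivity of 60/80 = 0.75. Precision calculated over only the first 10, 50, or 100 hits is therefore not directly comparable with precision calculated over the total number retrieved.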
The authors did not explore whether the web search engines retrieved relevant results that were not found in the bibliographic databases. Doing so may help to determine the value of web search engines for contributing unique evidence that might not otherwise be identified in traditional systematic review searches.
Grey literature (for example, unpublished reports or conference abstracts) is a form of evidence often required in social care research (Ford & Korjonen, 2012), but it is often not indexed in bibliographic databases. It would therefore be interesting to see this study replicated to explore searches for different forms of evidence.
This paper highlights the need for a consistent definition of precision to assist academics in comparing studies in future research. Overall, this paper adds to the growing body of research exploring the potential of web search engines for retrieving empirical research, making it useful for librarians deciding whether to incorporate web search engines into their teaching.
References
Centre for Reviews and Dissemination. (2009). Systematic reviews: CRD's guidance for undertaking reviews in health care. Retrieved from https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf
Ford, J., & Korjonen, H. (2012). Information needs
of public health practitioners: A review of the literature. Health
Information and Libraries Journal, 29(4),
260-273. https://doi.org/10.1111/hir.12001
McFadden, P., Taylor, B., Campbell, A., &
McQuilkin, J. (2012). Systematically identifying relevant research: Case study
on child protection social workers’ resilience. Research on Social Work Practice, 22(6), 626-636. https://doi.org/10.1177/1049731512453209
McGowan, J., Sampson, M., Salzwedel, D., Cogo, E., Foerster, V., & Lefebvre, C. (2016). PRESS peer review of electronic search strategies: 2015 guideline statement. Journal of Clinical Epidemiology, 75, 40-46. https://doi.org/10.1016/j.jclinepi.2016.01.021
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097. https://doi.org/10.1371/journal.pmed.1000097
Shea, B. J., Reeves, B. C., Wells, G., Thuku, M., Hamel, C., Moran, J., Moher, D., Tugwell, P., Welch, V., Kristjansson, E., & Henry, D. A. (2017). AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. British Medical Journal, 358, j4008, 1-9. https://doi.org/10.1136/bmj.j4008
Tober, M. (2011). PubMed, ScienceDirect, Scopus or Google Scholar: Which is the best search engine for an effective literature research in laser medicine? Medical Laser Application, 26(3), 139-144. https://doi.org/10.1016/j.mla.2011.05.006
Walters, W. H. (2009). Google Scholar search performance: Comparative recall and precision. portal: Libraries and the Academy, 9(1), 5-24.