
Discovery and the Disciplines: An Inquiry into the Role of Subject Databases through Citation Analysis

Alexa L. Pearce*

Libraries have adopted web scale discovery services with the goal of providing their 
users with a streamlined research experience. However, the single search box that 
characterizes web scale discovery is one option among many that libraries continue 
to provide, including subject databases and other legacy tools. Libraries lack evidence 
regarding which of these tools are best suited to the various stages and levels of 
expertise that may characterize a user’s research process. A case study approach, 
focusing on the field of academic history, is employed to test the discoverability of a 
subset of scholarly work across several search platforms. 

Introduction
The widespread adoption of web scale discovery services by academic research libraries has 
been accompanied by a general consensus that these tools are poised to meet user expectations 
for a streamlined research experience.1 A Google-like single search box is one of the defin-
ing features of web scale discovery tools, promising access to content from disparate source 
databases via a preharvested central index.2 However, in many cases, this single search box 
continues to serve as one option among many that libraries provide. The full assortment of 
search interfaces that libraries continue to offer, including catalogs as well as subject-specific 
databases, comprises a complex discovery ecosystem with myriad options for where and how 
users may begin and proceed with their research process.

Despite this enduring complexity, libraries lack evidence about which tools are best 
suited to the various stages and levels of expertise that may characterize a user’s discovery 
experience. The literature is at best anecdotal on this topic, tending to convey an assumption 
that web scale tools are well suited for novice users and uses while subject-specific databases 
can best serve more advanced discovery needs.3 Moreover, while libraries have produced a 
wealth of research to understand and evaluate web scale discovery tools in recent years, they 
have dedicated comparatively little evaluative attention to subject-specific databases, many of 
which are also known as abstracting and indexing (A&I) services.4 Accordingly, while there 
is an accumulation of research on topics ranging from the configuration of web scale tools to 
their usability, there is a comparative lack of evidence with which libraries may discern and 

* Alexa L. Pearce is Head, Social Sciences & Clark Library, at the University of Michigan Library; email: alexap@umich.edu. ©2019 Alexa L. Pearce, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC.



196  College & Research Libraries March 2019

describe the continuing value of A&I tools and other legacy databases, in light of web scale 
adoption.

This investigation employs a case study approach, focusing on the field of academic 
history, to test the assumption that subject databases continue to serve advanced discovery 
needs better than web scale tools. First, the study engages citation analysis to characterize a 
sample of published historical scholarship by format, publication date, and language. The 
discoverability of historical literature is then tested across six search platforms, including 
library-provided tools such as ProQuest’s Summon, OCLC’s WorldCat, and Historical Ab-
stracts, as well as Google Scholar, which is available freely on the web. Results indicate that 
historical literature may be more effectively represented by web scale and web-based tools, 
such as Google Scholar and Summon, than by narrower, subject-specific databases.

Literature Review
The literature on web scale discovery demonstrates a consensus that users prefer web scale 
discovery tools when given a choice and that they expect and are comfortable with a search 
environment characterized by web scale features, such as a single search box, rapid retrieval 
of results, facets for refinement, and relevance ranking.5 The literature also reveals established 
knowledge of the mechanics and configuration options of web scale discovery services, with 
many studies emphasizing the primacy of a centralized index.6 Yet many aspects of web scale 
discovery tools remain opaque to library professionals, reflecting the highly competitive 
commercial environment from which they emerged. For example, while the literature reveals 
widespread understanding of the centrality and significance of search algorithms, their pro-
prietary nature often precludes deeper understanding of how specific discovery services find, 
rank, and display results. Additionally, library professionals continue to grapple with discern-
ing exactly which content is included by discovery tools in both search and display modes.7

While librarian attitudes are documented less prominently than user attitudes in the literature, there is evidence to suggest that the lack of transparency around web scale discovery services contributes to enduring reservations among librarians regarding their efficacy.
Indeed, NISO’s Open Discovery Initiative (ODI) was formed to advocate for and facilitate the 
disclosure of details related to the exposure of metadata, content, and indexing among the 
full community of discovery stakeholders.8 While the ODI has made some progress toward 
modeling better disclosure practices, web scale discovery tools are still perceived as moving 
targets, in terms of tracking what they do and do not index. By comparison, A&I databases 
have traditionally offered relatively stable title lists and, as a result, may more readily earn 
the confidence of some library professionals.9

Library professionals have identified beneficial outcomes for the adoption of web scale 
discovery services, in spite of their opacity. For example, instruction librarians have noted 
that teaching a single interface, instead of several, affords time in the classroom to focus on 
higher-level concepts and techniques.10 Additionally, the literature demonstrates an emerging 
sense that web scale services play a worthwhile role in the research process, though this role 
is commonly described as complementary or supplementary, suggesting that web scale tools 
may coexist with, but not replace, subject-specific databases.11

However, the nature of the complementary relationship remains somewhat vague. The 
literature reflects an anecdotal view that web scale tools are appropriate for novice users or 
as starting points, while subject databases can better serve advanced users and uses.12 This assertion has yet to be tested through any specific disciplinary lens or with attention to indexed
content. Moreover, there is some evidence to support an opposing view. In their discussion of 
learning to teach the web scale discovery service called Summon, Catherine Cardwell, Vera 
Lux, and Robert J. Snyder found that it is “not the most efficient starting point in all circum-
stances” and noted positive impressions of Summon among faculty and graduate students, 
supporting a conclusion that it “can be much more than a simple tool for novice users.”13

The few comparative studies that have included subject databases or A&I tools have not 
necessarily applied equal scrutiny across all tools or have produced mixed results. For example, 
Andrew D. Asher, Lynda M. Duke, and Suzanne Wilson found that the EBSCO Discovery Service 
(EDS) outperformed ProQuest’s Summon, Google Scholar, and conventional library catalogs and 
databases in an investigation of undergraduates’ research habits and their ability to find relevant 
results based on preformulated questions.14 However, this study did not name the conventional 
databases that students consulted, nor did it separate them from library catalogs for the purposes 
of comparison. In a rigorous comparison of student searching between Summon and the A&I 
service called Social Sciences Abstracts, Sarah P.C. Dahlen and Kathlene Hanson found that stu-
dents expressed preference for and reported greater ease of use with Summon.15 Articles selected 
by students through test searches of both tools were also evaluated by librarians, who found that 
authority, as defined by the taxonomy developed by Chris Leeder, Karen Markey, and Elizabeth 
Yakel, was higher for articles retrieved from Social Sciences Abstracts than for articles retrieved 
from Summon, while relevance was higher for articles retrieved from Summon.16 Dahlen and 
Hanson concluded that discovery layers and subject-specific databases have complementary 
strengths and that both continue to serve a purpose in the larger context of library resources. 
Dahlen and Hanson also reiterated the prevailing view that subject-specific databases may be 
better suited for advanced disciplinary uses than discovery services, though they acknowledged 
the potential utility for discovery tools in “conducting searches on esoteric topics.”17

In seeking to better understand the relationship between subject databases and web scale 
tools, the present study acknowledges methodological challenges. Many standard methods 
that the profession has relied upon for database evaluation date to the 1980s or early 1990s, 
if not earlier, and were not developed to account for the range of variables that affect the ef-
ficacy of web scale discovery tools, from local configuration and access restrictions to dynamic 
content representation.18 Nor have existing methods been tested against the growing scale of 
information resources that characterizes the twenty-first century library discovery environ-
ment.19 Accordingly, this study shifts its focus away from database evaluation as such, seeking 
instead to understand the nature of scholarly literature in a specific subject area, academic 
history, as a prerequisite for testing the discoverability of this literature across platforms.

Regarding history specifically, several studies that analyze information needs and infor-
mation-seeking behaviors of academic historians date to the 1970s and 1980s, and many have 
focused on specified subfields and formats, such as Kee DeBoer’s study on journal literature in 
U.S. history.20 More recent studies that analyze useful platforms for finding historical literature 
have also tended to analyze lists of journal titles indexed, to the exclusion of monographs and 
other formats.21 The emphasis on journal literature does not align with findings from previous 
investigations. For example, Clyve Jones, Michael Chapman, and Pamela Carr Woods found 
that historians used nonserial sources, “especially monographs,” to a far greater degree than 
serial sources, while Margaret F. Stieg found that historians described books as the most convenient source types, compared to others, and were unlikely to consult indexes or abstracts, which many viewed as “irrelevant.”22 While both of those studies are several decades old, their
findings on historians’ preferences for books have yet to be contradicted in the literature or 
elsewhere. For example, Margaret Stieg Dalton and Laurie Charnigo’s 2004 sequel to Stieg’s 
study found that journal articles and book chapters had increased in significance to historians, 
but books remained dominant.23 M. Sara Lowe found that nonserial use exceeded serial use at a 
fairly consistent rate over several decades, with nonserial use at 71.4 percent in 2002.24 Despite 
these findings, the emphasis on journal literature has persisted in evaluative studies. In their 
relatively recent comparison of Historical Abstracts to Google Scholar, Hal P. Kirkwood and 
Monica C. Kirkwood noted that they found more book results in Google Scholar than in His-
torical Abstracts.25 The authors remarked that this was “not intrinsically bad,” but they did not 
admit the possibility that the inclusion of book results could be beneficial to academic histori-
ans and concluded, not for that reason alone, that Historical Abstracts was the superior tool.26 

This study presents an updated investigation of the characteristics of historical literature 
to better inform the question of where it may be found. By testing an inclusive discovery 
environment for historical scholarship, the present study adds clarity to questions about the 
relative strengths and weaknesses of coexisting search platforms and their optimal roles and 
placement in a user’s process. 

Methodology
This study’s citation analysis drew upon all secondary literature cited in the American Historical 
Review (AHR) during a six-year period, from 2010 through 2015. The AHR is the official publica-
tion of the American Historical Association (AHA) and, as stated on its website, has served as 
“the journal of record for the historical profession in the United States since 1895.”27 While the 
scholarly conversation in history extends beyond the AHR, its flagship status situates it well 
to represent current and prominent research and debates in the field. Additionally, the AHR 
represents all subfields of history in its research articles and reviews of new scholarship. For 
this study, the author gathered citations from research articles only. While reviews facilitate 
discovery of new scholarship for many historians, they were excluded from this study because 
they do not tend to cite sources in the extensive manner of research articles. 

AHR research articles tend to cite a combination of scholarly secondary works, such as books, 
journal articles, chapters, and dissertations, along with extensive archival sources and other 
primary materials. For the purposes of testing the library discovery environment and creating 
a fair basis for comparison, the author included citations to published and secondary materials 
and excluded citations to archival and manuscript sources. The rationale for this decision was 
to develop a sample of citations that a researcher could reasonably expect to find using library 
search tools, as opposed to archival finding aids. Research libraries do offer specialized tools and 
expertise for locating archival and manuscript sources, but these functions have traditionally 
remained outside the indexing scope of history subject databases, such as Historical Abstracts.

The process for gathering citations entailed reading through all endnotes attached to the 
research articles included in the study. Citations that met criteria for inclusion were copied and 
pasted by hand into a spreadsheet. In addition to excluding archival and manuscript sources, as 
described above, the author excluded citations to newspaper and popular press articles published 
prior to 1900. Citations to entire periodicals, as opposed to articles, were also excluded. Books 
from all date ranges were included. Citations to nonscholarly newspaper and magazine articles 
published after 1900 were included. Citations to published primary sources were also included. 




These criteria were designed to focus the study on secondary literature as much as possible and 
to mitigate inherent advantages that some of the included search platforms may have. For ex-
ample, general and popular press articles published prior to the twentieth century are somewhat 
likely to be used as primary sources by historians and are not within the scope of indexing for 
Historical Abstracts or America: History and Life but may be cataloged, by chance, in WorldCat. 

The resulting population comprised 22,572 citations. After removing duplicates, the total was 19,937. Using a random number generator, the deduplicated list was reordered to allow selection of a random sample of 400 citations, which affords a confidence level of 95 percent with a confidence interval of 5 percent. All further discussion relates to testing and analysis of the sample, but the entire citation population, including the deduplicated list, is available for consultation and reuse via the author’s institutional repository for research data.28
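The reported confidence level and interval can be checked against the standard sample-size formula for proportions with a finite population correction (a minimal sketch; the function name and the assumption of maximum variance, p = 0.5, are illustrative and not taken from the study):

```python
import math

def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Minimum sample size for estimating a proportion at a given
    confidence level (z = 1.96 for 95%) and margin of error,
    applying the finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(required_sample_size(19937))  # 377
```

By this calculation roughly 377 citations suffice for the 19,937-item deduplicated population, so the drawn sample of 400 comfortably meets the 95 percent / 5 percent criterion.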

The first step in analysis was to characterize each citation according to format, publication 
date, and language. Book introductions were recognized and coded as formats if the citation 
included a named author or title, in the manner of a book chapter. If an introduction was cited 
without author or title information, the citation was treated as a whole book. The published 
data for this study includes precise publication dates for each citation. However, for the pur-
poses of analysis, dates were grouped and coded by decade, starting from the year 01 and 
ending in 10. The 1990s, for example, includes all citations published between 1991 and 2000. 
Tables 1–3 present all formats, date ranges, and languages present in the sample of citations.
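The decade rule described above (each decade running from year 01 through year 10, so that 1991–2000 codes as the 1990s) can be expressed as a small helper; this is an illustrative sketch rather than the author's actual coding procedure, and it covers only the decade bins, not the coarser "19th century" and "Pre-1801" categories used for older works:

```python
def decade_label(year):
    """Code a publication year into a decade bin that runs from
    year 01 through year 10 (e.g., 1991-2000 -> '1990s')."""
    return f"{((year - 1) // 10) * 10}s"

assert decade_label(1991) == "1990s"
assert decade_label(2000) == "1990s"  # year 2000 falls in the 1990s bin
assert decade_label(2001) == "2000s"
```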

Second, the author searched for all citations in six search platforms, namely:
• Historical Abstracts 
• America: History and Life
• JSTOR
• Google Scholar
• WorldCat
• ArticlesPlus (Summon)

The primary question for each database included in the study was how comprehensively 
it represented the sample of AHR citations. For a citation to count as present in a database, it 
had to be represented in the format in which it was cited. For example, if a search for a book 
turned up a dissertation with the same author and very similar title, the citation was not 
considered present. Similarly, if a search for a book chapter turned up a result for the same 
essay published elsewhere as a journal article, the citation was not counted as present. It was 
not necessary for book chapters to have their own records, but it was necessary for them to 
be discernible among search results as chapters, such as in a table of contents listing. 

To expedite the search process, Historical Abstracts and America: History and Life were 
searched simultaneously. For all of the platforms except Google Scholar, advanced searches 
were performed, with both title and author information entered for each citation. All search-
ing took place between February and May of 2017. The results presented here reflect the 
content available at the time of investigation. Because the intent of this study is to understand 
discovery broadly, rather than through the lens of a single institution’s configurations, the 
author selected the options to search across all content in both JSTOR and Summon, rather 
than limiting to content that is available only to the affiliated institution, the University of 
Michigan (U-M) Library. The other tools included do not discriminate between a full body 
of indexed literature and a subset of that literature that is available for institutional access. 
Following are additional details about why each tool was selected for inclusion.




America: History and Life and Historical Abstracts
America: History and Life and Historical Abstracts (AHL/HA) were selected for inclusion 
as the two databases that the academic library profession recognizes as primary indexes to 
published scholarship in history. Both are available exclusively on the EBSCO platform. The 
U-M Library subscribes to the full-text versions of each, though access to full text was not a 
consideration for this study. Historical Abstracts indexes scholarship on all aspects of world 
history, excluding the U.S. and Canada, from 1450 to the present. America: History and Life 
covers all time periods in U.S. and Canadian history. Both are known for offering a unique 
option to limit searches by time period of interest. Recognizing the two indexes as “compan-
ion and complementary services” and providing combined analysis follows the convention 
used previously by DeBoer.29 

JSTOR
JSTOR was selected based on its widespread name recognition among historians, history 
students, and other scholars working on historical topics.30 While library professionals are 
aware that JSTOR does not serve as a formal indexing tool, owing to its lack of descriptive 
metadata, library users do not generally make this distinction. While some library users may 
consult JSTOR with awareness of its role as a digital archive, many users also think of it as a 
place to identify current and relevant scholarship of interest, across subject areas. 

Google Scholar
Google Scholar was selected based on its continuing prominence among the web-based search 
engines that many scholars and students consult frequently. Though library professionals 
continue to debate its merits, Google Scholar searches a vast universe of content without ex-
plicit disciplinary or format-based boundaries around what it includes and excludes. As the 
original tool that succeeded in bringing the Google search experience, characterized by speed 
and relevance, to the world of academic scholarship, it has a useful presence in any study that 
seeks to measure legacy tools against newly dominant search interfaces. 

WorldCat
WorldCat was selected based on its continuing ability to function as and represent a research 
library catalog, as opposed to a disciplinary index or discovery service. WorldCat includes re-
cords for a plethora of formats in addition to books. Given the breadth and depth of WorldCat’s 
holdings, the author believed it would add a dimension to the study that no other tool could 
replicate and would thereby provide an interesting point of comparison to both disciplinary 
tools and discovery services. This study consulted WorldCat via the FirstSearch interface, as 
opposed to using WorldCat.org, WorldCat Local, or WorldCat Discovery.

ArticlesPlus (Summon)
ArticlesPlus is the U-M Library’s locally branded configuration of ProQuest’s Summon dis-
covery service. U-M built the front end using the vendor’s API, but the content is otherwise 
the same as using the native interface. The U-M Library has not chosen to include its catalog 
holdings in ArticlesPlus.31 It is the only web scale discovery service in the study, as defined by 
its centralized index of academic, news, reference, and popular content sourced from various 
publishers, content providers, and subject areas. While it cannot represent the strengths and weaknesses of all commercial discovery services, the author believes there is value in comparing its coverage of the AHR sample to the coverage afforded by the other tools. Further
research that includes one of Summon’s main competitors would also be of value.

Results
Part 1: Characterizing the Sample
The sample of 400 citations was predominantly English language, with 333 citations in English, accounting for 83.25 percent of the sample. The next largest language representations were French, with 31 citations (7.75%), and German, with 11 citations (3%). Table 1 presents the language breakdown of the sample.

Excluding the 11 citations that were translated into English, there were 322 English lan-
guage citations. Of these, 150 (47%) were monographs. 

Broken down by format, close to half of the sample consisted of citations to monographs. 
There were 196 monograph citations, accounting for 49 percent. Academic journal articles 
formed the second largest subset, with 90 citations (22.5%). Book chapters were the third larg-
est subset, with 41 citations (10.25%). Table 2 presents the format breakdown of the sample.

Of the 196 monographs in the sample, 150 (77%) were English language, not including 
translations into English.

Broken down by date range of publication, no subset emerged with as large a share of the 
sample as in the other two categories. The largest subset included citations published during 
the first decade of the twenty-first century, 2001–2010, with 141 citations (35.25%). The next 
largest subset was the 1990s, with 61 citations (15.25%), followed by the 1980s with 45 citations 
(11.25%). Table 3 presents the date range breakdown of the sample.

TABLE 1
Sample Broken Down by Language (N = 400)
English 333 83.25%
French 31 7.75%
German 11 3.00%
Spanish 7 1.75%
Russian 5 1.00%
Italian 3 0.75%
Japanese 3 0.75%
Hebrew 2 0.50%
Arabic 1 0.25%
Breton 1 0.25%
Croatian 1 0.25%
Ladino 1 0.25%
Latin 1 0.25%

TABLE 2
Sample Broken Down by Format (N = 400)

Monograph 196 49.0%
Journal article 90 22.5%
Book chapter 41 10.25%
Edited volume 24 6.00%
Published primary source 14 3.50%
Newspaper article 12 3.00%
Series 8 2.00%
Book introduction 5 1.25%
Conference proceedings 2 0.50%
Film 2 0.50%
Web article 2 0.50%
Digital scholarship 1 0.25%
Grey literature 1 0.25%
Magazine article 1 0.25%
Photograph 1 0.25%




Part 2: Finding the Sample
Google Scholar emerged as the search platform where 
most citations in the sample could be found. Of the 
400 citations, 335 (83.75%) were represented in Google 
Scholar. ArticlesPlus followed closely, with records for 
318 citations (79.5%). WorldCat held records for the 
third highest portion of the sample, with 272 (68%). 
Both JSTOR and AHL/HA represented far fewer cita-
tions than the other three. Table 4 presents the number 
of citations found in each platform. 
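The per-platform coverage rates in Table 4 are simple proportions of the 400-citation sample; a sketch of the calculation, using the counts reported above (variable names are illustrative):

```python
# Citation counts found per platform, against the sample of 400 (Table 4).
found = {
    "Google Scholar": 335,
    "ArticlesPlus": 318,
    "WorldCat": 272,
    "AHL/HA": 84,
    "JSTOR": 66,
}
SAMPLE_SIZE = 400

for platform, count in found.items():
    print(f"{platform}: {count / SAMPLE_SIZE:.2%}")
# Google Scholar: 83.75% ... JSTOR: 16.50%
```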

Results were further broken down to see how well each tool approximated the sample in terms of date range, language, and format. Each search tool found a majority of English
language citations. Google Scholar, ArticlesPlus, and WorldCat came closest to matching the 
sample’s English percentage of 83.25 percent. Of the 335 citations found in Google Scholar, 
296 (88.36%) were in English. In ArticlesPlus, 278 (87.42%) of the 318 citations were in English. 
Of WorldCat’s 272 citations, 227 (83.46%) were in English. The AHL/HA citations were 95.24 percent English, or 80 out of a total of 84. For JSTOR, the figure was even higher, at 96.97 percent, or 64 out of 66. 

The set of citations found in WorldCat included nine languages, out of thirteen included 
in the full sample. ArticlesPlus included eight of thirteen languages, while Google Scholar 
included seven. AHL/HA included four languages, while JSTOR included only two: English 
and French.

Table 5 provides a language breakdown comparison across the full sample and the subsets 
of citations that were found in each platform. 

Broken down by format, Google Scholar and ArticlesPlus came closest to mirroring the 
sample by finding more monographs than any other format. The Google Scholar subset in-
cluded 175 monographs (52.24%), a slightly higher rate than the 49 percent found in the full 
sample. In ArticlesPlus, monographs accounted for 180 citations (56.6%). In both the JSTOR 
and AHL/HA subsets, there were more journal articles than monographs. In AHL/HA, there 
were 36 monographs out of 84 total citations (42.86%), while JSTOR contained 28 monograph 
citations out of 66 total (42.42%). The AHL/HA subset included 39 journal article citations 
(46.43%), more than double the 22.5 percent proportion in the full sample. JSTOR’s subset included 34 journal article citations (51.52%), which likewise more than doubled the proportion represented in the sample.

TABLE 3
Sample Broken Down by Date Range of Publication (N = 400)
2000s 141 35.25%
1990s 61 15.25%
1980s 45 11.25%
2010s 39 9.75%
1960s 27 6.75%
1970s 23 5.75%
1950s 14 3.50%
19th century 11 2.75%
1920s 10 2.50%
1930s 8 2.00%
1940s 8 2.00%
Pre-1801 6 1.50%
1910s 4 1.00%
1900s 3 0.75%

TABLE 4
Number of Citations Represented in Each Search Platform (N = 400)
Google Scholar 335 83.75%
ArticlesPlus 318 79.50%
WorldCat 272 68.00%
AHL/HA 84 21.00%
JSTOR 66 16.50%




The set of citations found in Google Scholar included 11 formats, of 15 total in the sample. 
Both WorldCat and ArticlesPlus included 10 of these 15 formats. AHL/HA included six for-
mats, while JSTOR included just four. 

Table 6 provides a format breakdown comparison across the full sample and the subsets 
of citations that were found in each search platform.

Google Scholar, ArticlesPlus, and WorldCat came closest to mirroring the sample in terms 
of its date range breakdown. In the sample, the largest date range subset was the 2000s, with 
141 out of 400 citations (35.25%). This decade was represented by 127 citations in Google 
Scholar’s subset of 335 (37.91%). In ArticlesPlus, the 2000s were represented by 119 citations 
out of 318 (37.42%). In WorldCat, 93 citations were from the 2000s, out of 272 (34.19%). In 
AHL/HA, the portion was higher than the sample with 41 out of 84 citations from the 2000s 
(48.81%), while 31 out of 66 (46.97%) of the JSTOR sample was from this date range.

The full sample included 14 date range categories, all of which were represented in the 
Google Scholar, WorldCat, and ArticlesPlus subsets. AHL/HA included seven date range cat-
egories, with no citations dated prior to the 1950s. JSTOR included eight date range categories, 
with no citations dated prior to the 1920s. 

Table 7 provides a date range breakdown comparison across the full sample and the 
subsets of citations that were found in each search platform. 

In the sample of 400 citations, there were 38 (9.5%) that were uniquely represented in only 
one of the search platforms consulted.32 Of these, 18 (47.37%) were found in Google Scholar. 
Ten of the uniquely represented citations (26.32%) were in WorldCat, while eight (21.05%) 
were in ArticlesPlus. AHL/HA indexed one of the uniquely represented citations (2.63%), and 
JSTOR did not include any citations that were not also represented in another platform. The 
unique citations in Google Scholar were mostly English language book chapters. Similarly, a 
small plurality of the unique citations in WorldCat were English language book chapters. In 

TABLE 5
Language Breakdown Compared Across Platforms
Language | Sample (N=400) | Google Scholar (n=335) | ArticlesPlus (n=318) | WorldCat (n=272) | AHL/HA (n=84) | JSTOR (n=66)
English | 83.25% | 88.36% | 87.42% | 83.46% | 95.24% | 96.97%
French | 7.75% | 6.27% | 7.55% | 9.56% | 2.38% | 3.03%
German | 3.00% | 2.09% | 2.20% | 1.84% | – | –
Spanish | 1.75% | 1.79% | 1.26% | 2.21% | – | –
Russian | 1.00% | 0.30% | 0.31% | 0.37% | 1.19% | –
Italian | 0.75% | 0.90% | 0.63% | 1.10% | 1.19% | –
Japanese | 0.75% | 0.30% | 0.31% | 0.74% | – | –
Hebrew | 0.50% | – | – | – | – | –
Arabic | 0.25% | – | – | – | – | –
Breton | 0.25% | – | 0.31% | 0.37% | – | –
Croatian | 0.25% | – | – | – | – | –
Ladino | 0.25% | – | – | – | – | –
Latin | 0.25% | – | – | 0.37% | – | –




TABLE 6
Format Breakdown Compared Across Platforms
Format | Sample (N=400) | Google Scholar (n=335) | ArticlesPlus (n=318) | WorldCat (n=272) | AHL/HA (n=84) | JSTOR (n=66)
Monograph | 49.00% | 52.24% | 56.60% | 69.85% | 42.86% | 42.42%
Journal article | 22.50% | 24.78% | 25.79% | 6.62% | 46.43% | 51.52%
Book chapter | 10.25% | 8.96% | 5.03% | 6.99% | 1.19% | 3.03%
Edited volume | 6.00% | 7.16% | 6.29% | 8.82% | 7.14% | 3.03%
Published primary source | 3.50% | 2.39% | 1.89% | 2.94% | 1.19% | –
Newspaper article | 3.00% | 0.30% | 1.26% | – | – | –
Series | 2.00% | 1.79% | 2.20% | 2.94% | 1.19% | –
Book introduction | 1.25% | 1.19% | 0.31% | – | – | –
Conference proceedings | 0.50% | – | – | 0.37% | – | –
Film | 0.50% | – | 0.31% | 0.74% | – | –
Web article | 0.50% | 0.60% | – | – | – | –
Digital scholarship | 0.25% | 0.30% | – | 0.37% | – | –
Grey literature | 0.25% | 0.30% | – | 0.37% | – | –
Magazine article | 0.25% | – | 0.31% | – | – | –
Photograph | 0.25% | – | – | – | – | –

TABLE 7
Publication Date Range Breakdown Compared Across Search Platforms
Date Range | Sample (N=400) | Google Scholar (n=335) | ArticlesPlus (n=318) | WorldCat (n=272) | AHL/HA (n=84) | JSTOR (n=66)
2000s | 35.25% | 37.91% | 37.42% | 34.19% | 48.81% | 46.97%
1990s | 15.25% | 16.42% | 15.09% | 17.65% | 15.48% | 15.15%
1980s | 11.25% | 11.94% | 11.64% | 12.50% | 21.43% | 13.64%
2010s | 9.75% | 9.55% | 9.43% | 6.99% | 5.95% | 12.12%
1960s | 6.75% | 6.57% | 6.60% | 6.99% | 3.57% | 6.06%
1970s | 5.75% | 5.97% | 5.97% | 5.88% | 3.57% | 3.03%
1950s | 3.50% | 2.09% | 3.46% | 3.31% | 1.19% | –
19th century | 2.75% | 2.09% | 1.89% | 2.57% | – | –
1920s | 2.50% | 1.49% | 1.26% | 1.47% | – | 1.52%
1930s | 2.00% | 1.49% | 2.20% | 2.21% | – | –
1940s | 2.00% | 2.09% | 2.52% | 2.57% | – | 1.52%
Pre-1801 | 1.50% | 0.90% | 1.26% | 2.21% | – | –
1910s | 1.00% | 0.90% | 0.94% | 0.74% | – | –
1900s | 0.75% | 0.60% | 0.31% | 0.74% | – | –



Discovery and the Disciplines   205

Among the unique citations in ArticlesPlus, the majority were English language newspaper and magazine articles. Tables 8–11 present characteristics of the 38 citations that were uniquely represented in one of the six search platforms.

TABLE 8
Characteristics of Citations Found Only in Google Scholar (n=18)

Language   Number   Format                     Number   Date Range     Number
English    15       Book chapter               9        2000s          5
French     1        Monograph                  2        2010s          3
Russian    1        Journal article            2        1990s          3
Japanese   1        Web article                2        19th century   2
                    Published primary source   1        1960s          1
                    Book introduction          1        1910s          1
                    Newspaper article          1        1920s          1
                                                        1970s          1
                                                        1980s          1

TABLE 9
Characteristics of Citations Found Only in WorldCat (n=10)

Language   Number   Format                     Number   Date Range     Number
English    7        Monograph                  4        1990s          3
French     1        Book chapter               3        2000s          2
Japanese   1        Conference proceeding      1        1970s          1
Latin      1        Film                       1        1950s          1
                    Published primary source   1        1920s          1
                                                        19th century   1
                                                        Pre-1800       1

TABLE 10
Characteristics of Citations Found Only in ArticlesPlus (n=8)

Language   Number   Format              Number   Date Range   Number
English    7        Newspaper article   4        2010s        2
French     1        Journal article     2        1950s        2
                    Magazine article    1        2000s        1
                    Book chapter        1        1930s        1
                                                 1920s        1
                                                 1910s        1

TABLE 11
Characteristics of Citations Found Only in AHL/HA (n=1)

Language   Number   Format            Number   Date Range   Number
Russian    1        Journal article   1        2000s        1




Thirty citations from the sample (7.5%) were not found in any of the search platforms consulted.33 The format breakdown was varied, but book chapters (23%) and newspaper articles (23%) formed small pluralities. The date range breakdown among this set was also varied, with a small plurality in the 2000s (17%). Nine of the 13 languages in the full sample were represented, and half of the citations that were not found (15 of 30) were in English. Table 12 presents selected characteristics of the 30 citations that were not found.

Eleven citations (2.75%) were found in all of the search platforms.34 These citations were 
far less varied than those that were not found. All were English language citations, and a ma-
jority (7) were monographs. Most were from the 2000s, with a few from the 1980s and 1990s. 
Table 13 presents selected characteristics of the 11 citations that were found in all platforms.
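The overlap categories reported above (citations unique to a single platform, found in all platforms, and found in none) reduce to simple set operations over per-platform result lists. The sketch below illustrates that logic; the platform names come from the study, but the citation IDs and per-platform memberships are invented for illustration and are not the study's data:

```python
# Sketch of the overlap analysis: given the set of citation IDs each platform
# found, derive the unique-to-one, found-in-all, and found-in-none groups.

sample = set(range(1, 401))  # citation IDs 1..400 (illustrative stand-ins)

# Hypothetical per-platform "found" sets. In the study the real subset sizes
# were Google Scholar n=335, ArticlesPlus n=318, WorldCat n=272, AHL/HA n=84,
# and JSTOR n=66; only three platforms are shown here to keep the example small.
found = {
    "Google Scholar": {1, 2, 3, 4},
    "WorldCat": {1, 2, 5},
    "JSTOR": {1, 6},
}

all_found = set.union(*found.values())               # found somewhere
in_every_platform = set.intersection(*found.values())  # found in all platforms
in_no_platform = sample - all_found                    # found nowhere

# Citations unique to a single platform: found there and in no other platform.
unique_to = {
    name: ids - set.union(*(s for other, s in found.items() if other != name))
    for name, ids in found.items()
}
```

With per-platform found-lists recorded during data collection, the same three operations yield the counts behind Tables 8–13.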

TABLE 12
Characteristics of Citations Not Found in Any of the Search Platforms (n=30)

Language   Number   Format                     Number   Date Range     Number
English    15       Book chapter               7        2000s          5
German     4        Newspaper article          7        1960s          4
French     3        Published primary source   5        1980s          4
Hebrew     2        Journal article            4        1990s          3
Russian    2        Monograph                  4        2010s          3
Arabic     1        Book introduction          1        1920s          3
Croatian   1        Photograph                 1        1950s          2
Ladino     1        Conference proceeding      1        19th century   2
Spanish    1                                            1970s          2
                                                        1930s          1
                                                        1900s          1

TABLE 13
Characteristics of Citations Found in All Search Platforms (n=11)

Language   Number   Format            Number   Date Range   Number
English    11       Monograph         7        2000s        6
                    Journal article   3        1990s        3
                    Edited volume     1        1980s        2

Discussion
Part 1: What Do Historians Cite?
This study suggests that English language monographs maintain unrivaled prevalence among materials that historians are most likely to cite, across subfields of history. Librarians who work closely with humanities-oriented historians are unlikely to be surprised by this finding, as it supports anecdotal understanding of the monograph’s enduring significance. This result also aligns with earlier findings, including Dalton and Charnigo’s 2004 follow-up to the 1981 Stieg study and Lowe’s 2003 analysis, which also drew upon cited references from the AHR.35 While history, as a profession, continues to expand and support the use of digital tools and methods to produce and evaluate new scholarship,36 this study suggests that the monograph persists as the format most likely to be cited in secondary historical literature. Journal articles formed the second-highest share of the sample but accounted for less than a quarter of the 400 citations, while monographs accounted for about half. Book chapters and edited volumes, both of which are book formats, comprised the next largest format subsets represented in the sample, after journal articles. Nonbook formats aside from journal articles were represented in much smaller numbers.

Considering the publication date ranges represented in the sample, it is worth noting that the dominance of monographs does not necessarily equate to a lack of interest in currency. The sample demonstrated historians’ tendency to cite relatively recent work, with the largest share of cited sources published in the decade prior to the citing articles. Dalton and Charnigo found some evidence to support a perception that “article-length publications,” including book chapters, have become more frequently cited by historians than they were at the time of the Stieg study and offered an increased value for currency as one possible explanation.37 The present study reinforces that such an interest in currency need not preclude the use and citation of books.

Given the exclusion of archival sources, government information, and manuscript mate-
rial, it is unsurprising that book formats were so heavily weighted in the sample. The excluded 
formats were left out to form a fair basis for comparison among discovery and web platforms 
and conventional disciplinary indexes, as the latter have never specialized in the discovery of 
archival, government, or manuscript sources. However, the promise of discovery services is to 
unite disparate and unique formats into a single search experience, prompting a compelling 
case for further research that takes into account all cited sources. 

Further considering the publication date ranges of cited sources, the sample was over-
whelmingly recent, though not immediately recent. Unlike the language and format break-
downs, there was not a single date range that represented a majority or approached half of the 
sample. Instead, there was a discernible plurality in the 2000s. Date range is characteristically 
different from the other categories, given the arbitrary designation of decades. Noting that 
a citation was from the 1990s provides an approximate sense of how much time has passed 
between the publication of a cited source and its citing work. The date range alone does not 
necessarily provide any information about the nature of the scholarly arguments expressed 
in the work, though it could serve as an indicator, with further analysis.

It is worth noting that citations from the same decade as the citing works, the 2010s, were 
less prevalent than citations from the 2000s, the 1990s, and the 1980s. In other words, while 
historians were most likely to cite scholarship from between 5 and 10 years prior, they were 
also more likely to cite scholarship from 20 or 30 years prior than from 1–5 years prior. As 
mentioned above, this finding suggests an interest in currency, but one that is commensurate 
with the continuing dominance of monographs, which are likely to retain currency for longer 
periods than other formats and which may not be reviewed until several years after publica-
tion. This qualified version of currency makes sense in the context of earlier studies that have 
found consistently that book reviews have served an important role in historians’ discovery 
of new scholarship.38 

The prevalence of English language sources is also noteworthy, for a citation analysis 
stemming from a flagship journal that includes all historical subfields. While there is a pos-
sibility that the results reflect a tendency of historians in the United States to consult and cite 
English language sources, this study does not present sufficient evidence to support such a conclusion beyond the realm of secondary literature. Similar to the consideration of format
characteristics above, it is likely that inclusion of archival, government, and manuscript ma-
terials would have yielded a greater diversity of languages than was found in the present 
study. As the present analysis allows reflection upon general patterns in secondary literature, 
it does prompt the question of whether the sources cited reflect a widespread tendency of 
scholars everywhere to publish in English.39 Lowe also reflected on the dominance of English 
in a study that did not exclude unpublished material and, for a possible explanation, quoted 
a 1977 study by Buchanan and Herubel, which asserted that flagship journals may represent 
“hegemonic attempts at uniformity and disciplinary activity.”40 Lowe went on to ask the 
compelling question, “Do mainstream, less-specialized journals lose out on diversity when 
they become the ‘spokesman’ for a profession?”41 Further mining of the citation data for the 
present study may contribute to a clearer understanding of this and related questions.

With awareness that articles represented in this study were most likely to cite mono-
graphs, recently published works, and English language materials, we can turn our attention 
to the representation and discoverability of these works across the selected search platforms.

Part 2: Which Search Platforms Represent This Body of Literature?
The primary question posed by this study asks where scholarly literature in history is dis-
coverable, to better understand and optimize the coexistence of web scale discovery services 
alongside conventional disciplinary databases. Accordingly, one way of approaching the 
results is to ask which platform or platforms came closest to approximating the sample of 
cited references, through representation of formats, languages, and publication date ranges. 

Taking all three of those criteria into consideration, it is clear from the results that Google 
Scholar, ArticlesPlus, and WorldCat each came close to mirroring the sample, while both JS-
TOR and AHL/HA did not. Considering quantity only, Google Scholar emerged as the plat-
form through which the largest portion of the sample could be identified, with ArticlesPlus 
a close second. Following Google Scholar and ArticlesPlus, WorldCat found the next highest 
quantity of citations, offering what are arguably the most comprehensive and highest quality 
bibliographic descriptions, compared to any of the other platforms. While both WorldCat and ArticlesPlus included 10 of the 15 formats represented in the sample, WorldCat was noticeably weaker in representing some formats, such as newspaper articles. This is not surprising, given WorldCat’s longstanding role as a catalog, as opposed to a periodical index.

Through further analysis, a very noticeable divide is discernible between the three tools that found the most citations (Google Scholar, ArticlesPlus, and WorldCat) and the three that found the fewest (JSTOR and the companion databases AHL/HA). On average, Google Scholar, ArticlesPlus, and WorldCat found 308 of the 400 citations, or about 77 percent. By contrast, the average number found by JSTOR and AHL/HA was 75, or about 19 percent. On the surface, this finding suggests that
historians are citing sources across a wide range of disciplines, formats, and languages and 
that the literature of the field is most comprehensively captured by a discovery tool that can 
also span such a wide range. Remarks made by Stieg in her 1981 study have enduring reso-
nance, as she described “an important fact about history […] history is really only an umbrella 
term covering a wide variety of specializations that have little in common with each other but 
their method.”42 Historians necessarily consult the literature of the subfield in which they are 
working, which may be characterized by topics or emphases ranging from social and cultural 
to medical, scientific, or environmental. In addition, historians rely on bodies of theory and criticism that emerge from other disciplinary traditions, such as area studies, gender studies,
literary analysis, and political science. In other words, historical literature is not singularly 
defined and is unlikely to be well encapsulated in any search tool that is limited by discipline. 
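The divide described above can be reproduced from the platform totals reported in Tables 5 through 7. The snippet below simply restates that arithmetic (averages of found-counts over the 400-citation sample); it is an illustration, not part of the study's method:

```python
# Counts of the 400 sampled citations found by each platform (from the study).
found_counts = {
    "Google Scholar": 335,
    "ArticlesPlus": 318,
    "WorldCat": 272,
    "AHL/HA": 84,
    "JSTOR": 66,
}
N = 400

top = ["Google Scholar", "ArticlesPlus", "WorldCat"]
bottom = ["AHL/HA", "JSTOR"]

# Average citations found by each group of platforms.
top_avg = sum(found_counts[p] for p in top) / len(top)           # about 308
bottom_avg = sum(found_counts[p] for p in bottom) / len(bottom)  # 75

# Expressed as a share of the full sample.
top_pct = 100 * top_avg / N          # about 77 percent
bottom_pct = 100 * bottom_avg / N    # about 19 percent
```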

This finding also reinforces one of the questions that is central to this study, namely, 
how to discern the enduring value of conventional disciplinary databases and/or A&I tools. 
While results from a single citation analysis may not be definitive, those presented here do 
not suggest clear or unique value to the discovery of historical literature by the companion 
databases that are the most well known among library professionals: AHL/HA. Consequently, 
the present study suggests that there is not currently a subject-specific database that can claim 
to serve the discipline of history well and especially not better than multidisciplinary tools 
that operate at scale.

The multidisciplinary nature of historical literature cannot explain the gap between the top 
performing platforms and JSTOR, which is also multidisciplinary in nature, but represented 
only 16.5 percent of the sample. While JSTOR continues to observe a 3- to 5-year “moving 
wall” for much of its journal content, this should not necessarily impact the present study, 
which found that historians were most likely to cite sources that were at least a few years old. 
This study does not suggest that JSTOR is not a useful tool for historians to consult, especially 
considering its enduring popularity. Historian Lara Putnam named only one library database 
in her recent article on the impact of digital search tools on historical research and production, 
and it was JSTOR, typically grouped with and listed after tools like Google, Google Books, 
and Wikipedia. Discussing library databases generally, Putnam referred to “JSTOR and kin” 
to elucidate.43 The present study does suggest that historians are consulting and citing sources 
from a much wider sphere than what has been captured in the JSTOR archive and that there 
are platforms, such as ArticlesPlus, that tap into that sphere to a far greater extent.

It is worth reiterating that neither JSTOR nor AHL/HA provided particularly strong cov-
erage of journal articles, though both tools have traditionally specialized in providing access 
to journal literature. One explanation for this finding may be that it indicates limitations of 
the citation analysis method. However, the citations gathered for this study were tailored to 
represent secondary literature, specifically to ensure that JSTOR and AHL/HA would not be 
at an unfair disadvantage. A more complete explanation may emerge from further analysis of 
the citation data. For example, Jones, Chapman, and Woods were able to glean a set of com-
monly used, if not conclusively “core,” journals for the field of English history in their 1972 
study.44 It may be the case that a set of journals approaching core status would not emerge 
from this study, given the scope of the AHR. Further analysis of journal titles cited may help 
explain the weak performances of both AHL/HA and JSTOR.

Despite the narrowness of AHL/HA and the multidisciplinary nature of JSTOR, AHL/
HA actually outperformed JSTOR in this study, finding 21 percent of the sample. However, 
compared to the other platforms, AHL/HA emerges as a tool whose utility lies in indexing only 
a very narrow subset of secondary literature in history. Consequently, library professionals 
likely do a disservice to students and faculty if we perpetuate the notions that all disciplines 
have their primary indexes; that, in comparison to web scale services, these indexes are most 
appropriate for advanced research needs; and that, for history, AHL and HA are standard 
bearers. For example, Kirkwood and Kirkwood conveyed such a perspective in their admit-
tedly “nonscientific analysis,” which began with unbridled praise for Historical Abstracts, 
declaring that “no other history research tool matches its scholarly standards or comprehensive coverage.”45 Thomas Mann argued as recently as 2015 that “the silo databases for individual
disciplinary areas are there for a reason,” namely, to provide indexing coverage to the “most 
important journals within their subjects, undiluted by thousands of tangential periodicals.”46 
Without further analysis, it is impossible to say conclusively whether the 57 percent of journal 
articles that were not indexed by AHL/HA in the present study can be considered “tangential,” 
but such a claim would strike the author as unlikely. 

It is important to reiterate that this study did not include qualitative analysis and does not offer evidence to equate the presence of more citations with better usability. While citations were identifiable in Google Scholar, they were not necessarily attached to bibliographic records or links to full text, features that generally enhance the discovery experience.
The relative merits and drawbacks of Google Scholar have received considerable scrutiny in 
the literature.47 The evidence presented here is consequential for Google Scholar, insofar as it 
suggests that its scope is not necessarily as weak for nonscience fields as has been suggested 
elsewhere.48 

While ArticlesPlus did provide records for all citations found, the quality and complete-
ness of records varied, and inconsistencies and errors in indexing were observed. This can 
be explained, at least in part, by Summon’s method of creating hybrid records that reflect 
metadata and full text from multiple sources. In theory, items indexed in Summon should be 
represented by these single, hybrid records, though matching is not consistently successful in 
practice. This observation aligns with a widely cited critique of web scale discovery services, 
to the effect that they are “only as effective as the quality and completeness of the metadata 
they ingest, process and index” and could benefit from increased standardization.49 However, 
in addition to its vast scope of coverage, ArticlesPlus is configured with robust delivery op-
tions, providing users with reliable paths to locate materials in library holdings—or to request 
assistance—via an institutionally branded link resolver. By contrast, Google Scholar’s many 
textual citations may be easily construed as dead ends if researchers lack experience pursuing 
results presented in such brevity. As with any locally configured web scale tool, the delivery 
environment of ArticlesPlus, designed to work in tandem with its discovery features, reflects 
the commitment and continuing investments of the affiliated institution, more so than features 
inherent to Summon.

An unexpected finding was that WorldCat contained records for 18 journal articles (20%), 
out of the 90 included in the sample. By comparison, JSTOR contained records for 34 journal 
articles (38%), while AHL/HA contained records for 39 (43%). This finding almost certainly 
represents some libraries’ local practice of cataloging journal articles according to specified 
circumstances. It does not translate to a recommendation to use WorldCat for locating journal 
articles, but it does indicate that WorldCat is not as far behind JSTOR and AHL/HA in describing 
them as the author would have guessed. While WorldCat performed relatively well for many 
formats, languages, and date ranges, it is worth noting that the FirstSearch platform does not 
provide features that define web scale tools or that are consistent with user expectations, as 
outlined above. This circumstance may be remedied by libraries’ adoption of the WorldCat 
Discovery service, untested in the present study.

The present study does not suggest that AHL/HA are not useful tools for historians and 
students seeking secondary sources, even if they do not merit recognition as the biggest or best 
tools for finding historical literature. These tools provide access to a set of prominent history 
journals, though unrepresentative of the range of sources that historians consult. Previous qualitative studies that have found AHL/HA to be useful are not necessarily negated by the
very quantitative approach taken here. If the so-called “silo databases” do continue to bring 
value, it is likely related to the ability to search a relatively smaller and more specialized hay-
stack than a web scale tool can provide.

Part 3: Limitations
There are several limitations to the present study. First, the analyzed citations come from 
articles in a single journal within a specified six-year time frame. While the results align with 
findings from other studies using varying methodologies and covering varying time periods, it 
may still be the case that different patterns would emerge from citation analysis that included 
citations gathered from other formats, such as books, as well as from articles in other, more 
specialized, historical journals.

There are well documented limitations to citation analysis as a methodology. Jones, Chap-
man, and Woods quoted an enduring criticism of the method, to the effect that “an author 
need not cite what he reads nor read what he cites.”50 Stieg also noted that citation studies “can only analyze what is actually cited,” which is unlikely to be everything that was used, and mentioned that citation studies “cannot show relative importance among sources.”51 While
these critiques suggest that what gets cited cannot give us a full picture, they should not inhibit 
attempts to discern and interpret meaningful patterns among cited works. 

The search method used in this study entailed known item searches, conducted by a 
professional librarian, often taking advantage of advanced, fielded search options. Searching 
of this nature, done to form a confident basis for a quantitative assessment, is very different 
from the exploratory or topical searching that a researcher of any experience level might en-
gage in. The ability to find a known item with a complete citation can only partially capture 
its discoverability. Scholarly literature must also be discoverable through topical searching, 
untested by this study. However, the present study does bear on a clearer understanding of what is potentially discoverable, in that it adds to our knowledge of the types and volumes of sources we can expect to find in specific search platforms.

Finally, the study only considers the discipline of history, which may be more interdis-
ciplinary in nature than other fields. While the present study cannot describe the value that 
A&I tools may bring to the research experience across all disciplines, it does contribute to a 
clearer understanding that all A&I tools are not equally valuable and that their relationship 
to web scale discovery tools cannot be characterized categorically. 

Conclusion
The findings presented here point to several insights into historical literature and its discover-
ability. While English language monographs, published relatively recently, continue to stand 
out among work cited by historians, the full scope of this work is better represented by library 
discovery tools that incorporate wide ranges of formats, subject areas, date ranges, and lan-
guages than by the relatively narrower tools that researchers and library professionals alike 
have tended to associate with historical research. Because none of the search platforms that 
were tested found a sizable number of citations that were not found in any of the other tools, 
it is not possible to draw firm conclusions about discernible strengths for specified formats, 
languages, or date ranges. With the finding that book chapters and newspaper articles were 
most likely to be either uniquely represented or unrepresented across search platforms, the study suggests that discoverability for these formats in particular may be weaker than for other formats.

The evidence presented here should prompt more skepticism of, or potentially help to 
dismantle, the assumption that subject databases are always best for advanced uses, with 
web scale tools conceived as “supplementary” and better suited to the domain of initial or 
novice queries. While the experience of searching a smaller, more targeted index will likely 
continue to hold appeal and serve a purpose, our professional understanding of the roles of 
A&I tools needs to be qualified and merits greater evaluative attention. Some disciplines will 
likely continue to be well served by a strong A&I service that represents the vocabulary and 
the literature of the field; as the present study suggests, such is not the case for all fields.

Further, the awareness that disciplinary databases are not necessarily the best tools for advanced uses should not be accompanied by an assumption that the opposite is true.
The legacy configurations of many A&I interfaces will not be met as favorably as web scale 
discovery tools by users accustomed to the single search box experience. Libraries will do 
well to conduct further research into the enduring value of A&I tools and design solutions to 
bring their assets into the web scale discovery environment, characterized by robust options 
for discovery, delivery, and access to discipline-specific search mechanisms, when and where 
appropriate. 

Acknowledgements
The author is especially grateful to Ken Varnum and Lettie Conrad for their comments on 
earlier versions of this paper and to Beau Case for support and encouragement during the 
early phases of this project. Josh Ringuette and Alexandra Andre provided indispensable as-
sistance with data collection. The staff of the University of Michigan Library’s Research Data 
Services unit has enabled the preservation and sharing of this project’s relevant data. Many 
thanks to all.

Notes
 1. Andrew D. Asher, Lynda M. Duke, and Suzanne Wilson, “Paths of Discovery: Comparing the Search 

Effectiveness of EBSCO Discovery Service, Summon, Google Scholar, and Conventional Library Resources,” 
College & Research Libraries 74, no. 5 (Sept. 2013): 464, doi:10.5860/crl-374; Mary M. Somerville and Lettie Y. Con-
rad, “Collaborative Improvements in the Discovery of Scholarly Content: Accomplishments, Aspirations, and 
Opportunities: A SAGE White Paper,” available online at https://studysites.sagepub.com/repository/binaries/
pdf/improvementsindiscoverability.pdf [accessed 6 December 2017]; Courtney Lundrigan, Kevin Manuel, and 
May Yan, “‘Pretty Rad’: Explorations in User Satisfaction with a Discovery Layer at Ryerson University,” Col-
lege & Research Libraries 76, no.1 (Jan. 2015): 43, doi:10.5860/crl.76.1.43; Marshall Breeding, “The Future of Library 
Resource Discovery,” Information Standards Quarterly 27, no. 1 (Spring 2015): 24–30.

 2. Jason Vaughan, “Web Scale Discovery What and Why,” Library Technology Reports 47, no. 1 (2011): 5–11.
 3. Beth Thomsett-Scott and Patricia E. Reese, “Academic Libraries and Discovery Tools: A Survey of the 

Literature,” College & Undergraduate Libraries 19, no. 2/4 (Apr. 2012): 128, doi:10.1080/10691316.2012.697009; Nancy 
Fawley and Nikki Krysak, “Information Literacy Opportunities within the Discovery Tool Environment,” 
College & Undergraduate Libraries 19, no. 2/4 (Apr. 2012): 213, doi:10.1080/10691316.2012.693439; Stefanie Buck and 
Margaret Mellinger, “The Impact of Serial Solutions’ SummonTM on Information Literacy Instruction: Librarian 
Perceptions,” Internet Reference Services Quarterly 16, no. 4 (July 2011): 165, doi:10.1080/10875301.2011.621864.

 4. The vast literature on discovery has been well analyzed in several review articles, as well as compiled 
into edited volumes. A selection of these include: Thomsett-Scott and Reese, “Academic Libraries and Discovery 
Tools”; Nadine P. Ellero, “Integration or Disintegration: Where Is Discovery Headed?” Journal of Library Metadata 
13, no. 4 (Oct. 2013): 311–29, doi:10.1080/19386389.2013.831277; Planning and Implementing Resource Discovery Tools in 
Academic Libraries, eds. Diane Dallis and Mary Pagliero Popp (Hershey, PA: Information Science Reference, 2012).

https://doi.org/10.5860/crl-374
https://studysites.sagepub.com/repository/binaries/pdf/improvementsindiscoverability.pdf
https://studysites.sagepub.com/repository/binaries/pdf/improvementsindiscoverability.pdf
https://doi.org/10.5860/crl.76.1.43
https://doi.org/10.1080/10691316.2012.697009
https://doi.org/10.1080/10691316.2012.693439
https://doi.org/10.1080/10875301.2011.621864
https://doi.org/10.1080/19386389.2013.831277


Discovery and the Disciplines   213

 5. Asher, Duke, and Wilson, “Paths of Discovery,” 464–88; Sarah P.C. Dahlen and Kathlene Hansen, “Prefer-
ence vs. Authority: A Comparison of Student Searching in a Subject-Specific Indexing and Abstracting Database 
and a Customized Discovery Layer,” College & Research Libraries 78, no. 7 (Nov. 2017): 878–97, doi:10.5860/crl.78.7.878; 
Jeffrey Daniels, Laura Robinson, and Susan Wishnetsky, “Results of Web-Scale Discovery: Data, Discussions, 
and Decisions,” Serials Librarian 64, no. 1/4 (Jan. 2013): 81–87, doi:10.1080/0361526X.2013.761056; Suqing Liu, Sansan 
Liao, and Jing Guo, “Surviving in the Digital Age by Utilizing Libraries’ Distinctive Advantages,” Electronic 
Library 27, no. 2 (Apr. 10, 2009): 298–307, doi:10.1108/02640470910947647.

 6. Nadine P. Ellero, “An Unexpected Discovery: One Library’s Experience with Web-Scale Discovery Ser-
vice (WSDS) Evaluation and Assessment,” Journal of Library Administration 53, no. 5/6 (July 2013): 323–43, doi:10
.1080/01930826.2013.876824; Amy I. Kornblau, Jane Strudwick, and William Miller, “How Web-Scale Discovery 
Changes the Conversation: The Questions Librarians Should Ask Themselves,” College & Undergraduate Libraries 
19, no. 2/4 (Apr. 2012): 144–62, doi:10.1080/10691316.2012.693443; Thomsett-Scott and Reese, “Academic Libraries 
and Discovery Tools”; Vaughan, “Web Scale Discovery What and Why.”

 7. Ellero, “An Unexpected Discovery,” 325; Kenneth J. Varnum, “A Brief History of the Open Discovery Ini-
tiative,” Learned Publishing 30, no.1 (Jan. 2017): 45–48, doi:10.1002/leap.1078; Rachel Kessler et al., “Optimizing the 
Discovery Experience through Dialogue: A Community Approach,” Insights 30, no. 2 (July 2017): 32, doi:10.1629/
uksg.367. 

 8. Varnum, “A Brief History of the Open Discovery Initiative”; Jenny Walker, “The NISO Open Discovery 
Initiative: Promoting Transparency in Discovery,” Insights 28, no. 1 (Mar. 2015): 85, doi:10.1629/uksg.186; Nettie 
Lagace, “NISO Releases Recommendations from the Open Discovery Initiative: Promoting Transparency in 
Discovery Working Group,” Serials Review 40, no. 4 (Oct. 2014): 287–88, doi:10.1080/00987913.2014.978244; Michael 
Kelley, “Coming into Focus,” Library Journal 137, no. 17 (Oct. 15, 2012): 34.

 9. Ellero, “An Unexpected Discovery,” 326; Kessler et al., “Optimizing the Discovery Experience through 
Dialogue”; Marshall Breeding, “Looking Forward to the Next Generation of Discovery Services,” Computers in 
Libraries 32, no. 2 (Mar. 2012): 29; Andrew J. Welch, “Implementing Library Discovery: A Balancing Act,” in Plan-
ning and Implementing Resource Discovery Tools in Academic Libraries, eds. Diane Dallis and Mary Pagliero Popp 
(Hershey, PA: Information Science Reference, 2012), 325.

10. Asher, Duke, and Wilson, “Paths of Discovery,” 476; Buck and Mellinger, “Impact of Serial Solutions’ 
SummonTM,” 170.

11. Ellero, “An Unexpected Discovery,” 326; Buck and Mellinger, “Impact of Serials Solutions’ Summon™ on Information Literacy Instruction,” 165.

12. Thomsett-Scott and Reese, “Academic Libraries and Discovery Tools,” 128; Fawley and Krysak, “Information Literacy Opportunities within the Discovery Tool Environment,” 213; Buck and Mellinger, “Impact of Serials Solutions’ Summon™ on Information Literacy Instruction,” 165.

13. Catherine Cardwell, Vera Lux, and Robert J. Snyder, “Beyond Simple, Easy, and Fast: Reflections on Teaching Summon,” College & Research Libraries News 73, no. 6 (2012): 344–47, available online at https://crln.acrl.org/index.php/crlnews/article/view/8778/9344 [accessed 18 September 2017]; Asher, Duke, and Wilson, “Paths of Discovery,” 474.

14. Asher, Duke, and Wilson, “Paths of Discovery.”
15. Dahlen and Hanson, “Preference vs. Authority,” 884–86.
16. Ibid., 883, 887.
17. Ibid., 892.
18. Carol Tenopir, “Evaluation of Database Coverage: A Comparison of Two Methodologies,” Online Review 6, no. 5 (1982): 423–41; Thomas E. Nisonger, “Use of the Checklist Method for Content Evaluation of Full-Text Databases: An Investigation of Two Databases Based on Citations from Two Journals,” Library Resources & Technical Services 52, no. 1 (Jan. 2008): 4–17.

19. Nisonger, “Use of the Checklist Method,” 4.
20. Kee DeBoer, “Abstracting and Indexing Services for Recent U.S. History,” RQ 28, no. 4 (1989): 537–45.
21. Jennalyn W. Tellman, “A Comparison of the Usefulness of IBZ and FRANCIS for Historical Research,” Reference & User Services Quarterly 41, no. 1 (Fall 2001): 56–66; Hal P. Kirkwood and Monica C. Kirkwood, “Historical Research: Historical Abstracts with Full Text or Google Scholar,” Online 35, no. 4 (Aug. 2011): 28–32.

22. Clyve Jones, Michael Chapman, and Pamela Carr Woods, “The Characteristics of the Literature Used by 
Historians,” Journal of Librarianship 4, no. 3 (July 1972): 142, doi:10.1177/096100067200400301; Margaret F. Stieg, “The 
Information Needs of Historians,” College & Research Libraries 42, no. 6 (1981): 551, 554, doi:10.5860/crl_42_06_549.

23. Margaret Stieg Dalton and Laurie Charnigo, “Historians and Their Information Sources,” College & Research Libraries 65, no. 5 (2004): 405, doi:10.5860/crl.65.5.400.


214  College & Research Libraries March 2019

24. M. Sara Lowe, “Reference Analysis of the American Historical Review,” Collection Building 22, no. 1 (2003): 
15, doi:10.1108/01604950310457168.

25. Kirkwood and Kirkwood, “Historical Research,” 32.
26. Ibid., 32.
27. American Historical Association, “About the American Historical Review,” available online at https://www.historians.org/publications-and-directories/american-historical-review/about-the-american-historical-review [accessed 18 September 2017].

28. Alexa L. Pearce, Discovering History: An Analysis of Secondary Literature Cited in the American Historical 
Review, 2010–2015 (Oct. 2, 2017), distributed by Deep Blue Data, doi:10.7302/Z2QR4V9S.

29. DeBoer, “Abstracting and Indexing Services for Recent U.S. History,” 539.
30. Lara Putnam, “The Transnational and the Text-Searchable: Digitized Sources and the Shadows They Cast,” American Historical Review 121, no. 2 (Apr. 2016): 378, doi:10.1093/ahr/121.2.377; Dalton and Charnigo, “Historians and Their Information Sources,” 411.

31. At the time of submission, the U-M Library is preparing to launch a consolidated search interface that 
will allow users to search simultaneously across its catalog as well as all content from ArticlesPlus, along with 
database and journal holdings. 

32. The full list of citations found in only one search platform is included in the published data for this study. 
See Pearce, Discovering History.

33. The full list of citations that were not found in any of the search platforms is included in the published 
data for this study. See Pearce, Discovering History.

34. The full list of citations found in all search platforms is included in the published data for this study. See 
Pearce, Discovering History.

35. Dalton and Charnigo, “Historians and Their Information Sources”; Lowe, “Reference Analysis of the 
American Historical Review,” 15–16.

36. American Historical Association, “Guidelines for the Professional Evaluation of Digital Scholarship by Historians,” available online at https://www.historians.org/teaching-and-learning/digital-history-resources/evaluation-of-digital-scholarship-in-history/guidelines-for-the-professional-evaluation-of-digital-scholarship-by-historians [accessed 26 September 2017].

37. Dalton and Charnigo, “Historians and Their Information Sources,” 406.
38. Stieg, “The Information Needs of Historians,” 554; Dalton and Charnigo, “Historians and Their Information Sources,” 408.
39. Mary Jane Curry and Theresa Lillis, “The Dominance of English in Global Scholarly Publishing,” International Higher Education, no. 46 (Winter 2007), doi:10.6017/ihe.2007.46.7948.
40. Lowe, “Reference Analysis of the American Historical Review,” 16.
41. Ibid., 16.
42. Stieg, “The Information Needs of Historians,” 550.
43. Putnam, “The Transnational and the Text-Searchable,” 378, 383.
44. Jones, Chapman, and Woods, “The Characteristics of the Literature Used by Historians,” 145.
45. Kirkwood and Kirkwood, “Historical Research,” 29.
46. Thomas Mann, The Oxford Guide to Library Research (New York, NY: Oxford University Press, 2015), 79.
47. Hannah Rozear, “Where ‘Google Scholar’ Stands on Art: An Evaluation of Content Coverage in Online Databases,” Art Libraries Journal 34, no. 2 (Apr. 2009): 21–25; Julie Arendt, “Imperfect Tools: Google Scholar vs. Traditional Commercial Library Databases,” Against the Grain 20, no. 2 (2008): 26, 28, 30; Xiaotian Chen and Kevin O’Kelly, “Cross-Examining Google Scholar,” Reference & User Services Quarterly 52, no. 4 (2013): 279–82.

48. Rozear, “Where ‘Google Scholar’ Stands on Art”; Kirkwood and Kirkwood, “Historical Research.”
49. Ellero, “Integration or Disintegration,” 316.
50. Jones, Chapman, and Woods, “The Characteristics of the Literature Used by Historians,” 139.
51. Stieg, “The Information Needs of Historians,” 549.
