Scholarly Metrics Baseline: A Survey of Faculty Knowledge, Use, and Opinion about Scholarly Metrics

Dan DeSanto and Aaron Nichols

Dan DeSanto and Aaron Nichols are Library Assistant Professors in Bailey/Howe Library at the University of Vermont; e-mail: ddesanto@uvm.edu, aaron.nichols@uvm.edu. ©2017 Dan DeSanto and Aaron Nichols, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/4.0/) CC BY-NC. doi:10.5860/crl.78.2.150

This article presents the results of a faculty survey conducted at the University of Vermont during academic year 2014–2015. The survey asked faculty about: familiarity with scholarly metrics, metric-seeking habits, help-seeking habits, and the role of metrics in their department’s tenure and promotion process. The survey also gathered faculty opinions on how well scholarly metrics reflect the importance of scholarly work and how faculty feel about administrators gathering institutional scholarly metric information. Results point to the necessity of understanding the campus landscape of faculty knowledge, opinion, importance, and use of scholarly metrics before engaging faculty in further discussions about quantifying the impact of their scholarly work.

Faculty at our institution possess a range of attitudes, knowledge, and opinions about the metrics that purport to measure the impact and influence of their scholarship. While many faculty work in departments that require and emphasize traditional scholarly metrics in the reappointment, tenure, and promotion process (RPT), other departments use nontraditional measures that better fit their discipline, and still other departments rely almost exclusively on professional judgment. We sought to capture at the University of Vermont, a midsized research institution, a scan of our campus’ faculty, not only to assess disciplinary differences, but also to put together a campuswide picture of how our faculty use, perceive, and understand scholarly metrics. Five guiding questions shaped our survey work:
• How familiar are faculty with scholarly metrics?
• How/why/when do they seek them out?
• Where do faculty turn for help?
• What role do scholarly metrics play in the tenure and promotion process?
• What opinions and thoughts do faculty members have about how well these metrics reflect the impact of a scholar’s work?
These guiding questions served as the framework for our survey and also serve as the outline for this article’s results section.

Literature Review
The field of scholarly metrics has often focused on detailed studies of specific metrics,1 suggestions for new metrics,2 and the benefits/limitations of certain impact measures.3 In recent years, a discourse has emerged that favors article-level metrics or “altmetrics” and criticizes traditional journal-level metrics for conflating the impact of an article with the impact of the journal in which it was published. Other criticisms include journal-level metrics taking too long to generate and being easy to manipulate.4 In response, the field of altmetrics seeks to find evidence of impact by examining the digital artifacts associated with an article: number of downloads, number of times viewed, number of readers in a scholarly community like Mendeley or ResearchGate, number of times shared on social media. Priem, Taraborelli, Groth, and Neylon in their Altmetrics: A Manifesto explain:

That dog-eared…article that used to live on a shelf now lives in Mendeley, CiteULike, or Zotero—where we can see and count it.
That hallway conversation about a recent finding has moved to blogs and social networks—now, we can listen in. The local genomics dataset has moved to an online repository—now, we can track it. This diverse group of activities forms a composite trace of impact far richer than any available before.5 The article-level or “altmetric” demonstration of impact is growing as scholars use applications like ResearchGate, Impact Story, or Mendeley and institutions subscribe to campuswide applications like PlumX that track the altmetrics of their researchers. Because altmetrics have become an essential piece of the discourse regarding scholarly metrics, we chose to include questions about this newer mode of measuring impact alongside our questions about traditional metrics. Our campus, like many others, places varying levels of importance and value on scholarly metrics from academic discipline to academic discipline. Different disci- plines look to different metrics. Within disciplines, debates occur as to the merits and shortcomings of specific indicators.6 Disciplines may also use indicators of impact for different purposes. Scientists commonly use impact metrics to assist in making deci- sions regarding hiring, tenure, promotion, and salary increases.7 For researchers in the humanities, where books are a major platform for scholarly output, demonstrating impact becomes more complicated. Citation indexes often include journal citations but exclude book citations.8 Since book publication and citation information is not fre- quently listed in citation indexes, it is up to academic departments to devise their own measures for faculty success and not rely on citation indexes alone. While numerous studies have pointed out the limitations associated with traditional scholarly metrics,9 they remain an important piece of the RPT process for many academics and can be more or less problematic for faculty depending on differing emphases within disciplines. Our experience working with faculty at the University of Vermont indicated that some faculty, especially newer faculty, have questions about how to find, track, and collect scholarly metric information related to their own scholarly output. There is a long history of academic libraries in the United States and Europe offering citation support to faculty members looking to demonstrate the influence of their scholarly output.10 Librarians commonly assist tenure and promotion candidates in locating journal impact measures, performing citation searches, and understanding traditional and altmetrics.11 In some cases academic librarians have also worked with administra- tors to offer support at the institutional level.12 Academic librarians are well suited to provide faculty support in this area because they are familiar with scholarly informa- tion resources across varying subject areas and have long-term experience using and 152 College & Research Libraries February 2017 developing bibliographic data. Yet many librarians lack an understanding of where their faculty members are starting out or what types of metrics they find important. As we sought to gain this understanding of our own campus, we began searching for guiding examples. 
While there are an abundance of studies that measure disciplinary and rank differ- ences in faculty perceptions and awareness of important scholarly communications endeavors such as use of library tools,13 institutional repositories,14 and open access journals,15 we could find no studies addressing how faculty members understand scholarly metrics or how useful they find them. Likewise, we could find no studies that captured perceptions of importance to RPT or faculty opinions about applications seek- ing to track campuswide scholarly output. These questions are important to librarians if we are to meet our disciplinary faculty colleagues “in the middle” by understanding what motivates, encourages, or concerns them about scholarly metrics. To that end, we set out to capture a scan of our own campus that would create a picture of why and how our faculty demonstrate scholarly impact, what metrics and tools they use (if any), and how they feel about efforts to quantify the impact of their scholarly work. Method During winter break 2014–2015, an online survey was distributed to all tenure-track faculty on campus with the exception of faculty in the College of Medicine. Most fac- ulty in the College of Medicine do not have teaching responsibilities and focus solely on research and publication. We excluded this large cohort of nonteaching research faculty because their RPT expectations and emphasis on publishing are unique and would have dramatically affected our results. Adjunct faculty and those not involved in the RPT process were also excluded from the survey because there is little or no institutional demand to demonstrate scholarly impact. The survey was designed with an online survey tool and distributed through campus e-mail. The survey instrument included nine questions and followed the guiding ques- tions outlined at the beginning of this document. Information was collected regarding demographics, knowledge and understanding of scholarly metrics, help-seeking habits, perceived importance to the RPT process, seeking and tracking metrics, and opinions on the application of scholarly metrics. The survey instrument contained both closed- and open-ended survey questions and took approximately ten minutes to complete. To encourage open and honest responses, the survey was anonymized to protect the identities of all survey responders. The instrument was reviewed by our campus’ sta- tistical consulting clinic and piloted with five faculty members prior to its distribution. The survey was distributed to faculty on December 18, 2014, and was closed on February 6, 2015. Two reminders were sent out during this time period. Out of 470 faculty solicited for participation, 225 faculty began the survey and 206 completed it, providing a response rate of 44 percent. Results were tabulated and analyzed with the survey tool’s datasets and IBM SPSS statistical software. Both inferential and descrip- tive statistics were included for statistical analysis. Some open-ended responses were analyzed for trends, as demonstrated in the results. Results are presented as the total of all survey respondents and are, for some ques- tions, broken down by academic rank and/or disciplinary category. To present data that are statistically significant, we present data in three major disciplinary categories: sciences; social sciences, business, and social services; and humanities and arts. The appendix lists the departments represented in each disciplinary category. 
We grouped the social sciences, business, and social services together because many universities as- sociate business departments or schools with the social sciences. A connection between business and the social sciences is also made in the scholarly literature. At our institu- Scholarly Metrics Baseline 153 tion, social sciences and social services are brought together under the same college. We grouped the arts and humanities together because they share a well-established focus on critical thinking and expression. The departments we include in the sciences are traditionally those associated with the STEM sciences. Again, we designated these three disciplinary categories to provide clues to disciplinary trends while ensuring that our groupings remained broad enough to prove statistically significant. We define the term “scholarly metrics” to include both traditional impact metrics (such as h-index, ISI journal impact factor, SCImago journal rank) as well as citation count. While we could have asked separate questions about impact metrics and citation counts, we felt that this could be intimidating to some respondents and would have made the survey lengthy. We consider article-level metrics or “altmetrics” separately and posed questions specifically about altmetrics. The campus discourse at the time of our survey adds a measure of significance to the survey’s response rate and the responses themselves. As our survey was about to be released, our provost sent a memo to college deans charging them with establishing a list of metrics that demonstrate scholarly productivity and impact in their respective colleges. We did not know about the memo before it was disseminated, and our survey launched a few weeks after the memo went to deans. The provost’s charge generated a good amount of discussion from faculty, and we recognize that our results were gath- ered during a time of heightened campus awareness and focus on scholarly metrics. Results Demographics of Respondents Responses were spread across a wide variety of academic departments. Faculty from a total of thirty-nine different departments participated in the survey. Table 1 shows the departments with the most faculty respondents. Overall we were pleased to see diverse representation from most departments on campus. The distribution of survey responses from assistant, associate, and full professors closely match our campus’ percentages of assistant, associate, and full professors. Full professors made up 36 percent of survey respondents, and they account for 41.4 percent of all faculty on our campus. Associate professors accounted for 41 percent of survey respondents and 39.4 percent of all faculty, and assistant professors made up 19 percent of survey respondents and 19.2 percent of faculty campuswide. TABLE 1 Demographic Responses by Department* Department Participants Engineering 17 History 14 Education 13 Psychological Sciences 12 Romance Languages and Linguistics 12 English 10 Political Science 10 Business 9 *31 other departments gave 7 or fewer responses. 154 College & Research Libraries February 2017 Familiarity with Traditional Metrics & Altmetrics Our results showed a wide range of facility with traditional journal-level metrics. 
To the question, “How well do you feel you understand scholarly metrics (journal impact factor, h-index, SJR)?” about a third of all respondents reported that they understood traditional metrics “not at all” or “not very well,” a third reported “somewhat,” and a third reported “fairly well” or “extremely well.” Of these respondents, assistant profes- sors reported slightly higher rates of understanding, which is logical as faculty at this rank would show more concern about demonstrating scholarly impact for promotion. When broken down by disciplinary category, faculty in the sciences and the social sciences/business/social services categories reported much higher rates of understand- ing than those in the humanities and arts. Most striking in the results dealing with understanding metrics was the stark differ- ence between faculty in the humanities and arts and faculty in other departments. We found that faculty in the social sciences/business/social services category understand metrics at more similar rates to faculty in the sciences. Altmetrics, or article-level metrics, had much lower rates of understanding. This is not surprising, since altmetrics are still very new to most faculty. For clarification, we added explanatory statements to response choices (“This term is completely new to me,” “I have heard the term”). Over two thirds of faculty respondents stated that they were “not at all” or “marginally” familiar with altmetrics. Only about 7 percent of respondents said that they were either “familiar” or “extremely familiar” with alt- metrics, indicating that they have started tracking their own altmetric data. TABLE 2 How well do you feel you understand scholarly metrics*? (by rank) Assistant Professor Associate Professor Full Professor Total (206 Responses) Not at All 2.5% 12.4% 10.4% 9.7% Not Very Well 22.5% 19.1% 26.0% 22.3% Somewhat 32.5% 37.1% 33.8% 35.0% Fairly Well 37.5% 27.0% 24.7% 28.2% Extremely Well 5.0% 4.5% 5.2% 4.9% *Incomplete responses not included in this table. TABLE 3 How Well do you feel you understand scholarly metrics?* (by discipline) Sciences Social Sciences, Business, and Social Services Humanities & Arts Total (205 Responses) Not at All 0% 3.3% 31% 9.8% Not Very Well 8% 23.3% 41.4% 22.0% Somewhat 42.5% 36.7% 22.4% 35.1% Fairly Well 41.4% 31.7% 5.2% 28.3% Extremely Well 8% 5% 0% 4.9% *Incomplete responses not included in this table. Scholarly Metrics Baseline 155 Broken out by rank, full professors were much more likely to have no experience at all with altmetrics. Faculty at the assistant professor rank were marginally familiar with altmetrics; however, they were more likely than their senior colleagues to have begun tracking the altmetric data related to their scholarly work. TABLE 4 How familiar are you with “altmetrics” or non-traditional means of demonstrating scholarly impact?* (by rank) Assistant Professor Associate Professor Full Professor Total (206 Responses) Not at All Familiar (This term is completely new to me) 7.5% 39.3% 50.6% 37.4% Marginally Familiar (I have heard the term) 52.5% 29.2% 29.9% 34.0% Somewhat Familiar (I have seen altmetrics before but do not personally track them) 27.5% 24.7% 14.3% 21.4% Familiar (I have seen altmetrics and have started gathering altmetrics on my own scholarship) 12.5% 5.6% 3.9% 6.3% Extremely Familiar (I track my own altmetrics and use them to demonstrate scholarly impact) 0.0% 1.1% 1.3% 1.0% *Incomplete responses not included in this table. 
TABLE 5 How familiar are you with “altmetrics” or non-traditional means of demonstrating scholarly impact?* (by discipline) Sciences Social Sciences, Business, and Social Services Humanities & Arts Total (205 Responses) Not at All Familiar (This term is completely new to me) 27.6% 26.7% 62.1% 37.1% Marginally Familiar (I have heard the term) 39.1% 36.7% 24.1% 34.1% Somewhat Familiar (I have seen altmetrics before but do not personally track them) 25.3% 23.3% 13.8% 21.5% Familiar (I have seen altmetrics and have started gathering altmetrics on my own scholarship) 6.9% 11.7% 0% 6.3% Extremely Familiar (I track my own altmetrics and use them to demonstrate scholarly impact) 1.1% 1.7% 0% 1.0% *Incomplete responses not included in this table. 156 College & Research Libraries February 2017 Within disciplines, there is a much more varied exposure to altmetrics in the sci- ences and social sciences/business/social services categories than in the humanities and arts. Largely, faculty in the humanities and arts indicated that the term “altmetrics” was completely new to them. The sciences and social sciences/business/social services categories were more likely to report greater exposure to altmetric measures. Very few faculty members on our campus track their own altmetrics. We do not cur- rently have an institutional subscription to an altmetrics software like PlumX or Altmetric. com, nor have our libraries done a great deal of outreach work around altmetrics. None- theless, given the amount of attention devoted to altmetrics in the information sciences literature, we were surprised to see just how rarely faculty tracked their own altmetrics. We were not surprised that assistant professors were more likely to be familiar with altmetrics, indicating that newer faculty may be more eager to demonstrate scholarly impact through nontraditional means. As with the previous results, faculty in the social sciences, business, and social services fields closely mirrored faculty in the sciences and had an even greater number of faculty reporting that they were “familiar” with altmetrics. Importance to the Tenure and Promotion Process More than half of respondents indicated that their departments encouraged the use of scholarly metrics in the tenure and promotion process. However, only 27 percent of respondents indicated that their department required any type of scholarly metrics. There were seemingly high rates of respondents who did not know if their departments encouraged or required the inclusion of scholarly metrics. In a follow-up study, the survey results presented here could be corroborated with department chairs to see how well faculty understand the expectations for demonstrat- ing scholarly impact within their department’s RPT process. The results presented here illustrate a significant number of faculty that are unsure of their department’s RPT expectations for demonstrating scholarly impact. A wide range of importance is assigned to scholarly metrics across campus. To the question “How important are scholarly metrics to your department’s tenure and promotion process?” no one category of importance garnered more than 30 percent of respondents. “Fairly important” received the highest level of indication at 27.3 percent. Extremely different levels of importance are assigned to scholarly metrics in the RPT process depending on a faculty member’s discipline. 
While these disciplin- ary differences may not be surprising, we stress the significance and degree of these disciplinary differences and also the similarity of the social sciences/business/social services category to the sciences. TABLE 6 Does your department encourage the inclusion of scholarly metrics in your tenure and promotion dossier?* (all respondents) YES NO Don’t Know No Answer 56% 23% 16% 5% TABLE 7 Does your department require the inclusion of scholarly metrics in your tenure and promotion dossier?* (all respondents) Yes No Don’t Know No Answer 27% 46% 21% 5% Scholarly Metrics Baseline 157 By rank, assistant professors report scholarly metrics being of greater importance to RPT than their more senior colleagues. More than half (55%) of faculty at the assistant professor rank feel metrics are “fairly” or “extremely” important in their department’s RPT process as compared to 37.1 percent of associate professors and 36.4 percent of full professors. By cross-tabulating data on perceived importance to the RPT process with the data presented earlier on faculty understanding, we are able to confirm a relationship between faculty understanding and the importance of scholarly metrics in the RPT process. Said simply: faculty who report better understanding of scholarly metrics also report them as a more important part of the RPT process. Likewise, respondents who reported not understanding scholarly metrics reported that metrics were not important to their RPT process. From this we can gather that most faculty learn about scholarly metrics when scholarly metrics become important to their career advancement. While the previous questions sought to capture our campus’ current situation regarding the importance of scholarly metrics to the RPT process, we also sought to capture faculty opinions regarding how much weight ought to be assigned to metrics in the RPT process. We asked the multiple choice question, “How much weight do you feel your department should place on scholarly metrics in their promotion and tenure decisions?” and left space for follow-up textual responses. TABLE 8 How important are scholarly metrics to your department’s tenure and promotion process?* (by discipline) Sciences Social Sciences, Business, and Social Services Humanities & Arts Total (205 Responses) Not At All Important 2.3% 5% 44.8% 15.1% Not Very Important 10.3% 8.3% 19% 12.2% Somewhat Important 23% 21.7% 12.1% 19.5% Fairly Important 32.2% 40% 6.9% 27.3% Extremely Important 21.8% 11.7% 1.7% 13.2% Don’t Know 10.3% 13.3% 15.5% 12.7% *Incomplete responses not included in this table. TABLE 9 How important are scholarly metrics to your department’s tenure and promotion process? (by rank) Assistant Professor Associate Professor Full Professor Total (206 Responses) Not At All Important 2.5% 18% 18.2% 15.0% Not Very Important 7.5% 10.1% 16.9% 12.1% Somewhat Important 12.5% 23.6% 19.5% 19.9% Fairly Important 30.0% 28.1% 24.7% 27.2% Extremely Important 25% 9% 11.7% 13.1% Don’t Know 22.5% 11.2% 9.1% 12.6% 158 College & Research Libraries February 2017 More than half of faculty respondents indicated that “some weight” should be as- signed, around a third responded that “very little weight” should be assigned, and only 5 percent felt that “a great deal of weight” should be placed on scholarly metrics. TABLE 10 Importance to RPT vs Understanding: Cross-tabulation How well do you feel you understand scholarly metrics? 
Columns (responses to the question above): Not at All | Not Very Well | Somewhat | Fairly Well | Extremely Well | Total
Rows: “How important are scholarly metrics to your department’s tenure and promotion process?” Each cell is the percentage within the corresponding understanding column.
Not at All: 60% | 28.3% | 5.6% | 1.7% | 10% | 15%
Not Very Important: 5% | 15.2% | 13.9% | 10.3% | 10% | 12.1%
Somewhat Important: 10% | 8.7% | 33.3% | 15.5% | 20% | 19.9%
Fairly Important: 5% | 21.7% | 22.2% | 43.1% | 40% | 27.2%
Extremely Important: 5% | 4.3% | 11.1% | 24.1% | 20% | 13.1%
*Incomplete responses not included in this table.

TABLE 11
How much weight do you feel your department should place on scholarly metrics in their tenure and promotion decisions?
Columns: Sciences | Social Sciences, Business, and Social Services | Humanities & Arts | Total (205 Responses)
Very Little Weight: 20.7% | 28.3% | 67.2% | 36.1%
Some Weight: 73.6% | 63.3% | 31% | 58.5%
A Great Deal of Weight: 5.7% | 8.3% | 1.7% | 5.4%

The sciences and the social sciences/business/social services category again responded to this question very similarly. A full 67.2 percent of faculty in the humanities felt that “very little weight” should be assigned to scholarly metrics, as compared to 20.7 percent in the sciences and 28.3 percent in the social sciences/business/social services category. Further, 73.6 percent of faculty members in the sciences and 63.3 percent in the social sciences, business, and social services felt that “some weight” should be assigned. Less than 10 percent of faculty in every disciplinary category felt that scholarly metrics should be assigned “a great deal of weight” in the RPT process.

We followed up with a space for textual responses and asked why faculty assigned the importance they did. Respondents gave varying responses, some very nuanced, others focused on the nature of work in their discipline, and still others more broadly philosophical about the implications of quantifying scholarship. Sample responses are below:
• “It is important at one level, because my department has faculty from different fields… There are different journals and we are not really familiar with the importance of the journals that are in the other field.”
• “It’s important to publish research in reputable outlets and to publish articles that other scholars refer to in their own work. Scholarly metrics help to show the extent to which a researcher is contributing to the collective field.”
• “We should use a diversity of measures and qualitative comments to demonstrate the impact of a scholar’s work in a sub-field.”
• “The value of the work itself should be judged on its own merits. The worth or weight of the location of publication is frequently fraught with political factors beyond the author’s control.”
• “Quantification of the value of a historian’s scholarly output is not a very useful enterprise, and cannot reflect effectively a scholar’s contribution to the field, neither in the short nor the long term. It also indirectly discourages disciplined method and precision, and indirectly encourages quantity rather than quality of scholarly accomplishment.
This poses a particular danger to probationary faculty.” • “We already have a better measure of the quality and significance of the work: the opinions of a battery of experts.” • “In Education, many journals that practitioners actually read do not have impact factors. Journals for researchers do not influence practice nearly as much. Do we need to write for researchers if we got into this field to influence practice?” Open-ended responses brought up many of the arguments that one might expect. Some stated that scholarly metrics are a broad brush but remain effective tools for measuring scholarly impact. Others asserted that metrics are fraught with problems and are too imprecise to be valid. Many other responses touched upon issues within the scholarly landscape we had not expected. The initial response above is one such response. It explains how scholarly metrics can be a tool for communication within a bifurcated discipline. The final response above points out how academics often take for granted that scholarly material is necessarily intended for other scholars. These responses, though differing in their perceptions of importance, remind us just how complex, intricate, and individualized scholarly communication can be. Seeking Help with Metrics When asked in an open question where on campus they would turn for help with scholarly metrics, we found five distinct categories that trended in the responses. Faculty members largely identified our libraries as the resource on campus to which they would turn despite our libraries offering no formal outreach in this area. The 160 College & Research Libraries February 2017 percentage of results pointing to the libraries may be slightly higher due to the survey having been administered by researchers at the libraries; however, the findings point to faculty members’ reliance on the library for support. Other significant findings include an indication that faculty members also draw from collegial and mentor relationships for help with scholarly metrics. A significant number of respondents (19%) stated that they were unsure where to turn for help. When asked in another open question, “What information regarding scholarly met- rics or impact-tracking would be most helpful to you?” a number of themes emerged. Responses highlighted a faculty desire for: • Pragmatic descriptions of individual metrics and “how to” resources for find- ing and tracking metrics • More information about tracking impact measures related to their own scholar- ship (Google Scholar profiles, alerts, and the like) • Information about article-level (altmetric) data • A way of identifying the metrics most relevant to their discipline While libraries have a large role to play in educating faculty about measures of scholarly impact, it seems what faculty members want most is short, pragmatic instruc- tion that illustrates how to find impact measures most pertinent to their own work. Providing this type of tailored instruction in different disciplines may be a tall order; however, scholarly metrics may be an area where instruction delivered at point-of- need proves most effective. Seeking Scholarly Metrics We sought to find out when, apart from the promotion and tenure procedure, faculty members seek out scholarly metric information. 
We asked the open question, “Besides putting together your reappointment or promotion dossiers, when do you look at scholarly metrics?” The short answer is: “usually never.” After grouping the textual responses, more than 41 percent of respondents said that they never seek out metrics, apart from the RPT process. A small number of faculty (10% or less) responded that they seek out metrics to accomplish any of the following tasks: assess the impact of their own scholarly work; assess the impact of a journal in which they are considering publishing; make a case for the impact of their own work during annual performance reviews; evaluate job candidates; or as part of performing a literature search. Below are some selected responses that represent the categories above: • “I get daily emails from Academia.edu on when anyone searches for me on Google and other search engines. About once every two weeks I click on the link in the email and actually look at the data they provide, such as which keywords they used when searching, what country they’re from, and which articles of mine they ultimately find and download. I’d say about once a semester I poke around Google Scholar and check for new citations to my work. I often end up reading the papers that cite my work as a way to stay current in my field and also to understand what parts of my work are getting taken up and put to use.” TABLE 12 Where on campus would you turn for help with scholarly metrics? Libraries Colleagues In Department Or Chair I Don’t Know/ Not Sure Internet/ Electronic Sources Deans/ Associate Deans/Provost 41% 22% 19% 8% 5% Scholarly Metrics Baseline 161 • “When trying to figure out where to send new work for submission.” • “I look at them to determine potential gaps in the literature. For example, if an article has a lot of citations/impact but contains several flaws, then it helps me formulate potential research opportunities to improve the study.” • “For academic program review, and in assessing the “hirability” of new faculty members—the latter, of course, with a defensive stance for their future success in the academy.” • “During annual review preparation.” • “Preparing reference letters as solicited from other universities for candidates in the RPT process, or reference letters for job candidates.” We should not take for granted that scholars are continually measuring the reach and impact of their own scholarly work. On our campus, results show that they are not. Whether due to a lack of knowledge about the tools at their disposal, a disinterest in what happens to their scholarship once it clears the hurdle of publication, or simply not enough time to delve into this “secondary” type of scholarly work, scholars on our campus rarely track the impact to their scholarship unless it is required for the purposes of RPT. When respondents do go looking for scholarly metric information, they overwhelm- ingly turn to either Google Scholar (51.56%) or Journal Citation Reports (JCR) (39.11%). Both assess impact at the journal level, JCR using journal impact factor and Google Scholar using h-5 index. However, faculty may be turning to Google Scholar for other measures such as citation counts or their own h-5 index number (this metric can be used to measure a scholar’s body of work as well as a publication’s). Our survey did not parse out how or why faculty use Google Scholar metrics so readily. 
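For orientation, the two measures named above have compact standard definitions; we reproduce them here only as background (they were not part of our survey instrument, and the notation is ours). If the N items in a body of work, whether a scholar’s papers or a journal’s articles, have citation counts sorted in descending order, \( c_1 \ge c_2 \ge \dots \ge c_N \), then the h-index is

\[ h = \max\{\, i : c_i \ge i \,\}, \]

that is, the largest i such that i items have each received at least i citations. Google Scholar’s h5-index is the same calculation restricted to items published in the last five complete calendar years. The two-year journal impact factor reported in Journal Citation Reports for a journal J in year y is

\[ \mathrm{JIF}_y(J) = \frac{\text{citations received in year } y \text{ by items published in } J \text{ in years } y-1 \text{ and } y-2}{\text{number of citable items published in } J \text{ in years } y-1 \text{ and } y-2}. \]

Both are aggregates over a career or a journal rather than over a single article, which is one reason they can overstate or understate the reach of any individual piece of scholarship.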
TABLE 13 What resources do you use to find scholarly metric information?* (all respondents) Google Scholar 51.56% Journal Citation Reports 39.11% SCImago Journal & Country Rank (SCOPUS) 3.11% None 29.33% Other 12.89% No Answer 6.67% *Additional responses include Researchgate and Journal Self-Report. TABLE 14 Metrics Resources by Discipline Do you use…..to find scholarly information? % “YES” in the Sciences % “YES” in Social Sciences, Business, and Social Services % “YES” in the Humanities & Arts Google Scholar 70.1% 71.7% 17.2% Journal Citation Reports 64.4% 50% 1.7% SCImago Journal & Country Rank (SCOPUS) 2.3% 6.7% 0% None 9.2% 20% 75.9% 162 College & Research Libraries February 2017 Consistent with the findings above, the faculty that do search Google Scholar and JCR overwhelmingly come from the sciences and social sciences/business/social services; however, faculty in the humanities and arts did report moderate usage of Google Scholar. Additionally, we asked faculty whether or not they currently use an application or tool (such as a Google Scholar profile or personal account on ResearchGate) to track the metric data related to their own scholarly work. Most (78.5%) do not, but a significant number (21.5%) do. Again, a disciplinary divide is evident. Consistent with our previous findings, most usage of scholarly metrics resources comes from the sciences and social sciences/business/social services. Similarly, it is faculty in these fields that use applications and tools to track their impact data. No faculty members in the humanities and arts responded that they use an application or program to track scholarly metrics. Opinions Regarding Scholarly Metrics Quantifying a scholar’s work is not a trivial thing. As librarians, we should keep in mind that these numeric indicators often represent years of scholarly work and a ca- reer exploring a certain topic. With this in mind, we asked faculty two more broadly philosophical questions. The first concerned the effectiveness of scholarly metrics to demonstrate the impact of a scholar’s work. The second concerned the growing trend of universities implementing campuswide applications to track and aggregate the scholarship produced by their faculty. TABLE 15 Do you currently use applications or programs to track metrics related to your scholarly output? Sciences Social Sciences, Business, and Social Services Humanities & Arts Total (205 Responses) Yes 32.2% (28) 26.7% (16) 0.0% (0) 21.5% (44) No 67.8% (59) 73.3% (44) 100% (58) 78.5% (161) TABLE 16 How accurately do scholarly metrics reflect the importance of a researcher’s scholarly work? 
(by discipline) Sciences Social Sciences, Business, and Social Services Humanities & Arts Total (205 responses) Not Accurately At All 6.9% 6.7% 43.1% 17.1% Not Very Accurately 21.8% 35% 39.7% 30.7% Somewhat Accurately 47.1% 38.3% 15.5% 35.6% Fairly Accurately 21.8% 16.7% 0% 14.1% Extremely Accurately 2.3% 3.3% 1.7% 2.4% Scholarly Metrics Baseline 163 Taken together, 47.8 percent of faculty members felt that scholarly metrics reflected the importance of a researcher’s work “not accurately at all” or “not very accurately.” Slightly fewer (35.6%) felt that importance was reflected “somewhat accurately,” and only 16.5 percent felt that importance was reflected either “fairly accurately” or “extremely accurately.” Notable also was the extremely small number (2.4%) who responded “extremely accurately.” Disciplinary trends continued into this area of opinion, with faculty in the sciences more likely to view metrics as effective in conveying the importance of a researcher’s scholarly work, faculty in the social sciences/business/social services category slightly less likely to find metrics an effective means, and the vast majority of faculty in the humanities and arts viewing metrics as “not at all accurate” or “not very accurate.” We asked respondents in this section to expand upon their stated opinions. Selected responses include: • “The most innovative or iconoclastic ideas often spend their first decades in the scholarly margins. In my field there can be many decades, even generations, before a published piece of knowledge is built on by someone else. I routinely draw on 19th (century), and sometimes even earlier, work.” • “Niche” areas of research can be devalued.” • “In humanities, books are often cited by other books, and these do not typically turn up in Web of Science or Google Scholar.” • “Some of the top journals in my field do not have very high scholarly metrics. However they remain the best our field has.” • “Metrics vary radically with the size of the scholarly community devoted to a particular discipline.” The open-ended responses highlight the disciplinary limits of scholarly metrics and the finite set of scholarship that can be assessed using impact metrics. In article- intensive disciplines, faculty tend to feel that it is a better measure, with the caveat that smaller subdisciplines or scholarly interests may not be represented accurately. In book-intensive disciplines or disciplines that produce artistic scholarship, faculty point to limitations of format, expectations of currency, and a lack of indexing that make traditional models of statistically assessing impact a poor reflection of scholarly output. Perhaps most striking in this section was the extremely high rate of concern shown by faculty members when asked, “Do you have any concerns about university adminis- trators tracking the scholarly metric data of their faculty?” Although this question was open-ended, it was fairly easy to categorize responses into three areas: Concerned, Not Concerned, and Neutral (which includes responses that expressed neither sentiment). A full 68 percent of respondents expressed concern about university administrations tracking the scholarly metric data of their faculty. Many responses pointed to a concern that administrators would make reductivist decisions based solely on statistical impact measures. Other respondents pointed to concerns over their disciplines being moved toward statistical measures of impact despite their disciplines being very poor fits for this type of assessment. 
Other faculty respondents had very little concern as long as other measures of impact were also considered. We have selected the following representative responses:
• “Yes. In spite of administrators assuring everyone that they will contextualize data, look at other sources, etc. I am pretty sure it will eventually come down to making decisions based on some number.”
• “Yes absolutely. Even when people fully believe numbers are imperfect (think of quantitative teaching evaluations), numbers are so much easier to deal with than more laborious but more appropriate forms of evaluation. Numbers are a hammer and administrators start to see only nails”
• “Not really, unless they’re not also looking at other indicators of quality and impact.”
• “There are qualitative factors regarding the quality and quantity of scholarship that no metrics system can register, such as what is said in reviews of a book. Finally, the effect on faculty morale, in the Humanities at least, is grim. Are we now factory workers tasked with producing quotas of essays whose actual content is irrelevant?”
• “I’m not sure. I already feel like the expectations for publications are disproportionate in the workload.”

FIGURE 1. Rate of Concern: “Do you have any concerns about university administrators tracking the scholarly metric data of their faculty?” Concern, 68% (140); Neutral, 12.4% (26); No Concern, 19.4% (40).

We again point out that our survey was released to faculty in a climate of heightened awareness around scholarly metrics. While some of these comments may be colored by this campus climate, the comments are a good reminder that campuswide tracking of scholarly metrics is not without its share of possible pitfalls. Librarians engaging their faculty in discussions about collecting scholarly metric data may be well served to first examine the climate on their campus and the opinions of their faculty. While some authors take as a given the benefits of campuswide applications that track traditional or altmetric information, we caution that such undertakings may be more complicated and not without a certain risk of faculty misperceiving a library or librarian’s intentions.

Discussion
Limitations and Suggestions for Future Research
Our discussion and analysis of the results take into consideration the study’s limitations and opportunities for future research. Our study is limited to one campus, and survey responses, to some degree, reflect faculty experiences at our particular campus. Future research might compare faculty attitudes and knowledge in institutions that offer institutional services and support for scholarly metrics against those that do not. Such research could explore the value and usefulness of scholarly metrics support services. Additionally, this study focuses on faculty attitudes and knowledge of scholarly metrics. A follow-up study that measures the attitudes and knowledge of university administrators, department chairs, and librarians would enable researchers to compare and contrast the knowledge and attitudes of these different groups. Finally, this study does not focus on which scholarly metrics are important to each discipline.
A study that surveys the landscape of scholarly metrics by discipline, particularly the most important metrics for each discipline, would provide practical information to those who seek to establish a scholarly metrics support service and create more awareness of the differences between how each discipline measures scholarly output. Summary of Results Our results both confirm and complicate a known disciplinary divide regarding faculty use of scholarly metrics. While it is widely assumed that metrics are used more read- ily to measure impact in the traditional STEM sciences, research often pairs the social sciences and humanities. This is due to their often being indexed together16 and their grouping as “nonbasic sciences” or “non-STEM.”17 Whatever the reason, the results on our campus indicate that faculty in the category of social sciences, business, and social services behave much more like faculty in the traditional STEM sciences in as- sessing the impact of their scholarship. Faculty in both areas place a greater emphasis on scholarly metrics in the RPT process, therefore requiring faculty in both disciplinary areas to seek out metrics more frequently and understand them better. Our results emphasize the relationship between perceived importance and under- standing, indicating that faculty members will take the time to learn about scholarly metrics and understand them better if there is a clear link to their professional ad- vancement. In the humanities and arts, metrics are not emphasized in the RPT process; therefore, rates of understanding proved much lower. Faculty in the humanities and arts also had strong opinions that metrics should remain less important to RPT because the format of scholarship in these disciplines does not easily translate to traditional means of impact assessment. Perhaps the data relating to arts and humanities faculty would be different on a campus that emphasized altmetrics as an alternative way to quantify scholarly impact in the humanities and arts. Whether traditional metrics or altmetrics, the data clearly point to the need for a method of demonstrating scholarly impact to be valued in the RPT process before faculty members will take the time to learn about and understand it. Yet it remains difficult to value (or not value) scholarly metrics as part of the RPT process if a faculty member is unclear how scholarly metrics fit into their depart- ment’s RPT process. At our institution, around one fifth of faculty remain unclear about whether or not scholarly metrics are encouraged, required, or ignored in their departmental RPT processes. Clearly, there is work to be done on our campus educat- ing faculty members about the expectations of RPT as they relate to scholarly metrics. While each campus is different, we suspect that our findings are not wholly unique to our university. When faculty have questions about scholarly metrics unrelated to RPT, they largely turn to the libraries or to their colleagues. In discovering here that our faculty members still overwhelmingly turn to Google Scholar and Journal Citation Reports and that altmetrics are still very new to our campus, it seems that our libraries’ outreach efforts may be most effective by targeting traditional metrics. After gathering data from fac- ulty, we now have a much better idea of the resources to include in an outreach plan as well as the departments to target. 
In the immediate future, we hope to launch an online guide for faculty that could be promoted at new faculty workshops or within interested academic departments. We will encourage subject librarians in pertinent 166 College & Research Libraries February 2017 fields to begin a discussion of impact metrics with their faculty, perhaps using the guide as a starting place. We also hope to develop workshops for faculty in identified departments that place an emphasis on metrics in the RPT process. Other academic libraries planning to examine their faculty’s relationship to scholarly metrics would benefit by starting their project with an assessment of RPT practices at their institutions as well as faculty understanding about departmental RPT processes. By beginning a project of this sort with inquiry, librarians gather information to in- form and target outreach, demonstrate a respect for the scholarly practices within the discipline, and engage faculty members in discussions that they may be hesitant to bring up on their own. Conclusion As librarians engage teaching faculty in discussions of scholarly metrics, it is impor- tant to keep in mind the complexities and deeply felt opinions faculty members may possess about quantifying their scholarly output. Many of this survey’s open-response questions yielded very nuanced arguments, often based in a faculty member’s own disciplinary context, indicating their support, ambivalence, or disdain for scholarly metrics. Academic librarians would do well to search out the opinions of their own faculty and look into the scholarly contexts that may exist on their own campuses to avoid making anecdotal generalizations about why faculty members approach engaging with or ignoring scholarly metrics. As some campuses systematize their ap- proaches to gathering scholarly metrics, it will be increasingly important that librarians understand their faculty and engage the larger campus in discussions about scholarly metrics with a tone that is neither blindly critical nor wholly evangelical. Scholarly metrics, both traditional and altmetrics, are perhaps unique in our field because the information with which we are dealing is evaluative, controversial, and intimately tied to career advancement. While no information is without its complexi- ties, scholarly metrics stand apart because they deal with the scholarly and creative endeavors of our faculty colleagues. Debates about their use and appropriateness take place not only in our scholarly literature, they take place on our campuses among those with whom we work. Faculty engagement should start with asking questions, and it is imperative that we respect faculty members and their scholarship enough to begin discussions with questions about the tools and measures that assess a faculty member’s scholarly work. Scholarly Metrics Baseline 167 Appendix. 
Disciplinary Categories

Sciences: Animal Sciences; Biology; Chemistry; Communication Sciences; Computer Science; Engineering; ENVS; Geology; Math and Statistics; Medical Laboratory & Radiation Science; Nursing; Nutrition and Food Science; Physics; Plant and Soil Science; Plant Biology; Psychological Science; Rehabilitation and Movement Science; Rubenstein

Humanities & Arts: Art and Art History; Asian Languages and Literatures; Classics; English; German and Russian; History; Music and Dance; Philosophy; Religion; Romance Languages and Linguistics; Theater

Social Sciences, Business, Social Services: Anthropology; Business; CDAE; Economics; Education; Geography; Leadership and Development Science; Political Science; Social Work; Sociology

Survey Instrument

Q.1: What is your academic rank?
• Assistant Professor
• Associate Professor
• Full Professor
• Other (please define)

Q.2: What is your department? (departments listed)

Q.3: How well do you feel you understand scholarly metrics (journal impact factor, H-index, H5 median)?
• Not at all
• Not very well
• Somewhat
• Fairly well
• Extremely well

Q.4: How familiar are you with “altmetrics” or nontraditional means of demonstrating scholarly impact (downloads, page views, Mendeley readers, social media followers, and the like)?
• Not at all familiar (This term is completely new to me)
• Marginally familiar (I have heard the term)
• Somewhat familiar (I have seen altmetrics before but do not personally track them)
• Familiar (I have seen altmetrics and have started gathering altmetrics on my own scholarship)
• Extremely familiar (I track my own altmetrics and use them to demonstrate scholarly impact)

Q.5: Does your department encourage the inclusion of scholarly metrics in your tenure and promotion dossier?
• Yes
• No
• Don’t know

Q.6: Does your department require the inclusion of scholarly metrics in your tenure and promotion dossier?
• Yes
• No
• Don’t know

Q.7: How important are scholarly metrics to your department’s tenure and promotion process?
• Not at all important
• Not very important
• Somewhat important
• Fairly important
• Extremely important
• Don’t know

Q.8: What resources do you use to find scholarly metric information?
• Journal Citation Reports (ISI Web of Science)
• SCImago Journal and Country Rank (Scopus)
• Google Scholar
• None
• Other (please specify)

Q.9: Do you currently use applications or programs to track metrics related to your scholarly output?
• Yes
• No

Q.10: If you answered “Yes” to the previous question, which applications or programs do you use?
• Impact Story
• Google Scholar Citations
• PlumX
• Publish or Perish
• Other (please specify)

Q.11: Where on campus would you turn for help with scholarly metrics? (Open-ended)

Q.12: How accurately do scholarly metrics reflect the importance of a researcher’s scholarly work? Why do you feel that way?
• Not accurately at all
• Not very accurately
• Somewhat accurately
• Fairly accurately
• Extremely accurately
(Space provided for open-ended responses)

Q.13: How much weight do you feel your department should place on scholarly metrics in their promotion and tenure decisions? Why?
• Very little weight
• Some weight
• A great deal of weight
(Space provided for open-ended responses)

Q.14: Besides putting together your reappointment or promotion dossiers, when do you look at scholarly metrics? (Open-ended)

Q.15: What information regarding scholarly metrics or impact-tracking would be most helpful to you?
(Open-ended)

Q.16: Do you have any concerns about university administrators tracking the scholarly metric data of their faculty? (Open-ended)

Notes
1. Lutz Bornmann, Rüdiger Mutz, and Hans-Dieter Daniel, “The H-index Research Output Measurement: Two Approaches to Enhance Its Accuracy,” Journal of Informetrics 4, no. 3 (2010): 407–14; Juan Miguel Campanario, “The Effect of Citations on the Significance of Decimal Places in the Computation of Journal Impact Factors,” Scientometrics 99, no. 2 (2014): 289–98.
2. Jasleen Kaur, Filippo Radicchi, and Filippo Menczer, “Universality of Scholarly Impact Metrics,” Journal of Informetrics 7, no. 4 (2013): 924–32; Anne-Wil Harzing, Satu Alakangas, and David Adams, “hIa: An Individual Annual H-index to Accommodate Disciplinary and Career Length Differences,” Scientometrics 99, no. 3 (2014): 811–21.
3. Khaled Moustafa, “The Disaster of the Impact Factor,” Science and Engineering Ethics 21, no. 1 (2014): 139–42; Ludo Waltman and Nees Jan Van Eck, “The Inconsistency of the H-index,” Journal of the American Society for Information Science and Technology 63, no. 2 (2012): 406–15.
4. Emilio Delgado López-Cózar, Nicolás Robinson-García, and Daniel Torres-Salinas, “The Google Scholar Experiment: How to Index False Papers and Manipulate Bibliometric Indicators,” Journal of the Association for Information Science and Technology 65, no. 3 (2014): 446–54; Emilio Delgado López-Cózar, Nicolás Robinson-García, and Daniel Torres-Salinas, “Science Communication: Flawed Citation Indexing,” Science 342, no. 6163 (2013): 1169; Tim Brody and Stevan Harnad, “Earlier Web Usage Statistics as Predictors of Later Citation Impact,” available online at http://arxiv.org/abs/cs/0503020 [accessed 08 September 2015].
5. Jason Priem, Dario Taraborelli, Paul Groth, and Cameron Neylon, “Altmetrics: A Manifesto” (2010), available online at altmetrics.org/manifesto/ [accessed 08 September 2015].
6. Oliver T. Coomes, Tim Moore, Jaclyn Paterson, et al., “Academic Performance Indicators for Departments of Geography in the United States and Canada,” The Professional Geographer 65, no. 3 (2013): 433–50; Gary Holden, Gary Rosenberg, and Kathleen Barker, “Bibliometrics: A Potential Decision Making Aid in Hiring, Reappointment, Tenure and Promotion Decisions,” Social Work in Health Care 41, no. 3/4 (2005): 67–92; Per O. Seglen, “Why the Impact Factor of Journals Should not be Used for Evaluating Research,” BMJ 314, no. 7079 (1997): 497.
7. Alison Abbott, David Cyranoski, Nicola Jones, et al., “Metrics: Do Metrics Matter?” Nature News 465, no. 7300 (2010): 860–62.
8. Janet Dagenais Brown, “Citation Searching for Tenure and Promotion: An Overview of Issues and Tools,” Reference Services Review 42, no. 1 (2014): 70–89.
9. López-Cózar et al., “The Google Scholar Experiment,” 446–54; Campanario, “The Effect of Citations,” 289–98; Moustafa, “The Disaster of the Impact Factor,” 139–42; Waltman and Van Eck, “The Inconsistency of the H-index,” 406–15.
10. Brown, “Citation Searching for Tenure and Promotion,” 70–89; Fredrik Åström and Joacim Hansson, “How Implementation of Bibliometric Practice Affects the Role of Academic Libraries,” Journal of Librarianship and Information Science 45, no. 4 (2013): 316–22.
11. Brown, “Citation Searching for Tenure and Promotion,” 70–89; Robin Chin Roemer and Rachel Borchardt, “From Bibliometrics to Altmetrics: A Changing Scholarly Landscape,” College & Research Libraries News 73, no. 10 (2012): 596–600.
12.
Åström and Hansson, “How Implementation of Bibliometric Practice Affects the Role of Academic Libraries,” 316–22; Roemer and Borchardt, “From Bibliometrics to Altmetrics,” 596–600.
13. Chris Leeder and Steven Lonn, “Faculty Usage of Library Tools in a Learning Management System,” College and Research Libraries 75, no. 5 (2014): 250–63.
14. Faith Oguz and Deborah Davis, “Developing an Institutional Repository at a Medium-Sized University: Getting Started and Going Forward,” Georgia Library Quarterly 48, no. 4 (2011): 13–16; Faith Oguz and Shimelis Assefa, “Faculty Members’ Perceptions Towards Institutional Repository at a Medium-Sized University,” Library Review 63, no. 2 (2014): 189–202.
15. Wilhelm Peekhaus and Nicholas Proferes, “How Library and Information Science Faculty Perceive and Engage with Open Access,” Journal of Information Science 41, no. 5 (2015): 640–61.
16. Ludo Waltman, “A Review of the Literature on Citation Impact Indicators,” available online at http://arxiv.org/ftp/arxiv/papers/1507/1507.02099.pdf [accessed 10 September 2015]; A.J. Nederhof, “Bibliometric Monitoring of Research Performance in the Social Sciences and the Humanities: A Review,” Scientometrics 66, no. 1 (2006): 81–100.
17. M.H. Huang and Y.W. Chang, “Characteristics of Research Output in Social Sciences and Humanities: From a Research Evaluation Perspective,” Journal of the American Society for Information Science and Technology 59, no. 11 (2008): 1819–28.