Faculty Publications and Citations: A Longitudinal Examination

John M. Budd

John M. Budd is Professor Emeritus in the School of Information Science & Learning Technologies at the University of Missouri; e-mail: buddj@missouri.edu. ©2017 John M. Budd, Attribution-NonCommercial (http://creativecommons.org/licenses/by-nc/3.0/) CC BY-NC. doi:10.5860/crl.78.1.80

This investigation examines the publication and citation activity of faculty at research universities, as defined by membership in the Association of Research Libraries (ARL). It constitutes the fourth iteration in a study of publishing behaviors conducted over more than twenty years. The present data indicate a substantial rise in publications, both in total and as measured on a per capita basis. These data are compared with those of the previous three studies. In addition, and for the first time, citation data are also examined. Citations are included because there is cause to believe that they are becoming common evaluative criteria for individuals, academic programs, and departments. There are implications for academic libraries with regard to all these data.

Institutional rankings abound. There are numerous rankings based on ratings by individuals, including graduates of programs, peers, prestige, and other factors (including numbers of publications by faculty), which cover United States and world universities. The existing rankings may include metrics, but sometimes they are based on perceived prestige. The institutions ranking high in the various studies tend to advertise their success, in the hopes that the high rankings may attract the best students and faculty and bring additional benefits. While considerable attention is paid to these rankings, perception is not the focus here. Attention in the present study is on publications and citations in particular.

At research universities in the United States, it is a given that faculty must publish to earn tenure and promotion; absence of a substantial publication record usually means that earning tenure may be in jeopardy. Of course, many academics publish for other reasons, including personal motivations to communicate the fruits of their work to as wide an audience as possible. This motivation is enhanced at this time by the access mechanisms of research libraries; subscriptions and licensing of databases and aggregators result in ready access to the contents of thousands of serial titles at most research universities. Although the substantial access to serial literature is important, it is only an indirect component of the present investigation.

This examination builds upon previous studies and investigates publication and citation data related to faculty at United States institutions that are members of the Association of Research Libraries (ARL). The limitation to United States universities is a result of the possibility that tenure and promotion decisions at Canadian institutions may not be precisely the same as those at their United States counterparts. The publication and citation data for more than one hundred universities are analyzed, and the rankings of the top twenty institutions are reported. The questions asked here are similar to those asked in prior studies: (1) What is the total publishing output of the top members of ARL? (2) What is the per capita publishing output of top ARL members? (3) What changes in rankings have occurred over time?
(4) What is the total citation output of ARL members? (5) What is the per capita citation output of ARL members?1

The Literature

There have been commentaries on the pressures placed on faculty at research universities to publish. For example, Crane and Pearson observe, “We and our peers are now devoted almost single-mindedly to forms of productivity that can be captured in line items on our CVs.”2 While some may consider this something of an overstatement, the fact remains that faculty must attend to their publication records and have to seek opportunities to publish if they wish to be competitive when it comes to tenure and promotion. The pressure is recognized anecdotally in reports of faculty members who have been denied tenure on the grounds of insufficient numbers of publications.3 Department, college, and university committees are almost always sensitive to differences in publishing dynamics across fields, but they do tend to make comparisons within fields. So, an historian is not expected to have the same kind of record as, say, a chemist, but the historian must present a record that is comparable to the records of others in the humanities. The observation of Crane and Pearson carries the implicit understanding that a great deal of time must be spent developing work that has the potential for publication. In a similar vein, Johnson quotes Kevin Patrick as saying, “I don’t know any strong academic settings in which both the number and quality of publications isn’t a major part of the calculus used for appointment, promotion and retention.”4 The “strong” academic settings Patrick speaks of include research universities (and, perhaps, other types of institutions that aspire to greater research reputations at this point in time). Other commentators offer more specific observations (van Dalen and Henkens):

How does the publication pressure in modern-day universities affect the intrinsic and extrinsic rewards in science? By using a worldwide survey among demographers and population scientists in developed and developing countries, we have shown that the large majority of these scholars perceive the publication pressure as high, but significantly more so in the United States and its Anglo-Saxon competitors. However, scholars see both the pros (upward mobility) and cons (excessive publication and uncitedness, burdens placed on the peer-review system, monodisciplinary bias in research, neglect of policy issues, etc.) of the publish-or-perish culture.5

The research affirms the claims made above and adds some complicating factors, such as the bias against interdisciplinary work. It may be that the bias is diminishing, given the greater influence of funding agencies in encouraging work that crosses disciplinary lines and the emphasis on interdisciplinarity on the part of university administrators. The emphasis itself can carry pros and cons: interdisciplinary research can possibly have a more substantial impact, but effort is required to learn the terms and methods of other fields and to make connections with scholars in various disciplines. Van Dalen and Henkens also include an extensive review of the literature on the “publish-or-perish” dynamic; the review need not be repeated here.
A team of researchers (Albert, Laberge, and McGuire) studied the assessment of quality in scientific research and concluded, “Some discrepancies about what counts as a valuable piece of knowledge production were also apparent in our own study. Books and published abstracts received contrasting appreciations based on participants’ views of the authenticity of the peer-review process undertaken and the nature of the knowledge produced.”6 The mention of books has particular import here, as we shall see below. Unfortunately, there are few applied measures of quality, unless one equates numbers of citations received with quality. What tends to be evaluated is the total number of publications (among some other things, such as grants received) that a faculty member can claim.

Jemielniak and Greenwood suggest something deeper at work in the publish-or-perish environment:

the neo-liberal takeover of universities will not work just as neo-liberal schemes to privatize public services and to promote international economic development have never worked. Converting knowledge into a commodity, students into customers, faculty into service providers, administrators into bosses, and research into a money machine turns universities into combined vocational schools, and mini-industrial or theme parks. In the process, higher education itself as a combination of teaching and research, a place for the free development and exchange of ideas, a location for pure and applied research, as a source of broad social mobility, and as the ground on which public-spirited citizens acquire the values and practices of citizenship is disappearing.7

This political critique deserves attention in the context of the pressures to publish (and their outcomes), but that must wait for another venue.

One more recent development in the general landscape of publishing is Open Access publication. Open Access is something of an umbrella term that indicates, customarily, free access to some published output. At times the authors pay nothing for the publication of their work, and the product is available at no charge to readers. At other times the authors must pay a fee (or have the fee partially or completely subsidized) to have the work appear in the journal. Examples of the latter are the journals produced by the Public Library of Science (PLoS; see http://www.plos.org). For more on Open Access publication, see the work of Peter Suber.8 In some instances, the fees to publish can be substantial. Given that the fee may at times be waived, it is an open question whether the charging of fees affects who publishes in the journals. Another consideration, as Reinsfelder and Anderson point out, is the extent to which academic administrators are amenable to Open Access publication for purposes of tenure and promotion.9

The methodology employed here (citation analysis) has a tradition of use. One of the landmark works on the method is by Blaise Cronin.10 Cronin details the utility of the method and provides extremely useful guidance in applying it to a number of research questions. Sugimoto and colleagues have also provided a means of linking citations to impact; their work advances the utility of citation analysis.11 Crossick examines the utility of citation analysis for the evaluation of serials in the arts and humanities, in part to investigate whether there is potential collection management use.12 Waltman and colleagues use citation analysis as a means to conduct a ranking of international universities.13 Their attention is on several hundred institutions in forty-one different countries.
Given the breadth of their examination, some comparison with the present study may be of minimal interest, but their method could be instructive to anyone engaging in large-scale institutional studies.

The Study

The present work builds upon three studies conducted by John Budd. The first covered the years 1991–1993 and drew data from Science Citation Index, Social Sciences Citation Index, and Arts & Humanities Citation Index.14 The investigation looked at aggregate publication data and per capita publication data from members of ARL. The second study also employed the above citation indexes and covered the years 1995–1997.15 The third examination covered the years 2002–2004 and used Web of Science® as the data source (the next few references apply to the study published in 2006).16 At the time, Web of Science included coverage of 9,000 serial titles. This was not an exhaustive resource, but it did allow searching by institution (which made it an invaluable tool).

The present examination covers the years 2011–2013. This study employs the Scopus® database, due to the inaccessibility of Web of Science at the time of data collection and the extensive coverage of Scopus. Scopus claims to cover more than 21,000 journals, 70,000 books, and 6.5 million conference papers (www.elsevier.com/online-tools/scopus/content-overview). Scopus also allows searching by institution; this feature was used to obtain total publications for the faculty of the ARL member universities. It should be noted that temporary access to Web of Science became available, and a small sample of the institutions in the 2011–2013 investigation was searched there. The University of Michigan was found to have 25,079 publications in Web of Science, as opposed to 23,871 in Scopus. The University of Washington had 21,690 instead of the 21,283 in Scopus. The University of Pennsylvania yielded 21,139 in Web of Science, rather than the 19,110 total in Scopus. It would appear that full access to Web of Science could lead to slightly different results, but Scopus was the database available for searching. In the present study, Harvard University, for example, can be searched and all pertinent data for publications emanating from Harvard and its schools and departments can be retrieved. A caveat must be added here, as stated by Budd:

The scope of a university’s activities may vary from institution to institution. To be specific, in some cases a medical school is attached to a university’s main campus, so the publications of the medical school faculty are counted. If, on the other hand, a medical school is located in a different city from the main campus, its publications are not counted. The numbers, then, contain some discrepancies, but adhering to this means of data gathering more readily enables comparison over time.17

Since all three previous iterations of the study employed the foregoing data collection method, the present study also uses it, for consistency’s sake. The numbers of faculty at the universities are retrieved from the ARL interactive statistics (www.arl.org); the year 2012 (the midpoint of the period in question) was used to obtain the numbers.
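To make the data-gathering steps above concrete, the following brief sketch shows how institution totals and per capita figures of this kind might be assembled. It is illustrative only: the affiliation-scoped query string is an example of Scopus search syntax rather than the exact queries used in the study, the faculty counts are hypothetical placeholders (the study uses the ARL interactive statistics for 2012), and only the publication totals echo figures reported in table 1.

```python
# Illustrative sketch (not the study's actual code) of the two data steps described
# above: (1) institution-level publication totals gathered from an affiliation-scoped
# Scopus search limited to 2011-2013, and (2) per capita figures computed with
# faculty counts for 2012.

# Example of an affiliation-scoped Scopus query; the affiliation ID is a placeholder.
EXAMPLE_QUERY = "AF-ID(<affiliation-id>) AND PUBYEAR > 2010 AND PUBYEAR < 2014"

# Totals as reported in table 1 for 2011-2013; the faculty counts are hypothetical
# placeholders, not the ARL figures.
publications_2011_2013 = {"Harvard": 24_476, "Michigan": 23_871, "Washington": 21_283}
faculty_2012 = {"Harvard": 2_450, "Michigan": 3_000, "Washington": 2_800}

def per_capita(pubs: dict, faculty: dict) -> dict:
    """Publications per faculty member, rounded to two decimals."""
    return {inst: round(pubs[inst] / faculty[inst], 2) for inst in pubs if inst in faculty}

if __name__ == "__main__":
    rates = per_capita(publications_2011_2013, faculty_2012)
    # Rank institutions by per capita output, highest first.
    for inst, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{inst}: {rate}")
```

A replication along these lines would substitute the Scopus affiliation identifiers for each campus and the ARL faculty counts for 2012.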
Findings

As is noted above, the United States ARL university members were searched using Scopus. The institutional totals are presented for the top twenty universities (along with the top twenty for the three previous studies) in table 1. It is evident that the numbers have increased from one period to the next. The implication that can be drawn from this is that there is increasing emphasis on publication over time. A hypothesis can be stated (in similar fashion to Budd18): there is no statistically significant difference among the total publications of the top twenty institutions across the four studies. A chi-square test can be conducted to determine if there is a goodness of fit among the time periods (that is, the variables are the mean values for each of the time periods). The result is that there is not a good fit; the difference is statistically significant (p < .01). The null hypothesis, stated above, is rejected.

A further comparison can be made among the various studies. The mean number of publications per university in 1991–1993 was 4,595.8; the figure for 1995–1997 was 5,493.5; that for 2002–2004 was 6,078.2.19 The mean for 2011–2013 was 9,662.0, or more than double that for 1991–1993. The figure for the most recent study is starkly higher than those for the earlier examinations.

It should be noted that Stanford is missing from 2002–2004 because of its purposeful withdrawal from ARL. Its data, though, are added in the present study for two reasons: (1) to be completely consistent across all time periods, and (2) Stanford is possibly the only major research university that would otherwise not be included in the full examination. This addition may have a small effect on calculations of means.

To reiterate, if medical schools reside in a different location from the universities’ main campuses, their figures are not counted. This affects, among other institutions, the University of Texas. The total number of publications for the University of Texas at Austin for the time period was 18,729. If the UT Medical Branch at Galveston were added, an additional 3,323 publications would augment the Austin total. The previous studies did not include these medical schools and centers in other locations, so they are not added to the main campus totals in the present study.

TABLE 1
Total Publications by Institution

1991–1993 | 1995–1997 | 2002–2004 | 2011–2013
Harvard 16,945 | Harvard 21,913 | Harvard 23,728 | Harvard 24,476
UCLA 12,566 | UCLA 13,620 | UCLA 15,083 | Michigan 23,871
MIT 11,788 | Michigan 13,006 | Washington 14,335 | Washington 21,283
Michigan 10,997 | UC Berkeley 12,237 | Michigan 13,857 | Stanford 20,780
Washington 10,645 | Washington 12,117 | J. Hopkins 13,760 | J. Hopkins 19,804
Cornell 10,518 | Minnesota 11,369 | UC Berkeley 13,055 | U. Penn. 19,110
UC Berkeley 10,378 | Stanford 11,169 | UCSD 12,947 | Columbia 17,488
Minnesota 10,304 | Wisconsin 10,952 | U. Penn. 12,274 | UCSD 17,427
Stanford* 9,723 | Cornell 10,918 | Wisconsin 11,427 | U. Pitt. 16,494
Wisconsin 9,663 | J. Hopkins 10,576 | Columbia 10,990 | Florida 15,997
J. Hopkins 9,636 | U. Penn. 10,247 | Cornell 10,795 | Wisconsin 15,847
U. Penn. 8,636 | UCSD 10,059 | MIT 10,083 | Duke 15,765
Illinois 7,884 | U. Pitt. 9,148 | Penn State 10,018 | UC Berkeley 15,503
Columbia 7,824 | Yale 8,938 | Ohio State 9,589 | Minnesota 15,366
Yale 7,779 | Columbia 8,886 | Florida 9,577 | Ohio State 15,314
UCSD 7,732 | MIT 8,732 | Minnesota 9,479 | UCLA 14,985
UC Davis 7,621 | Ohio State 8,552 | Yale 9,377 | UC Davis 14,205
Ohio State 7,155 | Penn State 8,543 | U. Pitt. 9,343 | MIT 14,289
U. Pitt. 7,155 | Illinois 8,400 | Duke 8,952 | UNC 14,203
Penn State 6,925 | UC Davis 8,380 | UC Davis 8,945 | Illinois 13,107

*Stanford withdrew from ARL between the second and third studies, but has been added to the most recent one.
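The goodness-of-fit comparison described above can be illustrated with a short sketch. It is a reconstruction of the kind of test reported, not the original computation (the article does not specify exactly how the observed and expected values were constructed); here the four period means reported in the text serve as the observed values and are tested against the expectation that all periods share the same mean.

```python
# Sketch of a chi-square goodness-of-fit test on the period means reported above.
# scipy.stats.chisquare defaults to a uniform expectation, i.e., the hypothesis
# that the four periods do not differ.
from scipy.stats import chisquare

period_means = {
    "1991-1993": 4595.8,
    "1995-1997": 5493.5,
    "2002-2004": 6078.2,
    "2011-2013": 9662.0,
}

result = chisquare(list(period_means.values()))
print(f"chi-square = {result.statistic:.1f}, p = {result.pvalue:.3g}")
# A p-value well below .01 leads to rejecting the null hypothesis of no difference
# among the periods, consistent with the result reported in the text.
```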
In addition to total numbers of publications, the per capita publications can be calculated. As is the case above, the variables are the mean values for each time period. Per capita means are intended to allow for any changes in faculty size over time. The changes, in many instances, have not been great. For example, in 2006 the University of Missouri had 1,224 tenured and tenure-track faculty; in 2015 the university had 1,122 faculty (see http://ir.missouri.edu). To repeat, the ARL interactive data are used. The data are presented in table 2.

These data can be compared according to calculated means. The mean figure for 1991–1993 was 3.56; the mean for 1995–1997 was 4.20; the figure for 2002–2004 was slightly higher at 4.24. The mean for the present study, 2011–2013, was 5.96. The range of per capita publications in this investigation was 1.14 to 14.67. As is the case with total publications, a hypothesis can be stated: there is no statistically significant difference among the per capita publications of the top twenty institutions of the four studies. Once again, a chi-square test can be conducted to determine if there is a goodness of fit across the studies. As is also the case with total publications, there is not a good fit among the data (p < .01). The null hypothesis is rejected.

TABLE 2
Per Capita Publications by Institution

1991–1993 | 1995–1997 | 2002–2004 | 2011–2013
J. Hopkins 12.71 | Harvard 12.94 | Harvard 11.88 | MIT 14.67
Harvard 11.46 | J. Hopkins 12.03 | J. Hopkins 11.46 | Duke 14.14
MIT 11.26 | Wash. U. (MO) 11.14 | Duke 9.86 | U. Penn. 13.84
Wash. U. (MO) 10.24 | MIT 10.39 | Wash. U. (MO) 9.72 | J. Hopkins 13.09
UCLA 7.51 | Duke 10.32 | UC Berkeley 9.41 | UCSD 11.63
UCSD 7.34 | UC Berkeley 9.87 | UCSD 8.64 | Princeton 10.65
UC Berkeley 7.06 | Rochester 9.85 | U. Penn. 8.60 | UC Berkeley 10.31
Stanford* 6.92 | UCSD 9.38 | Case Western 8.36 | Case Western 10.28
Minnesota 6.90 | UCLA 7.93 | UCLA 8.06 | Stanford 10.17
Cornell 6.81 | Stanford 7.79 | Columbia 7.47 | Harvard 10.01
Brown 5.79 | Minnesota 7.58 | UCSB 7.35 | Brown 9.27
Princeton 5.46 | Cornell 7.36 | Princeton 6.81 | Georgia Tech 8.99
Chicago 5.16 | Brown 7.12 | Brown 6.61 | UNC 8.58
USC 5.04 | Emory 7.10 | Cornell 6.57 | U. Pitt. 8.37
UC Davis 4.96 | UC Davis 6.49 | Minnesota 6.08 | Minnesota 8.32
Virginia 4.82 | Princeton 6.20 | Yale 5.89 | Columbia 8.02
Utah 7.79 | Iowa 6.04 | Georgia Tech 5.81 | Wisconsin 7.82
Michigan 4.64 | U. Pitt. 5.88 | UC Irvine 5.69 | UCLA 7.50
Maryland 4.61 | Chicago 5.83 | Iowa 5.67 | UC Davis 7.18
U. Penn. 4.61 | UC Riverside 5.72 | Wisconsin 5.55 | Utah 6.94

*Stanford withdrew from ARL between the second and third studies, but has been added to the most recent one.

In the study of 2002–2004 data, one comparison involved the materials expenditures of the ARL libraries corresponding to the top twenty institutions in total publications. In that study, the rank-order correlation coefficient was .74, a rather strong correlation.20 That correlation has been repeated for the present study. The Spearman rank-order correlation coefficient in this study was only .32, a weak positive correlation. For the present study, the rank-order correlation was also conducted for the per capita publications. The coefficient is only .29. It appears that measures of library expenditures bear little relationship to the increases in publications by university faculty. The data analyses conducted here cannot explain the differences, though, since they are limited to the examination of the data dynamics only.
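The rank-order comparison with library materials expenditures can likewise be sketched. The expenditure figures are not reported in this article, so the expenditure values below are placeholders; the publication totals are the top eight from table 1 for 2011–2013, and scipy's spearmanr supplies the coefficient.

```python
# Illustrative Spearman rank-order correlation between publication totals and
# library materials expenditures. The expenditure values are placeholders; the
# study draws the actual figures from ARL statistics.
from scipy.stats import spearmanr

# Paired observations, one pair per institution (top eight totals from table 1).
total_publications = [24_476, 23_871, 21_283, 20_780, 19_804, 19_110, 17_488, 17_427]
materials_expenditures = [26_000_000, 18_500_000, 21_000_000, 24_000_000,
                          15_000_000, 22_500_000, 14_000_000, 19_000_000]  # placeholders

rho, p_value = spearmanr(total_publications, materials_expenditures)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
# A coefficient near .3, as reported above, indicates only a weak positive
# association between expenditures and publication output.
```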
For the first time in the series of investigations, the present study includes examination of citation data. These data are included because a number of institutions have included citation metrics of various sorts in their evaluation of faculty performance. As is the case with publications, this study includes total citations received by the institutions for the 2011–2013 period, plus per capita citations for the time period. The results are presented in tables 3 and 4.

TABLE 3
Total Numbers of Citations by Institution, 2011–2013

Institution | Total Number
Yale | 103,397
Wash. U. (MO) | 103,147
UNC | 101,699
UCSD | 98,018
Chicago | 95,600
Harvard | 92,696
Case Western | 91,819
Duke | 91,465
Boston U. | 90,447
Brown | 89,417
Emory | 88,340
Virginia | 83,886
MIT | 83,683
Minnesota | 74,721
UC Berkeley | 73,448
Utah | 71,794
New York U. | 71,360
UC Davis | 69,005
Cornell | 68,511
UCSD | 67,065

TABLE 4
Per Capita Citations by Institution, 2011–2013

Institution | Number
Case Western | 143.92
Brown | 112.62
UCSB | 102.21
MIT | 85.92
Duke | 82.03
Virginia | 72.82
UNC | 61.45
Wash. U. (MO) | 54.85
Utah | 53.02
Chicago | 52.88
UC Berkeley | 48.84
U. Penn. | 45.70
UCSD | 44.77
Emory | 44.71
J. Hopkins | 42.55
Yale | 41.95
Cornell | 41.83
Minnesota | 40.46
USC | 39.97
Harvard | 37.91

As is the case with publications, the figures are limited by the coverage of Scopus. While that coverage is substantial, it cannot be altogether inclusive. That said, it is obvious that the total and per capita numbers are consequential. Rank-order correlations between the citations (total and per capita) and materials expenditures can also be calculated. The citation calculations are problematic in that the results are negative: the coefficient for total citations is –.08, and that for per capita citations is –.50. In other words, there are inverse correlations; there is no direct, or positive, correlation between expenditures and citation data.

Discussion

The five questions asked above are answered primarily by the data presented in the various tables. The hypotheses are discussed in the text. Given the increases in numbers of publications and citations, including the per capita data, it is evident that one of two outcomes (or, perhaps, both) occurs: faculty are self-motivated to produce more, or faculty are pressured to produce more. The citation data may well be a function of there being more publications available for citing. In any event, there remains (and perhaps we are seeing an increase in) a publishing emphasis on university campuses. At research universities there may be inequality among the putative “three legs of the stool”: teaching, service, and research. Anecdotal evidence has, for some years, suggested that (as is stated above) publication is a major factor in tenure and promotion decisions. Faculty who do not have a critical mass of publications may not earn tenure. There are, however, very few, if any, formal indications emanating from universities or their programs of what that critical mass may be for any given discipline. It could safely be said that more journal articles in refereed publications may be required in the natural sciences than in other disciplines. It also may be the case that some humanities disciplines concentrate more on the publication of books than of journal articles; there could be entire disciplines where journal article publication is not the coin of the realm. If that latter supposition is true, the overall data on publications could be a bit skewed by the coverage of a resource like Scopus.
It is likely that the data presented here underestimate the numbers of publications and citations for which faculty are responsible.

Budd suggested, based on linear regression, that, if the publication trends continued into a fourth time period, the mean total number of publications would rise to 6,455.21 Granted, there is a substantial time lapse between that third study and the present investigation, but the mean here of 9,662 exceeds statistical expectations. That unanticipated growth could be due to factors that include an increased emphasis on research and publication. Farlin and Majewsky offer this opinion: “We recognize two negative consequences of a model based on competition: on the one hand, it distracts the members of the scientific community from the real purpose of the scientific method, which is to solve relevant problems, and just as importantly, it weakens the ability to think creatively.”22
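The linear-regression projection mentioned above can also be sketched. The original regression specification is not given in the article, so the fragment below is an approximation under stated assumptions: it fits the means of the first three study periods against a simple period index and extends the line to a fourth period, and it will therefore not reproduce the reported 6,455 exactly.

```python
# Sketch of a linear-trend projection like the one described above: fit the mean
# total publications of the first three study periods against a period index and
# extend the fitted line to a fourth period. This is an approximation; the earlier
# study's regression specification is not detailed in the article.
import numpy as np

period_index = np.array([1, 2, 3])                 # 1991-1993, 1995-1997, 2002-2004
period_means = np.array([4595.8, 5493.5, 6078.2])  # means reported in the text

slope, intercept = np.polyfit(period_index, period_means, deg=1)
projected_fourth_period = slope * 4 + intercept
print(f"Linear projection for a fourth period: {projected_fourth_period:,.0f}")
print("Observed mean for 2011-2013: 9,662")
# The observed mean far exceeds a linear extension of the earlier trend, which is
# the growth highlighted in the discussion above.
```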
The matter of citations is becoming a major one at research universities. As DaCosta points out, the issue is not merely one that affects individuals:

While the criteria for receiving tenure may be clearly described in an institution’s procedures for promotion and tenure, the path to attaining tenure is never that straightforward. This is because the criteria by which candidates are evaluated are not simply based on objective criteria (e.g. the quantity of work produced), but on evaluative assessments of reputation. Determinations of the ‘quality’ of a candidate’s scholarship (and the candidate him/herself) are based on assessments of the perceived prestige and selectivity of the outlets in which he/she publishes, of the scholars whose work he/she engages, of those who engage the candidate’s work, of the institutions in which he/she is located and of the people attesting to the candidate’s competence. The importance of reputation in assessing a scholar’s work is manifest in the widespread use of citation rankings in academia. Citation rankings quantify and systematize assessments of ‘quality’ and reputation. They combine measures of quantity (how much one has published) with indices of prestige, granting higher numerical values to publications in the ‘best’ journals. They are used to evaluate not only individual scholars (e.g. as a metric of a scholar’s ‘influence’ in his or her field), but also the departments and schools of which the scholar is a part.23

DaCosta’s assessment is an indication of why citation data were included in the present study. Citations may well become an even more essential measure for individual and institutional evaluation. What is not entirely clear at this time is which specific measures will be employed by administrators in making evaluations.

This investigation does demonstrate that there is a considerable amount of communicative activity taking place among research university faculty. One factor that is not accounted for here is the attraction of external funding. There is no doubt that funding support is another major assessment criterion, but it is, arguably, a bit less directly related to the need for library resources and services. Publications and citations have a tendency to swell the overall literature, which can exert pressure on libraries to provide resources for faculty (and students). An open question is whether the increased pressures to publish and to be cited in the literatures constitute sustainable evaluative criteria. These measures do indeed indicate quantity, but whether they relate to quality is moot. Future inquiry may address these questions. Future research could also address the very important “publish-or-perish” issue that continues to plague higher education. The work of van Dalen and Henkens could serve as a model for the examination.24 Theirs is an international study, but they address the rewards structure in institutions as it has an impact upon the work of faculty.

Notes

1. John M. Budd, “Faculty Publishing Productivity: Comparisons over Time,” College & Research Libraries 67 (May 2006): 231.
2. Nicholas Jon Crane and Zoe Pearson, “Can We Get a Pub from This? Reflections on Competition and the Pressure to Publish While in Graduate School,” Geographical Bulletin 52 (Nov. 2011): 77.
3. Robin Wilson, “A Higher Bar for Earning Tenure,” Chronicle of Higher Education (Jan. 1, 2001): B12–B14.
4. Teddi Dineley Johnson, “In Academic World, ‘Publish or Perish’ Still Rings True,” Nation’s Health 38 (June/July 2008): 14.
5. Hendrik P. van Dalen and Kène Henkens, “Intended and Unintended Consequences of a Publish-or-Perish Culture: A Worldwide Survey,” Journal of the American Society for Information Science & Technology 63 (July 2012): 1291.
6. Mathieu Albert, Suzanne Laberge, and Wendy McGuire, “Criteria for Assessing Quality in Academic Research: The Views of Biomedical Scientists, Clinical Scientists and Social Scientists,” Higher Education 64 (Nov. 2012): 674.
7. Dariusz Jemielniak and Davydd J. Greenwood, “Wake Up or Perish: Neo-liberalism, the Social Sciences, and Salvaging the Public University,” Cultural Studies/Critical Methodologies 15 (Feb. 2015): 80.
8. Peter Suber, Open Access (Cambridge, Mass.: MIT Press, 2012).
9. Thomas L. Reinsfelder and John A. Anderson, “Observations and Perceptions of Academic Administrator Influence on Open Access Initiatives,” Journal of Academic Librarianship 39 (Nov. 2013): 481–87.
10. Blaise Cronin, The Citation Process: The Role and Significance of Citations in Scientific Communication (London: T. Graham, 1984).
11. Cassidy R. Sugimoto, Terrell G. Russell, Lokman I. Meho, and Gary Marchionini, “MPACT and Citation Impact: Two Sides of the Same Scholarly Coin?” Library & Information Science Research 30 (Dec. 2008): 273–81.
12. Geoffrey Crossick, “Journals in the Arts and Humanities: Their Role in Evaluation,” Serials 20 (Nov. 2007): 184–87.
13. Ludo Waltman et al., “The Leiden Ranking 2011/2012: Data Collection, Indicators, and Evaluation,” Journal of the American Society for Information Science & Technology 63 (Dec. 2012): 2419–32.
14. John M. Budd, “Faculty Publishing Productivity: An Institutional Analysis and Comparison with Library and Other Measures,” College & Research Libraries 56 (Nov. 1995): 547–54.
15. John M. Budd, “Increases in Faculty Publishing Activity: An Analysis of ARL and ACRL Institutions,” College & Research Libraries 60 (July 1999): 308–15.
16. Budd, “Faculty Publishing Productivity: Comparisons over Time,” 230–39.
17. Budd, “Faculty Publishing Productivity: Comparisons over Time,” 231.
18. Budd, “Faculty Publishing Productivity: Comparisons over Time,” 234.
19. Budd, “Faculty Publishing Productivity: Comparisons over Time,” 232.
20. Budd, “Faculty Publishing Productivity: Comparisons over Time,” 234.
21. Budd, “Faculty Publishing Productivity: Comparisons over Time,” 232.
22. Julien Farlin and Marius Majewsky, “How Benchmarking in Science Can Lead to Reversal of Priorities,” Environmental Science & Technology 59 (Mar. 2015): 2604–05.
23. Kimberly McClain DaCosta, “The Tenure System: Disciplinary Boundaries and Reflexivity,” Ethnic & Racial Studies 35 (Apr. 2012): 626–32.
24. Hendrik P. van Dalen and Kène Henkens, “Intended and Unintended Consequences of a Publish-or-Perish Culture: A Worldwide Survey,” Journal of the American Society for Information Science & Technology 63 (July 2012): 1282–93.