Issues in Science and Technology Librarianship
Spring 2009
DOI:10.5062/F4639MPT


[Refereed]

Percentile-Based Journal Impact Factors: A Neglected Collection Development Metric

A. Ben Wagner
Sciences Librarian
Science & Engineering Library, Arts and Sciences Libraries
University at Buffalo
Buffalo, New York
abwagner@buffalo.edu

Copyright 2009, A. Ben Wagner. Used with permission.

Abstract

Various normalization techniques to transform journal impact factors (JIFs) into a standard scale or range of values have been reported a number of times in the literature, but have seldom been part of collection development librarians' tool kits. In this paper, JIFs as reported in the Journal Citation Reports (JCR) database are converted to percentiles (0%-100%) using Microsoft Excel's PERCENTRANK function. This permits a more intuitive evaluation of journal ranking within JCR subject categories and broader disciplines. The top journal by impact factor in any category is set to 100% while the lowest impact factor journal is set to 0% with all other journals on the list scaled according to their rank. Percentile-based impact factors (PIFs) also allow valid cross-disciplinary comparisons of impact factors, something not possible using JIFs. Finally, since a given journal title is often assigned to multiple subject categories, the relative impact of the same journal title can be compared and evaluated across each of those categories. This paper argues that PIFs should become a standard component of journal collection evaluation projects. The history, use, and misuse of impact factors are also discussed.

Introduction

Librarians have long consulted journal impact factors (JIFs) in making journal acquisition and cancellation decisions. Because many tenure/promotion committees and national governments misuse these impact factors as a surrogate evaluation of the quality of an individual's publications, the JIF is also one of the few bibliometric tools well known within the scholarly community. This misuse prompted a major report from the International Mathematical Union that cites specific examples of JIFs being used to evaluate individual researchers (Adler et al. 2008). A prominent University of Cambridge researcher and a report in the Chronicle of Higher Education express similar concerns (Lawrence 2003; Monastersky 2005). Sombatsompop and Markpin (2005) cite eight different countries whose governments use JIFs as a key determinant of grant money and awards to individuals and research institutions. In Finland, use of impact factors to allocate research funding is mandated by legislation.

Journal impact factors were first created in the early 1960s by Dr. Eugene Garfield and Irving H. Sher to help select journals for the then-new Science Citation Index (Garfield 2005). Science Citation Index is now a component database of the Web of Science. After using this metric in-house for many years to compile the Science Citation Index, the Institute for Scientific Information (ISI), now Thomson Scientific, began publication of the Journal Citation Reports (JCR). Although JCR calculates three other metrics (immediacy index, cited half-life, and citing half-life), JIFs receive nearly all the attention, due to their simplicity and perhaps their evocative name.

The Journal Citation Reports database, part of Thomson Scientific's Web of Knowledge platform, assigns one or more fairly specific subject categories, e.g., biophysics or ethnic studies, to every journal title it covers. Ranking journals by impact factor within a given discipline is a popular activity among librarians, authors, editors, and publishers.

JIFs give the average number of times that articles appearing in a given journal during a two-year period are cited by the entire body of journal literature in the subsequent year. For example, the journal Ecology Letters published 274 articles in 2005-2006. During 2007, these 274 articles were cited a total of 2,248 times by all scientific articles covered in Thomson's Web of Science. The 2007 impact factor for Ecology Letters is therefore calculated by dividing the 2,248 citations received in 2007 by the 274 Ecology Letters articles published in 2005-2006, which equals 8.2. In other words, articles appearing in this journal from 2005 to 2006 were cited an average of 8.2 times by the universe of 2007 scientific journal literature.
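Written out as a formula, the calculation above is simply:

JIF(2007) = (citations received in 2007 by articles published in 2005-2006) / (number of articles published in 2005-2006) = 2,248 / 274 ≈ 8.2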

Use and Abuse of Journal Impact Factors

There are many discussions and reports in the literature regarding the validity, use, and abuse of JIFs (Adler et al. 2008; Amin and Mabe 2000; Campbell 2008; Monastersky 2005). The entire June 2008 issue (v. 8, no. 1) of the journal Ethics in Science and Environmental Politics is devoted to the use and misuse of bibliometric indices in evaluating scholarly performance. A number of articles from this journal issue discuss the pitfalls of journal impact factors and other citation metrics (Bornmann et al. 2008; Harnad 2008; Todd and Ladle 2008). These concerns include:

As recently as October 2008, an editorial in Science magazine pointed out that the impact factor continues to be consistently misused to judge the quality of an individual paper (Simons 2008).

In addition, one other concern, the inability to compare JIFs across disciplines, is the focus of this paper. It has long been recognized, even by Thomson Scientific, that it is not valid to compare JIFs across disciplines due to such differences as citation practices and time required to conduct and publish research (Amin and Mabe 2000; Thomson Scientific 2008). For example, JIFs are based almost exclusively on journal article-to-journal article citations, thereby benefiting disciplines where journals are the predominant means of communication such as chemistry and penalizing disciplines like computer science where more emphasis is placed on conference papers.

The problems of comparing impact factors across disciplines can easily be demonstrated by comparing the JIF of the top journal in various fields. The top biochemistry journal, Annual Review of Biochemistry, has an impact factor of 31.190 compared to 2.352 for the top social work journal, Child Maltreatment. It is nonsensical to suggest that biochemistry journals are more than 13 times better or more important than social work journals.

Using Percentiles to Compare Impact Factors

A simple and statistically valid way to compare impact factors both within a given discipline and across disciplines would be to convert them into percentiles within each subject category. On this basis, the top-ranked journal in each subject category would receive a percentile impact factor value of 100% and the bottom journal would be assigned 0%.

No cross-disciplinary comparisons are perfect, but at least the PIF approach goes a long way toward leveling the playing field. Another value of this approach is that one can instantly see from the percentile rank whether a journal is above average (50% or more) or below average (below 50%) within a given discipline. Too often JIFs are reported simply as an absolute number divorced from the context of other journals in the field. Is a raw journal impact factor of 5.023 good, average, or poor? It depends completely on the relative rank of that journal within its field. However, a PIF of 95% immediately communicates that this is a very highly rated journal, irrespective of the discipline. This approach is exactly analogous to the common practice of reporting standardized test scores as percentiles: a score at the 80th percentile indicates that the student scored better than 80% of the peer group and that only 20% scored higher.

Note that this conversion to percentiles does not change the rank order of any journal title within the subject category. The top journal by JIF is still the top journal on the PIF list; the second by JIF is still the second on the PIF list, etc. The journal rank is simply converted into a percentage between 0% and 100% for any given category or list.

PIFs also counter the apparent high precision of the impact factor that results from Thomson Scientific's practice of calculating impact factors to three decimal places. Like all data, citation data are messy. Despite Thomson Scientific's diligence in processing and standardizing the data, one would be hard pressed to show that this level of precision is statistically valid.

For example, Table 1 was produced by combining the journal titles in the various chemistry subject categories in JCR into a single chemistry supercategory, ranking them by their 2007 impact factors, and converting each rank to a percentile rank. The top eight titles are shown.

TABLE 1: Top Chemistry Journals by Impact Factor -- 2007 data

Journal Title                    JIF      PIF
Chemical Reviews                 22.757   100%
Nature Materials                 19.782   100%
Accounts of Chemical Research    16.214   100%
Chemical Society Reviews         13.082    99%
Aldrichimica Acta                11.929    99%
Surface Science Reports          11.923    99%
Angewandte Chemie Intl. Ed.      10.031    99%
Nano Letters                      9.627    98%

If one looks at the raw journal impact factor, one might be tempted to say that Chemical Reviews is over two times "better" than Angewandte Chemie. The conversion to percentiles reinforces the point that all eight journals listed are top tier journals within the inherent limitations of the impact factor metric.

It is not accidental that seven of the top eight titles are review publications. Journal editors have long known that the easiest way to bump up their impact factor is to publish more review articles, which, as a class, are cited much more heavily than individual research studies. If all editors sought to maximize their "impact" in this way, there would be no place left to publish the individual studies; hence no individual studies would be available for review articles to review! Though absurd when stated to this extreme, it is one reason that JIFs have been described as the number that is devouring science (Monastersky 2005).

Review of the "Lost" Prior Art on Normalization

Though converting a range of values to percentiles is perhaps the simplest example of normalization, there are many possible ways to normalize data; i.e., statistically transform raw data values into a standard scale or set range of values. A literature search was conducted to determine what normalization techniques for JIFs had been reported and whether simple percentiles had been described and used in real-life collection development decisions.

Sen (1992) reported using normalized impact factors (top journal in each field set to a value of 10) and cites a 1987 Indian government report as the first use of the technique (Council on Scientific and Industrial Research [CSIR] 1987). Nagpaul (1995) used Sen's technique to evaluate sets of articles produced by Indian universities rather than the journals themselves.

Only one article was found in which percentile-based impact factors were actually used in a real library for collection analysis, at Brigham Young University (Ward et al. 2006). As one of many factors in their collection decisions, they converted the rank of journals sorted by JIFs within a given discipline to a percentile. However, only a single hypothetical example is provided, and the tables in the article list only the raw JIF. In addition, they inverted the percentile calculation so that the top-ranked journal was assigned a value of 1 and the bottom-ranked journal a value of 100. This sets up an inverse, and hence somewhat non-intuitive, metric where the lower the percentile, the higher the impact of the journal. The technique described in this article follows the standard percentile calculation, with the top-ranked journal set to 100% and the bottom-ranked journal to 0%, giving a direct relationship: the higher the percentile, the higher the impact factor. An Italian study assigned journals to 10 percentile-based groups using impact factors, but that was for the purpose of assigning scores to individual articles and thereby assessing the research productivity of a research institute (Ugolini, Parodi et al. 1997). Hence, evaluating journal titles within subject categories for purposes of collection decisions was not part of their investigation.

Why have practicing librarians almost universally ignored PIFs as a collection development tool? A review of the articles describing various normalizations of impact factors shows they are almost exclusively published in "hard" information science journals like Scientometrics or discipline-based journals like Journal of Dental Education and Acta Anaesthesiologica Scandinavica. Neither category of journals is likely to be on most collection development librarians' reading lists. A closer examination of the actual normalization techniques used reveals that most authors propose rather complex transformations that might well discourage the average library practitioner.

These somewhat complex transformations in the literature include:

Though all these transformations are interesting and have statistical merit, the formulas appear quite formidable when set down in mathematical notation, with one paper even using an inverse tangent function (Balaban 1996). A good general review of bibliometric methods includes an entire section on impact factors and normalization techniques (Wallin 2005).

Method for Converting JIFs to Percentiles via Excel's PERCENTRANK Function

As will be seen in the step-by-step method described in the Appendix, the conversion of JIFs to percentiles can be accomplished in a few minutes using MS Excel's PERCENTRANK function. Though this approach is not nearly as elegant as the many other transformations described in the literature, practicing librarians seldom make decisions solely on impact factors or on small differences between them, so a fast and simple metric is the most useful. This is especially true during a budget crisis, when one often has a matter of days or a few weeks to gather a wide range of data across the entire journal collection in preparation for cancellation decisions.
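As an illustration of what PERCENTRANK computes, the short Python sketch below reproduces the same inclusive percentile-rank calculation on a small set of hypothetical impact factors. It is only an illustration of the calculation, not part of the Excel procedure described in the Appendix; the journal values are invented for the example.

```python
def percentrank(values, x):
    """Fraction of the other values that fall below x, matching Excel's
    PERCENTRANK for values that appear in the list (no interpolation needed):
    the smallest value maps to 0.0 and the largest to 1.0."""
    below = sum(1 for v in values if v < x)
    return below / (len(values) - 1)

# Hypothetical impact factors for a small subject category
jifs = [8.2, 5.1, 3.4, 2.0, 1.1, 0.6]

for jif in jifs:
    print(f"JIF {jif:5.3f}  ->  PIF {percentrank(jifs, jif):.0%}")
```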

Once percent rank values (PIFs) have been established for each title within a discipline, evaluation of journals within the discipline becomes very intuitive. Any PIF over 50% is "above average." Any PIF above 90% by definition puts the journal in the top 10% of that field. As this suggests, it is more useful to look at PIF data in terms of broad ranges, say quartiles (0-24%, 25-49%, etc.), rather than small differences (84% vs. 87%). In practical collection decision-making terms, journals in the upper 20% or so would very likely be retained, short of other compelling data arguing for cancellation, such as low use, poor cost per use, or the changing needs of one's patrons. Likewise, journals in the bottom 20% would likely be candidates for cancellation, short of a compelling reason to keep them, such as high use or one's patrons publishing heavily in the journal. Titles in the bottom half would typically receive more scrutiny than those in the upper half.
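To make that rule of thumb concrete, a small triage function such as the following could flag titles by PIF band. The cut-offs simply mirror the ranges discussed above and are illustrative, not prescriptive.

```python
def pif_band(pif):
    """Rough triage band for a percentile impact factor given as a fraction
    between 0.0 and 1.0. The thresholds mirror the ranges discussed in the
    text and would be weighed alongside use, cost per use, and faculty input."""
    if pif >= 0.80:
        return "likely retain"
    if pif < 0.20:
        return "cancellation candidate"
    if pif < 0.50:
        return "below average - closer scrutiny"
    return "above average"

print(pif_band(0.95))  # likely retain
print(pif_band(0.15))  # cancellation candidate
```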

It is important to remember that journal impact factors are only a surrogate for value/quality, not a direct measure thereof. They are based on a single and sometimes disputed metric; i.e., citations. High quality, valuable niche journals that may be essential to the work of one's patrons will never be able to approach impact factors of "super" journals like Cell and Nature on either an absolute or percentile basis. Journal impact factors, even when expressed as percentiles, should never be the sole reason for canceling or not acquiring a journal.

Comparing across Disciplines using PIFs

As valuable as it is to compare PIFs within a discipline, it is even more interesting when one starts to combine individual percentile-based discipline lists into a single master spreadsheet. Journals in the Journal Citation Reports are typically assigned to two or more subject categories. By preparing an individual PIF list for each discipline, one determines the relative impact of a given title within that single field. When one combines all the individual lists, one can see the relative impact of a given title within each of the subject categories to which it has been assigned. Table 2 provides a sample of this type of analysis. The raw JIF is always the same for a given title, but on a percentile basis one can see the impact a journal has within different discipline settings. In most cases, the Discipline Category in Table 2 was created by merging a group of more detailed JCR subject categories.

Table 2: Sample of Merged Lists of Journals from Individual Subject Lists

Journal Title                    JIF     PIF    Discipline Category
American Behavioral Scientist    0.393   36%    Sociology
American Behavioral Scientist    0.393   14%    Psychology
Economics of Education Review    0.495   50%    Education
Economics of Education Review    0.495   34%    Economics
Evolutionary Ecology             2.905   89%    Biology – organism
Evolutionary Ecology             2.905   86%    Environment
Evolutionary Ecology             2.905   61%    General & Cell Biology
Journal of Social Policy         1.037   92%    Social Work
Journal of Social Policy         1.037   77%    Sociology
Journal of Social Policy         1.037   66%    Management
Ocean Engineering                0.663   54%    Civil Engineering
Ocean Engineering                0.663   23%    Geoscience
Ocean Engineering                0.663   19%    Environment
Optical Materials                1.519   76%    Materials Science
Optical Materials                1.519   73%    Engineering Physics

There are two main benefits to creating this master list. First, the librarian can better understand the impact that adding or canceling a given title might have on every department within the scope of the journal. For example, the journal Ocean Engineering would seem to be far more important to a civil engineering department than to faculty in the environmental sciences, based on the respective PIFs in those disciplines. In practice, one can readily create a master list for the sciences and one for the social sciences simply by creating the individual discipline lists following the procedure in the Appendix. Before merging the discipline sheets into the master sheet, all one needs to do is add a column containing the discipline name to each individual spreadsheet. Then, when the individual lists are merged, one can tell which PIF is being reported for which discipline.

Second, because collection development librarians often work in teams and most research is now multi-disciplinary, the master list reminds one of other subject specialists and departments that should be consulted before making a journal decision. Sometimes this is fairly obvious, such as realizing that the cancellation of Economics of Education Review might affect both the education and economics faculty. However, it may be less obvious (or easily forgotten) that the Journal of Social Policy matters not just to social work researchers, but also to those in sociology and management.
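For librarians who prefer to script the merge described above rather than combine spreadsheets by hand, a minimal pandas sketch follows. The file names and column labels are assumptions standing in for whatever individual discipline lists have been prepared using the Appendix procedure.

```python
import pandas as pd

# Hypothetical per-discipline spreadsheets, each already containing
# 'Journal Title', 'JIF', and 'PIF' columns produced per the Appendix.
discipline_files = {
    "Sociology": "sociology_pif.xlsx",
    "Psychology": "psychology_pif.xlsx",
    "Education": "education_pif.xlsx",
}

frames = []
for discipline, path in discipline_files.items():
    sheet = pd.read_excel(path)
    sheet["Discipline Category"] = discipline  # label rows before merging
    frames.append(sheet)

# One row per (journal, discipline) pair, as in Table 2
master = pd.concat(frames, ignore_index=True)
master = master.sort_values(["Journal Title", "PIF"], ascending=[True, False])
master.to_excel("master_pif_list.xlsx", index=False)
```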

The author is not suggesting that PIFs become the sole criterion for any collection decision. However, combined with the expertise of the librarian, consultation with faculty, use statistics, and all the other typical metrics used in journal evaluation, percentile-based impact factors are an easily determined, useful, and broadly applicable collection development tool.

Conclusion

Percentile-based impact factors have great practical value for the collection development librarian. The advantages of using PIFs as a regular part of collection development decisions are:

There may be situations where a different and more sophisticated normalization technique would be advisable; there are many to choose from in the literature. However, for most collection development decisions, a simple conversion of impact factors to percentiles will serve the purposes of most library professionals far better than raw journal impact factors.

References

Adler, Robert, Ewing, John, and Taylor, Peter. 2008. Citation Statistics. [Online]. Available: http://www.ams.org/ewing/Documents/CitationStatistics-FINAL-1.pdf [Accessed: February 24, 2009].

Amin, M., and Mabe, M. 2000. Impact factor: use and abuse. Perspectives in Publishing (1):1-6.

Balaban, A. T. 1996. How should citations to articles in high- and low-impact journals be evaluated, or what is a citation worth? Scientometrics 37 (3):495-498.

Bornmann, Lutz, Mutz, Rudiger, Neuhaus, Christoph, and Daniel, Hans-Dieter. 2008. Citation counts for research evaluation: Standards of good practice for analyzing bibliometric data and presenting and interpreting results. Ethics in Science and Environmental Politics 8 (1):93-102.

Campbell, Philip. 2008. Escape from the impact factor. Ethics in Science and Environmental Politics 8(1):5-6.

Cleaton-Jones, Peter, and Myers, Glenda. 2002. A method for comparison of biomedical publication quality across ISI discipline categories. Journal of Dental Education 66 (6):690-6.

Coelho, P. M. Z., et al. 2003. The use and misuse of the "impact factor" as a parameter for evaluation of scientific publication quality: a proposal to rationalize its application. Brazilian Journal of Medical and Biological Research 36(12):1605-1612.

Council on Scientific and Industrial Research [CSIR]. 1987. Research output analysis. Indian National Scientific Documentation Centre [INSDOC]: New Delhi, India.

Fassoulaki, A., et al. 2002. Impact factor bias and proposed adjustments for its determination. Acta Anaesthesiologica Scandinavica 46 (7):902-905.

Garfield, Eugene. 2005. The agony and the ecstasy: the history and meaning of the journal impact factor. International Congress on Peer Review and Biomedical Publication. [Online]. Available: http://garfield.library.upenn.edu/papers/jifchicago2005.pdf [Accessed: May 1, 2009].

Gonzalez-Sagrado, M., et al. 2008. Evaluation of two methods for correcting the impact factor using the investigation done at the "Del Rio Hortega" University Hospital (1999-2004) as the data source. Nutricion Hospitalaria 23(2):111-118.

Harnad, Stevan. 2008. Validating research performance metrics against peer rankings. Ethics in Science and Environmental Politics 8(1):103-107.

Lawrence, Peter A. 2003. The politics of publication. Nature 422 (6929):259-261.

Maunder, R. G. 2007. Using publication statistics for evaluation in academic psychiatry. Canadian Journal of Psychiatry-Revue Canadienne De Psychiatrie 52:790-797.

Monastersky, Richard. 2005. The number that's devouring science. Chronicle of Higher Education 52 (8):A12-A17.

Nagpaul, P. S. 1995. Contribution of Indian universities to the mainstream scientific literature - a bibliometric assessment. Scientometrics 32 (1):11-36.

Radicchi, Filippo, Fortunato, Santo, and Castellano, Claudio. 2008. Universality of citation distributions: toward an objective measure of scientific impact. Proceedings of the National Academy of Sciences 105 (45):17268-17272.

Ramirez, A. M., Garcia, E. O., and Del Rio, J. A. 2000. Renormalized impact factor. Scientometrics 47 (1):3-9.

Rostami-Hodjegan, A., and Tucker, G. T. 2001. Journal impact factors: a "bioequivalence" issue? British Journal of Clinical Pharmacology 51 (2):111-117.

Rousseau, R. 2005. Median and percentile impact factors: A set of new indicators. Scientometrics 63 (3):431-441.

Schwartz, S., and Hellin, J. L. 1996. Measuring the impact of scientific publications. The case of the biomedical sciences. Scientometrics 35 (1):119-132.

Sen, B. K. 1992. Documentation Note: Normalized Impact Factor. Journal of Documentation 48 (3):318-325.

Simons, K. 2008. The misused impact factor. Science 322 (5899):165-165.

Solari, A., and Magri, M. H. 2000. A new approach to the SCI Journal Citation Reports, a system for evaluating scientific journals. Scientometrics 47 (3):605-625.

Sombatsompop, N., and Markpin, T. 2005. Making an equality of ISI impact factors for different subject fields. Journal of the American Society for Information Science and Technology 56 (7):676-683.

Sombatsompop, N., Markpin, T., and Premkamolnetr, N. 2004. A modified method for calculating the Impact Factors of journals in ISI Journal Citation Reports: Polymer Science Category in 1997-2001. Scientometrics 60 (2):217-235.

Tang, J. L., Wong, T. W., and Liu, J. L. Y. 1999. Adjusted impact factors for comparisons between disciplines. Journal of Epidemiology and Community Health 53 (11):739-740.

Thomson Scientific. 2008. Preserving the Integrity of the Journal Impact Factor Guidelines from the Scientific business of Thomson Reuters [Blog Entry]. [Online]. Available: http://forums.thomsonscientific.com/t5/blogs/blogarticlepage/blog-id/citation/article-id/14#M14 [Accessed: May 1, 2009].

Todd, Peter A., and Ladle, Richard J. 2008. Hidden dangers of a 'citation culture'. Ethics in Science and Environmental Politics 8 (1):13-16.

Ugolini, D., Bogliolo, A., Parodi, S., Casilli, C., and Santi, L. 1997. Assessing research productivity in an oncology research institute: The role of the documentation center. Bulletin of the Medical Library Association 85 (1):33-38.

Ugolini, D., Parodi, S., and Santi, L. 1997. Analysis of publication quality in a cancer research institute. Scientometrics 38 (2):265-274.

Vinkler, P. 1991. Possible Causes of Differences in Information Impact of Journals from Different Subfields. Scientometrics 20 (1):145-161.

Wallin, J. A. 2005. Bibliometric methods: Pitfalls and possibilities. Basic & Clinical Pharmacology & Toxicology 97 (5):261-275.

Ward, R. K., Christensen, J. O., and Spackman, E. 2006. A systematic approach for evaluating and upgrading academic science journal collections. Serials Review 32 (1):4-16.


Appendix: Using Microsoft Excel to Calculate PIFs

This method assumes online access to Journal Citation Reports and Microsoft Excel. The first step is to decide whether and how to combine the fairly detailed Journal Citation Report (JCR) subject categories into a single broader discipline category. For example, a broad Agriculture category could consist of the following JCR subject categories:

Agricultural Economics & Policy

Agricultural Engineering

Agriculture, Dairy & Animal Science

Agriculture, Multidisciplinary

Agriculture, Soil Science

Agronomy

Fisheries

Food Science & Technology

Forestry

Horticulture

Plant Sciences

Obviously, the degree to which individual categories are combined will depend on the level of detail required in a given situation.

1) From the home page of JCR, select the SCI (science) or SSCI (social science) edition and choose the default "View a group of journals by Subject Category."

2) Select (highlight) all the subject categories that will make up the desired broader discipline list (like Agriculture). Use the CTRL key to select multiple categories, keeping it depressed so that earlier selections are not accidentally deselected.

3) Once the proper categories are marked, click the SUBMIT button. A list of journal titles will appear.

4) Click the MARK ALL button. Note that a maximum of 500 titles can be selected at one time for downloading.

5) Click on the MARKED LIST button at the top. Click the SAVE TO FILE button and save the file as a text file.

6) Open the text file in MS Excel. Use the text import wizard, accepting the default "Delimited" data type. In the next window, uncheck the Tab option and check the Semicolon option, then click NEXT and FINISH.

7) At least the Abbreviated Journal Title and Impact Factor columns should be retained; resize the column widths to make them readable. Sort the spreadsheet by Impact Factor (descending), then by Title (ascending).

8) Go to the bottom of the listing and delete any titles with missing/zero impact factors. Write down the last cell designation with an impact factor value at the bottom of the list, e.g., C307, assuming your impact factors are in column C.

NOTE: The rest of these instructions assume that the raw impact factors are in Column C, that Column D is completely empty, and that there is header information in the first row of the spreadsheet.

9) Assuming one has 306 impact factors in cells C2 through C307, type into cell D2 the formula =PERCENTRANK($C$2:$C$307,C2). Then replicate this formula down the entire list by clicking on the fill handle (the black dot in the lower right-hand corner of cell D2) and dragging down the length of the list.

10) If Step 9 is done correctly (and the titles are sorted by impact factor), the first title will have a value of 1 in cell D2 and the last title (cell D307) will have a value of zero.

11) The formulas in Column D need to be converted to percentages and locked in as set values rather than remaining dependent on the PERCENTRANK formula. Highlight the entire Column D and choose Edit: Copy from the menu. Then highlight Column E (presumed blank) and choose Edit: Paste Special. Choose the "Values" option and click OK.

12) Delete Column D, thereby making Column E the new Column D. Column D now contains the percentile impact factor as described in this article.

13) Change the format of the cells in Column D to "Percentage" (1 decimal place recommended). Change the Raw Impact Factor Column (C) to "Number" format with 3 decimal places.

14) Fill in the first blank column to the right of the PIF with the name/code of the subject area, e.g., Biochem. Replicate this subject category down the entire length of the list. This will associate the particular PIF with the subject category and allow creation of a master spreadsheet by recombining the individual subject lists into a master list as shown in Table 2.

15) Save the file as an MS Excel spreadsheet.
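For readers who would rather script the entire procedure than work through the spreadsheet steps above, the following is a minimal Python/pandas sketch of the same workflow. It is only an illustration: the file name, the "Impact Factor" column label, and the output name are assumptions that will need to be adjusted to match an actual JCR export.

```python
import pandas as pd

# Assumed names -- adjust to match the actual JCR export and your own labels.
EXPORT_FILE = "agriculture_jcr_export.txt"  # semicolon-delimited marked list (Step 5)
DISCIPLINE = "Agriculture"                  # broad discipline label (Step 14)

# Steps 6-8: read the export and drop titles with missing or zero impact factors
df = pd.read_csv(EXPORT_FILE, sep=";")
df["Impact Factor"] = pd.to_numeric(df["Impact Factor"], errors="coerce")
df = df[df["Impact Factor"] > 0].copy()

# Step 9: percentile rank; lowest JIF -> 0.0, highest JIF -> 1.0
# (rescaling the rank explicitly reproduces Excel's PERCENTRANK behavior
#  for distinct values)
ranks = df["Impact Factor"].rank(method="min") - 1
df["PIF"] = ranks / ranks.max()

# Step 14: tag every row with the discipline so lists can be merged later
df["Discipline Category"] = DISCIPLINE

# Step 15: save, sorted from highest to lowest impact factor
df.sort_values("Impact Factor", ascending=False).to_excel(
    "agriculture_pif.xlsx", index=False
)
```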

