Major Science Journals Eliminate Impact Factor

By Alex Berezow, PhD — Sep 13, 2016

Some quirk of human psychology compels us to categorize and rank things. Top Ten lists are reliably among the most popular features on any website, stimulating much discussion and controversy. In science, that same obsession with classification pushed Carl Linnaeus to become the father of modern taxonomy and prompted Dmitri Mendeleev to decipher the patterns that led to the current periodic table.

The scientific community carries on these legacies to this day, but not always in constructive ways. Perhaps the most problematic classification system is the impact factor, which attempts to rank scientific journals by their relative importance. A journal's impact factor is the average number of times its recent articles (those published in the previous two years) are cited in a given year. (For example, an impact factor of 5 means that, on average, each of those articles is cited 5 times in a year.)
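For readers who want to see the arithmetic, here is a minimal Python sketch of that calculation, using the standard two-year window; the citation counts are invented for illustration, not real data.

    # Illustrative only: a toy two-year impact factor calculation.
    # The citation counts below are made up, not real data.

    # Citations received in 2016 by the articles this journal published in 2014 and 2015.
    citations_to_recent_articles = [12, 0, 3, 7, 1, 25, 4, 0, 2, 6]

    # Number of citable articles the journal published in 2014 and 2015.
    num_citable_articles = len(citations_to_recent_articles)

    impact_factor = sum(citations_to_recent_articles) / num_citable_articles
    print(f"2016 impact factor: {impact_factor:.1f}")  # 60 citations / 10 articles = 6.0

Note that a single article (the one with 25 citations) accounts for more than 40 percent of the total, which is exactly the skewed citation distribution the editorial quoted below complains about.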

While this sounds useful, in practice, it has been a slow-motion train wreck. An editorial published simultaneously in several different journals owned by the American Society for Microbiology (ASM) put it best: 

First of all, the journal IF is a journal-level metric, not an article-level metric, and its use to determine the impact of a single article is statistically flawed since citation distribution is skewed for all journals, with a very small number of articles driving the vast majority of citations. Furthermore, impact does not equal importance or advancement to the field, and the pursuit of a high IF, whether at the article or journal level, may misdirect research efforts away from more important priorities... High-IF journals limit the number of their publications to create an artificial scarcity and generate the perception that exclusivity is a marker of quality. The relentless pursuit of high-IF publications has been detrimental for science... Individual scientists receive disproportionate rewards for articles in high-IF journals, but science as a whole suffers from a distorted value system, delayed communication of results as authors shop for the journal with the highest IF that will publish their work, and perverse incentives for sloppy or dishonest work. [Emphasis added.]

Everything the editorial says is accurate. The diagnosis is spot-on. However, ASM's solution, which is to stop advertising the impact factors of its journals, is flawed for two major reasons.

First, human competitiveness, combined with that psychological quirk that compels us to classify things, means that there will always be rankings of scientific journals. Many of the same arguments the authors made also apply to college rankings, yet those rankings persist. Every year, students, parents, and administrators look forward to the annual lists. (Administrators are perhaps uniquely shameless in their hypocrisy; they humblebrag by saying, "I don't really believe in these lists," right before they gloat about their school's high ranking.)

So, if we're stuck with ranked lists, the question is what to do about it. The solution is to create a better ranking system. Fortunately, there is precedent that scientific journals can draw upon. Though far from perfect, the weekly rankings of college basketball and football teams offer a model. The methods vary, but they generally rely upon some combination of win-loss record, strength of schedule, and voting by coaches and journalists.

A new ranking of academic journals could do something similar. Instead of simply taking an arithmetic average of citations, a weighted average could be employed, such that citations from the most prominent journals (such as Nature and Science) count for more than citations from lesser-known journals. Additionally, professors could vote on which journals they find most authoritative in their fields.
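As a rough sketch of what such a weighted scheme might look like, here is a minimal Python example; the journal names, weights, and citation counts are invented for illustration and are not a proposed standard.

    # Illustrative only: weight each citation by the prominence of the journal it comes from.
    # The weights and counts below are invented for this sketch.
    citation_weights = {
        "Nature": 3.0,   # citations from the most prominent journals count more
        "Science": 3.0,
        "mBio": 1.5,
    }

    # Citations received by one article, keyed by the citing journal.
    citations = {"Nature": 2, "mBio": 4, "Journal of Obscure Results": 10}

    # Journals not listed in the table get a default weight of 1.0.
    weighted_score = sum(citation_weights.get(journal, 1.0) * count
                         for journal, count in citations.items())
    raw_count = sum(citations.values())

    print(f"Raw citation count: {raw_count}")            # 16
    print(f"Weighted citation score: {weighted_score}")  # 2*3.0 + 4*1.5 + 10*1.0 = 22.0

A journal's ranking could then be built from the average weighted score of its articles, blended with survey votes from professors, much as sports polls blend statistics with coaches' ballots.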

Second, there is utility in a ranking system. Students, journalists, and lay readers, who may be unfamiliar with scientific journals, can get a decent idea of which ones are most likely to contain authoritative, high-quality information. Crucially, a ranking system can also help identify pay-to-play journals, which are perhaps the most likely to contain wrong or even fraudulent information.

While the ASM editorial admits that its solution is largely symbolic and "may have little effect," it is worthy of attention. Hopefully, it will catalyze a much-needed discussion on how to properly reform the impact factor.

Alex Berezow, PhD

Former Vice President of Scientific Communications

Dr. Alex Berezow is a PhD microbiologist, science writer, and public speaker who specializes in the debunking of junk science for the American Council on Science and Health. He is also a member of the USA Today Board of Contributors and a featured speaker for The Insight Bureau. Formerly, he was the founding editor of RealClearScience.
