Is Impact Really a Factor?

John Sack, Founding Director, HighWire

It is that time of the year again, when editors, publishers, research assessors, tenure committee members and others look for changes in the Journal Impact Factor (JIF) rankings provided by Clarivate / ISI. Thanks to the focus of cOAlition S on DORA, the worldwide initiative developed to improve the ways in which the outputs of scholarly research are evaluated, there is growing attention on issues around inappropriate use of the JIF.

In my view, the signature ‘bad’ use is treating the Journal Impact Factor as if it were an Article Impact Factor, as a way to avoid assessing individual articles (indeed, even Clarivate points this out: “If you plan to cite JCR, we require that references be phrased as ‘Journal Citation Reports’ and ‘Journal Impact Factor’.”). Typically, this might be expressed as a commendation or its opposite: “The article was published in Science”, or “The article could only get published in the Journal of Inconspicuous Results”. Many people correctly rail against the use of the publication venue as a determinant of the importance of an individual article and its research. The most precise claim you might be able to get away with is, “The average article in Journal X is higher impact than the average article in Journal Y”, but this pertains only to averages, giving no insight into individual articles. And all would agree that research assessment and promotion are about individuals. DORA aims to eliminate this misuse.

So let’s try to use the full name: Journal Impact Factor. The name reminds us that the metric describes journals; it is not a blanket assessment of every article in a journal, and it is especially not a way to assess individual researchers based on where they publish.

But let’s also not act as if the JIF is not measuring something. You can certainly argue that the JIF is ‘bogus’ (a decades-old Silicon Valley term for something not to be trusted) by saying it is crazy to calculate it to three decimal places, or that it should be reported as a median rather than an average (even ISI will tell you it isn’t an average; it is a ratio). Or that it can be ‘gamed’ in a couple of legitimate and illegitimate ways. Or that it concentrates attention on higher-JIF journals (the rich get richer – let me call this the Kardashian effect). But even with those criticisms, it still measures something that researchers know to be true in general about articles in particular venues.
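
For reference, here is a sketch of the commonly cited two-year JIF calculation – a ratio rather than a mean of per-article citation counts; Clarivate’s exact rules for what counts as a “citable item” are more involved than this:

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

Note that the numerator counts citations to anything the journal published, while the denominator counts only “citable items” (typically research articles and reviews) – one often-noted way the figure can diverge from a simple per-article average.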

My use of “venues” as a generic term for “journals” might lead you to think of other JIF-like rating systems in popular culture. We see this quite clearly in restaurant-review sites (e.g. Yelp), where stars are awarded to a restaurant. We know that it is possible to get an average or mediocre dish at a highly-rated restaurant, but we also know that every dish at a highly-rated restaurant will likely be prepared to a certain level of quality; it might just not be a standout. But there is just no doubt that popular culture and markets often attribute the qualities of a species (a restaurant, a journal) to an individual (a meal, an article).

Getting back to scholarly publishing: there is nothing wrong with article-level citation metrics. And DORA doesn’t say citations ‘don’t matter’.  A few years ago, I interviewed a dozen Stanford authors and asked them what metrics they use to evaluate articles to decide what they should read (e.g., in a search result list).  I asked them to respond to the question separately as authors and as readers. The answers were relatively uniform:

  • As authors: “I want all the metrics, they all matter, especially the large numbers.”
  • As reader-researchers: “Citations. Only citations matter.”   

So let’s not pretend that citations don’t matter. Researchers will discount the whole conversation if it starts by saying “we can’t talk about citations anymore”.   

While citations are clearly important to researchers, they are not the only metric of impact. Stakeholders care about additional types of impact, including societal impact (measured by altmetric indicators such as news and social media attention) and other measures of influence on a field of practitioners, or even on an individual researcher. Many articles that address clinical practice may never themselves be cited, yet can be very important.

Many of us have heard that Gene Garfield developed the JIF as a way to help libraries make collection decisions. It was a type of quality measure, applied at a time when the business of publishing was largely quality-driven. Now, however, some parts of the business of publishing are increasingly driven by quantity. High-volume APC-based journals can be very financially successful, as we’ve seen from the rise of megajournals (which perform a very legitimate service) and even of predatory journals (which do not). The JIF does, of course, try to take journal size into account through its denominator, but in a search-engine-filtering world, the size of a journal container probably matters less than it used to. As we heard from Dr Daniel Himmelstein at the recent HighWire Lunch and Learn meeting in London, the days of readers reading or even skimming any one journal front to back are largely gone, replaced by other forms of filtering such as search engines with their own ranking algorithms.

But even in a search-engine-driven world, citations are still important. Having its articles ranked towards the top of a search result gives a journal visibility to readers and authors, because the journal’s name shows up on the result page over and over again. If I do a search in a scholarly search engine, and the first page of results is, say, 40% from one journal, 40% from another journal and 20% from a handful of other journals, I will probably conclude that my topic is published frequently in two journals; and if those papers are like my own, then as an author looking for a place to publish I will probably now include those two journals in my list. We should not discount this “off-label” use of a search result list just because it is subtle, like a perfume.

There are other journal metrics that do not follow exactly the JIF formula. The Eigenfactor and the h-index are the most often mentioned, and Scopus has recently introduced CiteScore. Every metric has its strengths and weaknesses.
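
For comparison, the h-index (quoted more often for individual authors, though it can also be computed for a journal) is, roughly, the following; this is a sketch of the standard definition rather than any one database’s implementation:

\[
h \;=\; \max \{\, n : \text{the venue has at least } n \text{ items each cited at least } n \text{ times} \,\}
\]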

So is impact really a factor? The answer, despite growing complexity in the citations landscape and the various and valid criticisms of the JIF, remains a resounding “yes”.   Journals have impact; articles have impact. Let’s not confuse the two.

Impact Vizor, our CODiE award-winning visual analytics tool, brings citation and usage data together to help publishers make fast, informed decisions about their content. It combines data from different systems to build a picture of how content is being used and cited, to give the earliest possible indicators of research merit, and to show what patterns there are in the data.

John Sack, Founding Director

John Sack is one of the founders of HighWire Press and focuses on market assessment, client relations, technology innovation, and the kind of thought leadership and industry-forward thinking that has defined HighWire’s mission since 1995. John’s role is to determine where the technology and publishing industries are going and how each might leverage the other. While this frequently involves working with new technology, just as often it involves working with publishers on new ideas, opportunities, or problems they wish to address. John is a “futurist” or “trend-spotter” in that he tries to watch what is happening in consumer and scholarly services and identify patterns that are beginning to emerge. These patterns, once articulated, can give publishers and editors a chance to think about how they might prepare for changes, or take advantage of them.
