With the recent release of the 2016 Journal Impact Factors (JIF), we’ve seen the accompanying annual flurry of publicity and controversy. The recent announcement that the American Society for Microbiology (ASM) – as a society, not only as a journal publisher – will join the growing list of signatories to DORA (the Declaration on Research Assessment) is a reminder that there is a widely recognized need to improve the ways in which the outputs of scientific research are evaluated, and that the JIF alone is not it. It is the JIF standing alone – as a proxy for so many other ways of evaluating impact – that is the wellspring of both ire and metric innovation. The JIF is easy to misuse: applying it to evaluate individual articles or individual researchers is the most typical misuse. And a paper recently posted on bioRxiv gave a good sense of why the JIF alone isn’t ideal even for what it is supposed to do, which is to evaluate journals. (I will have an upcoming blog post on this topic.)
At HighWire, we’re home to publishing programs ranked highly in the JCR – the catalog of JIFs published annually by Thomson Reuters – as well as to several signatories of DORA, including the American Society for Cell Biology, host of the 2012 meeting where the DORA recommendations were developed. Because we partner with diverse publishers, we’ve known for years that publishers want faster, easier access to meaningful metrics that provide more insight into their publishing programs. And we know from our interviews with researchers that authors and readers also want metrics – but different ones, depending on whether their relationship to an individual paper is as author or as reader. I wrote about this in “Which Metrics Matter?”, based on many conversations with publishers, researchers, and editors: all want to understand how newer article-level metrics and altmetrics (ALMs) can improve the direction of research publication and evaluation. The short answer is that the metrics that matter depend on the question – and there are lots of questions.
The slightly longer answer is that people in various roles are trying to understand “impact”, and that impact is multi-faceted. Recently I’ve been using the word “resonance” to avoid linking every form of impact with the Impact Factor, but everyone (including me) slides back to “impact”. Impact (!) Resonance facets include:
- research resonance – which readers largely agree is legitimately measured by citations;
- reader resonance – which can be measured by usage (downloads or views, for example) or some scholarly altmetrics;
- societal resonance – which can be measured by some altmetrics attuned to public engagement. Authors, societies, and funders now care about this.
All of these, and more, are part of understanding how resonance develops. But the more measures we have, the more tempting it is to fall back on a single number like the Journal Impact Factor – because looking at multiple facets is complex.
That’s why we launched Impact Vizor in 2015: to provide simpler, visual ways to look at research resonance, and not reduce it to the JIF. Today several publishers use Impact Vizor to analyze their publishing programs to understand how their articles, sections and journals resonate with their different audiences and to make decisions. Impact Vizor helps them see the facets and uncover answers to strategic and practical questions. A few are described below.
Is our journal article transfer policy effective in identifying quality research that can be published in a journal other than the one it was submitted to?
The American Chemical Society (ACS) has used the Rejected Article Tracker (RAT) viewer within Impact Vizor to evaluate manuscript flow across journals. Sonja Krane, Managing Editor of the Journal of the American Chemical Society (JACS), recently described how Impact Vizor has helped assess the impact of implementing a manuscript transfer program:
“When a JACS editor determines that a manuscript might be more suitable for another ACS journal, authors can now transfer their submission to the suggested journal using a streamlined process. Impact Vizor showed us that during an initial nine-month period, the ACS manuscript transfer program produced a measurable increase in the number of articles that were subsequently published in another ACS journal, providing evidence that authors and ACS benefit from the new transfer capability.”
Should we launch a new journal?
Similarly, Impact Vizor can be used to analyze opportunities to launch a new journal. For example, publishers can evaluate their pipeline and identify articles they considered out of scope that later gained traction in other journals. That evidence can make a strong business case for launching a new publication in an emerging field of study.
What articles are generating citations, usage, and engagement quickly? What differences can we see across our programs?
Perhaps the strongest draw of Impact Vizor is that you don’t need to wait to see what impact articles are having. Aggregated data feeds from sources including Scopus, COUNTER, Mendeley, and Faculty of 1000 provide clear early visibility, surfacing signals that arrive well before citations do.
You can see, over the early months of publication, how articles are gaining citations, downloads, and Mendeley saves, and assess performance year-on-year (as shown below). For example, marketers can more easily discover which articles would appeal to a wider audience and can promote articles that are gaining citations rapidly in their first year of publication.
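To make the year-on-year comparison concrete, here is a minimal sketch of how early article-level metrics could be rolled up by publication-year cohort. The event records, field names, and the `cohort_totals` helper are all hypothetical illustrations of the general approach – not Impact Vizor’s actual data model, data feeds, or API.

```python
# Hypothetical sketch: comparing early article-level metrics year-on-year.
# The per-article event counts below are invented illustrative data.
from collections import defaultdict

# Each record: (article_id, publication_year, months_since_publication,
#               metric_name, count)
events = [
    ("a1", 2015, 1, "downloads", 120),
    ("a1", 2015, 2, "citations", 1),
    ("a2", 2016, 1, "downloads", 310),
    ("a2", 2016, 1, "mendeley_saves", 14),
    ("a2", 2016, 2, "citations", 3),
]

def cohort_totals(events, metric, months=3):
    """Sum one metric over each publication-year cohort's first `months` months."""
    totals = defaultdict(int)
    for _article_id, year, month, name, count in events:
        if name == metric and month <= months:
            totals[year] += count
    return dict(totals)

print(cohort_totals(events, "downloads"))  # → {2015: 120, 2016: 310}
print(cohort_totals(events, "citations"))  # → {2015: 1, 2016: 3}
```

Restricting the window to each article’s first few months is what makes cohorts from different publication years comparable, which is the idea behind the year-on-year view described above.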
We’ll be discussing questions like these with Keith Gigliello, Senior Manager, Digital Publishing, from the American Society of Hematology (ASH) in our upcoming webinar: How to discover article-level impact and strategic insights with Impact Vizor. [Learn more; Register]