Can you answer these questions?
- Do you reject quality content in one journal that you could use to start a new journal?
- Are the articles you published this year higher impact than last year’s?
- Are you anxious about the impact of the new journal you just launched?
- Has your tighter acceptance rate helped your competitors get better articles?
- Are you rejecting some very high impact articles?
- What topics that you publish are “trending”? Is this a shift for your discipline?
- Who publishes your rejected articles?
- Are your review articles this year as good as they were the last two years?
- What can you tell the new EIC about the impact of editorial changes?
What questions are you and your editors asking along these lines? Let us know in the comments!
The above editorial and publishing questions have been around forever, but the answers have traditionally been based on instinct, gut feeling, or what the EIC’s friends think. Now we can get better answers.
Where do better answers come from?
Almost from the start of HighWire, we have used the tagline “evidence-based publishing”. With its roots at Stanford University — and in the Stanford University Library in particular — HighWire was formed around the idea that data — evidence — could inform best practices and the best decisions.
Data hasn’t been in short supply lately. The challenge has been to “mill” the data into information that works for decision-makers: this often means integrating data from disparate systems, making it easy to understand, and making it timely to access. We think we have figured out how to do this with our “Vizor” suite of analytics products.
HighWire’s Vizors are visualized analytics. The first Vizor that has begun rolling out is Impact Vizor, which helps editors and publishers see the research impact of the articles they publish, and the articles they reject.
Impact Vizor includes the “Rejected-Article Tracker” (the “RAT”) — shown here — which visualizes where articles an editor or publisher rejects get published, and how much they get cited. The “RAT” has been rolled out to sixteen publishers. Soon we will be rolling out the Section Performance Analyzer (the “SPA”) and the Hot Object Tracker (the “HOT”). Following those will be the Advanced Correlator of Citation and Usage (the “ACCU”) and the Cohort Comparator (“CC”).
Editors get very excited by the RAT; previously, this kind of information has been hard to develop and hard to visualize. Impact Vizor does both. And this information tells you important things about the potential for starting new sections, new journals, or other new products.
And in other big news: a journal does not have to be hosted by HighWire to use Impact Vizor. In fact, our first deployment was for a major publisher that is not hosted by HighWire.
How does it work?
Impact Vizor combines data across different systems to put together (literally) a picture of how content is being used and cited. The key is to get the earliest possible indicators of research merit — primarily citations — and see what patterns emerge in the data. In addition to the visuals, there are full data tables for those who want to process and investigate the data further, for example by exporting it to Excel.
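As a rough illustration of this kind of data-milling, the sketch below aggregates rejected articles by the journal that eventually published them, together with the citations they earned. Everything here is hypothetical — the record layout, field names, DOIs, and citation counts are invented for the example; a real pipeline would pull rejection records from a manuscript-tracking system and citation counts from a citation index.

```python
from collections import defaultdict

# Hypothetical rejection records: each maps a rejected manuscript to the
# journal that eventually published it (matched, e.g., by title or DOI).
rejections = [
    {"ms_id": "MS-001", "published_in": "Journal A", "doi": "10.1000/a1"},
    {"ms_id": "MS-002", "published_in": "Journal A", "doi": "10.1000/a2"},
    {"ms_id": "MS-003", "published_in": "Journal B", "doi": "10.1000/b1"},
]

# Hypothetical citation counts keyed by DOI (in practice, fetched from a
# citation index rather than hard-coded).
citations = {"10.1000/a1": 12, "10.1000/a2": 3, "10.1000/b1": 40}

def destination_summary(rejections, citations):
    """Aggregate rejected articles by destination journal:
    how many landed there, and how many citations they earned."""
    summary = defaultdict(lambda: {"articles": 0, "citations": 0})
    for rec in rejections:
        dest = summary[rec["published_in"]]
        dest["articles"] += 1
        dest["citations"] += citations.get(rec["doi"], 0)
    return dict(summary)

print(destination_summary(rejections, citations))
```

A summary table like this is exactly the kind of thing that can be dropped into a spreadsheet or fed to a visualization layer.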
How did it get developed?
The Vizor development process incorporated our “early adopter” group of sixteen publishers right from the start: this group prioritized the components to be developed, and three or four publishers have been involved in creating the prototype of each component.
Impact Vizor is a Finalist for the ALPSP 2015 Award for Innovation.
Please let us know your questions and suggestions in the comments.