by COREY BRADSHAW
Most people are aware of the ubiquitous h-index, and its experience-corrected variant, the m-quotient (h-index ÷ years publishing), but maybe you haven’t heard of the battery of other citation-based indices on offer that attempt to correct various flaws in the h-index. While many of them are major improvements, almost no one uses them.
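The m-quotient mentioned above is simple arithmetic: the h-index divided by the number of years since the researcher's first publication. A minimal sketch (with made-up illustrative numbers, not real data):

```python
def m_quotient(h_index: int, first_pub_year: int, current_year: int) -> float:
    """Experience-corrected h-index: h divided by years publishing."""
    years_publishing = current_year - first_pub_year
    return h_index / years_publishing

# e.g. an h-index of 30 accrued over 15 years gives m = 2.0
print(m_quotient(30, 2010, 2025))  # → 2.0
```

Because it divides by career length, the m-quotient lets an early-career researcher with a modest h-index compare favourably with a veteran whose larger h-index accrued over decades.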
Why aren’t they used? Most likely because they aren’t easy to calculate, or because they require trawling through open-access and subscription-based databases to gather the information needed to calculate them.
Hence, the h-index still rules, despite its many flaws, such as under-emphasising a researcher’s full body of work, embedding gender biases, and favouring people who have been publishing longer. The h-index is also provided free of charge by Google Scholar, so it’s the easiest metric around to gather for any group of researchers.
So, how does one correct for at least some of these biases while still being able to calculate an index quickly? Our team of eight scientists from eight different science disciplines (archaeology, chemistry, ecology, evolution & development, geology, microbiology, ophthalmology, and palaeontology) has developed a new metric we call the ‘ε-index’ (epsilon index) and published a paper on how it performs.
Named for the Greek letter ε used to symbolise residuals in statistics, this algorithm can be used via either the R code or an online app, and requires just four items of information from public databases such as Google Scholar or Scopus to calculate a researcher’s ε-index:
- the number of citations acquired for the researcher’s top-cited paper
- the i10-index (number of articles with at least 10 citations)
- the h-index, and
- the year in which the researcher’s first peer-reviewed paper was published.
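As the name suggests, the index is built from residuals: how far each researcher sits above or below a trend fitted across the whole pool being compared. The sketch below illustrates only that residual idea, not the authors' published algorithm (which combines all four inputs above); the records, the stand-in measure (log h-index versus career length), and the fitting choices are invented for illustration.

```python
import math

# Illustrative, made-up records: (name, top_citations, i10, h, first_year)
researchers = [
    ("A", 400, 60, 25, 2005),
    ("B", 150, 20, 12, 2015),
    ("C", 900, 110, 40, 1995),
    ("D", 80, 10, 8, 2018),
]

CURRENT_YEAR = 2025

# Simple least-squares fit of log(h-index) against years publishing,
# across the whole pool of researchers being compared.
xs = [CURRENT_YEAR - r[4] for r in researchers]
ys = [math.log(r[3]) for r in researchers]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum(
    (x - xbar) ** 2 for x in xs
)
intercept = ybar - slope * xbar

# Residual = observed minus expected for that career length, so a large
# positive value means performing above the pooled trend.
epsilon_like = {
    r[0]: math.log(r[3]) - (intercept + slope * (CURRENT_YEAR - r[4]))
    for r in researchers
}
ranking = sorted(epsilon_like, key=epsilon_like.get, reverse=True)
```

The attraction of a residual-based score is that career length is baked into the expectation itself, so researchers are ranked against what is typical for their experience rather than by raw totals.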
Enter the data you want to compare, and the ε-index gets to work. It benchmarks women-only and men-only subsets of researchers separately, adjusting the threshold so that ranks are more comparable between the two genders. We are currently working on a variant of the index that accounts for non-binary researchers and those who have transitioned during their career.
Another option is to divide the genders, benchmark them separately, and then combine them again: a re-ranking that effectively removes the gender bias in the ε-index, something that is difficult or impossible to do with other ranking metrics.
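One simple way to picture that divide-then-recombine step: rank each gender group on its own, then merge the two rankings by within-group standing, so rank 1 of each group precedes rank 2 of either. The names, scores, and merging rule below are invented for illustration and may differ from the authors' R implementation:

```python
from itertools import zip_longest

scores = {  # researcher -> ε-like score (made up for illustration)
    "w1": 1.2, "w2": 0.4, "w3": -0.3,
    "m1": 1.5, "m2": 0.9, "m3": -0.1,
}
gender = {"w1": "F", "w2": "F", "w3": "F", "m1": "M", "m2": "M", "m3": "M"}

# Rank within each gender group separately...
groups = {}
for name in scores:
    groups.setdefault(gender[name], []).append(name)
for g in groups:
    groups[g].sort(key=scores.get, reverse=True)

# ...then interleave the group rankings by within-group rank, which
# removes any systematic score offset between the two groups.
combined = [n for pair in zip_longest(*groups.values()) for n in pair if n]
print(combined)  # the top-ranked member of each group leads the list
```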
To standardise between disciplines and to provide a fairer comparison of researchers in different areas, the tool enables the user to scale the index across disciplines with variable citation trends (ε′-index). In essence, some disciplines simply tend to have fewer citations than others, and the tool can correct for that phenomenon and deliver a fairer comparison at a relative scale.
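A standard way to express that kind of within-discipline correction (shown here as a z-score, which is an illustrative stand-in, not necessarily the ε′-index formula) is to centre and scale scores inside each discipline before comparing across fields. All disciplines, names, and scores below are invented:

```python
import statistics

raw = {  # (discipline, researcher) -> raw score, made up for illustration
    ("chemistry", "r1"): 2.0, ("chemistry", "r2"): 1.0, ("chemistry", "r3"): 0.0,
    ("archaeology", "r4"): 0.6, ("archaeology", "r5"): 0.3, ("archaeology", "r6"): 0.0,
}

# Collect the scores belonging to each discipline.
by_field = {}
for (field, _name), s in raw.items():
    by_field.setdefault(field, []).append(s)

# Z-score within each discipline, so a standout in a low-citation field
# scores as highly as a standout in a citation-heavy one.
scaled = {}
for (field, name), s in raw.items():
    mu = statistics.mean(by_field[field])
    sd = statistics.pstdev(by_field[field])
    scaled[name] = (s - mu) / sd
```

In this toy example, the top chemist and the top archaeologist end up with identical scaled scores even though the chemist's raw score is over three times larger, which is the essence of a fairer cross-discipline comparison.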
Why should an older male in a crowded discipline have a better performance ranking than a brilliant woman in a less-crowded discipline, who incidentally might have taken a career break? They shouldn’t. It’s not fair, and that has major implications for grant applications, promotions and career paths, and internal performance reviews. So, while not perfect, we contend the ε-index goes a long way to resolving the many inequities that have long blighted research evaluation.
Corey Bradshaw, Matthew Flinders Fellow in Global Ecology, Flinders University
Corey J. A. Bradshaw, Justin M. Chalker, Stefani A. Crabtree, Bart A. Eijkelkamp, John A. Long, Justine R. Smith, Kate Trinajstic, and Vera Weisbecker, “A fairer way to compare researchers at any career stage and in any discipline using open-access citation data”
PLOS ONE, September 10, 2021