Staffers at rankings compiler QS have a new paper setting out why rankings (including their own) are based on good data, provide a much-needed service and will be increasingly consulted. They also address a common complaint: that the use of surveys as a factor in rating institutions makes outcomes subjective. “One survey is subjective, as is one citation, but when measuring five years’ worth of results, fluctuations can be smoothed out to provide the most reliable outcome.”
This isn’t quite the way seriously sciencey staff at the Web of Science citation index operation see it, in a scathing (unless it is scarifying) paper on why “simplified” metrics, as used in league tables, can “obscure real research performance when misused.”
To guard against this they propose four “visualisations” that “unpack the richer information that lies behind each headline indicator.”
For individuals, separate from an H-index (x papers each cited x times), they propose a beam plot, with the citation count for each of an author’s papers “normalised” against articles in the same subject-category journals for the relevant year.
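The normalisation behind a beam plot can be sketched roughly as follows. This is an illustrative approximation, not the Web of Science implementation: each paper's value is taken here to be its citation percentile among papers from the same subject category and publication year, using invented data.

```python
# Illustrative beam-plot normalisation (assumed method: percentile rank
# within a subject-category/year baseline; data below is hypothetical).
from bisect import bisect_left

def citation_percentile(citations, field_citations):
    """Percentile rank of a paper's citation count within its
    subject-category/year baseline (0 = least cited)."""
    ranked = sorted(field_citations)
    below = bisect_left(ranked, citations)
    return 100.0 * below / len(ranked)

# Hypothetical author papers: (id, year, citations).
papers = [("P1", 2019, 12), ("P2", 2020, 3), ("P3", 2020, 40)]
# Hypothetical field baselines: citation counts for all papers per year.
field = {2019: [0, 2, 5, 8, 12, 20, 33],
         2020: [1, 3, 4, 9, 15, 40, 60]}

beam = {pid: citation_percentile(c, field[yr]) for pid, yr, c in papers}
```

The point of the plot is that a year of lightly cited papers in a lightly cited field can still sit high in its own baseline, which a raw H-index hides.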
They suggest a journal citation report, which puts journal impact factors into a broader context.
To measure research-group performance, they call for an impact profile, which measures citation counts against research averages in the field. This is intended to reveal performance differences between research teams that have similar raw citation profiles but work in different areas of the same broad field.
For universities, they advocate a “research footprint” map instead of a league-table ranking. “Any institution scores better on some parameters and less well on others, continuously varying its position relative to others. A global university ranking may be fun, but it is only a reference point. It hides far too much detail even for careful short-listing for students, let alone as a tool to inform management.”
The overall intent is to use visuals to display comparisons of data that are lost in simple rankings. “There is no sensible way to compare two complex research systems with a single number: it’s a bit more complicated than that!”