Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Saturday, August 24, 2019
Seven modest suggestions for Times Higher
The latest fabulous dynamic exciting trusted prestigious sophisticated etc etc Times Higher Education (THE) world academic summit is coming.
The most interesting, or at least the most amusing, event will probably be the revelation of the citations indicator which supposedly measures research impact. Over the last few years this metric has discovered a series of unexpected world-class research universities: Alexandria University, Tokyo Metropolitan University, Anglia Ruskin University, the University of Reykjavik, St. George's London, Babol Noshirvani University of Technology, Brighton and Sussex Medical School. THE once called this their flagship indicator but oddly enough they don't seem to have got round to releasing it as a standalone ranking.
But looking at the big picture, THE doesn't appear to have suffered much, if at all, from the absurdity of the indicator. The great and the good of the academic world continue to swarm to THE summits where they bask in the glow of the charts and tables that confirm their superiority.
THE have hinted that this summit will see big reforms to the rankings, especially the citations indicator. That would certainly improve the rankings' credibility, although it might also make them less interesting.
I have discussed THE's citation problems here, here, here, and here. So, for one last time I hope, here are the main flaws, and we will see whether THE fixes them.
1. A 30% weighting for any single indicator is far too high. It would be much better to reduce it to 10 or 20%.
2. Using only one method to measure citations is not a good idea. Take a look at the Leiden Ranking and play around with the settings and parameters. You will see that you can get very different results with just a bit of tweaking. It is necessary to use a variety of metrics to get a broad picture of research quality, impact and influence.
3. THE have a regional modification or country bonus that divides the research impact score of universities by the square root of the scores of the country where they are located. The effect of this is to increase the score of every university except those in the top-ranking country, with the increase being greater for those with worse research records. This applies to half of the indicator and is supposed to compensate for some researchers lacking access to international networks. For some reason this was never a problem for the publications, income or international indicators. Removing the bonus would do a lot to make the metric more credible (a rough sketch of the arithmetic follows the list).
4. The indicator is over-normalized. Impact scores are benchmarked to the world average for over three hundred fields plus year of publication. The more fields, the greater the chance that a university can benefit from an anomalous paper that receives an unusually high number of citations. It would help if THE reduced the number of fields, although that seems unlikely (the second sketch below shows how little it takes).
5. Unless a paper has over a thousand authors, THE treat every single contributor as receiving every single citation. Above that number they use fractional counting. The result is that the THE rankings privilege medical institutions such as St George's and the Brighton and Sussex Medical School that take part in multi-author projects such as the Global Burden of Disease study. All-round fractional counting would seem the obvious answer, although it might add a bit to costs (the third sketch below shows the difference it makes).
6. Self-citation has become an issue recently. THE have said several times that it doesn't make very much difference. That may be true, but there have been occasions when a single serial self-citer has made a university like Alexandria or Veltech soar into the research stratosphere, and that could happen again.
7. A lot of researchers are adding additional affiliations to their names when they publish. Those secondary, tertiary, and sometimes further affiliations are counted by rankers as though they were primary affiliations. It would make sense to count only primary affiliations, as ARWU does with highly cited researchers.
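To make point 3 concrete, here is a rough sketch of how a square-root country adjustment of that kind behaves. The numbers, the rescaling against the top country, and the fifty-fifty blend are my own illustration of the published description, not THE's actual code.

```python
from math import sqrt

def country_adjusted(university_score, country_score, top_country_score):
    """Blend a raw citation score with a version divided by the square root
    of the university's country score (rescaled so the top-ranking country
    gets no bonus). Purely illustrative."""
    bonus = university_score / sqrt(country_score / top_country_score)
    # The adjustment is said to apply to half the indicator, hence the average.
    return (university_score + bonus) / 2

# A middling university in a weak research country gains far more than
# the same university would in the top-ranking country.
print(country_adjusted(40, country_score=25, top_country_score=100))   # 40 -> 60.0
print(country_adjusted(40, country_score=100, top_country_score=100))  # 40 -> 40.0
```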
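Point 4 is easier to see with a toy calculation: each paper's citations are divided by the world average for its field and year, and the results are averaged, so one freak paper in one of the three hundred-odd cells can drag a small university's score a long way. The figures below are invented.

```python
# Each paper: (its citations, world average citations for its field and year).
papers = [(2, 4.0), (3, 4.0), (1, 2.0), (250, 5.0)]   # the last one is the freak

normalized = [cites / world_avg for cites, world_avg in papers]
print(sum(normalized) / len(normalized))               # ~12.9, driven by one paper

without_outlier = normalized[:-1]
print(sum(without_outlier) / len(without_outlier))     # ~0.58, a very ordinary record
```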
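And point 5: under full counting every contributing institution is credited with all of a paper's citations, so a small school on a couple of mega-author papers can rack up an enormous total; fractional counting divides the credit instead. A rough sketch with made-up numbers, splitting by the number of contributing institutions for simplicity:

```python
# Papers from a hypothetical small institution:
# (citations, number of contributing institutions)
papers = [(3000, 800), (2500, 600), (10, 2)]

full_count = sum(cites for cites, _ in papers)                            # 5510
fractional = sum(cites / institutions for cites, institutions in papers)  # ~12.9

print(full_count, round(fractional, 1))
```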