Times Higher Education (THE) has now officially announced the methodology for next week's World University Rankings. There are some changes, although major problems remain unaddressed.
First, THE is now getting data from Scopus rather than Thomson Reuters. The Scopus database is more inclusive -- it covers 22,000 publications compared with Thomson Reuters' 12,000 -- and it includes more papers from non-English-speaking countries, so this may give an advantage to some universities in Eastern Europe and Asia.
Second, THE has tried to make its reputation survey more inclusive, making forms available in an additional six languages and reducing the bias towards the USA.
Third, 649 papers with more than 1,000 listed authors, mainly in physics, will not be counted for the citations indicator.
Fourth, the citations indicator will be divided into two halves with equal weighting. One half will be calculated with, and one half without, the "regional modification" by which a university's overall citation impact score is divided by the square root of the score for the country in which it is located. In previous editions of these rankings this modification gave a big boost to universities in low-scoring countries such as Chile, India, Turkey and Russia.
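To see why the full-strength regional modification helped those countries so much, here is a minimal sketch of the arithmetic as described above; the function name and the numbers are my own illustration, not THE's data.

```python
import math

def regional_modification(university_score: float, country_score: float) -> float:
    """Divide a university's citation impact score by the square root of
    the score for its country, as the 'regional modification' described
    above does (both scores on the same 0-1 scale)."""
    return university_score / math.sqrt(country_score)

# Invented numbers, not actual THE data: a university with a raw citation
# impact of 0.40 in a country whose score is 0.25 is boosted to
# 0.40 / sqrt(0.25) = 0.80, while the same raw score in a country
# scoring 1.00 stays at 0.40.
print(regional_modification(0.40, 0.25))  # 0.8
print(regional_modification(0.40, 1.00))  # 0.4
```

The lower the country score, the larger the boost, which is exactly why halving the weight of the modification hurts universities in those countries.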
It is likely that institutions such as Bogazici University, Panjab University, Federico Santa Maria Technical University and Universite Cadi Ayyad, which have benefited from contributing to mega-papers such as those emanating from the Large Hadron Collider project, will suffer from the exclusion of these papers from the citations indicator. Their pain will be increased by the dilution of the regional modification.
Such places may get some compensation in the form of more responses in the reputation survey or higher publication counts in the Scopus database, but that is far from certain. I suspect that several university administrators are going to be very miserable next Thursday.
There is something else that should not be forgotten. The scores published by THE are not raw data but standardised scores derived from means and standard deviations. Since THE is including more universities in this year's rankings, and since most of them are likely to have low scores for most indicators, the overall mean scores of ranked universities will fall. This will have the effect of raising the standardised scores of the 400 or so universities that score above the mean. This effect is likely to vary from indicator to indicator, so the final overall scores will be even more unpredictable and volatile.
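Here is a minimal sketch of why this happens, assuming plain z-score standardisation (the details of THE's own procedure may differ): adding newly ranked low scorers drags the pool mean down, so an unchanged raw score sits further above it.

```python
import statistics

def standardise(scores):
    """Convert raw indicator scores to standardised (z) scores by
    subtracting the mean and dividing by the standard deviation of the
    whole ranked pool."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return [(score - mean) / sd for score in scores]

# Invented raw scores for a small pool of ranked universities.
last_year = [90, 80, 70, 60, 50]
# The same universities plus newly ranked low scorers; the top
# university's raw score of 90 has not changed at all.
this_year = last_year + [30, 25, 20]

print(round(standardise(last_year)[0], 2))  # about 1.41
print(round(standardise(this_year)[0], 2))  # about 1.50, so the unchanged score looks like progress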
Too bad that they still cannot get rid of the subjective "reputation survey".
Now that the results are out, it seems that rather many universities have failed to understand the basic point about the scores being standardised scores derived from standard deviations and means: they proudly announce that although their ranking position may not have changed from last year, at least the scores show that the university is moving forward!