Wednesday, October 02, 2013

Waiting for the Rankings

There are hints from the twittersphere that the latest Times Higher Education - Thomson Reuters World University Rankings will have something interesting. Will Caltech remain at number one? Will a couple of institutions from the BRICs fall out of the elite? Will Japan remain on top in Asia?

At first sight it would be odd if there were any significant changes since last year. The rankings suck in data on students, faculty, publications, income and citations from thousands of universities, and it is unlikely that any of these could change dramatically over twelve months or less. Moreover, there have been no methodological changes. So how could there be any changes worth a headline or two?

There is, first of all, the influence of what might be called the dark rankings: those universities that do not appear in the top 200 or the top 400 but are nevertheless influential because they contribute to the mean scores against which the elite places are benchmarked.

The overall scores and the indicator scores given by THE to the elite universities do not represent absolute numbers but the distance from the average of all universities in the Thomson Reuters database. If the database expands and the new arrivals tend on average to perform less well than those already there then the mean scores of the elite would rise even if everything else remained unchanged.
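The benchmarking effect described above can be illustrated with a minimal sketch. This assumes, for simplicity, that scores are standardized as plain z-scores against the whole-database mean; the actual THE/Thomson Reuters method (cumulative probability scoring) differs in detail, but it shares the same dependence on the database average, so the direction of the effect is the same. All numbers here are invented.

```python
def z_scores(values):
    """Standardize each value against the mean and std dev of the list."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# An elite university's raw citation score stays fixed at 90
# while the database expands with weaker new arrivals.
original_db = [90, 70, 60, 50, 40]
expanded_db = original_db + [20, 15, 10]

before = z_scores(original_db)[0]
after = z_scores(expanded_db)[0]
print(before, after)
assert after > before  # elite score rises with no change in performance
```

The elite university's standardized score rises purely because the new entrants drag the database mean down, which is the mechanism by which an expanding database can reshuffle the published table.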

If all the indicator scores rose at exactly the same rate, it would make no real difference. But if they did not, an expanding database could cause rises or falls in the rankings without any significant change in performance. In 2011 and 2012 it was noticeable that the mean scores of universities included in the rankings were higher for citations than for any other indicator, and that the gap increased between the two years. This was most probably because the gap between the elite and the also-rans in the database was greater for citations than for the other indicators.

If there has been a further influx of new institutions into the database this year then it might further benefit those universities that perform better for citations than for the other indicators.

We should also not forget the impact of TR's "regional modification", which rewards universities for being in a country that performs poorly for citations: a university's citation impact score is divided by the square root of the citation impact for the country as a whole.

If there is a decline in the citations of papers from a particular country then a university whose performance remains unchanged would get a boost because it is being compared to a lower national average. Equally, a rise in national performance could lead to a fall in the scores of flagship universities.
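The direction of this effect can be checked with a short sketch of the regional modification as described above: dividing a university's citation impact by the square root of the national figure. The numbers are invented for illustration.

```python
def modified_score(university_impact, country_impact):
    """Apply the regional modification: divide by the square root
    of the country's overall citation impact."""
    return university_impact / country_impact ** 0.5

uni = 1.2  # the university's own citation impact, held constant

# The same university scores higher when its country's average falls,
# and lower when the national average rises.
falling = modified_score(uni, 0.6)
baseline = modified_score(uni, 0.9)
rising = modified_score(uni, 1.2)
assert falling > baseline > rising
```

So a flagship university can gain or lose ground through nothing but a shift in its national average, exactly the seesaw described in the paragraph above.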

So we shall have to wait until tonight to see what happens.




