This year and next, the international university rankings appear set for unusual volatility, with large upward and downward movements, partly as a result of changes to the methodology for counting citations in the QS and THE rankings.
ARWU
The global ranking season kicked off last week with the publication of the latest edition of the Academic Ranking of World Universities from the ShanghaiRanking Consultancy (SRC), which I hope to discuss in detail in a little while. These rankings are rather dull and boring, which is exactly what they should be. Harvard is, as always, number one for all but one of the indicators. Oxford has slipped from joint ninth to tenth place. Warwick has leaped into the top 100 by virtue of a Fields medal. At the foot of the table there are new contenders from France, Korea and Iran.
Since they began in 2003 the Shanghai rankings have been characterised by a generally stable methodology. In 2012, however, they had to deal with the recruitment of an unprecedentedly large number of adjunct faculty by King Abdulaziz University. Previously SRC had simply divided the credit for the Highly Cited Researchers indicator equally among all institutions listed as affiliations. In 2012 and 2013 they wrote to all highly cited researchers with joint affiliations and thus determined the division of credit between primary and secondary affiliations. Then, in 2014 and this year, they combined the old Thomson Reuters list, first issued in 2001, with the new one, issued in 2014, and excluded all secondary affiliations in the new list.
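To make the bookkeeping concrete, here is a minimal Python sketch of the two crediting approaches, using invented affiliations; SRC's actual procedure, which involved surveying the researchers themselves, is of course more involved than this.

```python
# Hypothetical illustration: crediting a highly cited researcher who
# lists one primary and one secondary affiliation.

affiliations = ["University A", "University B"]  # invented; A is primary

# Old approach: credit divided equally among all listed affiliations.
equal_split = {inst: 1 / len(affiliations) for inst in affiliations}
print(equal_split)  # {'University A': 0.5, 'University B': 0.5}

# Approach for the new (2014) list: secondary affiliations excluded,
# so the primary affiliation receives all the credit.
primary = "University A"
primary_only = {inst: 1.0 if inst == primary else 0.0 for inst in affiliations}
print(primary_only)  # {'University A': 1.0, 'University B': 0.0}
```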
The result was that in 2014 the rankings showed an unusual degree of volatility, although this year things are a lot more stable. My understanding is that Shanghai will move to counting only the new list next year, again without secondary affiliations, so there should be a lot of interesting changes then. It looks as though Stanford, Princeton, University of Wisconsin -- Madison, and Kyoto University will suffer because of the change, while University of California Santa Cruz, Rice University, University of Exeter and University of Wollongong will benefit.
While SRC has efficiently dealt with the issue of secondary affiliation with regard to its Highly Cited indicator, the issue has now resurfaced in the unusually high scores achieved by King Abdulaziz University for publications, largely because of its adjunct faculty. Expect more discussion over the next year or so. It would seem sensible for SRC to consider a five- or ten-year period rather than one year for their Publications indicator, and academic publishers, the media and rankers in general may need to give some thought to the proliferation of secondary affiliations.
QS
On July 27 Quacquarelli Symonds (QS) announced that for 18 months they had been thinking about normalising the counting of citations across five broad subject areas. They observed that a typical institution would receive about half of its citations from the life sciences and medicine, over a quarter from the natural sciences, but just 1% from the arts and humanities.
In their forthcoming rankings QS will assign a 20% weighting for citations to each of the five subject areas, something that, according to Ben Sowter, Research Director at QS, they have already been doing for the academic opinion survey.
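The precise formula has not been spelled out, but a minimal sketch of one plausible reading, in which each institution's citations in an area are set against the average for that area and the five resulting ratios are given equal weight, might look like this (all figures invented):

```python
# Sketch of equal 20% weighting of citations across five broad subject
# areas: per-area counts are normalised against the area average, so
# medicine's huge raw totals no longer dominate the indicator.

AREAS = ["arts_humanities", "engineering_technology",
         "life_sciences_medicine", "natural_sciences", "social_sciences"]

def normalised_citation_score(inst_citations, area_averages):
    """Both arguments map area -> citation count; returns the mean of
    the five per-area ratios, i.e. 20% weight for each area."""
    return sum(inst_citations[a] / area_averages[a] for a in AREAS) / len(AREAS)

# A humanities-strong institution (invented figures):
inst = {"arts_humanities": 500, "engineering_technology": 2000,
        "life_sciences_medicine": 10000, "natural_sciences": 6000,
        "social_sciences": 1500}
avg = {"arts_humanities": 200, "engineering_technology": 3000,
       "life_sciences_medicine": 25000, "natural_sciences": 12000,
       "social_sciences": 1000}
print(normalised_citation_score(inst, avg))  # ~1.11, despite modest raw totals
```

Under raw counting this institution's strong arts and humanities performance would be invisible next to its medical citations; here it contributes a full fifth of the score.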
It would seem, then, that there are likely to be some big rises and big falls this September. I would guess that places strong in the humanities, social sciences and engineering, like LSE, New York University and Nanyang Technological University, may go up, and that some of the large US state universities and Russian institutions may go down. That is only a guess, because it is difficult to tell what will happen with the academic and employer surveys.
QS have also made an attempt to deal with the issue of hugely cited papers with hundreds, even thousands, of "authors" -- contributors would be a better term -- mainly in physics, medicine and genetics. Their approach is to exclude all papers with more than 10 contributing institutions, that is, 0.34% of all publications in the database.
This is rather disappointing. Papers with huge numbers of authors and citations obviously do have distorting effects but they have often dealt with fundamental and important issues. To exclude them altogether is to ignore a very significant body of research.
The obvious solution to the problem of multi-contributor papers is fractional counting: dividing each paper's citations by its number of contributors or contributing institutions. QS claim that to do so would discourage collaboration, which does not sound very plausible.
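A toy example, with invented figures, shows what is at stake:

```python
# Full counting gives every contributing institution all of a paper's
# citations; fractional counting divides the citations by the number
# of contributing institutions. Tuples are hypothetical:
# (citations, number_of_contributing_institutions).
papers = [(5000, 300),  # a mega-collaboration physics paper
          (40, 2),      # a typical two-institution paper
          (15, 1)]      # a single-institution paper

full_count = sum(cites for cites, n_inst in papers)
fractional = sum(cites / n_inst for cites, n_inst in papers)

print(full_count)  # 5055 -- dominated by the 300-institution paper
print(fractional)  # ~51.7 -- the mega-paper still counts, proportionately
```

The mega-collaboration is not ignored, as it is under the QS and THE exclusions; it simply contributes in proportion to each institution's share.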
In addition, QS will likely extend the life of survey responses from three to five years. That could make the rankings more stable by smoothing out annual fluctuations in survey responses and reduce the volatility caused by the proposed changes in the counting of citations.
The shift to a moderate version of field normalisation is helpful as it will reduce the undue privilege given to medical research, without falling into the huge problems that result from using too many categories. It is unfortunate, however, that QS have not taken the plunge into fractional counting. One suspects that technical problems and financial considerations might be as significant as the altruistic desire not to discourage collaboration.
After a re-sorting in September the QS rankings are likely to become a bit more stable and credible, but their most serious problem, the structure, validity and excessive weighting of the academic survey, has still not been addressed.
THE
Meanwhile, Times Higher Education (THE) has also been grappling with the issue of authorship inflation. Phil Baty has announced that this year 649 papers with over 1,000 authors will be excluded from their calculation of citations because "we consider them to be so freakish that they have the potential to distort the global scientific landscape".
But it is not the papers that do the distorting. It is the methodology. THE and their former data partners Thomson Reuters, like QS, have avoided fractional counting (except for a small experimental African ranking), and so every one of those hundreds or thousands of authors gets full credit for the hundreds or thousands of citations. This has given places like Tokyo Metropolitan University, Scuola Normale Superiore Pisa, Universite Cadi Ayyad in Morocco and Bogazici University in Turkey remarkably high scores for Citations: Research Impact, much higher than their scores for the bundled research indicators.
THE have decided simply to exclude 649 papers, or 0.006% of the total, from their calculations for the world rankings. This is a far smaller exclusion than QS's. Again, it is a rather crude measure. Many of the "freaks" are major contributions to advanced research and deserve to be acknowledged by the rankings in some way.
THE did use fractional counting in their recent experimental ranking of African universities, and Baty indicates that they are considering doing so more widely in the future.
It would be a big step forward for THE if they introduced fractional counting of citations. But they should not stop there. There are other bugs in the citations indicator that ought to be fixed.
First, it does not at present measure what it is supposed to measure: a university's overall research impact. At best, it is a measure of the average quality of research papers, no matter how few of them (above a certain threshold) there are.
Second, the "regional modification", which divides the university citation impact score by the square root of the the score of the country where the university is located, is another source of distortion. It gives a bonus to universities simply for being located in underperforming countries. THE or TR have justified the modification by suggesting that some universities deserve compensation because they lack funding or networking opportunities. Perhaps they do, but this can still lead to serious anomalies.
Third, THE need to consider whether they should assign citations to so many fields, since this increases the distortions that can arise when there is a highly cited paper in a normally lowly cited field.
Fourth, should they assign a thirty per cent weighting to an indicator that may be useful for distinguishing between the likes of MIT and Caltech but may be of little relevance for the universities that are now signing up for the world rankings?
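To see why the regional modification worries me, here is the arithmetic of the adjustment as described above, with invented scores:

```python
# The "regional modification": a university's citation impact score is
# divided by the square root of its country's overall score. All
# figures below are invented for illustration.
import math

def regionally_modified(university_score, country_score):
    return university_score / math.sqrt(country_score)

# The same citation impact, 0.5, in two different countries:
print(regionally_modified(0.5, 0.25))  # 1.0   -- weak country, big boost
print(regionally_modified(0.5, 0.81))  # ~0.56 -- strong country, little change
```

Two universities producing identical research thus end up with very different citation scores, purely because of where they happen to be located.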