Wednesday, June 22, 2016
THE's bespoke Asian rankings: the strange decline of the University of Tokyo and the rise of Singapore
Times Higher Education (THE), in conjunction with its prestigious summit in Hong Kong, has revealed this year's Asian University Rankings, which use essentially the same methodology as the world rankings but with some recalibration.
The most noticeable aspect of the new rankings is that the University of Tokyo (UT), which was first in 2013, 2014 and 2015, has now suddenly dropped to seventh place, behind the National University of Singapore (NUS) in first place, Nanyang Technological University (NTU) in Singapore, up from tenth to second, Peking University, the University of Hong Kong, Tsinghua University and Hong Kong University of Science and Technology.
Tokyo is not the only Japanese university to suffer in these rankings. Tokyo Institute of Technology has gone from 15th last year to 24th, Osaka University from 18th to 30th and Tokyo Metropolitan University from 33rd to 52nd.
The rise of NTU and the fall of Tokyo need some explanation. When we are talking about institutions with thousands of students and faculty that produce thousands of papers, citations and patents, it is not good enough to say that one has been investing and networking and the other has not. The path from budget allocations, through research proposals, to publication and citation usually takes closer to a decade than a year.
Let's take a look at the details. Between 2015 and this year UT suffered a modest fall for teaching (a cluster of five indicators), international outlook and industry income; a substantial fall of 5.6 points for research (a cluster of three indicators); and a large fall, from 76.1 to 67.8 points, for field-normalised citations.
Evidently the methodological changes introduced last year by THE and Elsevier, their new data partner, have had an effect on UT's citations indicator score. The changes were: excluding papers with very large numbers of authors, mostly in physics; switching from the Web of Science to Scopus as the source of data on papers and citations; and reducing the impact of the "regional modification" that awards a bonus to universities in countries with a low citation impact.
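To make the mechanics a little more concrete, here is a minimal sketch of how a field-normalised citation score and a regional bonus of the kind described above could work. The function names, the example baselines, the square-root country adjustment and the 50/50 blend are illustrative assumptions, not THE's or Elsevier's actual formulas.

```python
# Illustrative sketch only: the baselines, the square-root country adjustment
# and the blend below are assumptions, not THE's or Elsevier's actual method.

def field_normalised_impact(citations, expected_citations):
    """Average of citations per paper divided by the expected (world average)
    citations for papers of the same field, year and document type."""
    return sum(c / e for c, e in zip(citations, expected_citations)) / len(citations)

def regional_modification(score, country_average_impact, blend=0.5):
    """Boost universities in countries with a low overall citation impact by
    blending the raw score with a country-adjusted one. A smaller blend
    weakens the bonus, which is the direction of the 2015 change."""
    adjusted = score / (country_average_impact ** 0.5)
    return (1 - blend) * score + blend * adjusted

# Example: per-paper citation counts and matching field/year baselines for a
# university in a country whose papers average 0.6x the world citation impact.
raw = field_normalised_impact([12, 3, 0, 9], [10.0, 2.5, 1.0, 6.0])
print(regional_modification(raw, 0.6))
```

Excluding a handful of very highly cited multi-author physics papers from the numerator of a calculation like this is exactly the kind of change that can move a score by several points, which is consistent with the falls described above.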
Meanwhile NUS rose 4.7 points for citations and NTU 9.7 points. It would seem, then, that these changes contributed significantly to Tokyo's decline and to the ascent of NUS and, even more so, that of NTU.
There is another factor at work. THE have told us that they did some recalibration, that is, changing the weighting of the indicators. They reduced the weighting of the teaching reputation survey from 15% to 10% and that of the research reputation survey from 18% to 15%. The weighting for research productivity and for research income was increased from 6% to 7.5% each, and that for income from industry from 2.5% to 7.5%.
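A rough sense of how much recalibration alone can move a composite score comes from applying the old and new weightings to a fixed set of indicator scores. In the sketch below only the weightings come from THE's published scheme; the indicator scores are invented purely for illustration.

```python
# Old and new weightings for the recalibrated indicators only; the remaining
# weight sits in the other indicators and is held constant here.
OLD = {"teaching_reputation": 0.15, "research_reputation": 0.18,
       "research_productivity": 0.06, "research_income": 0.06,
       "industry_income": 0.025}
NEW = {"teaching_reputation": 0.10, "research_reputation": 0.15,
       "research_productivity": 0.075, "research_income": 0.075,
       "industry_income": 0.075}

def partial_composite(scores, weights):
    """Weighted contribution of the recalibrated indicators to the overall score."""
    return sum(weights[k] * scores[k] for k in weights)

# Invented indicator scores: a university strong on reputation but weak on
# industry income versus the reverse profile.
reputation_heavy = {"teaching_reputation": 80, "research_reputation": 85,
                    "research_productivity": 70, "research_income": 75,
                    "industry_income": 50}
industry_heavy = {"teaching_reputation": 40, "research_reputation": 45,
                  "research_productivity": 70, "research_income": 75,
                  "industry_income": 100}

for name, s in [("reputation-heavy", reputation_heavy),
                ("industry-heavy", industry_heavy)]:
    print(name, round(partial_composite(s, OLD), 1),
          "->", round(partial_composite(s, NEW), 1))
```

On these invented numbers the reputation-heavy profile loses about two composite points from the recalibration while the industry-heavy profile gains nearly four, without any change in the underlying data.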
So why did THE do this? It seems that it was done after consulting with Asian universities because "many Asian institutions have only relatively recently arrived on the world stage, with investment focused on recent decades, so have had less time to accumulate reputation around the world."
But one could say something similar about all the indicators: Asian universities have only recently arrived on the world stage and so have had less time to accumulate research funds or research expertise, build up their faculty, develop international networks and so on.
And why give the large extra weighting to industry income? Because "many Asian nations have put their universities at the forefront of economic growth plans, where industry links are crucial". Perhaps some countries have plans where industry links are not crucial, or perhaps other criteria are equally or more important. In any case, industry income is a very questionable indicator. Alex Usher of Higher Education Strategy Associates has already pointed out some of its flaws.
Anyway, whatever THE's ostensible reasons for this recalibration, the consequences are quite clear. Taking weight away from the reputation surveys has worked to the disadvantage of UT, which in THE's 2015 reputation ranking had scores of 18.0 for teaching reputation and 19.8 for research reputation, and in favor of NUS, which had scores of 9.2 and 10.9. The scores for NTU, the University of Hong Kong and Hong Kong University of Science and Technology are much lower and are withheld. It is not clear what the exact effect is, since this year the reputation scores are subject to an "exponential component" which has presumably reduced the spread of scores and therefore UT's advantage.
It is not possible to determine the effect of giving extra weighting to research productivity and research income since these are bundled with other indicators.
Giving a greater weight to industry income has hurt UT, which has a score of only 50.8, and helped NTU with a score of 99.9, the University of Hong Kong with a perfect score of 100 and Hong Kong University of Science and Technology with a score of 68.1.
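A back-of-the-envelope calculation shows the size of this effect, assuming the published industry income scores feed straight into the composite at the stated weightings; that simplifying assumption is mine, not THE's description of its method.

```python
# Back-of-the-envelope estimate using the indicator scores quoted above and
# the stated change in the industry income weighting (2.5% -> 7.5%).
scores = {"Tokyo": 50.8, "NTU": 99.9, "HKU": 100.0, "HKUST": 68.1}
for name, s in scores.items():
    old, new = 0.025 * s, 0.075 * s
    print(f"{name}: {old:.1f} -> {new:.1f} composite points (+{new - old:.1f})")

# On this simple reckoning the gap between NTU and Tokyo from this indicator
# alone grows from about 1.2 to about 3.7 composite points.
```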
It appears that Japanese universities do relatively badly in these rankings, and those in Singapore and Hong Kong do so well, largely because of last year's changes in the collection and processing of citations data and this year's recalibration of the indicator weightings.
The co-host of the Asian summit was Hong Kong University of Science and Technology and the list of "prestigious university leaders from around the world" includes those from Hong Kong, Singapore and China but not from Japan.