Monday, July 11, 2016

More on THE’s bespoke rankings






Times Higher Education (THE) have just announced another regional ranking, this time for Latin America, at another prestigious summit in Bogota, Colombia.

It seems that THE have entered a stage of imperial overreach, announcing new projects, publishing various indicators as stand-alone rankings and moving into previously unranked corners of the world. They have tried rankings for the BRICS Plus countries, Africa, the Middle East and North Africa (MENA), Asia and Latin America, and apparently have plans to enter the lucrative US college ranking business and the UK teaching-orientated market.

The venture into regional rankings has been accompanied by a noticeable tendency to tailor the results to the benefit of the hosts of their summit series, which is presumably what they mean by bespoke.

The MENA universities summit in Qatar in February 2015 was introduced by a single-indicator (citations) “snapshot” ranking which put Texas A&M University at Qatar, a branch campus that offered nothing but engineering courses, at the top. In case anyone is wondering, this was the result of a single faculty member, jointly appointed with the mother campus in Texas, being on the list of authors of a hugely cited physics paper. Qatar University was fourth. If the Teaching or Research indicator cluster had been used, the ranking would have been rather different.

In this snapshot, United Arab Emirates University was 11th and the American University of Sharjah 17th.

In January 2016 THE produced another ranking for the MENA summit held at the United Arab Emirates University in Al Ain, UAE, in February. This time THE simply used all of the indicators in the world rankings, not just the citations indicator. The UAE University was fifth and the American University of Sharjah eighth. Texas A&M University was not included and Qatar University was sixth.

THE then went to Africa for a summit at the University of Johannesburg. Once again they produced a citations-based ranking, but this time they used fractional counting, dividing the citations to mega-papers among the contributing researchers and institutions. When the ranking was announced, the University of Johannesburg was in ninth place, ahead of Cadi Ayyad University of Marrakech in Morocco, which was a participant in various large-scale physics projects. If THE had calculated citations without using fractional counting, as it did in the 2014-15 world rankings, then Cadi Ayyad would have been second in Africa, and if they had used the overall results of the world rankings it would have been fourth.

At the next African summit at the University of Ghana in April 2016 THE used all the indicators of their world rankings without further adjustment. The world ranking methodology had been changed in 2015 to dilute the distorting effects of the regional modification to the citations indicator, and papers with more than 1,000 authors were no longer counted.

In these rankings Cadi Ayyad was outscored by the University of Ghana, the local flagship. Using the world ranking methodology the University of Ghana went from the twelfth place it was given at the Johannesburg summit to seventh.

Next we come to the recent Asian University Rankings announced by THE at a summit held at the Hong Kong University of Science and Technology.

The big surprise of these rankings was the fall of the University of Tokyo (Todai) from first to seventh place, behind the National University of Singapore, Nanyang Technological University (NTU), which rose from tenth, Peking University, the University of Hong Kong, Tsinghua University and the Hong Kong University of Science and Technology (HKUST).

The fall of Tokyo seems implausible. In the research-based Shanghai ranking the University of Tokyo has always been in first place among Asian universities, and last year the top six in Asia were all Japanese or Israeli institutions. There was no Hong Kong university in the top 150 and HKUST was behind at least 26 other Asian universities. Singaporean institutions also trailed behind the leading Japanese universities. Tokyo was also the top Asian university in the URAP and CWUR rankings.

In addition, Tokyo had been top of THE’s Asian rankings ever since they started, and it is hard to see how Tokyo could fall and NTU rise so quickly. Universities are big things, often with tens of thousands of students, thousands of faculty, millions of dollars and thousands of papers, citations and patents. Changes on this scale are usually associated with changes in methodology or, occasionally, with mergers or structural reorganisation.

Early this year there was an important seminar at Ural Federal University in Ekaterinburg that discussed the consequences of changes in ranking methodology. The latest edition of the THE Asian university rankings provides a perfect example of the damage that such changes can do to some universities and the benefits they can confer on others.

THE has informed the University of Tokyo and other Japanese universities that the reason they are falling in the rankings is that they are not international enough and are not funded well enough. THE is not being entirely disinterested here: its world rankings have three indicators that measure income and three that measure international orientation in various ways.

And it does seem strange that after cruising along for a few years at the top of the Asian charts Todai should suddenly plunge to seventh place, overtaken by two Chinese, two Singaporean and two Hong Kong universities, one of which was summit host HKUST.

What happened was that in 2015 and 2016 THE made a number of methodological decisions that worked to the advantage of universities in Hong Kong and Singapore and to the disadvantage of those in Japan, especially the University of Tokyo.

First, there were several methodological changes to the 2015 world rankings. The first of these was the exclusion of multinational papers with more than 1,000 authors. These had been a major problem for the THE rankings: in combination with some other features of the citations indicator, they meant that a university with a few hyper-papers and a low total paper count could soar to the top of the citations chart. Over the years a succession of unlikely institutions have been proclaimed world or regional leaders for research impact: Alexandria University, Moscow State Engineering Physics Institute, Rice University, Tokyo Metropolitan University, Federico Santa Maria Technical University, Scuola Normale Superiore di Pisa.
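
To see the arithmetic behind this, here is a minimal sketch with invented numbers. It uses a simple citations-per-paper average rather than THE's actual field-normalised indicator, so it only illustrates the mechanism: when a hyper-paper's citations are credited in full, one paper can swamp a small institution's average.

```python
# Toy illustration of full counting of a hyper-paper's citations.
# All numbers are invented; THE's real indicator is field-normalised.

def citations_per_paper(papers):
    """Average citations per paper, crediting each paper's full count."""
    return sum(citations for citations, _ in papers) / len(papers)

# Each paper is (citations, number_of_authors).
big_university = [(10, 3)] * 200                    # 200 solid papers
small_university = [(2, 2)] * 4 + [(3000, 2000)]    # 4 minor papers + 1 hyper-paper

print(citations_per_paper(big_university))    # 10.0
print(citations_per_paper(small_university))  # 601.6 -- one paper dominates
```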

It was a good idea for THE to do something about the multi-author problem, but the obvious solution was to introduce fractional counting of citations, so that if a university contributed one out of 100 authors to a paper it would get 1/100th of the total citation count. This is a perfectly feasible option: it has been done by the Leiden Ranking, recently by the US News subject rankings, and by THE itself for the 2015 Africa ranking.
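
A minimal sketch of that fractional option, using the same invented numbers as above (real schemes such as Leiden's apportion credit by institution and differ in detail):

```python
# Fractional counting: divide each paper's citations by its author count
# before averaging, assuming the university supplied one author per paper.
# Numbers are invented for illustration.

def fractional_citations_per_paper(papers):
    """Average of citations / number_of_authors over all papers."""
    return sum(citations / authors for citations, authors in papers) / len(papers)

big_university = [(10, 3)] * 200                    # 3-author papers
small_university = [(2, 2)] * 4 + [(3000, 2000)]    # includes the hyper-paper

print(fractional_citations_per_paper(big_university))    # ~3.33
print(fractional_citations_per_paper(small_university))  # 1.1
# The hyper-paper's 3,000 citations are split 2,000 ways, contributing only
# 1.5 citations, so it no longer swamps the average.
```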

The solution chosen by THE worked to the disadvantage of some Japanese universities that had contributed to such mega-projects, especially Tokyo Metropolitan University, which had a perfect score for research impact in 2014, something it liked to brag about in adverts. In contrast, universities in Hong Kong and Singapore did better for citations in 2015 because they were not involved in such projects.

Something else that helped universities in Hong Kong was that in 2015 THE started counting students and faculty from mainland China as international when they went to Hong Kong, which boosted the international outlook scores for Hong Kong universities. Peter Mathieson of the University of Hong Kong noticed and warned everybody not to get too excited.

In addition to this, THE has, as noted in an earlier post, recalibrated its world ranking indicators, reducing the weighting for the research and teaching reputation surveys, where Todai does very well, and increasing that for industry income, where Peking and Tsinghua have perfect scores of 100 and NTU, HKUST and the University of Hong Kong all do better than the University of Tokyo.
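
The effect of this kind of recalibration is easy to demonstrate with a toy composite score. The scores and weights below are invented, not THE's actual figures, but the same mechanism applies to the Latin America reweighting described below.

```python
# Toy illustration of how shifting weight between indicators reorders
# a ranking. Scores and weights are invented, not THE's actual values.

def composite(scores, weights):
    """Weighted sum of indicator scores; weights sum to 1."""
    return sum(scores[name] * weight for name, weight in weights.items())

todai = {"reputation": 95, "industry": 50, "other": 80}
rival = {"reputation": 70, "industry": 100, "other": 80}

old_weights = {"reputation": 0.35, "industry": 0.05, "other": 0.60}
new_weights = {"reputation": 0.25, "industry": 0.15, "other": 0.60}

print(composite(todai, old_weights), composite(rival, old_weights))  # 83.75 77.5
print(composite(todai, new_weights), composite(rival, new_weights))  # 79.25 80.5
# Shifting 10% of the weight from reputation to industry income flips the order.
```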

By the way, the speakers at the Asian summit included the heads of the University of Hong Kong, the National University of Singapore and the Hong Kong University of Science and Technology, but nobody from Japan.

And getting back to the Latin American summit in Colombia, THE did another bit of recalibration, lopping 10% off the citations indicator and giving it to the research and teaching indicators. The result was that Federico Santa Maria Technical University, Valparaiso, third in Latin America and best in Chile in the world rankings, was demoted to 13th place. The University of the Andes, Bogota, was, as one would now expect, tenth.

08/10/16 Updated to include a reference to the 2016 MENA summit.

Comments:

  1. Anonymous 3:49 AM

    Thanks for your wonderful article. Ha... I think this time THE went too far. This is not measuring each university's abilities or strengths; it is just a fraud. I mean it, seriously.
    They are trying to lure Asian youngsters to Hong Kong and Singapore. Singaporean universities are famous for taking out the most advertisement banners on THE's home page, and they participate in every THE programme. How pitiful and ridiculous that one nation's entire tertiary education is manipulated by a single education magazine company.
    We need to stop this.


  2. Anonymous 5:54 AM

    Hope SG and HK can catch some Nobel Laureates by spending more money on THE soon. :P

  3. Anonymous 4:33 PM

    I think your hypothesis only partially explains the case of Japan and Tokyo Metropolitan University. TMU's most cited papers (relating to the MEGA software) are among the most highly cited papers in the world, have only a few authors and would not have been part of THE's separate treatment of massively multi-authored papers. Somewhat counter-intuitively, THE's policy would actually make the relative impact of TMU's highly cited papers even higher, and this explains why TMU is still #1 in Japan (ignoring the Toyota Technological Institute).
    As massively multi-authored papers have authors from all over the world, not just Japan, THE's policy change does not explain why Japan has fallen. Additionally, Japanese universities tend to be large, and the number of massively multi-authored papers as a proportion of all publications is low.
    As is often the case, I think there are multiple factors at play, such as:
    - The change to the "regional modification" of citations since the 2015-16 rankings has impacted Japan negatively.
    - Japanese universities face genuine challenges with a lack of international diversity and budget constraints. The difference in the pace of research funding growth across Asia is apparent when looking at GERD and HERD figures from the OECD.
    - The tendency of Japanese researchers to publish a lot (possibly an influence of local research funding policies that encourage more publications rather than impactful ones).
    - The relatively high numbers of academic staff at Japanese universities (which under the THE methodology is a negative, as many of the indicators use the academic staff count as the denominator).
