Saturday, November 10, 2018

A modest suggestion for THE

A few years ago the Shanghai rankers made an interesting tweak to their global rankings: they deleted the two indicators that count Nobel prizes and Fields medals and produced an Alternative Ranking.

There were some changes: the University of California San Diego and the University of Toronto did better, while Princeton and Vanderbilt did worse.

Perhaps it is time for Times Higher Education (THE) to consider doing something similar for their citations indicator. Take a look at their latest subject ranking, Clinical, Pre-clinical and Health. Here are the top ten for citations, supposedly a measure of research impact or influence.

1.   Tokyo Metropolitan University
2.   Auckland University of Technology
3.   Metropolitan Autonomous University, Mexico
4.   Jordan University of Science and Technology
5.   University of Canberra 
6.   Anglia Ruskin University
7.   University of the Philippines
8.   Brighton and Sussex Medical School
9.   Pontifical Javeriana University, Colombia
10. University of Lorraine

If THE started producing alternative subject rankings without the citations indicator they would be a bit less interesting but a lot more credible.



Friday, November 02, 2018

Ranking Rankings: Measuring Stability

I have noticed that some rankings are prone to a great deal of churning. Universities may rise or fall dozens of places in a year, sometimes as a result of methodological changes, changes in the number or type of universities ranked, errors and their correction (fortunately rare these days), or changes in data collection and reporting procedures, and sometimes simply because an indicator rests on a small number of data points.

Some ranking organisations like to throw headlines around about who's up or down, the rise of Asia, the fall of America, and so on. This is a trivialisation of any serious attempt at the comparative evaluation of universities, which do not behave like volatile financial markets. Universities are generally fairly stable institutions: most of the leading universities of the early twentieth century are still here while the Ottoman, Hohenzollern, Hapsburg and Romanov empires are long gone.

Reliable rankings should not be expected to show dramatic changes from year to year, unless there has been radical restructuring like the recent wave of mergers in France. The validity of a ranking system is questionable if universities bounce up or down dozens, scores, even hundreds of ranks every year.

The following table shows the volatility of the global rankings listed in the IREG Inventory of international rankings. U-Multirank is excluded because it does not provide overall ranks, while UniRank and Webometrics are excluded because they do not give access to previous editions.

Average rank change is the mean number of places that each of the top thirty universities in a ranking has risen or fallen between its two most recent editions.
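
For anyone who wants to reproduce the exercise, here is a minimal sketch of the calculation in Python. The universities and positions below are hypothetical; the real figures use the top thirty universities in each ranking's two most recent editions.

```python
# Hypothetical positions of a ranking's top universities in its two most
# recent editions: {university: (previous rank, current rank)}.
positions = {
    "Univ A": (1, 1),
    "Univ B": (2, 4),
    "Univ C": (3, 2),
    "Univ D": (4, 3),
}

# Average rank change: the mean absolute movement between the two editions.
changes = [abs(current - previous) for previous, current in positions.values()]
average_rank_change = sum(changes) / len(changes)
print(round(average_rank_change, 2))  # 1.0
```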

The most stable ranking is the Shanghai ARWU, followed by the US News global rankings and the National Taiwan University rankings. The UI GreenMetric ranking, Reuters Innovative Universities, and the Leiden Ranking's high-quality research indicator (percentage of publications in the top 1%) show the highest levels of volatility.

This is a very limited exercise. We might get different results if we examined all of the universities in the rankings or analysed changes over several years.



Rank  Ranking                                      Country      Average rank change
1     Shanghai ARWU                                China        0.73
2     US News Best Global Universities             USA          0.83
3     National Taiwan University Rankings          Taiwan       1.43
4     THE World University Rankings                UK           1.60
5     Round University Rankings                    Russia       2.28
6     University Ranking by Academic Performance   Turkey       2.23
7     QS World University Rankings                 UK           2.33
8     Nature Index                                 UK           2.60
9     Leiden Ranking Publications                  Netherlands  2.77
10    Scimago                                      Spain        3.43
11    Emerging/Trendence                           France       3.53
12    Center for World University Rankings         UAE          4.60
13    Leiden Ranking % Publications in top 1%      Netherlands  4.77
14    Reuters Innovative Universities              USA          6.17
15    UI GreenMetric                               Indonesia    13.14

Monday, October 29, 2018

Is THE going to reform its methodology?


An article by Duncan Ross in Times Higher Education (THE) suggests that the World University Rankings are due for repair and maintenance. He notes that these rankings were originally aimed at a select group of research-orientated, world-class universities, but THE is now looking at a much larger group that is likely to be less internationally orientated, less research-based, and more concerned with teaching.

He says that it is unlikely that there will be major changes in the methodology for the 2019-20 rankings next year but after that there may be significant adjustment.

There is a chance that the industry income indicator, income from industry and commerce divided by the number of faculty, will be changed. This is an indirect attempt to capture innovation, and it is unreliable since it is based entirely on data submitted by institutions. Alex Usher of Higher Education Strategy Associates has pointed out some problems with this indicator.

Ross seems most concerned, however, with the citations indicator, which at present is normalised by field (of which there are over 300), type of publication, and year of publication. Universities are rated not according to the number of citations they receive but by comparison with the world average of citations to documents of a specific type in a specific field in a specific year. There are potentially over 8,000 boxes into which any single citation could be dropped for comparison.
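
For illustration, here is a minimal sketch in Python of how such field-normalised impact might be computed. The papers, fields and citation counts are entirely hypothetical, and the real system works with thousands of field-year-document-type cells rather than the three shown here.

```python
from collections import defaultdict

# Entirely hypothetical paper records:
# (university, field, year, document type, citations received)
papers = [
    ("Univ A", "Oncology", 2016, "article", 40),
    ("Univ A", "History",  2016, "article", 2),
    ("Univ B", "Oncology", 2016, "article", 10),
    ("Univ B", "Oncology", 2016, "review",  80),
]

# World average citations for each (field, year, document type) cell.
cell_totals = defaultdict(lambda: [0, 0])  # cell -> [total citations, paper count]
for _, field, year, doc_type, cites in papers:
    cell = (field, year, doc_type)
    cell_totals[cell][0] += cites
    cell_totals[cell][1] += 1
world_average = {cell: total / count for cell, (total, count) in cell_totals.items()}

# A university's normalised impact is the mean ratio of its papers' citations
# to the relevant world average, so 1.0 means "cited exactly as expected".
ratios = defaultdict(list)
for uni, field, year, doc_type, cites in papers:
    ratios[uni].append(cites / world_average[(field, year, doc_type)])

for uni, r in ratios.items():
    print(uni, round(sum(r) / len(r), 2))
```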

Apart from anything else, this has resulted in a serious reduction in transparency. Checking the scores for Highly Cited Researchers or Nobel and Fields laureates in the Shanghai rankings can be done in a few minutes. Try comparing thousands of world averages with the citation scores of a university.

This methodology has produced a series of bizarre results, noted several times in this blog. I hope I will be forgiven for yet again listing some of the research impact superstars that THE has identified over the last few years: Alexandria University, Moscow Nuclear Research University MEPhI, Anglia Ruskin University, Brighton and Sussex Medical School, St George's University of London, Tokyo Metropolitan University, Federico Santa Maria Technical University, Florida Institute of Technology, Babol Noshirvani University of Technology, Oregon Health and Science University, Jordan University of Science and Technology, Vita-Salute San Raffaele University.

The problems of this indicator go further than just a collection of quirky anomalies. It now accords a big privilege to medical research as it once did to fundamental physics research. It offers a quick route to ranking glory by recruiting highly cited researchers in strategic fields and introduces a significant element of instability into the rankings.

So here are some suggestions for THE should it actually get round to revamping the citations indicator.

1. The number of universities around the world that do a modest amount of research of any kind is relatively small, maybe five or six thousand. The number that can reasonably claim to have a significant global impact is much smaller, perhaps two or three hundred. Normalised citations are perhaps a reasonable way of distinguishing among the latter, but pointless or counterproductive when assessing the former. The current THE methodology might be able to tell whether a definitive literary biography by a Yale scholar has the same impact in its field as cutting-edge research in particle physics at MIT, but it is of little use in assessing the relative research output of mid-level universities in South Asia or Latin America.

THE should therefore consider reducing the weighting of citations to the same as research output or lower.

2. A major cause of problems with the citations indicator is the failure to introduce complete fractional counting, that is, distributing credit for citations proportionately among authors or institutions (see the sketch after this item). At the moment THE counts every author of a paper with fewer than a thousand authors as though each of them were the sole author of the paper. As a result, medical schools that produce papers with hundreds of authors now have a privileged position in the THE rankings, something that the use of normalisation was supposed to prevent.

THE has introduced a moderate form of fractional counting for papers with over a thousand authors but evidently this is not enough.

It seems that some rankers do not like fractional counting because it might discourage collaboration. I would not dispute that collaboration can be a good thing, although it is often favoured by institutions that cannot do very well by themselves, but this is not a sufficient reason to allow distortions like those noted above to flourish.
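
Here is a minimal sketch, with entirely hypothetical papers and affiliations, of the difference between full counting (the current approach for papers below the thousand-author threshold) and fractional counting.

```python
from collections import defaultdict

# Entirely hypothetical papers: (citations, list of author affiliations).
papers = [
    (300, ["MIT", "Oxford", "Univ C"]),                 # three-author paper
    (90,  ["Univ C"]),                                  # single-author paper
    (1200, ["Med School X"] * 250 + ["Univ C"] * 50),   # 300-author consortium paper
]

full_credit = defaultdict(float)        # every affiliation gets the whole paper
fractional_credit = defaultdict(float)  # credit split in proportion to authorship

for citations, affiliations in papers:
    share = citations / len(affiliations)
    for affiliation in set(affiliations):
        full_credit[affiliation] += citations
    for affiliation in affiliations:
        fractional_credit[affiliation] += share

print("full counting:      ", dict(full_credit))
print("fractional counting:", dict(fractional_credit))
```

Under full counting, "Univ C" collects 1,590 citations largely on the strength of fifty authors on one consortium paper; under fractional counting its total falls to 390, a much better reflection of its actual contribution.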

3. THE have a country bonus, or regional modification, which divides a university's citation impact score by the square root of the score of the country in which the university is located. This was supposed to compensate for the lack of funding and networks that afflicts some countries, which apparently does not affect their reputation scores or publication output. The effect of this bonus is to give some universities a boost derived not from their excellence but from the mediocrity or worse of their compatriots. THE reduced the coverage of this bonus to fifty percent of the indicator in 2015. It might well be time to get rid of it altogether.
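
A minimal sketch of how the regional modification works, using hypothetical impact values on a scale where the world average is 1.0; the 50/50 blend at the end is my reading of the post-2015 arrangement, not THE's published formula.

```python
import math

# Hypothetical normalised impact values on a scale where the world average is 1.0.
university_impact = 0.9        # the university's own citation impact
country_average_impact = 0.36  # the average for the university's country

# Regional modification: divide the university's score by the square root of
# its country's average, so universities in weaker systems get a boost.
adjusted = university_impact / math.sqrt(country_average_impact)
print(round(adjusted, 2))  # 1.5 -- lifted above the world average

# Applying the bonus to only half of the indicator (the post-2015 arrangement,
# as I understand it) still leaves a substantial uplift.
blended = 0.5 * university_impact + 0.5 * adjusted
print(round(blended, 2))  # 1.2
```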

4. Although QS stopped counting self-citations in 2011, THE continue to do so. They have said that overall self-citations make little difference. Perhaps, but as the rankings expand to include more and more universities, it will become more likely that a self-citer or mutual-citer will propel undistinguished schools up the charts. There could be more cases like Alexandria University or Veltech University.

5. THE needs to think about what they are using citations to measure. Are they trying to assess research quality, in which case they should use citations per paper? Or are they trying to estimate overall research impact, in which case the appropriate metric would be total citations?
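
The difference matters: with hypothetical figures, a large university can dominate on total citations while a smaller one leads on citations per paper, as this toy comparison shows.

```python
# Hypothetical output and citation totals for two universities.
stats = {
    "Large Univ": {"papers": 10000, "citations": 150000},
    "Small Univ": {"papers": 800,   "citations": 20000},
}

for name, s in stats.items():
    print(name,
          "| total citations:", s["citations"],
          "| citations per paper:", round(s["citations"] / s["papers"], 1))

# Large Univ dominates on total impact (150000 vs 20000), while Small Univ
# leads on quality as measured by citations per paper (25.0 vs 15.0).
```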

6. Normalisation by field and document type might be helpful for making fine distinctions among elite research universities, but lower down it creates or contributes to serious problems, since a single document or an unusually productive author can cause massive distortions. Three hundred plus fields may be too many, and THE should think about reducing the number.

7. There has been a proliferation in recent years in the number of secondary affiliations. No doubt most of the researchers holding them are making a genuine contribution to the life of both or all of the universities with which they are affiliated. There is, however, a possibility of serious abuse if the practice continues. It would be greatly to THE's credit if they could find some way of omitting or reducing the weighting of secondary affiliations.

8. THE are talking about different models of excellence. Perhaps they could look at the Asiaweek rankings, which had a separate table for technological universities, or Maclean's, with its separate rankings for doctoral/medical universities and primarily undergraduate schools. Different weightings could be given to citations for each of these categories.