Monday, October 29, 2018

Is THE going to reform its methodology?


An article by Duncan Ross in Times Higher Education (THE) suggests that the World University Rankings are due for repair and maintenance. He notes that these rankings were originally aimed at a select group of research-orientated, world-class universities, but THE is now looking at a much larger group that is likely to be less internationally orientated, less research-based and more concerned with teaching.

He says that major changes to the methodology are unlikely for next year's 2019-20 rankings, but that after that there may be significant adjustment.

There is a chance that the industry income indicator, income from industry and commerce divided by the number of faculty, will be changed. This is an indirect attempt to capture innovation and it is unreliable, since it is based entirely on data submitted by institutions. Alex Usher of Higher Education Strategy Associates has pointed out some problems with this indicator.

Ross seems most concerned, however, with the citations indicator, which at present is normalised by field (of which there are over 300), by type of publication and by year of publication. Universities are rated not according to the number of citations they receive but by comparison with the world average of citations to documents of a specific type in a specific field in a specific year. There are potentially over 8,000 boxes into which any single citation could be dropped for comparison.

Apart from anything else, this has resulted in a serious reduction in transparency. Checking the scores for Highly Cited Researchers or Nobel and Fields laureates in the Shanghai rankings can be done in a few minutes. Try comparing thousands of world averages with the citation scores of a university.
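To make the mechanics concrete, here is a minimal sketch of how normalisation of this kind works. The fields, baselines and citation counts are invented for illustration, and the code shows the general approach rather than THE's actual procedure.

```python
# Minimal sketch of field normalisation, assuming invented world-average
# baselines per (field, year, document type) cell. Not THE's actual procedure.

# Hypothetical world-average citations for each cell
world_averages = {
    ("oncology", 2016, "article"): 9.4,
    ("history", 2016, "article"): 1.1,
}

# A university's papers, tagged with their cell and citation count (invented)
papers = [
    {"cell": ("oncology", 2016, "article"), "citations": 47},
    {"cell": ("history", 2016, "article"), "citations": 3},
]

# Each paper is scored against the world average for its own cell, so a
# modestly cited history article can outscore a well-cited medical one.
normalised = [p["citations"] / world_averages[p["cell"]] for p in papers]
university_score = sum(normalised) / len(normalised)

print(normalised)        # [5.0, 2.727...]
print(university_score)  # the university's average normalised citation impact
```

With hundreds of fields, several document types and multiple years, the number of cells, and of world averages an outsider would need to check, runs well into the thousands.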

This methodology has produced a series of bizarre results, noted several times in this blog. I hope I will be forgiven for yet again listing some of the research impact superstars that THE has identified over the last few years: Alexandria University, Moscow Nuclear Research University MEPhI, Anglia Ruskin University, Brighton and Sussex Medical School, St George's University of London, Tokyo Metropolitan University, Federico Santa Maria Technical University, Florida Institute of Technology, Babol Noshirvani University of Technology, Oregon Health and Science University, Jordan University of Science and Technology, Vita-Salute San Raffaele University.

The problems of this indicator go further than just a collection of quirky anomalies. It now accords a big privilege to medical research as it once did to fundamental physics research. It offers a quick route to ranking glory by recruiting highly cited researchers in strategic fields and introduces a significant element of instability into the rankings.

So here are some suggestions for THE should it actually get round to revamping the citations indicator.

1. The number of universities around the world that do a modest amount of research of any kind is relatively small, maybe five or six thousand. The number that can reasonably claim to have a significant global impact is much smaller, perhaps two or three hundred. Normalised citations are perhaps a reasonable way of distinguishing among the latter, but pointless or counterproductive when assessing the former. The current THE methodology might be able to tell whether a definitive literary biography by a Yale scholar has the same impact in its field as cutting-edge research in particle physics at MIT, but it is of little use in assessing the relative research output of mid-level universities in South Asia or Latin America.

THE should therefore consider reducing the weighting of citations to the same level as research output, or lower.

2. A major cause of problems with the citations indicator is the failure to introduce complete fractional counting, that is, distributing credit for citations proportionately among authors or institutions. At the moment THE counts every author of a paper with fewer than a thousand authors as though each of them were the sole author of the paper. As a result, medical schools that produce papers with hundreds of authors now have a privileged position in the THE rankings, something that the use of normalisation was supposed to prevent.

THE has introduced a moderate form of fractional counting for papers with over a thousand authors but evidently this is not enough.

It seems that some rankers do not like fractional counting because it might discourage collaboration. I would not dispute that collaboration might be a good thing, although it is often favoured by institutions that cannot do very well by themselves, but this is not sufficient reason to allow distortions like those noted above to flourish. The sketch below shows how far apart whole and fractional counting can be.
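This is a minimal sketch of the two counting methods; the papers, institutions and citation figures are invented, and it illustrates the principle rather than THE's actual procedure.

```python
# Minimal sketch contrasting whole counting (every listed institution gets
# full credit for a paper's citations) with fractional counting (credit is
# divided among the participating institutions). All figures are invented.

def whole_count(papers, institution):
    """Full credit for every paper on which the institution appears."""
    return sum(p["citations"] for p in papers if institution in p["institutions"])

def fractional_count(papers, institution):
    """Credit divided by the number of participating institutions."""
    return sum(p["citations"] / len(p["institutions"])
               for p in papers if institution in p["institutions"])

papers = [
    # A multi-centre medical paper with 500 contributing institutions
    {"institutions": ["Medical School X"] + [f"Partner {i}" for i in range(499)],
     "citations": 2000},
    # A conventional two-institution paper
    {"institutions": ["Medical School X", "University Y"], "citations": 40},
]

print(whole_count(papers, "Medical School X"))       # 2040
print(fractional_count(papers, "Medical School X"))  # 2000/500 + 40/2 = 24.0
```

Under whole counting the hypothetical medical school gets the full 2,000 citations of the multi-centre paper; under fractional counting it gets four.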

3. THE have a country bonus, or regional modification, which divides a university's citation impact score by the square root of the score of the country in which the university is located. This was supposed to compensate for the lack of funding and networks that afflicts some countries, which apparently does not affect their reputation scores or publication output. The effect of this bonus is to give some universities a boost derived not from their excellence but from the mediocrity or worse of their compatriots. THE reduced the coverage of this bonus to fifty percent of the indicator in 2015. It might well be time to get rid of it altogether. The sketch below illustrates how large the resulting boost can be.
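Here is a rough sketch of the arithmetic, on invented scores; the fifty percent blending reflects my reading of the post-2015 arrangement and the exact mechanics may differ.

```python
# Rough sketch of the regional modification: a university's citation score is
# divided by the square root of its country's score, and (since 2015, as I
# understand it) the adjustment is applied to only half of the indicator.
# The scores below are invented and on a 0-1 scale.

import math

def regional_modification(university_score, country_score):
    adjusted = university_score / math.sqrt(country_score)
    # Blend: half the indicator keeps the raw score, half gets the adjustment
    return 0.5 * university_score + 0.5 * adjusted

print(regional_modification(0.40, 0.90))  # strong country: ~0.41, modest boost
print(regional_modification(0.40, 0.25))  # weak country:   0.60, large boost
```

Two universities with identical raw impact end up with very different scores purely because of the performance of their compatriots.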

4. Although QS stopped counting self-citations in 2011, THE continue to do so. They have said that, overall, self-citations make little difference. Perhaps, but as the rankings expand to include more and more universities it becomes more likely that a self-citer or mutual citer will propel an undistinguished school up the charts. There could be more cases like Alexandria University or Veltech University.

5. THE needs to think about what they are using citations to measure. Are they trying to assess research quality, in which case they should use citations per paper? Or are they trying to estimate overall research impact, in which case the appropriate metric would be total citations?
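A toy comparison, with invented universities and figures, shows how the two measures can point in opposite directions.

```python
# Toy comparison of citations per paper versus total citations; the
# universities and figures are invented.

universities = {
    "Small Specialist Institute":     {"papers": 500,   "citations": 12000},
    "Large Comprehensive University": {"papers": 20000, "citations": 180000},
}

for name, d in universities.items():
    per_paper = d["citations"] / d["papers"]
    print(f"{name}: {d['citations']} total citations, {per_paper:.1f} per paper")

# The institute wins on quality as measured per paper (24.0 vs 9.0);
# the comprehensive university wins on total citations, i.e. overall impact.
```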

6. Normalisation by field and document type might be helpful for making fine distinctions among elite research universities, but lower down it creates or contributes to serious problems, where a single document or an unusually productive author can cause massive distortions. Three hundred plus fields may be too many, and THE should think about reducing the number.

7. There has been a proliferation in recent years in the number of secondary affiliations. No doubt most of those involved are making a genuine contribution to the life of both or all of the universities with which they are affiliated. There is, however, a possibility of serious abuse if the practice continues. It would be greatly to THE's credit if they could find some way of omitting secondary affiliations or reducing their weighting.

8. THE are talking about different models of excellence. Perhaps they could look at the Asiaweek rankings, which had a separate table for technological universities, or at Maclean's, with its separate rankings for doctoral/medical universities and primarily undergraduate schools. Different weightings could be given to citations for each of these categories.
