Thursday, July 08, 2010

Presentation by Phil Baty

A presentation by Phil Baty of Times Higher Education at the ISTIC meeting in Beijing reviewed the background of the now defunct THES-QS World University Rankings and the rationale for the development of a new ranking system.

There are some quotations that highlight familiar complaints about the THE-QS rankings:


“Results have been highly volatile. There have been many sharp rises and falls… Fudan in China has oscillated between 72 and 195…” Simon Marginson, University of Melbourne.


“Most people think that the main problem with the rankings is the opaque way it constructs its sample for its reputational rankings”. Alex Usher, vice president of Educational Policy Institute, US.


“The logic behind the selection of the indicators appears obscure”. Christopher Hood, Oxford University.



Baty also identifies several problems with the "peer review", citations, faculty-student ratio and internationalisation indicators.



All of this is very sound. But it is not yet certain how much of an improvement the new THE rankings will be.



THE will now obtain citation and publication data from Thomson Reuters rather than Scopus. The Thomson Reuters data is based on the ISI indexes, which are somewhat more selective than the Scopus database. There is, however, a great deal of overlap, and simply using ISI data rather than Scopus will not in itself make very much difference, except perhaps that there will be a somewhat greater bias towards English-language researchers and the research output that is measured may be of a somewhat higher quality. We should also remember that from 2004 to 2006 the THE-QS citations data were collected by the very same Jonathan Adams who is now overseeing the development of the new THE rankings.



Some of the "confirmed improvements" noted by Baty are certainly that. Normalising citation scores across disciplinary groups, to take account of varying patterns of publication and citation, is long overdue. The presentation of information about various types of income will, if the raw data are publicly available, make it possible to evaluate universities in terms of value for money.
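The general idea behind normalising citation scores across disciplines can be sketched roughly as follows. This is only an illustrative sketch with made-up paper counts and field averages; the actual formula Thomson Reuters uses is not given in the presentation, and the paper data here are entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical papers: (field, citation_count). Physics papers are cited
# far more often than history papers, so raw counts are not comparable.
papers = [
    ("physics", 40), ("physics", 10), ("physics", 25),
    ("history", 4), ("history", 1), ("history", 7),
]

# Average citations per paper within each field (illustrative numbers).
field_counts = defaultdict(list)
for field, cites in papers:
    field_counts[field].append(cites)
field_average = {f: sum(c) / len(c) for f, c in field_counts.items()}

# A paper's normalized score is its citations divided by its field's
# average, so a well-cited history paper can match a well-cited physics
# paper instead of being swamped by higher physics citation rates.
normalized = [(f, c, round(c / field_average[f], 2)) for f, c in papers]
for field, cites, score in normalized:
    print(f"{field}: {cites} citations -> normalized impact {score}")
```

On these toy numbers a history paper with 7 citations scores higher than a physics paper with 25, which is precisely the correction an unnormalised count fails to make.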


In some ways the reputational survey may be better than the QS "peer review", but exactly how much better is not yet clear. Baty says that only published researchers were asked to take part, but this apparently could mean no more than having been listed as the corresponding author of an article once in a lifetime. No doubt this yields a better qualified group of respondents than one made up of those with the energy to sign up with World Scientific, but is it really significantly better?


Also, there is much that we have not been told about the reputational survey. We know the total number of respondents, which was much lower than the original target, but not the response rate. Nor has there been any indication of the number of responses from individual countries. This is particularly irksome since rumour and subjective impression suggest that many countries were neglected by the recently closed THE survey.


The methodology still appears in need of refinement. Research income of various kinds appears four times as an indicator or part of an indicator: research income from industry is the sole indicator in the Economic Activity/Innovation category; overall research income and research income from industry and public sources both appear among the Research Indicators; and total institutional income forms part of the Institutional Indicators. This is a bit messy.

There is still time for THE to produce an improved ranking system. Let's hope they can do it.
