Wednesday, November 07, 2007

Changes in the THES-QS Rankings

QS Quacquarelli Symonds have announced that the 2007 World Universities Rankings will be published on November 9th and that there will be a number of changes.

Firstly, QS will not allow respondents to their academic survey to vote for their own institutions. I am not sure how this could be enforced if QS send out over a quarter of a million e-mails to World Scientific subscribers, but it would in principle appear to be a sensible change. However, this in itself will not affect other, more serious problems with the “peer review”, such as its marked regional bias and a suspiciously and unbelievably low response rate.

The second change is that QS will now use Scopus rather than the Web of Science for data about citations. This will favour universities outside the US, those that have strengths in the humanities and social sciences, and those that do research in languages other than English.

I suspect that the difference will not be very great. The dominance of the US in research is unlikely to be substantially undermined although probably some European universities, especially in Switzerland and the Netherlands, may do better on the citations per faculty measure.

Thirdly, QS will give Full Time Equivalent (FTE) counts for numbers of students and faculty rather than headcounts. This would eliminate some of the worst errors in previous rankings such as those relating to Ecole Polytechnique and Ecole Normale Superieure. However, there could be problems if the procedure is not applied consistently. QS say that where an FTE number has not been supplied, one will be calculated from the relationship between headcount and FTE numbers at other institutions in the same country or region.

This raises questions about the country or region that is used for benchmarking and whether QS will indicate how the ratio between headcount and FTE is derived. Also, it seems rather dangerous to allow universities to submit their own data.
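The imputation QS describes could be sketched roughly as follows. This is only an illustration of the stated method, not QS's actual procedure, and all institution names and numbers are invented.

```python
# Hypothetical sketch: estimate an FTE count from a headcount using the
# headcount-to-FTE relationship at peer institutions in the same country.
def estimate_fte(headcount, country_pairs):
    """country_pairs: list of (headcount, fte) tuples for peer institutions."""
    total_head = sum(h for h, _ in country_pairs)
    total_fte = sum(f for _, f in country_pairs)
    ratio = total_fte / total_head  # average FTE per enrolled head in the country
    return headcount * ratio

# Invented peers in the same country: (headcount, FTE).
peers = [(12000, 10000), (8000, 7000)]
print(round(estimate_fte(9000, peers)))  # → 7650
```

Note how sensitive the result is to which "peers" are chosen for the ratio, which is exactly the benchmarking question raised above.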

Finally, QS will calculate z scores for all components. Basically a z score is calculated by subtracting the population mean from the raw score and then dividing by the standard deviation. The effect of this will be to flatten the curves for each component and to ensure that similar changes will have similar effects on each section of the ranking.
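The calculation just described can be sketched in a few lines. The scores here are made up purely for illustration.

```python
# Minimal sketch of z-score standardisation as described above:
# subtract the population mean, then divide by the population
# standard deviation.
def z_scores(raw):
    n = len(raw)
    mean = sum(raw) / n
    std = (sum((x - mean) ** 2 for x in raw) / n) ** 0.5  # population std dev
    return [(x - mean) / std for x in raw]

# Invented raw scores for five universities on one component.
raw = [100, 80, 60, 40, 20]
print([round(z, 2) for z in z_scores(raw)])  # → [1.41, 0.71, 0.0, -0.71, -1.41]
```

After standardisation each component has mean 0 and standard deviation 1, so a one-point movement on one component carries the same weight as a one-point movement on any other.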

QS are to be commended for introducing these changes, provided that they are implemented transparently and competently. There is no point in calculating z scores if you enter data for every university in the wrong row, as someone did for the student faculty ratio in QS’s book Guide to the World’s Top Universities, and create hundreds of errors.

I have a further reservation. QS seems to have done nothing about using a database for the “peer review” that is provided by an Asian-based and Asian-orientated publishing company, about explaining how they could get an unprecedentedly low response rate without filtering the data in some way, or about giving a large weighting to such an obviously biased and suspect set of data. It will be interesting to recalculate the scores to see what they look like without the peer review.

The combined effect of these changes is likely to be that some universities outside the top 100 may go down several places, even though nothing has really changed, leading to anguished debates about declining standards.
