Tuesday, September 22, 2015

Looking Inside the Engine: The Structure of the Round University Rankings

Many of those interested in international university rankings have been frustrated by the lack of transparency in the Quacquarelli Symonds (QS) and Times Higher Education (THE) rankings.

The QS rankings assign a combined fifty per cent weighting to two surveys collected from a variety of channels -- six, I think, for the employer survey and five for the academic survey -- with different and fluctuating response rates.

The THE rankings lump five indicators into a Teaching cluster, three into a Research cluster and three into an International cluster. So how can anyone figure out just what is causing a university to rise or fall in the rankings?

A major step towards transparency has now come with the recent publication of the Round University Rankings (RUR) by a Russian organisation that uses data from Thomson Reuters (TR), which provided the data for the Times Higher Education world and regional rankings from 2009 until the end of last year.

RUR have published the separate scores for every indicator. They have retained 12 of the 13 indicators used in the THE rankings from 2011 to 2014, dropping income from industry as a percentage of research income, and added another eight, making 20 in all.

I doubt that RUR could afford to pay TR very much for the data, and I suspect that TR's motive in allowing the dissemination of such a large amount of information is to preempt THE, or anyone else, trying to move upstream in the drive to monetise data.

It is now possible to see whether the various indicators are measuring the same thing and are hence redundant, whether and to what extent they are associated with one another, and whether there is any link between markers of input and markers of output.

Here is a crude analysis of a very small systematic sample of sixteen universities, one in every fifty of the RUR list, starting with Harvard and ending with the Latvia Transport and Telecom Institute. I hope that a more detailed analysis of the entire corpus can be done in a few weeks. A sketch of how such a sample and the correlations reported below might be computed follows.
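For anyone who wants to replicate the arithmetic, here is a minimal sketch in Python. It assumes the published RUR scores have been saved to a CSV ordered by overall rank; the file name and column labels are my own hypothetical choices, not RUR's, and this is not the code actually used for this post.

```python
# Minimal sketch, assuming a hypothetical file "rur_2015.csv" with one row
# per university, ordered by overall rank, and one column per score.
import pandas as pd

scores = pd.read_csv("rur_2015.csv")

# Systematic one-in-fifty sample: Harvard (rank 1), then every fiftieth entry.
sample = scores.iloc[::50]
print(len(sample))  # 16 for the sample used in this post

# Pearson correlations between the four combined indicator groups
# (column names are assumptions, not necessarily RUR's exact labels).
groups = ["Teaching", "Research", "International Diversity",
          "Financial Sustainability"]
print(sample[groups].corr().round(3))
```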

The combined indicator groups

Three of the four groups -- Teaching, Research and Financial Sustainability -- are fairly closely associated with one another. Teaching correlates .634 with Research and .735 with Financial Sustainability, while Research correlates .702 with Financial Sustainability.

The International Diversity group appears to be the odd one out here. It correlates significantly with Research (.555) but not with Teaching or Financial Sustainability (with only sixteen cases, a correlation needs to be roughly .50 or higher to reach significance at the five per cent level). This suggests that internationalisation, at least in the form of recruiting more international students, may not always be a strong marker of quality.


The Reputation Indicators

Looking at the three reputation indicators -- teaching, international teaching and research -- we can see that for practical purposes they are measuring the same thing. The correlation between the Research Reputation and Teaching Reputation scores is .986, and between Research Reputation and International Teaching Reputation it is .925. Between Teaching Reputation and International Teaching Reputation it is .941.

Alex Usher of Higher Education Strategy Associates has claimed a correlation of .99 between teaching and research reputation scores in the THE rankings up to 2014. The figures from the RUR rankings are a bit lower, but essentially the reputation indicators are measuring the same thing, whatever it is, and there is no need to count it more than once.

Other Unnecessary Indicators

Turning to the teaching indicators, the correlation between Academic Staff per Students and Academic Staff per Bachelor Degrees is very high at .834. The latter, which has not appeared in any previous ranking, could be omitted without a significant loss of information.

There is an extremely high correlation, .989, between Citations per Academic and Research Staff and Papers per Academic and Research Staff. It sounds rather counter-intuitive, but it seems that as a measure of research productivity one is as good as the other, at least when dealing with more than a few hundred elite universities.

There is a correlation of .906 between Institutional Income per Academic Staff and Institutional Income per Student.

It would appear, then, that the THE rankings of 2011-2014, with 13 indicators, had already passed the point beyond which additional indicators become redundant and provide no additional information. A quick way of flagging such redundant pairs is sketched below.
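Here is a minimal sketch of a redundancy check along these lines: any pair of indicators correlating above a threshold is a candidate for dropping one of the two. The .9 cut-off and the "Overall" column name are my assumptions for illustration, not a convention from RUR or THE; "sample" is the sixteen-university DataFrame from the earlier snippet.

```python
# Flag indicator pairs so strongly correlated that one is arguably redundant.
# "sample" is the 16-university DataFrame from the earlier sketch; the
# "Overall" column name and the 0.9 threshold are assumptions.
corr = sample.drop(columns=["Overall"]).corr()

redundant = [
    (a, b, round(corr.loc[a, b], 3))
    for i, a in enumerate(corr.columns)
    for b in corr.columns[i + 1:]
    if abs(corr.loc[a, b]) > 0.9
]
for a, b, r in redundant:
    print(f"{a} vs {b}: r = {r}")  # e.g. the reputation pairs discussed above
```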

Inputs and Outputs

There are some clues about the possible relationship between indicators that could be regarded as inputs and those that might be counted as outputs.

Academic Staff per Student is not significantly correlated with Teaching Reputation (.350, sig. .183). It is positively and significantly associated only with Doctoral Degrees per Bachelor Degrees (.510). The correlation with the overall score is, however, quite high and significant at .552. The sketch below shows how such significance figures can be checked.
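For the curious, the significance figures quoted here can be reproduced with the standard t-test for a Pearson correlation. A minimal sketch, assuming sixteen cases:

```python
# Two-tailed significance of a Pearson correlation r from n cases, via the
# standard t-test with n - 2 degrees of freedom.
from math import sqrt
from scipy import stats

def pearson_p(r, n):
    """Two-tailed p-value for a Pearson correlation r from n observations."""
    t = r * sqrt(n - 2) / sqrt(1 - r ** 2)
    return 2 * stats.t.sf(abs(t), df=n - 2)

print(round(pearson_p(0.350, 16), 3))  # 0.184, close to the "sig. .183" above
print(round(pearson_p(0.510, 16), 3))  # about 0.04, significant at .05
```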

There is some evidence that a diverse international faculty might have a positive impact on research output and quality. The correlations between International Faculty and each of Normalised Citation Impact, Papers per Academic and Research Staff, and the overall score are positive and significant. On the other hand, the correlations of International Collaboration and of International Students with the overall score are weak and insignificant.

Money seems to help, at least as far as research is concerned. There are moderately high and significant correlations between Institutional Income per Academic Staff and each of Citations per Academic and Research Staff, Papers per Academic and Research Staff, Normalised Citation Impact and the overall score.

Research Income per Academic Staff correlates highly and significantly with Teaching Reputation, International Teaching Reputation, Research Reputation, Citations per Academic and Research Staff, Papers per Academic and Research Staff, Normalised Citation Impact and the overall score.

1 comment:

  1. Anonymous, 6:27 AM

    Yes, that's not new. TU Dresden and U Tübingen in Germany calculated all this... I'll mail you the study.
