Times Higher Education (THE) has now officially announced the methodology of next week's World University Rankings. There are some changes, although major problems remain unaddressed.
First, THE is now getting data from Scopus rather than Thomson Reuters. The Scopus database is more inclusive -- it covers 22,000 publications compared with 12,000 -- and includes more papers from non-English-speaking countries, so this may give an advantage to some universities in Eastern Europe and Asia.
Second, THE has tried to make its reputation survey more inclusive, making forms available in six additional languages and reducing the bias towards the USA.
Third, 649 papers with more than 1,000 listed authors, mainly in physics, will not be counted for the citations indicator.
Fourth, the citations indicator will be divided into two halves with equal weighting. One half will be calculated with and one half without the "regional modification", by which the overall citation impact score of a university is divided by the square root of the score for the country in which it is located. In previous editions of these rankings this modification gave a big boost to universities in low-scoring countries such as Chile, India, Turkey and Russia.
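To make the arithmetic concrete, here is a toy sketch of the modification as described above. It assumes, as in THE's citation indicator, that impact scores are normalised so that the world average is 1; the function name and the numbers are purely illustrative.

```python
# Toy illustration of the "regional modification" described above.
# Assumption: citation impact scores are normalised so the world average is 1.
def regionally_modified(university_impact: float, country_impact: float) -> float:
    """Divide a university's citation impact by the square root of its country's score."""
    return university_impact / country_impact ** 0.5

print(regionally_modified(0.9, 0.36))  # low-scoring country: 0.9 / 0.6 = 1.5, a large boost
print(regionally_modified(0.9, 1.00))  # world-average country: 0.9 / 1.0 = 0.9, no change
```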
It is likely that institutions such as Bogazici University, Panjab University, Federico Santa Maria Technical University and Universite Cadi Ayyad, which have benefited from contributing to mega-papers such as those emanating from the Large Hadron Collider project, will suffer from the exclusion of these papers from the citations indicator. Their pain will be increased by the dilution of the regional modification.
It is possible that such places may get some compensation in the form of more responses in the reputation survey or higher publication counts in the Scopus database but that is far from certain. I suspect that several university administrators are going to be very miserable next Thursday.
There is something else that should not be forgotten. The scores published by THE are not raw data but standardised scores derived from means and standard deviations. Since THE is including more universities in this year's rankings, and since most of them are likely to have low scores for most indicators, the overall mean scores of ranked universities will fall. This will have the effect of raising the standardised scores of the 400 or so universities that score above the mean. The effect is likely to vary from indicator to indicator, so the final overall scores will be even more unpredictable and volatile.
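A toy numerical sketch of that effect is given below, using invented scores and simple z-score standardisation (THE's actual procedure also rescales the standardised scores to a 0-100 range).

```python
import numpy as np

# Invented scores for illustration only: a small pool of ranked universities,
# then the same pool after several low-scoring newcomers are added.
old_pool = np.array([80, 60, 50, 40, 30], dtype=float)
new_pool = np.append(old_pool, [10, 10, 5, 5, 5])

def z_score(score, pool):
    """Standardised score: distance from the pool mean in standard deviations."""
    return (score - pool.mean()) / pool.std()

print(round(z_score(80, old_pool), 2))  # 1.63 before the pool is expanded
print(round(z_score(80, new_pool), 2))  # 1.97 after: the above-mean university's score rises
# Whether a standardised score rises depends on how much the standard deviation
# grows relative to the fall in the mean, hence the unpredictability.
```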
Tuesday, September 22, 2015
Looking Inside the Engine: The Structure of the Round University Rankings
Many of those interested in international university rankings have been frustrated by the lack of transparency in the Quacquarelli Symonds (QS) and the Times Higher Education (THE) rankings.
The QS rankings assign a fifty per cent weighting to two surveys collected from a variety of channels -- I think six for the employer survey and five for the academic survey -- with different and fluctuating response rates.
The THE rankings have lumped five indicators in a Teaching cluster, three in a Research cluster and three in an International cluster. So how can anyone figure out just what is causing a university to rise or fall in the rankings?
A major step forward in transparency has now come with the recent publication of the Round University Rankings (RUR) by a Russian organisation that uses data from Thomson Reuters (TR), who provided the data for the Times Higher Education world and regional rankings from 2009 until the end of last year.
RUR have published the separate scores for all of the indicators. They have retained 12 out of the 13 indicators used in the THE rankings from 2011 to 2014, dropping income from industry as a percentage of research income, and added another eight.
I doubt that RUR could afford to pay TR very much for the data and I suspect that TR's motive in allowing the dissemination of such a large amount of information is to preempt THE or anyone else trying to move upstream in the drive to monetise data.
It is now possible to see whether the various indicators are measuring the same thing and hence are redundant, whether and to what extent they are associated with other indicators and whether there is any link between markers of input and markers of output.
Here is a crude analysis of a very small sample -- sixteen universities, one in every fifty in the RUR rankings -- starting with Harvard and ending with the Latvia Transport and Telecom Institute. I hope that a more detailed analysis of the entire corpus can be done in a few weeks.
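For anyone wanting to replicate this kind of exercise, a minimal sketch follows. The file name and column names are hypothetical placeholders rather than the actual RUR field names, and the sampling simply takes every fiftieth institution from the ranked table.

```python
import pandas as pd

# Hypothetical export of the RUR table, one row per university, ordered by rank.
rur = pd.read_csv("rur_2015.csv")
sample = rur.iloc[::50]  # every fiftieth institution, Harvard downwards

# Pairwise Pearson correlations between a few indicator scores (placeholder names).
indicators = ["teaching_reputation", "research_reputation",
              "citations_per_staff", "papers_per_staff", "overall_score"]
print(sample[indicators].corr(method="pearson").round(3))
```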
The Combined Indicator Groups
Three groups -- Teaching, Research and Financial Sustainability -- are fairly closely associated with one another. The Teaching cluster correlates .634 with Research and .735 with Financial Sustainability. Research correlates .702 with Financial Sustainability.
The International Diversity group appears to be the odd one out here. It correlates significantly with Research (.555) but not with Teaching or Financial Sustainability. This suggests that internationalisation, at least in the form of recruiting more international students, may not always be a strong marker of quality.
The Reputation Indicators
Looking at the three reputation indicators, teaching, international teaching and research, we can see that for practical purposes they are measuring the same thing. The correlation between the Research Reputation and Teaching Reputation scores is .986 and between Research Reputation and International Teaching Reputation .925. Between Teaching Reputation and International Teaching Reputation it is .941.
Alex Usher of Higher Education Strategy Associates has claimed a correlation of .99 between teaching and research reputation scores in the THE rankings up to 2014. The figures from the RUR rankings are a bit lower but essentially the reputation indicators are measuring the same thing, whatever it is, and there is no need to count them more than once.
Other Unnecessary Indicators
Turning to other indicators, the correlation between Academic Staff per Students and Academic Staff per Bachelor Degrees is quite high at .834. The latter, which has not appeared in any previous ranking, could be omitted without a significant loss of information.
There is an extremely high correlation, .989, between Citations per Academic and Research Staff and Papers per Academic and Research Staff. It sounds rather counter-intuitive, but it seems that as a measure of research productivity one is as good as the other, at least when dealing with more than a few hundred elite universities.
There is a correlation of .906 between Institutional Income per Academic Staff and Institutional Income per Student.
It would appear then that the THE rankings of 2011-2014 with 13 indicators had passed the point beyond which additional indicators become redundant and provide no additional information.
Inputs and Outputs
There are some clues about the possible relationship between indicators that could be regarded as inputs and those that might be counted as outputs.
Academic Staff per Student is not significantly correlated with Teaching Reputation (.350, sig. .183). Among the individual indicators it is positively and significantly associated only with Doctoral Degrees per Bachelor Degrees (.510). Its correlation with the overall score is, however, quite high and significant at .552.
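For readers wondering where those significance figures come from with a sample this small, here is a minimal check. It assumes a standard two-tailed t-test for a Pearson correlation with n = 16; the helper function is mine, not part of any ranking methodology.

```python
from scipy import stats

def p_value(r: float, n: int) -> float:
    """Two-tailed p-value for a Pearson correlation r from a sample of size n."""
    t = r * ((n - 2) ** 0.5) / ((1 - r ** 2) ** 0.5)
    return 2 * stats.t.sf(abs(t), df=n - 2)

print(round(p_value(0.350, 16), 3))  # ~0.184, consistent with the quoted sig. of .183
print(round(p_value(0.552, 16), 3))  # ~0.027, significant at the 5 per cent level
```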
There is some evidence that a diverse international faculty might have a positive impact on research output and quality. The correlations between International Faculty and Normalised Citation Impact, Papers per Academic and Research Staff and the overall score are positive and significant. On the other hand, the correlations of International Collaboration and of International Students with the overall score are weak and insignificant.
Money seems to help, at least as far as research is concerned. There are moderately high and significant correlations between Institutional Income per Academic Staff and Citations per Academic and Research Staff, Papers per Academic and Research Staff, Normalised Citation Impact and the overall score.
Research Income per Academic Staff correlates highly and significantly with Teaching Reputation, International Teaching Reputation, Research Reputation, Citations per Academic and Research Staff, Papers per Academic and Research Staff, Normalised Citation Impact and the overall score.
Saturday, September 19, 2015
Who's Interested in the QS World University Rankings?
And here are the first ten results (excluding this blog and the QS page) from a Google search for this year's QS world rankings. Compare with ARWU and RUR. Does anyone notice any patterns?
Canada falls in World University Rankings' 2015 list
UBC places 50th, SFU 225th in QS World University Rankings
Who's interested in the Round University Rankings?
The top results from a Google search for responses to the recently published Round University Rankings.
New Ranking from Russia