Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Thursday, October 22, 2015
Even the Spectator reads the THE rankings
The influence of the global rankings, especially the Times Higher Education (THE) World University Rankings, appears to have no limits.
An article by Harry Mount in the Spectator describes the changing educational background of the leaders of the Labour Party. The top ranks used to be filled by graduates of Oxford (Denis Healey, Harold Wilson, Tony Blair, the Milibands, Ed Balls), Cambridge (Tristram Hunt) and Edinburgh (Gordon Brown).
Now they have been replaced by the alumni of Brunel and Birkbeck (John McDonnell), Sussex (Hilary Benn and Owen Smith), Nottingham (Michael Dugher), Westminster (Gloria De Piero) and Hull (Tom Watson and Rosie Winterton). Jeremy Corbyn lasted a year at the Polytechnic of North London, now London Metropolitan University.
Mount observes that Oxford was second in the latest edition of the THE world rankings, Hull 401st and London Metropolitan unranked.
It is only fair to point out that participation in the THE rankings is voluntary so maybe London Metropolitan could have been ranked if they had bothered to send in the data.
Not everyone is impressed by the THE rankings. "Tony Dark" comments:
"Amusing to note the reference to the Times Higher Education world ranking: this allegedly authoritative table is produced by a handful of hacks, and their hired statisticians, from a journal so insignificant that hardly anyone even in universities reads it. The other allegedly authoritative table, emanating from an organisation called QS, is largely driven by another clique of journos who split from the Times Higher . And the heads of multi million pound universities quail before the wondrous listings generated by these miniscule cabals. A mad world, my masters."
Sunday, October 18, 2015
Going Up and Going Down
A revised version of a previous post has been posted at University World News. Readers are welcome to comment here.
Sunday, October 11, 2015
More on Politics and Rankings
The Higher Education Minister of Malaysia has praised the country's leading university, Universiti Malaya (UM) for getting into the top 150 of the Quacquarelli Symonds (QS) World University Rankings. He also noted that UM and other Malaysian universities had done well in the QS subject rankings.
The problem with relying on QS or Times Higher Education (THE) is that they are prone to volatility because of reliance on reputation surveys that can be unstable outside the top dozen or so universities. Things have been made worse this year by methodological changes. In the case of QS one change was to give more credit to citations in the humanities and social sciences thereby helping universities that publish mainly or entirely in English.
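To see how a change like this can move the needle, here is a minimal sketch of one plausible mechanism, assuming (my assumption, not QS's published method) that citations per faculty are computed separately within five broad faculty areas and then averaged with equal weights, so that sparsely cited fields such as the arts, humanities and social sciences count for as much as medicine or engineering:

```python
# Illustrative sketch only: invented numbers, not QS data, and a simplified
# version of the kind of faculty-area normalisation described above.
AREAS = ["arts_humanities", "social_sciences", "natural_sciences",
         "life_sciences_medicine", "engineering_technology"]

def area_normalised_citations(uni_cites_per_faculty, world_cites_per_faculty):
    """Average of the university's citations per faculty member in each broad
    area, each expressed relative to the worldwide mean for that area."""
    ratios = [uni_cites_per_faculty[a] / world_cites_per_faculty[a] for a in AREAS]
    return sum(ratios) / len(AREAS)  # equal weight (20%) per area

# A university whose strength lies in English-language humanities and social
# science journals gains, because modest counts in sparsely cited fields now
# carry the same weight as large counts in medicine or the natural sciences.
uni =   {"arts_humanities": 2.0, "social_sciences": 3.0, "natural_sciences": 10.0,
         "life_sciences_medicine": 12.0, "engineering_technology": 8.0}
world = {"arts_humanities": 1.0, "social_sciences": 2.0, "natural_sciences": 15.0,
         "life_sciences_medicine": 20.0, "engineering_technology": 10.0}
print(round(area_normalised_citations(uni, world), 2))  # 1.11
```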
A more consistent view of university performance might be found in the Shanghai or US News rankings.
Rankings Become Big Politics
University performance in global rankings has become a favorite weapon of politicians around the world. Scotland's First Minister has noted that there are five Scottish universities in the Times Higher Education World University Rankings and that the Scottish government will "continue to work with our universities to make sure that they continue to be that fantastic success story".
She did not mention that there are only two Scottish universities in the top 200 of the Shanghai rankings and in the US News Best Global Universities.
Thursday, October 08, 2015
Tokyo Metropolitan University is Still in the Japanese Top Ten
Until recently Tokyo Metropolitan University had an advertisement with Times Higher Education proclaiming their perfect score of 100 for citations. This year the score fell to 72.2, so now they just say "TMU ranks 9th among Japanese universities in the Times Higher Education World University Rankings 2015-2016".
I hope they got a discount.
Saturday, October 03, 2015
Where Should Rankers get Data From?
Times Higher Education (THE) have started publishing some basic university statistics on their rankings page: number of students, student-staff ratio, international students and female-male ratio.
Already some observers have noted that the data does not always match that found in institutional and official sources. I have heard that the number of students given for several German universities is significantly lower than that found in other sources.
The Online Citizen in Singapore has found that the island's two leading tertiary institutions, the National University of Singapore and Nanyang Technological University, have claimed 34% and 33% international students respectively on the THE site, although in 2013 the Minister of Education had claimed that the proportion of international students in Singaporean universities was only 16%.
There are several plausible and innocent explanations for this and similar discrepancies. It could be that part-time students, branch campuses, online students, permanent residents, research institutes, or commuters living in Malaysia are counted in one set of figures but not the other.
But there is a serious and general problem with institutional data for university rankings. Even if everybody concerned is completely honest, there are many points at which ambiguous definitions, conflicting estimates, duplication or omission of data can undermine the accuracy of ranking indicators. In the case of Germany there might be some argument over whether doctoral candidates count as students or as teaching and/or research staff.
QS used to have a validation hierarchy starting with national statistics, followed by institutional data, data from websites, old data, third party data and smart averages in that order. If it is still applied rigorously this would be the best approach.
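If that hierarchy is still applied, the logic amounts to a simple fallback chain through sources in descending order of trust. Here is a minimal sketch of the idea, with invented source labels and figures rather than anything taken from QS:

```python
# Sketch of a source-of-truth fallback chain. The source labels follow the
# order described above; the figures are invented and this is not QS's code.
PRIORITY = ["national_statistics", "institutional_submission", "website",
            "previous_year", "third_party", "smart_average"]

def resolve(value_by_source):
    """Return the most trusted available value and the source it came from."""
    for source in PRIORITY:
        value = value_by_source.get(source)
        if value is not None:
            return value, source
    return None, None

# No national figure is available, so the institutional return wins.
print(resolve({"institutional_submission": 31500, "website": 30000}))
# (31500, 'institutional_submission')
```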
I understand that both QS and THE reserve the right to overrule institutional data, although how strictly they do so I do not know. THE have a particularly difficult task since they allow universities to opt in or out as they please: should THE be too strict about the data supplied, a university might simply decide not to be ranked for a year.
On balance, it is probably good sense for ranking organisations to rely on publicly accessible data when they can and to minimise input from universities.
Friday, October 02, 2015
Very Interesting Rankings from Times Higher Education
The latest edition of the Times Higher Education (THE) World University Rankings has just been published, along with a big dose of self-flattery and congratulations to the winners of what is beginning to look more like a lottery than an objective exercise in comparative assessment.
The background to the story is that at the end of last year THE broke with their data suppliers Thomson Reuters (TR) and announced the dawn of a new era of transparency and accountability.
There were quite a few things wrong with the THE rankings, especially with the citations indicator, which supposedly measured research impact and was given nearly a third of the total weighting. This meant that THE was faced with a serious dilemma. Keeping the old methodology would be a problem but radical reform would raise the question of why THE would want to change what they claimed was a uniquely trusted and sophisticated methodology with carefully calibrated indicators.
It seems that THE have decided to make a limited number of changes but to postpone making a decision about other issues.
They have broadened the academic reputation survey, sending out forms in more languages and getting more responses from outside the USA. Respondents are now drawn from those with publications in the Scopus database, which is much larger than the Web of Science and is now also the source of the data on publications and citations. In addition, THE have excluded 649 "freakish" multi-author papers from their calculations and diluted the effect of the regional modification that boosted the scores in the citations indicator of low-performing countries.
These changes have led to implausible fluctuations with some institutions rising or falling dozens or hundreds of places. Fortunately for THE, the latest winners are happy to trumpet their success and the losers so far seem to have lapsed into an embarrassed silence.
When they were published on the 30th of September the rankings provided lots of headline fodder about who was up or down.
The Irish Times announced that the rankings showed Trinity College Dublin had fallen while University College Dublin was rising.
In the Netherlands the University of Twente bragged about its "sensationally higher scores".
Study International asserted that "Asia Falters" and that Britain and the US were still dominant in higher education.
The London Daily Telegraph claimed that European universities were matching the US.
The Hindu found something to boast about by noting that India was at last the equal of co-BRICS member Brazil.
Russian media celebrated the remarkable achievement of Lomonosov Moscow State University in rising 35 places.
And, of course, the standard THE narrative was trotted out again. British universities are wonderful but they will only go on being wonderful if they are given as much money as they want and are allowed to admit as many overseas students as they want.
The latest rankings support this narrative of British excellence by showing Oxford and Cambridge overtaking Harvard, which was pushed into sixth place. But is such a claim believable? Has anything happened in the labs or lecture halls at any of those places between 2014 and 2015 to cause such a shift?
In reality, what probably happened was that the Oxbridge duo were not actually doing anything better this year but that Harvard's eclipse came from a large drop, from 92.9 to 83.6 points, in THE's composite teaching indicator. Did Harvard's teaching really deteriorate over twelve months? It is more likely that there were relatively fewer American respondents in the THE survey but one cannot be sure because there are four other statistics bundled into the indicator.
While British universities appeared to do well, French ones appeared to perform disastrously. The École Normale Supérieure recorded a substantial gain, going from 78th to 54th place, but every other French institution in the rankings fell, sometimes by dozens of places. École Polytechnique went from 61st place to 101st, Université Paris-Sud from 120th to 188th, and the University of Strasbourg from the 201-225 band to 301-350, in every case because of a substantial fall in the citations indicator. If switching to Scopus was intended to help non-English-speaking countries, it did not do France any good.
Meanwhile, the advance of Asia has apparently come to an end or gone into screeching reverse. Many Asian universities slipped down the ladder although the top Chinese schools held their ground. Some Japanese and Korean universities fell dozens of places. The University of Tokyo went from 23rd to 43rd place, largely because of a fall in the citations indicator from 74.7 points to 60.9, and the University of Kyoto from 59th to 88th with another drop in the score for citations. Among the casualties was Tokyo Metropolitan University, which used to advertise its perfect score of 100 for citations on the THE website. This year, stripped of the citations for mega-papers in physics, its citation score dropped to a rather tepid 72.2.
The Korean flagships have also foundered. Seoul National University fell 35 places and the Korea Advanced Institute of Science and Technology (KAIST) 66, largely because of a decline in the scores for teaching and research. Pohang University of Science and Technology (POSTECH) fell 50 places, losing points in all indicators except income from industry.
The most catastrophic fall was in Turkey. There were four Turkish universities in the top 200 last year. All of them have dropped out. Several Turkish universities contributed to the Large Hadron Collider project with its multiple authors and multiple citations, and they also benefited from producing comparatively few research papers and from the regional modification, which gave them artificially high scores for the citations indicator in 2014 but not this year.
The worst case was Middle East Technical University, which had the 85th place in 2014, helped by an outstanding score of 92 for citations and reasonable scores for the other indicators. This year it was in the 501-600 band, with reduced scores for everything except Industry Income and a very low score of 28.8 for citations.
The new rankings appear to have restored the privilege given to medical research. In the upper reaches we find St George's, University of London, a medical school which according to THE is the world's leading university for research impact; Charité - Universitätsmedizin Berlin, a teaching hospital affiliated to Humboldt University and the Free University of Berlin; and Oregon Health and Science University.
It also appears that THE's methodology continues to give an undeserved advantage to small or specialized institutions such as the Scuola Superiore Sant'Anna in Pisa, which does not appear to be a truly independent university, the Copenhagen Business School, and Rush University in Chicago, the academic branch of a private hospital.
These rankings appear so far to have got a good reception in the mainstream press, although it is likely that before long we will hear some negative reactions from independent experts and from Japan, Korea, France, Italy and the Middle East.
THE, however, have just postponed the hard decisions that they will eventually have to make.
Monday, September 28, 2015
Japanese Barbarians Out to Crush Humanities!
The international education media has been getting very excited recently about what appeared to be an extraordinary act of cultural vandalism by the Japanese Ministry of Education.
It seems that the ministry has been behaving like the Taliban on a rampage through the Louvre and has ordered public universities to stop teaching the humanities and social sciences.
Noah Smith, an Assistant Professor of Finance at Stony Brook University (SUNY) and a freelance writer, wrote that public universities had been ordered to stop teaching social sciences, humanities and law, although apparently the "order" was non-binding.
Meanwhile Takamitsu Sawa announced in the Japan Times that the humanities were under attack and that someone on the ministry's panel of learned persons had said that students should study accounting software instead of Samuelson's Economics and translation instead of Shakespeare.
Eventually, the Financial Times revealed that the ministry had been misinterpreted and that the abolition of the humanities referred to a number of unneeded teacher training programs. This was supported by an authoritative comment by a former government official.
So it seems that Samuelson and Shakespeare are safe from the rampage of utilitarian barbarians.
Perhaps Japanese universities can now adopt the best practices of Columbia and the University at Buffalo for the teaching of art.
Sunday, September 27, 2015
Latest on the THE Rankings Methodology
Times Higher Education (THE) have now officially announced the methodology of next week's World University Rankings. There are some changes although major problems are still not addressed.
First, THE is now getting data from Scopus rather than Thomson Reuters. The Scopus database is more inclusive -- it covers 22,000 publications compared to 12,000 -- and includes more papers from non-English speaking countries so this may give an advantage to some universities in Eastern Europe and Asia.
Second, THE has tried to make its reputation survey more inclusive, making forms available in an additional six languages and reducing the bias towards the USA.
Third, 649 papers with more than 1,000 listed authors, mainly in physics, will not be counted for the citations indicator.
Fourth, the citations indicator will be divided into two halves with equal weighting. One half will be computed with and one half without the "regional modification", by which the overall citation impact score of a university is divided by the square root of the score for the country in which it is located. In previous editions of these rankings this modification gave a big boost to universities in low-scoring countries such as Chile, India, Turkey and Russia.
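Expressed as a rough formula (my reading of the published description, not THE's actual code), the 2015 indicator averages a university's field-normalised citation impact with a regionally modified version of the same number:

```python
import math

def citations_indicator_2015(university_impact, country_impact):
    """Sketch of the split indicator described above: half the raw
    field-normalised impact, half the regionally modified impact."""
    modified = university_impact / math.sqrt(country_impact)
    return 0.5 * university_impact + 0.5 * modified

# A university with impact 0.8 in a country averaging 0.25 was previously
# boosted to 0.8 / 0.5 = 1.6; under the 50/50 split it gets about 1.2.
print(round(citations_indicator_2015(0.8, 0.25), 2))  # 1.2
```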
It is likely that institutions such as Bogazici University, Panjab University, Federico Santa Maria Technical University and Universite Cadi Ayyad, which have benefited from contributing to mega-papers such as those emanating from the Large Hadron Collider project, will suffer from the exclusion of these papers from the citations indicator. Their pain will be increased by the dilution of the regional modification.
It is possible that such places may get some compensation in the form of more responses in the reputation survey or higher publication counts in the Scopus database but that is far from certain. I suspect that several university administrators are going to be very miserable next Thursday.
There is something else that should not be forgotten. The scores published by THE are not raw data but standardised scores derived from standard deviations and means. Since THE are including more universities in this year's rankings and since most of them are likely to have low scores for most indicators it follows that the overall mean scores of ranked universities will fall. This will have the effect of raising the standardised scores of the 400 or so universities that score above the mean. It is likely that this effect will vary from indicator to indicator and so the final overall scores will be even more unpredictable and volatile.
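A toy calculation with invented scores illustrates the mechanism (THE's actual standardisation procedure may differ in detail): adding a batch of low scorers pulls the mean down, so the standardised score of a university above the mean rises even though its raw data have not changed.

```python
from statistics import mean, pstdev

def z_scores(raw):
    """Simple standardised scores: (value - mean) / standard deviation."""
    m, s = mean(raw), pstdev(raw)
    return [(x - m) / s for x in raw]

old_pool = [80, 70, 60, 50, 40]            # invented raw indicator scores
new_pool = old_pool + [20, 15, 10, 10, 5]  # same universities plus newly ranked low scorers

print(round(z_scores(old_pool)[0], 2))  # 1.41 (top university's standardised score)
print(round(z_scores(new_pool)[0], 2))  # 1.68 (same raw score of 80, higher standardised score)
```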
Tuesday, September 22, 2015
Looking Inside the Engine: The Structure of the Round University Rankings
Many of those interested in international university rankings have been frustrated by the lack of transparency in the Quacquarelli Symonds (QS) and the Times Higher Education (THE) rankings.
The QS rankings assign a fifty per cent weighting to two surveys collected from a variety of channels -- I think six for the employer survey and five for the academic survey -- with different and fluctuating response rates.
The THE rankings have lumped five indicators in a Teaching cluster, three in a Research cluster and three in an International cluster. So how can anyone figure out just what is causing a university to rise or fall in the rankings?
A major step forward in transparency has now come with the recent publication of the Round University Rankings (RUR) by a Russian organisation that uses data from Thomson Reuters (TR), who provided the data for the Times Higher Education world and regional rankings from 2009 until the end of last year.
RUR have published the separate scores for all of the indicators. They have retained 12 out of the 13 indicators used in the THE rankings from 2011 to 2014, dropping income from industry as a percentage of research income, and added another eight.
I doubt that RUR could afford to pay TR very much for the data and I suspect that TR's motive in allowing the dissemination of such a large amount of information is to preempt THE or anyone else trying to move upstream in the drive to monetise data.
It is now possible to see whether the various indicators are measuring the same thing and hence are redundant, whether and to what extent they are associated with other indicators and whether there is any link between markers of input and markers of output.
Here is a crude analysis of a very small sample of sixteen, one in fifty, of the RUR rankings starting with Harvard and ending with the Latvia Transport and Telecom Institute. I hope that a more detailed analysis of the entire corpus can be done in a few weeks.
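For anyone who wants to replicate or extend the exercise, the arithmetic is straightforward once the indicator scores have been copied from the RUR profiles into a table. A minimal sketch with pandas, using placeholder numbers and illustrative column names rather than RUR's exact labels:

```python
import pandas as pd

# Each row is one sampled university; each column is an indicator score copied
# by hand from its RUR profile. The numbers below are placeholders, not real data.
sample = pd.DataFrame({
    "teaching_reputation":    [100, 82, 65, 40, 22],
    "research_reputation":    [100, 85, 63, 38, 20],
    "citations_per_staff":    [95, 70, 55, 30, 25],
    "papers_per_staff":       [93, 72, 52, 33, 24],
    "international_students": [60, 75, 40, 35, 50],
})

# Pairwise Pearson correlations between the indicators.
print(sample.corr().round(3))
```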
The combined indicator groups
Three groups, Teaching, Research, Financial Sustainability, are fairly closely associated with one another. The teaching cluster correlates .634 with Research and .735 with Financial Sustainability. Research correlates .702 with Financial Sustainability.
The International Diversity group appears to be the odd one out here. It correlates significantly with Research (.555) but not with Teaching or Financial Sustainability. This suggests that internationalisation, at least in the form of recruiting more international students, may not always be a strong marker of quality.
The Reputation Indicators
Looking at the three reputation indicators, teaching, international teaching and research, we can see that for practical purposes they are measuring the same thing. The correlation between the Research Reputation and Teaching Reputation scores is .986 and between Research Reputation and International Teaching Reputation .925. Between Teaching Reputation and International Teaching Reputation it is .941.
Alex Usher of Higher Education Strategy Associates has claimed a correlation of .99 between teaching and research reputation scores in the THE rankings up to 2014. The figures from the RUR rankings are a bit lower but essentially the reputation indicators are measuring the same thing, whatever it is, and there is no need to count them more than once.
Other Unnecessary Indicators
The correlation between Academic Staff per Students and Academic Staff per Bachelor Degrees is very close at .834. The latter, which has not appeared in any previous ranking, could be omitted without a significant loss of information.
There is an extremely high correlation, .989, between Citations per Academic and Research Staff and Papers per Academic and Research Staff. It sounds rather counter-intuitive, but it seems that as a measure of research productivity one is as good as the other, at least when dealing with more than a few hundred elite universities.
There is a correlation of .906 between Institutional Income per Academic Staff and Institutional Income per Student.
It would appear then that the THE rankings of 2011-2014 with 13 indicators had passed the point beyond which additional indicators become redundant and provide no additional information.
Input and Outputs
There are some clues about the possible relationship between indicators that could be regarded as inputs and those that might be counted as outputs.
Academic Staff per Student does not significantly affect teaching reputation (.350, sig .183). It is positively and significantly associated only with doctoral degrees per bachelor degrees (.510). The correlation with the overall score is, however, quite high and significant at .552.
There is some evidence that a diverse international faculty might have a positive impact on research output and quality. The correlations between International Faculty and Normalised Citation Impact, Papers per Academic and Research Staff and the overall score are positive and significant. On the other hand, the correlations between international collaboration and overall score and international students and overall score are weak and insignificant.
Money seems to help, at least as far as research is concerned. There are moderately high and significant correlations between International Income per Academic Staff and Citations per Academic and Research Staff, Papers per Academic and Research Staff, Normalised Citation Impact and the overall score.
Research Income per Academic Staff correlates highly and significantly with Teaching Reputation, International Teaching Reputation, Research Reputation, Citations per Academic and Research Staff, Papers per Academic and Research Staff, Normalised Citation Impact and the overall score.
Saturday, September 19, 2015
Who's Interested in the QS World University Rankings?
And here are the first ten results (excluding this blog and the QS page) from a Google search for this year's QS world rankings. Compare with ARWU and RUR. Does anyone notice any patterns?
Canada falls in World University Rankings' 2015 list
UBC places 50th, SFU 225th in QS World University Rankings
Who's interested in the Round University Rankings?
The top results from a Google search for responses to the recently published Round University Rankings
New Ranking from Russia
Who's Interested in the Shanghai Rankings?
First results from a Google search for responses to the latest edition of the Shanghai world rankings.
Radboud University: 132nd place on ARWU/Shanghai ranking 2015
Friday, September 18, 2015
The Italian Ranking Dance
As noted in the previous post, the latest QS world rankings have not been well received by the Italian blog ROARS. Their opinion of the reaction of the Italian media and public was summarised by posting the following video
Who believes QS?
From the Italian site ROARS: Return on Academic Research (translation):
According to the Quacquarelli Symonds (QS) ranking, something must have happened at Siena over the past year: it has lost 220 (two hundred) places. Pavia and Turin have collapsed by more than 150 places, dropping out of the top 500, while Pisa, Tor Vergata, Federico II of Naples, the Catholic University of Milan, Genoa, Perugia and Bicocca have each lost more than 100 positions. The meltdown is simply due to the fact that QS has changed the methodology used to construct its ranking. Only the Polytechnics of Milan and Turin gained places, as predicted by Richard Holmes more than a month ago, when news of the change in methodology spread. I hope that the collapse of the Italian universities in 2015, "certified" by QS and caused by the change of methodology, will be a lesson: the rankings are not a serious way to evaluate the performance of universities. Unfortunately, judging from the press releases of POLIMI and POLITO, it seems that the lesson has not sunk in.
Thursday, September 17, 2015
University Quality and Bias
Anticipating requests, here is the link for a significant paper by Christopher Claassen of the University of Essex.
Measuring University Quality by Christopher Claassen
http://www.chrisclaassen.com/University_rankings.pdf
Tuesday, September 15, 2015
Auto-Induced Fly Catching
Taking a break from the most exciting or second most exciting educational event this month, I have just received a message from Google Scholar Citations asking if I wanted to add the following to my profile:
JG LANE, RJ Holmes
BRITISH VETERINARY JOURNAL 128 (9), 477-&, 1972
Unfortunately, I couldn't claim credit for this work. In 1972 I was still immersed in the Irish Home Rule Debate and the rise of the Sokoto Caliphate.
I wonder whether this was an attempt to develop an environmentally friendly form of pest control or is this a serious mental disorder among certain kinds of dogs?
Is it too late to submit a nomination for the IgNobel awards?
Tuesday, September 08, 2015
Global Ranking From Russia
A very interesting new set of global rankings appeared seven days ago, the Round University Ranking from Russia. The organization is rather mysterious, although probably not so much in Russia and nearby places.
The rankings are based entirely on data from Thomson Reuters (TR) and the structure and methodology are similar to last year's Times Higher Education (THE) World University Rankings. They include 12 out of the 13 indicators used in the 2014 THE rankings, with only the percentage of research income derived from industry omitted. There are eight more measures making a total of twenty, five each for teaching, research, international diversity and financial sustainability.
There is a normalized citations indicator with a weighting of only eight per cent, balanced by a simple count of citations per academic and research staff, also with eight per cent.
Altogether the three reputation indicators count for 18 per cent of the weighting, compared to 33 per cent in the 2014 THE rankings or 50 per cent in the Quacquarelli Symonds (QS) world rankings.
To date these rankings appear to have been ignored by the world media except in Russia and its neighbors. Compared to the excitement with which the THE or even the QS or Shanghai rankings are greeted this might seem a bit odd. If the THE rankings were sophisticated because they had 13 indicators then these are even more so with 20. If the THE rankings were trusted because they were powered by Thomson Reuters so are these. If the survey in the THE rankings was real social science then so is this.
Could it be that the THE rankings are beloved of the Russell Group and its like around the world not because of their robustness, comprehensiveness, transparency or superior methodology but because of the glamour derived from a succession of prestigious events, networking dinners and exclusive masterclasses designed to appeal to the status anxieties of upwardly or downwardly mobile university administrators?
There are some problems with the RUR rankings. There is incoherence about what the indicators are supposed to measure. The methodology says that '[I]t is assumed that "undergraduate" level is the core of higher education', so there is an indicator measuring academic staff per bachelor degree. But then we have a weighting of eight per cent for doctoral degrees per bachelor degrees.
One excellent thing about these rankings is that the scores for all of the indicators can be found in the profiles of the individual universities. If anyone has the energy and time there are some important questions that could be answered. Is the correlation between teaching and research reputation so high that a distinction between the two is redundant? Is income or number of faculty a better predictor of research performance?
The presentation leaves a lot to be desired. Cooper League? The explanation of the methodology verges on the incomprehensible. Can somebody tell RUR to get a competent human to translate for them and forget about the Google Translator?
The economics of the relationship between TR and RUR are puzzling. There are no obvious signs that RUR has a large income from which to pay TR for the data and I doubt that TR has passed it on for purely altruistic reasons. Could it be that TR are simply trying to undercut THE's attempt to go it alone? If nothing else, it could undermine any THE plans to go into the benchmarking and consulting trade.
Anyway, here are some first places. No surprises here, except maybe for the Scuola Normale Superiore Pisa. You can find out exactly where the strengths of that school are by checking the scores for the twenty indicators.
Overall: Harvard
Teaching: Caltech
Research: Chicago
International Diversity: EPF Lausanne
Financial Sustainability: Caltech
China: Peking 49th
Russia: Moscow State 187th
India: IIT Kharagpur 272nd
UK: ICL 5th
Germany: Munich 22nd
France: Ecole Polytechnique 19th
Egypt: American University Cairo 571st
South Africa: Cape Town 201st
Brazil: Sao Paulo 65th
Italy: Scuola Normale Superiore Pisa 66th
Turkey: METU 308th
Malaysia: Universiti Putra Malaysia 513th
Australia: ANU 77th
Japan: Tokyo 47th
Korea: KAIST 41st.
Sunday, September 06, 2015
More on Alternative Indicators for Ranking African Universities
Continuing with our exploration of how to rank universities outside the world's top 200 or 400, where it is necessary to develop robust and sophisticated techniques of standardisation, normalisation, scaling, regional modification, taking away the number you first thought of (just kidding), verification, weighting and validation to figure out that Caltech's normalised research impact is slightly better than Harvard's or that Cambridge is a bit more international than that place in the other Cambridge, here is a ranking of African universities according to recommendations on LinkedIn.
There are obvious problems with this indicator, not least of which is the tiny number of responses compared to all the students on the continent. It might, however, be the precursor to a useful survey of student opinion or graduate employability later on.
First place goes to the University of South Africa, an open distance education institution whose alumni include Nelson Mandela, Cyril Ramaphosa and Jean-Bertrand Aristide. Makerere University, the University of Nairobi and Kenyatta University do well.
Data was compiled on the 28th and 29th of July. The list includes all universities in the THE experimental African ranking, the top fifty African universities in Webometrics, plus the top universities in Webometrics or 4icu of any country not otherwise included.
Rank | University | Country | Number of LinkedIn Recommendations |
---|---|---|---|
1 | University of South Africa | South Africa | 154 |
2 | Makerere University | Uganda | 116 |
3 | University of the Witwatersrand | South Africa | 94 |
4 | University of Ibadan | Nigeria | 86 |
5 | University of Johannesburg | South Africa | 79 |
6 | University of Nairobi | Kenya | 75 |
7 | Cairo University | Egypt | 67 |
8 | Stellenbosch University | South Africa | 63 |
9 | University of Pretoria | South Africa | 62 |
10 | Kenyatta University | Kenya | 61 |
11 | University of Cape Town | South Africa | 60 |
12 | University of Lagos | Nigeria | 58 |
13 | Addis Ababa University | Ethiopia | 55 |
14 | Obafemi Awolowo University | Nigeria | 50 |
15 | Alexandria University | Egypt | 47 |
16 | Rhodes University | South Africa | 42 |
17 | Jomo Kenyatta University of Agriculture and Technology | Kenya | 40 |
18 | American University in Cairo | Egypt | 28 |
19 | University of Kwazulu-Natal | South Africa | 26 |
20 | University of Ilorin | Nigeria | 24 |
21 | University of Zimbabwe | Zimbabwe | 22 |
22 | Kwame Nkrumah University of Science and Technology | Ghana | 21 |
23 | Helwan University | Egypt | 20 |
24= | North West University | South Africa | 18 |
24= | University of Ghana | Ghana | 18 |
24= | University of Port Harcourt | Nigeria | 18 |
27= | Durban University of Technology | South Africa | 16 |
27= | University of Dar Es Salaam | Tanzania | 16 |
29= | Nelson Mandela Metropolitan University | South Africa | 14 |
29= | University of the Western Cape | South Africa | 14 |
31 | Cape Peninsula University of Technology | South Africa | 13 |
32 | Mansoura university | Egypt | 12 |
33 | University of Botswana | Botswana | 10 |
34 | Covenant University | Nigeria | 9 |
35= | Zagazig University | Egypt | 7 |
35= | Suez Canal university | Egypt | 7 |
37 | Tanta University | Egypt | 6 |
38= | Assiut University | Egypt | 5 |
38= | Université Constantine 1 | Algeria | 5 |
40= | University of the Free State | South Africa | 4 |
40= | Universite des Sciences et de la Technologie Houari Boumediene | Algeria | 4 |
42+ | South Valley University | Egypt | 3 |
42+ | Université Cadi Ayyad | Morocco | 2 |
42+ | University of Tunis | Tunisia | 2 |
42+ | University of Namibia | Namibia | 1 |
42+ | University of Mauritius | Mauritius | 1 |
42+ | Université Cheikh Anta Diop | Senegal | 0 |
42+ | Université Mohammed V Souissi | Morocco | 0 |
42+ | University of Khartoum | Sudan | 0 |
42+ | University of Malawi | Malawi | 0 |
42+ | Université Hassan II Ain Chock | Morocco | 0 |
42+ | Kafrelsheikh University | Egypt | 0 |
42+ | University of Zambia | Zambia | 0 |
42+ | Bejaia university | Algeria | 0 |
42+ | Minia University | Egypt | 0 |
42+ | Benha University | Egypt | 0 |
42+ | Universidade Católica de Angola | Angola | 0 |
42+ | Université de Lomé | Togo | 0 |
42+ | Université Abou Bekr Belkaid | Algeria | 0 |
42+ | Beni-Suef University | Egypt | 0 |
42+ | Université Omar Bongo | Gabon | 0 |
42+ | University of the Gambia | Gambia | 0 |
42+ | Université de Toliara | Madagascar | 0 |
42+ | Université Kasdi Merbah Ouarg | Algeria | 0 |
42+ | Universite de la Reunion | Reunion | 0 |
42+ | Université d'Abomey-Calavi | Benin | 0 |
42+ | Universidade Eduardo Mondlane | Mozambique | 0 |
42+ | Université de Ouagadougou | Burkina Faso | 0 |
42+ | University of Rwanda | Rwanda | 0 |
42+ | Universite de Bamako | Mali | 0 |
42+ | University of Swaziland | Swaziland | 0 |
42+ | Université Félix Houphouët-Boigny | Ivory Coast | 0 |
42+ | Université de Kinshasa | Democratic Republic of the Congo | 0 |
42+ | National University of Lesotho | Lesotho | 0 |
42+ | Universidade Jean Piaget de Cabo Verde | Cape Verde | 0 |
42+ | N Engineering S of Sfax | Tunisia | 0 |
42+ | Université Marien Ngouabi | Republic of the Congo | 0 |
42+ | University of Liberia | Liberia | 0 |
42+ | Université Djillali Liabes | Algeria | 0 |
42+ | Université Abdou Moumouni de Niamey | Niger | 0 |
42+ | Misurata University | Libya | 0 |
42+ | Université de Dschang | Cameroon | 0 |
42+ | Université de Bangui | Central African Republic | 0 |
42+ | Université de Nouakchott | Mauritania | 0 |
42+ | Eritrea Institute of Technology | Eritrea | 0 |
42+ | Université de Djibouti | Djibouti | 0 |
42+ | University of Seychelles | Seychelles | 0 |
42+ | Mogadishu University | Somalia | 0 |
42+ | Universidad Nacional de Guinea Ecuatorial | Equatorial Guinea | 0 |
42+ | Universite Gamal Abdel Nasser de Conakry | Guinea | 0 |
42+ | University of Makeni | Sierra Leone | 0 |
42+ | John Garang Memorial University | South Sudan | 0 |
42+ | Hope Africa University | Burundi | 0 |
42+ | Universite de Moundou | Chad | 0 |
42+ | Universite de Yaounde I | Cameroon | 0 |
Tuesday, September 01, 2015
Best German and Austrian Universities if you Want to get Rich
If you want to go to a university in Germany or Austria and get rich afterwards, the website Wealth-X has a ranking for you. It counts the number of UHNW (ultra high net worth) alumni, those with US$30 million or more.
Here are the top five with the number of UHNW individuals in brackets.
1. University of Cologne (18)
2. University of Munich (14)
3. University of Hamburg (13)
4. University of Freiburg (11)
5. University of Bonn (11)
There may well be protests about who should be first. In tenth place is "Ludwig Maximilians University Munich (LMU Munich)", which I assume is another name for the University of Munich, with six UHNW alumni.
Monday, August 31, 2015
Update on changes in ranking methodology
Times Higher Education (THE) have been preparing the ground for methodological changes in their world rankings. A recent article by Phil Baty announced that the new world rankings scheduled for September 30 will not count the citations to 649 papers, mainly in particle physics, with more than 1000 authors.
This is perhaps the best that is technically and/or commercially feasible at the moment, but it is far from satisfactory. Some of these publications deal with the most basic questions about the nature of physical reality, and it is a serious distortion to exclude them from the ranking methodology. There have been complaints about this: Pavel Krokovny's comment was noted in a previous post, while Mete Yeyisoglu argues that:
"Fractional counting is the ultimate solution. I wish you could have worked it out to use fractional counting for the 2015-16 rankings.
The current interim approach you came up with is objectionable.
Why 1,000 authors? How was the limit set? What about 999 authored-articles?
Although the institution I work for will probably benefit from this interim approach, I think you should have kept the same old methodology until you come up with an ultimate solution.
This year's interim fluctuation will adversely affect the image of university rankings."
Baty provides a reasonable answer to the question why the cut-off point is 1,000 authors.
But there is a fundamental issue developing here that goes beyond ranking procedure. The concept of authorship for a philosophy paper written entirely by a single person, or a sociological study from a small research team, is very different from that of the huge multinational, capital- and labour-intensive publications in which the number of collaborating institutions exceeds the number of paragraphs and there are more authors than sentences.
Fractional counting does seem to be the only fair and sensible way forward and it is now apparently on THE's agenda although they have still not committed themselves.
The objection could be raised that while the current THE system gives a huge reward to even the least significant contributing institution, fractional counting would give major research universities insufficient credit for their role in important research projects.
A long-term solution might be to draw a distinction between the contributors to and the authors of the mega papers. For most publications there would be no need to draw such a distinction, but for those with some sort of input from dozens, hundreds or thousands of people it might be feasible to allot half the credit to all those who had anything to do with the project and the other half to those who meet the standard criteria of authorship. There would no doubt be a lot of politicking about who gets the credit, but that would be nothing new.
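To make the arithmetic concrete, here is a minimal sketch in Python of how plain fractional counting, and the half-and-half split between contributors and authors suggested above, might be computed. The paper data, the 50-contributor threshold and all institution names are hypothetical; nothing here reflects THE's actual procedures or data.

from collections import defaultdict

def fractional_credit(papers):
    # Plain fractional counting: each paper's citations are divided
    # equally among all listed institutions.
    credit = defaultdict(float)
    for paper in papers:
        institutions = paper["institutions"]
        share = paper["citations"] / len(institutions)
        for inst in institutions:
            credit[inst] += share
    return credit

def split_credit(papers, mega_threshold=50):
    # Hypothetical 'mega paper' rule: half the citations are shared by
    # everyone involved, the other half only by the institutions meeting
    # the standard criteria of authorship. Ordinary papers are simply
    # counted fractionally. The threshold of 50 contributors is invented.
    credit = defaultdict(float)
    for paper in papers:
        contributors = paper["institutions"]
        authors = paper.get("author_institutions", contributors)
        if len(contributors) <= mega_threshold:
            share = paper["citations"] / len(contributors)
            for inst in contributors:
                credit[inst] += share
        else:
            for inst in contributors:
                credit[inst] += 0.5 * paper["citations"] / len(contributors)
            for inst in authors:
                credit[inst] += 0.5 * paper["citations"] / len(authors)
    return credit

# Toy data: a two-institution paper and a 'mega paper' with 100 contributing
# institutions, of which only ten meet the standard criteria of authorship.
papers = [
    {"citations": 40, "institutions": ["Univ A", "Univ B"]},
    {"citations": 1000,
     "institutions": [f"Univ {i}" for i in range(100)],
     "author_institutions": [f"Univ {i}" for i in range(10)]},
]
print(fractional_credit(papers)["Univ 0"])  # 10.0
print(split_credit(papers)["Univ 0"])       # 55.0 (5.0 as contributor + 50.0 as author)

In the toy output an institution meeting the authorship criteria keeps eleven times the credit of a minor contributor (55.0 against 5.0), whereas under plain fractional counting every contributor receives the same 10.0.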
Duncan Ross, the new Data and Analytics Director at THE, seems to be thinking along these lines.
"In the longer term there are one technical and one structural approach that would be viable. The technical approach is to use a fractional counting approach (2932 authors? Well you each get 0.034% of the credit). The structural approach is more of a long term solution: to persuade the academic community to adopt metadata that adequately explains the relationship of individuals to the paper that they are ‘authoring’. Unfortunately I’m not holding my breath on that one."The counting of citations to mega papers is not the only problem with the THE citations indicator. Another is the practice of giving a boost to universities in underperforming countries. Another item by Phil Baty quotes this justification from Thomson Reuters, THE's former data partner.
“The concept of the regional modification is to overcome the differences between publication and citation behaviour between different countries and regions. For example some regions will have English as their primary language and all the publications will be in English, this will give them an advantage over a region that publishes some of its papers in other languages (because non-English publications will have a limited audience of readers and therefore a limited ability to be cited). There are also factors to consider such as the size of the research network in that region, the ability of its researchers and academics to network at conferences and the local research, evaluation and funding policies that may influence publishing practice.”
THE now appear to agree that this is indefensible in the long run and hope that a more inclusive academic survey and the shift to Scopus, with broader coverage than the Web of Science, will lead to this adjustment being phased out.
It is a bit odd that TR and THE should have introduced income, in three separate indicators, and international outlook, in another three, as markers of excellence, but then included a regional modification to compensate for limited funding and international contacts.
THE are to be congratulated for having put fractional counting and phasing out the regional modification on their agenda. Let's hope it doesn't take too long.
While we are on the topic, there are some more things about the citation indicator to think about. First, to repeat a couple of points mentioned in the earlier post.
- Reducing the number of fields or doing away with normalisation by year of citation. The more cells into which any given citation can be dropped, the greater the chance of statistical anomalies when a cluster of citations meets a low world average for a particular field, year of publication and year of citation (300 fields in Scopus?); a rough sketch of this kind of normalisation is given at the end of this post.
- Reducing the weighting for this indicator. Perhaps citations per paper normalized by field is a useful instrument for comparing the quality of research of MIT, Caltech, Harvard and the like but it might be of little value when comparing the research performance of Panjab University and IIT Bombay or Istanbul University and Bogazici.
Some other things THE could think about.
- Adding a measure of overall research impact, perhaps simply by counting citations. At the very least, stop calling field- and year-normalised, regionally modified citations per paper a measure of research impact. Call it research quality or something like that.
- Doing something about secondary affiliations. So far this seems to have been a problem mainly for the Highly Cited Researchers indicator in the Shanghai ARWU, but it may not be very long before more universities realise that a few million dollars for adjunct faculty could have a disproportionate impact on publication and citation counts.
- Also, perhaps THE should consider excluding self-citations (or even citations within the same institution, although that would obviously be technically difficult). Self-citation caused a problem in 2010 when Dr El Naschie's diligent citation of himself and a few friends lifted Alexandria University to fourth place in the world for research impact. Something similar might happen again now that THE are using a larger and less selective database.
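For readers who want to see what field- and year-normalisation amounts to, here is a minimal sketch with invented world-average baselines. The real calculation uses far more cells and a more elaborate aggregation, but the sensitivity to small cells with low averages, which underlies the points above, is the same in principle.

# Minimal sketch of field- and year-normalised citation impact.
# The world averages below are invented for illustration; real systems
# compute them per field, year of publication and year of citation.
world_average = {
    ("particle physics", 2013): 25.0,
    ("philosophy", 2013): 2.0,
}

def normalised_impact(papers):
    # Average ratio of each paper's citations to the world average for
    # its (field, year) cell. A cell with a very low average lets a
    # handful of citations produce an enormous ratio.
    ratios = [p["citations"] / world_average[(p["field"], p["year"])]
              for p in papers]
    return sum(ratios) / len(ratios)

# Two universities with identical citation counts but very different scores,
# because one paper sits in a cell with a tiny world average.
uni_x = [{"field": "particle physics", "year": 2013, "citations": 50}]
uni_y = [{"field": "philosophy", "year": 2013, "citations": 50}]
print(normalised_impact(uni_x))  # 2.0
print(normalised_impact(uni_y))  # 25.0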
Friday, August 28, 2015
The Richest University in China ...
... is Tsinghua University, but Zhejiang, Peking and Shanghai Jiao Tong Universities appear to be more productive relative to their income, as measured by the Publications indicator in the Shanghai rankings (a rough calculation follows the list below).
China Daily has just published a list of the top ten universities in China ranked according to annual income as reported to the Ministry of Education. Here they are with the Publications score (papers in the Science Citation Index and the Social Science Citation Index in 2014) in brackets.
1. Tsinghua University 17.56 billion yuan (63.8)
2. Zhejiang University 15.64 billion yuan (68.5)
3. Peking University 12.85 billion yuan (64)
4. Shanghai Jiao Tong University 11.89 billion yuan (68.5)
5. Fudan University 7.71 billion yuan (56.1)
6. Wuhan University 6.83 billion yuan (45.8)
7. Jilin University 6.82 billion yuan (50.7)
8. Huazhong University of Science and Technology 6.81 billion yuan (53.1)
9. Sun Yat-sen University 6.69 billion yuan (54.9)
10. Sichuan University 6.58 billion yuan (54.2).
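As a back-of-the-envelope check on the productivity point, the sketch below divides each Publications score by the reported income in billions of yuan. The ratio is my own crude illustration, not anything that appears in the Shanghai rankings or the China Daily report.

# Publications score per billion yuan of reported income, using the figures
# quoted above for the top four. The ratio is illustrative only.
data = {
    "Tsinghua": (17.56, 63.8),
    "Zhejiang": (15.64, 68.5),
    "Peking": (12.85, 64.0),
    "Shanghai Jiao Tong": (11.89, 68.5),
}

for name, (income_bn_yuan, pub_score) in data.items():
    print(f"{name}: {pub_score / income_bn_yuan:.2f} Publications points per billion yuan")

# Tsinghua 3.63, Zhejiang 4.38, Peking 4.98, Shanghai Jiao Tong 5.76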