Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Saturday, April 20, 2013
The Leiden Ranking
The Leiden ranking for 2013 is out. This is produced by the Centre for Science and Technology Studies (CWTS) at Leiden University and represents pretty much the state of the art in assessing research publications and citations.
A variety of indicators are presented with several different settings, but no overall winner is declared, which means that these rankings are not going to get the publicity given to QS and Times Higher Education.
Here are the top universities, using the default settings provided by CWTS:
Total Publications: Harvard
Citations per Paper: MIT
Normalised Citations per Paper: MIT
Quality of Publications: MIT
There are also indicators for international and industrial collaboration that I hope to discuss later.
It is also noticeable that high flyers in the Times Higher Education citations indicator, such as Alexandria University, Moscow Engineering Physics Institute (MEPhI), Hong Kong Baptist University and Royal Holloway, do not figure at all in the Leiden Ranking. What happened to them?
How could MEPhI, equal first in the world for research influence according to THE and Thomson Reuters, fail to even show up in the normalised citation indicator in the Leiden Ranking?
Firstly, Leiden have collected data only for the top 500 universities in the world by number of publications in the Web of Science. That alone would have been sufficient to keep these institutions out of the rankings.
In addition, Leiden use fractionalised counting as a default setting, so that the impact of multiple-author publications is divided by the number of university addresses. This would drastically reduce the impact of publications like the Review of Particle Physics.
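The difference between full and fractional counting can be sketched in a few lines of Python (the citation figure and institution names here are hypothetical, purely to illustrate the mechanism):

```python
# Illustrative sketch only -- not CWTS's actual code.
# Full counting: every collaborating university gets the paper's whole
# citation count. Fractional counting: the count is divided by the
# number of university addresses on the paper.

def full_counting(citations, universities):
    """Each collaborating university is credited with all the citations."""
    return {u: citations for u in universities}

def fractional_counting(citations, universities):
    """Citations are split equally across the university addresses."""
    share = citations / len(universities)
    return {u: share for u in universities}

# A hypothetical Review of Particle Physics-style paper: heavily cited,
# with a very large number of contributing institutions.
unis = [f"University {i}" for i in range(1, 121)]  # 120 addresses
cites = 3000

print(full_counting(cites, unis)["University 1"])        # 3000
print(fractional_counting(cites, unis)["University 1"])  # 25.0
```

With 120 addresses, each institution's credit shrinks from 3000 citations to 25, which is why fractional counting so drastically blunts the effect of mega-collaboration papers.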
Also, by field Leiden mean five broad subject groups, whereas Thomson Reuters appears to use a larger number (21, if they use the same system as they do for highly cited researchers). There is accordingly more chance of anomalous cases having a great influence in the THE rankings.
THE and Thomson Reuters would do well to look at the multi-authored, and most probably soon to be multi-cited, papers that were published in 2012 and look at the universities that could do well in 2014 if the methodology remains unchanged.
Tuesday, April 02, 2013
Combining Rankings
Meta University Ranking has combined the latest ARWU, QS and THE World Rankings. Universities are ordered by place, so Harvard gets the highest position (i.e. the lowest score) with an average of 2.67 (1st in ARWU, 3rd in QS and 4th in THE).
After that there is MIT, Cambridge, Caltech and Oxford.
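The calculation behind that figure is simple enough to sketch (Harvard's three placements are the ones cited above; the averaging is just an unweighted mean of ranks):

```python
# Mean-rank combination as described above: average a university's
# placements across the three rankings; lower is better.
ranks = {"Harvard": [1, 3, 4]}  # ARWU, QS, THE placements from the post

def mean_rank(placements):
    """Unweighted average of ranking positions."""
    return sum(placements) / len(placements)

score = mean_rank(ranks["Harvard"])
print(round(score, 2))  # 2.67
```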
Tuesday, March 19, 2013
Some More on the THE World University Rankings 2
I have calculated the mean scores for the indicator groups in the 2012-13 Times Higher Education world university rankings. The mean scores for the 400 universities included in the published 2012-13 rankings are:
Teaching 41.67
International Outlook 52.35
Industry Income 50.74
Research 40.84
Citations 65.25
For Industry Income, N is 363 since 37 universities, mainly in the US, did not submit data. This might be a smart move if the universities realized that they were likely to receive a low score. N is 400 for the others.
There are considerable differences between the indicators which are probably due to Thomson Reuters' methodology. Although THE publishes data for 200 universities on its website and another 200 on an iPad/iPhone app there are in fact several hundred more universities that are not included in the published rankings but whose scores are used to calculate the overall mean from which scores for the ranked universities are derived.
A higher score on an indicator means a greater distance above the mean of all the institutions in the Thomson Reuters database.
The high scores for citations mean that there is a large gap between the top 400 and the lesser places outside the top 400.
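One simple way to picture this kind of relative scoring is a z-score: the number of standard deviations an institution sits above the database mean. (Thomson Reuters' actual method standardises and then converts to cumulative probability scores, so this sketch, with made-up raw values, only illustrates the idea.)

```python
import statistics

# Hypothetical raw indicator values for a whole database of institutions.
database = [5, 8, 12, 20, 35, 60, 90]

mu = statistics.mean(database)
sigma = statistics.stdev(database)

def z(value):
    """Distance above the database mean, in standard deviations."""
    return (value - mu) / sigma

# A top-400 institution scores far above the mean; a middling one near it.
print(round(z(90), 2))
print(round(z(20), 2))
```

The point of the post follows directly: if the unranked tail of the database drags the mean down, even a modest top-400 university can sit a long way above it on citations.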
I suspect that the low scores for teaching and research are due to the influence of the academic survey which contributes to both indicator clusters. We have already seen that after the top six, the curve for the survey is relatively flat.
The citations indicator already has a disproportionate influence, contributing 30% to the overall weighting. That 30% is of course a maximum. Since universities on average get higher scores for citations than for the other indicators, it has in practice a correspondingly greater effective weighting.
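A rough check of this point, using the mean scores above and the published 2012-13 weightings (30% each for teaching, research and citations, 7.5% for international outlook, 2.5% for industry income), suggests citations supply close to 40% of the average overall score:

```python
# Mean indicator scores from the post, and THE's published 2012-13 weights.
means = {"teaching": 41.67, "research": 40.84, "citations": 65.25,
         "international": 52.35, "industry": 50.74}
weights = {"teaching": 0.30, "research": 0.30, "citations": 0.30,
           "international": 0.075, "industry": 0.025}

# Average overall score implied by these means, and the share of it
# that the citations indicator contributes.
overall = sum(weights[k] * means[k] for k in means)
citations_share = weights["citations"] * means["citations"] / overall
print(round(citations_share, 3))  # 0.395 -- well above the nominal 0.30
```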
Friday, March 15, 2013
Some More on the THE World University Rankings 2012-13
Here are some observations based on a simple analysis of the Times Higher Education World University Rankings of 2012-13.
First, calculating the Pearson correlation between the indicator groups produces some interesting points. If a ranking is valid we would expect the correlations between indicators to be fairly high but not too high. If the correlations between indicators are above .800 this suggests that they are basically measuring the same thing and that there is no point in having more than one indicator. On the other hand it is safe to assume that if an indicator does measure quality or desired characteristics in some way it will have a positive relationship with other valid indicators.
One thing about the 2012-13 rankings is that the relationship between international outlook (international faculty, students and research collaboration) and the other indicators is negative or very slight. With teaching it is .025 (not significant), with industry income .003 (not significant), with research .156 and with citations .158. This adds to my suspicion that internationalisation, at least among those universities that get into the world rankings, does not per se say very much about quality.
Industry income correlates modestly with teaching (.350) and research (.396), insignificantly with international outlook (.003) and negatively and insignificantly with citations (-.008).
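For readers who want to replicate this kind of exercise, a Pearson correlation is straightforward to compute; the indicator scores below are invented purely for illustration, not the actual THE data:

```python
import math

def pearson(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical indicator scores for five universities.
teaching = [90, 75, 60, 45, 30]
research = [88, 78, 58, 47, 28]   # moves closely with teaching
citations = [70, 90, 55, 80, 40]  # only loosely related to teaching

print(round(pearson(teaching, research), 3))   # close to 1: near-duplicate
print(round(pearson(teaching, citations), 3))  # much weaker relationship
```

Correlations near 1, as between teaching and research in the real data, are exactly the pattern that suggests two indicators are measuring the same thing.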
The correlation between research and teaching is very high at .905. This may well be because the survey of academic opinion contributes to the teaching and the research indicators. There are different questions -- one about research and one about postgraduate supervision -- but the difference between the responses is probably quite small.
It is also very interesting that the correlation between scores for research and citations is rather modest at .410. Since volume of publications, funding and reputation should contribute to research influence, which is what citations are supposed to measure, this suggests that the citations indicator needs a careful review.
Teaching, research and international outlook are composites of several indicators. It would be very helpful if THE or Thomson Reuters released the scores for the separate indicators.
Sunday, March 10, 2013
The California Paradox
Looking at the Times Higher Education reputation rankings, I noticed that there were two Californian universities in the superbrand six and seven in the top 50. This is not an anomaly. A slightly different seven can be found in the THE World University Rankings. California does even better in the Shanghai ARWU, with three in the top six and 11 in the top 50. This is a slight improvement on 2003, when there were ten. According to ARWU, California would be the second best country in the world for higher education if it became independent.

California’s performance is not so spectacular according to QS, who have just four Californian institutions in their top fifty, a fall from 2004 when they had five (I am not counting the University of California at San Francisco which, being a single-subject medical school, should not have been there). Even so it is still a creditable performance.
But, if we are to believe many commentators, higher education in California, at least public higher education, is dying if not already dead. According to Andy Kroll in Salon:

"California’s public higher education system is, in other words, dying a slow death. The promise of a cheap, quality education is slipping away for the working and middle classes, for immigrants, for the very people whom the University of California’s creators held in mind when they began their grand experiment 144 years ago. And don’t think the slow rot of public education is unique to California: that state’s woes are the nation’s."

The villains, according to Kroll, are Californian taxpayers who refuse to accept adding to a tax burden that is among the highest in the world.
It is surprising that the death throes of higher education in California have gone unnoticed by the well-known international rankers. It is also surprising that public and private universities that are still highly productive and by international standards still lavishly funded exist in the same state as secondary and elementary schools that are close to being the worst in the nation in terms of student performance. The relative and absolute decline in educational achievement is matched by a similar decline in the overall economic performance of the state.
It may be just a matter of time and in the coming decades Californian universities will follow primary and secondary education into irreversible decline.
Preserving data
Times Higher and QS have both renovated their ranking pages recently and both seem to have removed access to some data from previous years. THE used to provide links to the Times Higher Education (Supplement) - Quacquarelli Symonds rankings of 2004-2010 but apparently not any more. QS do not seem to give access to these rankings before 2007. In both cases, I will update if it turns out that there is a way to get to these rankings.
There is, however, a site which has the rankings for the top 200 of the THES - QS Rankings of 2004-2007.
Wednesday, March 06, 2013
The THE Reputation Rankings
Times Higher Education have published their reputation rankings based on data collected from the World University Rankings of 2012.
They are not very interesting. Which is exactly what they should be. When rankings show massive changes from one year to another a certain amount of scepticism is required.
The same six, Harvard, MIT, Stanford, Berkeley, Oxford and Cambridge are well ahead of everybody else as they were in 2012 and in 2011.
Taking a quick look at the top fifty, there is little movement between 2011 and 2013. Four universities from the US, Japan, the Netherlands and Germany have dropped out. In their place there is one more each from Korea and the UK and two more from Australia.
I was under the impression that Australian universities were facing savage cuts in research funding and were going to be deserted by international students and researchers.
Maybe it is the other universities that are being cut, or maybe a bit of bloodletting is good for the health.
I also noticed that the number of respondents went down a bit in 2012. It could be that the academic world is beginning to suffer from ranking fatigue.
Saturday, March 02, 2013
GRE Country Ranking: Verbal Reasoning
Arranging the mean scores for the 2011-12 GRE Verbal Reasoning test, we can see that the bottom looks rather similar to the Quantitative Reasoning test. It comprises African and Arab countries. The top is very different, with five out of six places held by countries where English is currently the native language of a majority of the population.
1 | Australia | 158.40 |
2 | New Zealand | 157.30 |
3= | Singapore | 157.10 |
3= | Ireland | 157.10 |
3= | UK | 157.10 |
6 | Canada | 156.00 |
7 | Netherlands | 155.50 |
8 | Belgium | 155.00 |
9 | US White | 154.10 |
10 | Switzerland | 153.70 |
11 | Romania | 153.50 |
12= | Sweden | 153.30 |
12= | South Africa | 153.30 |
14 | Bulgaria | 153.20 |
15 | Norway | 153.10 |
16 | USA | 152.90 |
17 | Argentina | 152.80 |
18 | France | 152.70 |
19 | US Asian | 152.60 |
20 | Austria | 152.50 |
21= | Germany | 152.30 |
21= | Denmark | 152.30 |
23 | Italy | 152.20 |
24= | Croatia | 151.70 |
24= | Finland | 151.70 |
26 | Uruguay | 151.60 |
27 | US American Indian | 151.50 |
28= | Czech Republic | 151.40 |
28= | Israel | 151.40 |
28= | Trinidad | 151.40 |
31 | Hungary | 151.20 |
32= | Portugal | 150.90 |
32= | Spain | 150.90 |
34 | Poland | 150.40 |
35 | Lithuania | 150.30 |
36 | US Hispanic | 150.20 |
37 | Iceland | 149.80 |
38 | US Mexican | 149.70 |
39= | Malaysia | 149.50 |
39= | Barbados | 149.50 |
41 | Greece | 149.40 |
42= | Costa Rica | 149.10 |
42= | Philippines | 149.10 |
44 | Guatemala | 149.00 |
45 | Brazil | 148.90 |
46= | Zimbabwe | 148.80 |
46= | Jamaica | 148.80 |
48 | US Puerto Rican | 148.70 |
49= | Georgia | 148.60 |
49= | Bosnia-Herzegovina | 148.60 |
49= | Guyana | 148.60 |
52 | Moldova | 148.40 |
53 | Macedonia | 148.30 |
54= | Peru | 148.20 |
54= | Mexico | 148.20 |
54= | Bahamas | 148.20 |
57 | Chile | 148.00 |
58 | Belarus | 147.90 |
59 | Russia | 147.80 |
60= | Latvia | 147.70 |
60= | Colombia | 147.70 |
62= | Albania | 147.60 |
62= | Estonia | 147.60 |
62= | Venezuela | 147.60 |
62= | El Salvador | 147.60 |
62= | Cuba | 147.60 |
62= | St Lucia | 147.60 |
68= | Hong Kong | 147.50 |
68= | South Korea | 147.50 |
70= | Ukraine | 147.40 |
70= | Bahrain | 147.40 |
72= | Serbia | 147.30 |
72= | Bolivia | 147.30 |
74= | Eritrea | 147.20 |
74= | Honduras | 147.20 |
76= | Zambia | 147.10 |
76= | Afghanistan | 147.10 |
78 | Pakistan | 147.00 |
79 | Panama | 146.80 |
80 | US Black | 146.70 |
81= | Ecuador | 146.50 |
81= | Kenya | 146.50 |
83 | Nigeria | 146.40 |
84= | Morocco | 146.30 |
84= | Senegal | 146.30 |
84= | Nicaragua | 146.30 |
87= | Cyprus | 146.10 |
87= | Dominican Republic | 146.10 |
87= | Sierra Leone | 146.10 |
90 | Uzbekistan | 146.00 |
91 | China | 145.90 |
92 | Mongolia | 145.80 |
93= | Vietnam | 145.70 |
93= | Myanmar | 145.70 |
95 | Kazakhstan | 145.60 |
96= | Togo | 145.50 |
96= | Ghana | 145.50 |
98= | Tunisia | 145.20 |
98= | Uganda | 145.20 |
100 | Niger | 145.10 |
101= | Burkina Faso | 145.00 |
101= | Malawi | 145.00 |
103 | Kyrgyzstan | 144.90 |
104= | India | 144.70 |
104= | Indonesia | 144.70 |
106 | Cote d'Ivoire | 144.60 |
107= | Japan | 144.50 |
107= | Nepal | 144.50 |
109 | Haiti | 144.40 |
110= | Taiwan | 144.20 |
110= | Bangladesh | 144.20 |
110= | Ethiopia | 144.20 |
110= | Benin | 144.20 |
110= | Congo DR | 144.20 |
115 | Turkey | 144.10 |
116= | Egypt | 143.80 |
116= | Azerbaijan | 143.80 |
118 | Armenia | 143.70 |
119= | Turkmenistan | 143.50 |
119= | Cameroon | 143.50 |
121= | Macao | 143.40 |
121= | Sri Lanka | 143.40 |
121= | Tanzania | 143.40 |
124 | Thailand | 142.80 |
125= | Syria | 142.70 |
125= | Rwanda | 142.70 |
127 | Qatar | 142.50 |
128 | Algeria | 141.60 |
129 | Jordan | 141.40 |
130 | Oman | 141.30 |
131 | Yemen | 141.00 |
132 | Congo Republic | 140.90 |
133 | Kuwait | 140.80 |
134 | Sudan | 140.60 |
135= | UAE | 140.30 |
135= | Mali | 140.30 |
137 | Namibia | 140.20 |
138 | Saudi Arabia | 137.40 |
Wednesday, February 27, 2013
Ranking Countries by GRE Scores
ETS has produced an analysis of the scores for the Graduate Record Examination (GRE) required for entry into US graduate schools. Among the more interesting tables are the scores by nationality for the general test, composed of verbal reasoning, quantitative reasoning and analytical writing. This could be regarded as a crude measure of a country's undergraduate education system, although clearly there are all sorts of factors that would blur the picture.
Here are mean scores for quantitative skills by country.
1 | Hong Kong | 169.50 |
2 | China | 162.90 |
3 | Singapore | 160.30 |
4 | Taiwan | 159.20 |
5 | Vietnam | 158.90 |
6 | Turkey | 158.70 |
7 | South Korea | 158.20 |
8 | Macao | 158.00 |
9 | France | 157.50 |
10 | Belgium | 157.10 |
11 | Czech Republic | 156.90 |
12= | Israel | 156.70 |
12= | Switzerland | 156.70 |
14 | Netherlands | 156.60 |
15 | Greece | 156.40 |
16= | Bulgaria | 156.30 |
16= | Japan | 156.30 |
18 | Hungary | 156.20 |
19 | Australia | 155.70 |
20 | Germany | 155.50 |
21= | Russia | 155.30 |
21= | Thailand | 155.30 |
23= | Belarus | 154.80 |
23= | Romania | 154.80 |
25= | Bangladesh | 154.70 |
25= | Lithuania | 154.70 |
27 | Malaysia | 154.60 |
28= | Eritrea | 154.50 |
28= | Iceland | 154.50 |
28= | Tunisia | 154.50 |
31= | New Zealand | 154.40 |
31= | Ukraine | 154.40 |
33 | Latvia | 154.30 |
34 | Sri Lanka | 154.20 |
35= | Austria | 154.10 |
35= | India | 154.10 |
35= | Italy | 154.10 |
38= | Indonesia | 154.00 |
38= | Moldova | 154.00 |
40= | Armenia | 153.80 |
40= | Ireland | 153.80 |
42 | Cyprus | 153.70 |
43= | Argentina | 153.60 |
43= | Canada | 153.60 |
45= | Nepal | 153.50 |
45= | Portugal | 153.50 |
45= | US Asian | 153.50 |
48= | Albania | 153.40 |
48= | Mongolia | 153.40 |
50= | Croatia | 153.30 |
50= | Egypt | 153.30 |
52 | Poland | 153.20 |
53= | Norway | 153.10 |
53= | Pakistan | 153.10 |
53= | Spain | 153.10 |
56 | UK | 152.90 |
57= | Denmark | 152.80 |
57= | Kazakhstan | 152.80 |
59= | Chile | 152.70 |
59= | Syria | 152.70 |
61= | Azerbaijan | 152.60 |
61= | Macedonia | 152.60 |
61= | Serbia | 152.60 |
61= | Sweden | 152.60 |
65 | Myanmar | 152.40 |
66 | Peru | 152.30 |
67= | Turkmenistan | 152.20 |
67= | Uzbekistan | 152.20 |
69 | Jordan | 151.90 |
70= | Estonia | 151.80 |
70= | Ethiopia | 151.80 |
70= | Morocco | 151.80 |
73 | Georgia | 151.60 |
74 | Finland | 151.50 |
75= | South Africa | 151.30 |
75= | Uruguay | 151.30 |
77 | Bolivia | 150.80 |
78 | Brazil | 150.50 |
79 | US White | 150.40 |
80 | Bosnia-Herzegovina | 150.10 |
81 | Venezuela | 150.00 |
82= | Benin | 149.70 |
82= | Costa Rica | 149.70 |
84 | USA | 149.50 |
85 | Colombia | 149.40 |
86 | Mexico | 149.30 |
87= | Bahrain | 149.20 |
87= | Zimbabwe | 149.20 |
89 | Philippines | 149.10 |
90 | Trinidad | 148.80 |
91= | Ecuador | 148.60 |
91= | Panama | 148.60 |
91= | UAE | 148.60 |
91= | Yemen | 148.60 |
95= | Algeria | 148.50 |
95= | Sudan | 148.50 |
97= | Guatemala | 148.30 |
97= | Kyrgyzstan | 148.30 |
99 | Cote d'Ivoire | 148.10 |
100 | Togo | 148.00 |
101 | Qatar | 147.90 |
102 | Barbados | 147.80 |
103 | Rwanda | 147.60 |
104= | El Salvador | 147.50 |
104= | Honduras | 147.50 |
106= | Ghana | 147.40 |
106= | Nigeria | 147.40 |
108 | Cuba | 147.30 |
109= | Kenya | 147.10 |
109= | US American Indian | 147.10 |
111 | US Hispanic | 147.00 |
112 | Cameroon | 146.90 |
113= | Burkina Faso | 146.80 |
113= | Niger | 146.80 |
113= | Zambia | 146.80 |
116= | Dominican Republic | 146.50 |
116= | Guyana | 146.50 |
116= | Kuwait | 146.50 |
116= | Tanzania | 146.50 |
116= | US Mexican | 146.50 |
121= | Uganda | 145.90 |
121= | US Puerto Rican | 145.90 |
123 | Jamaica | 145.80 |
124 | Oman | 145.40 |
125 | Senegal | 145.30 |
126 | St Lucia | 145.20 |
127 | Congo DR | 145.10 |
128 | Nicaragua | 144.50 |
129= | Afghanistan | 144.20 |
129= | Haiti | 144.20 |
131 | Mali | 144.00 |
132 | Malawi | 143.90 |
133= | Bahamas | 143.70 |
133= | Sierra Leone | 143.70 |
135 | US Black | 143.10 |
136 | Saudi Arabia | 142.80 |
137 | Congo Republic | 142.40 |
138 | Namibia | 140.20 |
Friday, February 22, 2013
More Rankings on the Way
Soon it will be springtime in the Northern hemisphere and spring would not be complete without a few more rankings.
The Times Higher Education reputation rankings will be launched in early March at the British Council's Going Global conference in Dubai. According to THE:
“Almost 50,000 academics have provided their expert insight over just three short annual rounds of the survey, providing a serious worldwide audit of an increasingly important but little-understood aspect of global higher education – a university’s academic brand.”
This year’s reputation rankings will be based on the 16,639 responses, from 144 countries, to Thomson Reuters’ 2012 Academic Reputation Survey, which was carried out during March and April 2012. The 2011 survey attracted 17,554 responses, and 2010’s survey attracted 13,388 respondents.
The survey is by invitation only and academics are selected to be statistically representative of their geographical region and discipline. All are published scholars, questioned about their experiences in the field in which they work. The average time this year’s respondents had spent working in the sector was 17 years.
Meanwhile, the QS ranking of 30 subjects is coming soon. Until now these have been based on varying combinations of employer opinion, academic opinion and citations. This year they will be adding an indicator based on the h-index.
Here is a definition from Wikipedia:
"The index is based on the distribution of citations received by a given researcher's publications. Hirsch writes:
- A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np − h) papers have no more than h citations each.
In other words, a scholar with an index of h has published h papers each of which has been cited in other papers at least h times. Thus, the h-index reflects both the number of publications and the number of citations per publication. The index is designed to improve upon simpler measures such as the total number of citations or publications. The index works properly only for comparing scientists working in the same field; citation conventions differ widely among different fields.
The h-index serves as an alternative to more traditional journal impact factor metrics in the evaluation of the impact of the work of a particular researcher. Because only the most highly cited articles contribute to the h-index, its determination is a relatively simpler process. Hirsch has demonstrated that h has high predictive value for whether a scientist has won honors like National Academy membership or the Nobel Prize."
This means that one paper cited once produces an index of 1, 20 papers each cited at least 20 times an index of 20, 100 papers each cited at least 100 times an index of 100, and so on.
The point of this is that it combines productivity and quality as measured by citations and reduces the effect of extreme outliers. This is definitely an improvement for QS.
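A minimal Python sketch of the computation, using the toy figures just mentioned:

```python
# h-index as defined in the quote above: the largest h such that h papers
# have at least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i  # the i-th most-cited paper still has >= i citations
        else:
            break
    return h

print(h_index([1]))                 # one paper cited once -> 1
print(h_index([20] * 20))           # 20 papers cited 20 times each -> 20
print(h_index([100, 50, 3, 2, 1]))  # -> 3
```

The last example shows how the index damps extreme outliers: two enormously cited papers still only yield an h of 3 when the rest of the record is thin.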
Tuesday, February 19, 2013
Freedom Indicator?
It is often argued that the quality of a university has something to do with academic freedom. Some Western academics have become noticeably self-righteous about respect for human rights in other countries. There have been criticisms of Yale University's links with Singapore, where gay rights are restricted.
One wonders whether Western campuses should talk so loudly about freedom. A recent incident at Carleton University in Canada suggests that when it comes to human rights some humans are much more equal than others.
Carleton has a freedom wall where students can write thoughts that are forbidden in the rest of the campus, probably even in much or most of Canada. Even this was too much for Arun Smith, a seventh-year (yes, that's right) human rights student. From the Maclean's On Campus blog:
"Seventh-year Carleton University human rights [apparently human rights and political science with a minor in sexuality studies] student Arun Smith has apparently not been in school long enough to learn that other people have rights to opinions that differ from his. After the “free speech wall” on campus was torn down, he posted a message to his Facebook wall claiming responsibility. “If everyone speaks freely we end up simply reinforcing the hierarchies that are created in our society,” it read. The display had been erected by campus club Carleton Students for Liberty and students were encouraged to write anything they wanted on the paper. Someone wrote “abortion is murder” and “traditional marriage is awesome.” GBLTQ Centre volunteer Riley Evans took offense, telling The Charlatan student newspaper that the wall was attacking those who have had abortions and those in same-sex relationships."
It appears that Arun Smith has been widely condemned and that he will be punished. What seems to have been passed over is that it is apparently necessary to have a wall where mainstream religious opinions can be expressed. Yes, I know that "abortion is murder" is a gross simplification of a complex philosophical issue but whose fault is it that it has to be expressed in three words?
The Justice Centre for Constitutional Freedoms has issued a Campus Freedom Index for Canadian universities. Unsurprisingly, Carleton gets a C and three Fs. The best appears to be St Thomas, with one A and three Bs.
What about an international edition?
The Commission Strikes Back
Jordi Curell from the European Commission's Directorate General for Education and Culture has written in defence of the proposed U-multirank university ranking system. He starts:
"Is Times Higher Education worried about competition to its world university ranking from U-Multirank? It looks like it from the tone of its reporting on the new European ranking initiative launched in Dublin at the end of January. "
He concludes:
"However, the EU should not finance U-Multirank forever; this should be limited to the start-up phase. That is why the contract for delivering the ranking includes the design of a self-sustaining business plan and organising the transition to this model.
These are challenging times for higher education in Europe, and the purpose behind U-Multirank could not be clearer. Our objective is improving the performance of Europe's higher education systems – not just selling newspapers."
By the way, THE is a magazine now, not a newspaper.
Tuesday, February 12, 2013
Wasting Money
The League of European Research Universities claims to be upset about the 2 million euros that the European Union is spending on its proposed multi-dimensional university ranking. What do they or their American counterparts think about things like this?
"The president [of the US] will invest $55 million in a new First in the World competition, to support the public and private colleges and non-profit organizations as they work to develop and test the next breakthrough strategy that will boost higher education attainment and student outcomes. The new program will also help scale-up those innovative and effective practices that have been proven to boost productivity and enhance teaching and learning on college campuses."
Monday, February 11, 2013
Update on U-Multirank
Using data supplied by institutions is not a good idea for any international ranking. Apart from questions of reliability and objectivity, there is always the possibility of "conscientious objectors" disrupting the ranking process by refusing to take part.
The League of European Research Universities has just announced that it will not participate in the European Union's proposed multi-dimensional ranking project.
Membership of the League is by invitation only and "is periodically evaluated against a broad set of quantitative and qualitative criteria, such as research volume, impact and funding, strengths in PhD training, size and disciplinary breadth, and peer-recognised academic excellence." At the moment, it includes Oxford, Cambridge, Heidelberg, Geneva and Strasbourg universities.
According to Times Higher Education
'Kurt Deketelaere, secretary-general of Leru, said the organisation, whose members include the universities of Oxford, Cambridge and Edinburgh, believes the project is ill-conceived and poorly designed.
"We consider U-Multirank at best an unjustifiable use of taxpayers' money and at worst a serious threat to a healthy higher education system," he said. "Leru has serious concerns about the lack of reliable, solid and valid data for the chosen indicators in U-Multirank, about the comparability between countries, about the burden put upon universities to collect data and about the lack of 'reality-checks' in the process thus far."'
Considering the sort of thing that European universities spend taxpayers' money on, 2 million euros seems comparatively trivial. There are no doubt genuine concerns about the reliability of data produced by institutions and comparability between countries, but if you can swallow the camel of Rice University and Moscow Engineering Physics Institute as the best in the world for research influence according to Times Higher and Thomson Reuters, then why strain at U-Multirank's gnats?
And as for a serious threat to higher education, I think someone should sit down for a few minutes and have a cup of tea before making any more statements.
Saturday, February 09, 2013
Another Ranking on the Way
The European Union has just launched its U-Multirank ranking system. Data will be collected during 2013 and the results will be out in 2014. According to the European Commissioner for Education, the aim is to provide a multi-dimensional analysis of institutions rather than one that emphasises research excellence.
It is certainly true that the prominent international rankings focus largely or almost entirely on research. The Shanghai rankings are all about research, except perhaps the 10 percent for Nobel and Fields awards given to alumni. The QS rankings have a weighting of at least 60 percent for research (citations per faculty and academic survey) and maybe more, since research-only faculty are counted in the faculty-student ratio. Times Higher Education allocates 30 percent for research influence (citations) and 30 percent for research (volume, income and reputation). Since the scores for the citations indicator are substantially higher than those for the others, it can carry an even greater weight for many universities. Rankings that measure other significant parts of a university's mission might therefore fill an obvious gap.
But the new rankings are going to rely on data submitted by universities. What happens if several major institutions, including perhaps many British ones, decline to take part?
Sunday, February 03, 2013
Article in the Chronicle of Higher Education
The Chronicle of Higher Education has an article by Debra Houry on university rankings. She makes some pertinent comments although her recommendations at the end are either impractical or likely to make things worse.
She points out that several American colleges have been found to have submitted inflated data to the US News and World Report in order to boost their standing in the rankings and notes that "there is an inherent conflict of interest in asking those who are most invested in the rankings to self-report data."
This is true and is even more true of international rankings. One reason why the Shanghai rankings are more credible than those produced by QS and Times Higher Education is that they rely entirely on reasonably accessible public data. Using information provided by institutions is a risky business which, among other things, could lead to universities refusing to cooperate, something which ended the promising Asiaweek rankings in 2001.
She then argues that measures of student quality such as high school class rank and SAT scores should be abandoned because they "discourage colleges from selecting a diverse student body. An institution that begins accepting more African-American students or students from low-income families—two groups that have among the lowest SAT scores, according to the College Board—might see its ranking drop because the average SAT score of its freshmen has gone down."
True, but on the other hand an institution that puts more emphasis on standardized test scores might rise in the rankings and might also increase its intake of Asian students and so become more diverse. Are Asian students less diverse than African-Americans? They are certainly likely to be far more varied in terms of mother tongue, political opinions or religious affiliation.
She also points out that it is now a bit late to count printed books in the law school rankings and wonders about using ratemyprofessor to assess teaching quality.
Then there is a familiar criticism of the QS Stars rating systems.
Professor Houry also makes the common complaint that the rankings do not capture unique features of institutions such as "a program called Living-Learning Communities, which gives upperclassmen at Emory incentives to live on campus and participate in residential learning. But you would never learn about that from the ranking formulas."
The problem is that a lot of people are interested in how smart graduates are, how much research, if any, faculty are doing, or how much money is flowing in. But seriously, what is so interesting about upperclassmen living on campus? In any case, if this is unique, would you expect any measure to "capture" it?
Finally she concludes "ranking organizations should develop more-meaningful measures around diversity of students, job placement, acceptance into professional schools, faculty membership in national academies, and student engagement. Instead of being assigned a numerical rank, institutions should be grouped by tiers and categories of programs. The last thing students want is to be seen as a number. Colleges shouldn't want that, either."
But all of these raise more problems than solutions. If we really want diversity of students, shouldn't we be counting conservative students or evangelical Christians? Job placement raises the possibility, already found in law school rankings, of counting graduates employed in phony temporary jobs or glorified slave labor (internships). Membership in national academies? A bit elitist, perhaps?
Monday, January 14, 2013
The Last Print Issue of Newsweek
At the end of the last print issue of Newsweek (31/12/12) is a special advertising feature about the Best Colleges and Universities in Asia, Korea, Vietnam, Hong Kong, Japan and the USA.
The feature is quite revealing about how the various global rankings are regarded in Asia. There is nothing about the Shanghai rankings, the Taiwan rankings, Scimago, Webometrics, URAP or the Leiden Ranking.
There are five references to the QS rankings, one of which calls them "revered" (seriously!) and another that refers to the "SQ" rankings.
There are two to Times Higher Education, two to America's Best Colleges, one to Community College Weekly and one to the Korean Joongang Daily university rankings.
Sunday, January 13, 2013
A Bit More on the THE Citations Indicator
I have already posted on the citations (research influence) indicator in the Times Higher Education World University Rankings and how it can allow a few papers to have a disproportionate impact. But there are other features of this indicator that affect its stability and can produce large changes even if there is no change in methodology.
This indicator has a weighting of 30 percent. The next most heavily weighted indicator is the research reputation survey which carries a weighting of 18 percent and is combined with number of publications (6 percent) and research income (6 percent) to produce a weighting of 30 percent for research: volume, income and reputation.
It might be argued that the citations indicator accounts for only 30 percent of the total weighting so that anomalously high scores given to obscure or mediocre institutions for citations would be balanced or diluted by scores on the other indicators which have a weighting of 70 percent.
The problem with this is that the scores for the citation indicator are often substantially higher than the scores for other indicators, especially in the 300-400 region of the rankings, so that the impact of this indicator is correspondingly increased. Full data can be found on the 2012-13 iPhone app.
For example, the University of Ferrara in 378th place with a score of 58.5 for citations has a total score of 31.3, so that nearly 60% of its total score comes from the citations indicator. King Mongkut's University of Technology, Thonburi, in 389th place has a score of 68.4 for citations but its total score is 30.3, so that two thirds of its total score comes from citations. Southern Methodist University in 375th place gets 67.3 for citations, which after weighting comes close to providing two thirds of its overall score of 31.6. For these universities a proportional change in the final processed score for citations would have a greater impact than a similar change in any of the other indicators.
Looking at the bottom 25 universities in the top 400, in eight cases the citation indicator provides half or more of the total score and in 22 cases it provides a third or more. Thus, the indicator could have more impact on total scores than its weighting of 30 percent would suggest.
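The shares quoted above are easy to check from the published scores. Here is a quick sketch of the arithmetic (the 30 percent weighting is THE's stated figure; the raw scores are those cited in the preceding paragraphs):

```python
# Share of each university's overall THE score contributed by the
# citations indicator, which carries a 30 percent weighting.
CITATIONS_WEIGHT = 0.30

# (citations score, overall score) from the 2012-13 rankings as
# quoted in the text above.
universities = {
    "Ferrara": (58.5, 31.3),
    "King Mongkut's Thonburi": (68.4, 30.3),
    "Southern Methodist": (67.3, 31.6),
}

for name, (citations, overall) in universities.items():
    share = CITATIONS_WEIGHT * citations / overall
    print(f"{name}: {share:.0%} of the overall score from citations")
# Ferrara: 56%, King Mongkut's Thonburi: 68%, Southern Methodist: 64%
```

So an indicator nominally weighted at 30 percent in fact supplies well over half of these universities' totals.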
It is also noticeable that the mean score for citations of the THE top 400 universities is much higher than that for research, about 65 compared to about 41. This disparity is especially large as we reach the 200s and the 300s.
So we find Southern Methodist University has a score of 67.3 for citations but 9.0 for research. Then the University of Ferrara has a score of 58.5 for citations and 13.0 for research. King Mongkut's University of Technology has a score of 68.4 for citations and 10.2 for research.
One reason why the scores for the citations indicator are so high is the "regional modification" introduced by Thomson Reuters in 2011. To simplify, this means that the number of citations to a university in a given year and a given field is divided by the square root of the average number of citations in that field and year for all universities in the country.
So if a university in country A receives 100 citations in a certain year of publication and in a certain field, and the average impact for that year and field for all universities in the country is 100, then the university will get a score of 10 (100 divided by √100, that is, 10). If a university in country B receives 10 citations in the same year and field but the average impact for all universities in the country is 1, then the citations score for that field and year would also be 10 (10 divided by 1).
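The arithmetic above can be sketched in a couple of lines (this is an illustration of the simplified formula only, not Thomson Reuters' actual implementation):

```python
import math

def regional_modified_impact(citations, national_average):
    """Simplified regional modification: raw citations divided by the
    square root of the national average citation impact."""
    return citations / math.sqrt(national_average)

# Country A: 100 citations against a national average impact of 100.
print(regional_modified_impact(100, 100))  # 10.0
# Country B: 10 citations against a national average impact of 1.
print(regional_modified_impact(10, 1))     # 10.0
# Both universities end up with exactly the same modified score,
# despite a tenfold difference in raw citations.
```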
This drastically reduces the gap for citations between countries that produce many citations and those that produce few. Thomson Reuters justify this by saying that in some countries it is easier to get grants and to travel and join the networks that lead to citations and international collaborations than in others. The problem with this is that it can produce some rather dubious results.
Let us consider the University of Vigo and the University of Surrey. Vigo has an overall score of 30.7 and is in 383rd place. Surrey is just behind with a score of 30.5 and is in 388th place.
But, with the exception of citations, Surrey is well ahead of Vigo for everything: teaching (31.1 to 19.4), international outlook (80.4 to 26.9), industry income (45.4 to 36.6) and research (25.0 to 9.5).
Surrey, however, does comparatively badly for citations with a score of only 21.6. It does have a contribution to the massively cited 2008 Review of Particle Physics, but the university has too many publications for this review to have much effect. Vigo, however, has a score of 63.7 for citations, which may be because of a much-cited paper containing a new algorithm for genetic analysis but also presumably because it received, along with other Spanish and Portuguese-speaking universities, a boost from the regional modification.
There are several problems with this modification. First, it can contribute another element of instability. If we observe that a university's score for citations has declined, it could be because its citations have decreased overall or in key fields, or because a much-cited paper has slipped out of the five-year period of assessment. It could also be that the number of publications has increased without a corresponding increase in citations.
Applying the regional modification could mean that a university's score would be affected by the fluctuations in the impact of the country's universities as a whole. If there was an increase in the number of citations or reduction in publications nationally then this would reduce the citations score of a particular university since the university's score would be divided by the square root of a larger number.
This could lead to the odd situation where stringent austerity measures lead to the emigration of talented researchers and eventually a fall in citations but some universities in the country may improve since they are being compared to a smaller national average.
The second problem is that it can lead to misleading comparisons. It would be a mistake to conclude that Vigo is a better university than Surrey, or about the same, or even that its research influence is more significant. What has happened is that Vigo is further ahead of the Spanish average than Surrey is ahead of the British.
Another problematic feature of the citations indicator is that its relationship with the research indicator is rather modest. Consider that 18 out of 30 points for research are from the reputation survey, whose respondents are drawn from those researchers whose publications are in the ISI databases, while the citations indicator counts citations in precisely those papers. Then another 6 percent goes to research income, which we would expect to have some relationship with the quality of research.
Yet the correlation between the scores for research and citations for the top 400 universities is a modest .409, which calls into question the validity of one or both of the indicators.
A further problem is that this indicator only counts the impact of papers that make it into an ISI database. A university where most of the faculty do not publish in ISI-indexed journals would do no worse than one with a lot of publications but few citations, or with citations mainly in low-cited fields.
To conclude, the current construction of the citations indicator has the potential to produce anomalous results and to introduce a significant degree of instability into the rankings.
I have already posted on the citations (research influence) indicator in the Times Higher Education World University Rankings and how it can allow a few papers to have a disproportionate impact. But there are other features of this indicator that affect its stability and can produce large changes even if there is no change in methodology.
This indicator has a weighting of 30 percent. The next most heavily weighted indicator is the research reputation survey which carries a weighting of 18 percent and is combined with number of publications (6 percent) and research income (6 percent) to produce a weighting of 30 percent for research: volume, income and reputation.
It might be argued that the citations indicator accounts for only 30 percent of the total weighting so that anomalously high scores given to obscure or mediocre institutions for citations would be balanced or diluted by scores on the other indicators which have a weighting of 70 percent.
The problem with this is that the scores for the citation indicator are often substantially higher than the scores for other indicators, especially in the 300 - 400 region of the rankings so that the impact of this indicator is correspondingly increased. Full data can be found on the 2012-13 iPhone app.
For example, the University of Ferrara in 378th place with a score of 58.5 for citations has a total score of 31.3 so that nearly 60% of its total score comes from the citations indicator. King Mongkut's Unversity of Technology, Thonburi, in 389th place has a score of 68.4 for citations but its total score is 30.3 so that two thirds of its total score comes from citations. Southern Methodist University in 375th place gets 67.3 for citations which after weighting comes close to providing two thirds of its overall score of 31.6. For these universities a proportional change in the final processed score for citations would have a greater impact than a similar change in any of the other indicators.
Looking at the bottom 25 universities in the top 400, in eight cases the citation indicator provides half or more of the total score and in 22 cases it provides a third or more. Thus, the indicator could have more impact on total scores than its weighting of 30 percent would suggest.
It is also noticeable that the mean score for citations of the THE top 400 universities is much higher than that for research, about 65 compared to about 41.This disparity is especially large as we reach the 200s and the 300s.
So we find Southern Methodist University has a score of 67.3 for citations but 9.0 for research.Then the University of Ferrara has a score of 58.5 for citations and 13.0 for research. King Mongkut's University of Technology, has a score of 68.4 for citations and 10.2 for research.
One reason why the scores for the citations indicator are so high is the "regional modification" introduced by Thompson Reuters in 2011. To simplify, this means that the number of citations to a university in a given year and a a given field is divided by the square root of the average number of citations in the field and year for all universities in the country
So if a university in country A receives 100 citations in a certain year of publication and in a certain field and the average impact for that year and field for all universities is 100 then the university will get a score of 10 (100 divided by 10). If a university in country B receives 10 citations in the same year and the same field then but the average impact from all universities in the country is 1 then the citations score for that field year would also be 10 (10 divided by 1).
This drastically reduces the gap for citations between countries that produce many citations and those that produce few. Thompson Reuters justify this by saying that in some countries it is easier to get grants and to travel and join the networks that lead to citations and international collaborations than in others. The problem with this is that it can produce some rather dubious results.
Let us consider the University of Vigo and the University of Surrey. Vigo has an overall score of 30.7 and is in 383rd place. Surrey is just behind with a score of 30.5 in 388th place.
But, with the exception of citations, Surrey is well ahead of Vigo on everything: teaching (31.1 to 19.4), international outlook (80.4 to 26.9), industry income (45.4 to 36.6) and research (25.0 to 9.5).
Surrey, however, does comparatively badly for citations, with a score of only 21.6. It did contribute to the massively cited 2008 review of particle physics, but the university has too many publications for that review to have much effect. Vigo, by contrast, has a citations score of 63.7, which may be because of a much-cited paper containing a new algorithm for genetic analysis but also, presumably, because it received, along with other Spanish- and Portuguese-speaking universities, a boost from the regional modification.
There are several problems with this modification. First, it can contribute another element of instability. If a university's citations score declines, it could be because its citations have decreased overall or in key fields, or because a much-cited paper has slipped out of the five-year period of assessment. It could also be that the number of publications has increased without a corresponding increase in citations.
Applying the regional modification could mean that a university's score would be affected by the fluctuations in the impact of the country's universities as a whole. If there was an increase in the number of citations or reduction in publications nationally then this would reduce the citations score of a particular university since the university's score would be divided by the square root of a larger number.
This could lead to the odd situation where stringent austerity measures lead to the emigration of talented researchers and eventually a fall in citations but some universities in the country may improve since they are being compared to a smaller national average.
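The knock-on effect can be seen with the same formula: a hypothetical university whose own citation count is unchanged loses ground simply because the national average rises. The figures here are invented for illustration:

```python
import math

def regionally_modified_score(citations, country_average):
    # Citations divided by the square root of the national average impact
    return citations / math.sqrt(country_average)

unchanged_citations = 50
score_before = regionally_modified_score(unchanged_citations, 25)   # 50 / 5  = 10.0
score_after = regionally_modified_score(unchanged_citations, 100)   # 50 / 10 = 5.0
print(score_before, score_after)
```

The university has done nothing differently, yet its score halves because the denominator it is measured against has grown.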
The second problem is that it can lead to misleading comparisons. It would be a mistake to conclude that Vigo is a better university than Surrey, or about the same, or even that its research influence is more significant. What has happened is that Vigo is further ahead of the Spanish average than Surrey is ahead of the British.
Another problematic feature of the citations indicator is that its relationship with the research indicator is rather modest. Consider that 18 of the 30 points for research come from the reputation survey, whose respondents are drawn from researchers whose publications are in the ISI databases, while the citations indicator counts citations in precisely those papers. Then another 6 per cent goes to research income, which we would expect to have some relationship with the quality of research.
Yet the correlation between the scores for research and citations for the top 400 universities is modest at .409 which calls into question the validity of one of the indicators or both of them.
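For readers who want to check such figures themselves, a Pearson correlation between two indicator columns can be computed directly. The scores below are invented; the .409 reported above comes from the actual top 400 data:

```python
def pearson(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Invented research and citations scores for five universities
research = [90, 70, 25, 13, 9]
citations = [95, 60, 21, 58, 67]
print(round(pearson(research, citations), 3))
```

A coefficient near 1 would mean the two indicators largely measure the same thing; a value around .4 means they frequently disagree, as in the Vigo and Surrey case.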
A further problem is that this indicator counts only the impact of papers that make it into an ISI database. A university where most of the faculty do not publish in ISI-indexed journals would do no worse than one with many publications but few citations, or few citations in lowly cited fields.
To conclude, the current construction of the citations indicator has the potential to produce anomalous results and to introduce a significant degree of instability into the rankings.
Sunday, January 06, 2013
QS Stars Again
The New York Times has an article by Don Guttenplan on the QS Stars ratings which award universities one to five stars according to eight criteria, some of which are already assessed by the QS World University Rankings and some of which are not. The criteria are teaching quality, innovation and knowledge transfer, research quality, specialist subject, graduate employability, third mission, infrastructure and internationalisation.
The article has comments from rankings experts Ellen Hazelkorn, Simon Marginson and Andrew Oswald.
The QS Stars system does raise issues about commercial motivations and conflicts of interest. Nonetheless, it has to be admitted that it fills a gap in the current complex of international rankings. The Shanghai, QS and Times Higher Education rankings may be able to distinguish between Harvard and Cornell or Oxford and Manchester, but they rank only a fraction of the world's universities. There are national ranking and rating systems, but so far anyone wishing to compare middling universities in different countries has very little information available.
There is, however, the problem that making distinctions among little-known and mediocre universities, a disproportionate number of which are located in Indonesia, means a loss of discrimination at or near the top. The National University of Ireland Galway, Ohio State University and Queensland University of Technology get the same five stars as Cambridge and King's College London.
The QS Stars has potential to offer a broader assessment of university quality but it would be better for everybody if it was kept completely separate from the QS World University Rankings.
Sunday, December 23, 2012
The URAP Ranking
Another ranking that has been neglected is the University Ranking by Academic Performance [www.urapcenter.org] started by the Middle East Technical University in 2009. This has six indicators: number of articles (21%), citations (21%), total documents (10%), journal impact total (18%), journal citation impact total (15%) and international collaboration (10%).
A distinctive feature is that these rankings provide data for 2,000 universities, many more than the current big three.
The top ten are:
1. Harvard
2. Toronto
3. Johns Hopkins
4. Stanford
5. UC Berkeley
6. Michigan Ann Arbor
7. Oxford
8. Washington Seattle
9. UCLA
10. Tokyo
These rankings definitely favour size over quality as shown by the strong performance of Toronto and Johns Hopkins and the lowly position of Caltech in 51st place and Princeton in 95th. Still, they could be very helpful for countries with few institutions in the major rankings.
Saturday, December 15, 2012
The Taiwan Rankings
It is unfortunate that the "big three" of the international ranking scene -- ARWU (Shanghai), THE and QS -- receive a disproportionate amount of public attention while several research-based rankings are largely ignored. Among them is the National Taiwan University Ranking, which until this year was run by the Higher Education Evaluation and Accreditation Council of Taiwan.
The rankings, which are based on the ISI databases, assign a weighting of 25% to research productivity (number of articles over the last 11 years, number of articles in the current year), 35% to research impact (number of citations over the last 11 years, number of citations in the current year, average number of citations over the last 11 years) and 40% to research excellence (h-index over the last 2 years, number of highly cited papers, number of articles in the current year in highly cited journals).
Rankings by field and subject are also available.
There is no attempt to assess teaching or student quality and publications in the arts and humanities are not counted.
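The weighting scheme described above amounts to a simple weighted sum. A sketch with invented indicator scores (the weights are the published 25/35/40 split; everything else is hypothetical):

```python
# Invented normalised indicator scores (0-100) for one university
scores = {"productivity": 80, "impact": 70, "excellence": 60}
# Weights from the published methodology: 25%, 35%, 40%
weights = {"productivity": 0.25, "impact": 0.35, "excellence": 0.40}

overall = sum(scores[k] * weights[k] for k in scores)
print(round(overall, 1))  # 80*0.25 + 70*0.35 + 60*0.40 = 68.5
```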
These rankings are a valuable supplement to the Shanghai ARWU. The presentation of data over 11-year and one-year periods allows quick comparisons of changes over a decade.
Here are the top ten.
1. Harvard
2. Johns Hopkins
3. Stanford
4. University of Washington at Seattle
5. UCLA
6. University of Michigan Ann Arbor
7. Toronto
8. University of California Berkeley
9. Oxford
10. MIT
High-flyers in other rankings do not do especially well here: Princeton is 52nd, Caltech 34th, Yale 19th and Cambridge 15th, most probably because they are relatively small or have strengths in the humanities.
Sunday, November 18, 2012
Article in University World News
GLOBAL
Ranking’s research impact indicator is skewed
Richard Holmes, 15 November 2012, Issue No: 247
The 2012 Times Higher Education World University Rankings consisted of 13 indicators grouped into five clusters. One of these clusters consisted of precisely one indicator, research impact, which is measured by normalised citations and which THE considers to be the flagship of the rankings.
It is noticeable that this indicator, prepared for THE by Thomson Reuters, gives some universities implausibly high scores. I have calculated the world's top universities for research impact according to this indicator.
Since it accounts for 30% of the total ranking it clearly can have a significant effect on overall scores. The data are from the profiles, which can be accessed by clicking on the top 400 universities. Here are the top 20:
1. Rice University
1. Moscow (State) Engineering Physics Institute (MEPhI)
3. University of California Santa Cruz
3. MIT
5. Princeton
6. Caltech
7. University of California Santa Barbara
7. Stanford
9. University of California Berkeley
10. Harvard
11. Royal Holloway London
12. Chicago
13. Northwestern
14. Tokyo Metropolitan University
14. University of Colorado Boulder
16. University of Washington Seattle
16. Duke
18. University of California San Diego
18. University of Pennsylvania
18. Cambridge
There are some surprises here, such as Rice University in joint top place, second-tier University of California campuses at Santa Cruz (equal third with MIT) and Santa Barbara (equal seventh with Stanford) placed ahead of Berkeley and Los Angeles, Northwestern almost surpassing Chicago, and Tokyo Metropolitan University ahead of Tokyo and Kyoto universities and everywhere else in Asia.
It is not totally implausible that Duke and the University of Pennsylvania might be overtaking Cambridge and Oxford for research impact, but Royal Holloway and Tokyo Metropolitan University?
These are surprising, but Moscow State Engineering Physics Institute (MEPhI) as joint best in the world is a definite head-scratcher.
Other oddities
Going down a bit in this indicator we find more oddities.
According to Thomson Reuters, the top 200 universities in the world for research impact include Notre Dame, Carleton, William and Mary College, Gottingen, Boston College, University of East Anglia, Iceland, Crete, Koc University, Portsmouth, Florida Institute of Technology and the University of the Andes.
On the other hand, when we get down to the 300s we find that Tel Aviv, National Central University Taiwan, São Paulo, Texas A&M and Lomonosov Moscow State University are assigned surprisingly low places. The latter is actually in 400th place for research impact among the top 400.
It would be interesting to hear what academics in Russia think about an indicator that puts MEPhI in first place in the world for research impact and Lomonosov Moscow State University in 400th place.
I wonder too about the Russian reaction to MEPhI as overall second among Russian and Eastern European universities. See here, here and here for national university rankings and here and here for web-based rankings.
Déjà vu
We have been here or somewhere near here before.
In 2010 the first edition of the THE rankings placed Alexandria University in the world's top 200 and fourth for research impact. This was the result of a flawed methodology combined with diligent self-citation and cross-citation by a writer whose lack of scientific credibility has been confirmed by a British court.
Supposedly the methodology was fixed last year. But now we have an indicator as strange as in 2010, perhaps even more so.
So how did MEPhI end up as world's joint number one for research impact? It should be emphasised that this is something different from the case of Alexandria. MEPhI is, by all accounts, a leading institution in Russian science. It is, however, very specialised and fairly small and its scientific output is relatively modest.
First, let us take a look at another source, the Scimago World Report, which gives MEPhI a rank of 1,722 for total publications between 2006 and 2010, the same period that Thomson Reuters counts.
Admittedly, that includes a few non-university institutions. MEPhI has 25.9% of its publications in the top quartile of journals. It has a score of 8.8% for excellence – that is, the proportion of its publications among the most highly cited 10% of publications. It gets a score of 1.0 for ‘normalised impact’, which means that it gets exactly the world average for citations adjusted by field, publication type and period of publication.
Moving on to Thomson Reuters’ database at the Web of Science, MEPhI has had only 930 publications listed under that name between 2006 and 2010, although there were some more under other name variants that pushed it over the 200 papers per year threshold to be included in the rankings.
It is true that MEPhI can claim three Nobel prize winners, but they received awards in 1958 and 1964 and one of them was for work done in the 1930s.
So how could anyone think that an institution that now has a modest and specialised output of publications and a citation record that, according to Scimago, does not seem significantly different from the international average using the somewhat larger Scopus database – Thomson Reuters uses the more selective ISI Web of Science – could emerge at the top of Thomson Reuters' research impact indicator?
Furthermore, MEPhI has no publications listed in the ISI Social Science Citation Index and exactly one (uncited) paper in the Arts and Humanities Index on oppositional politics in Central Russia in the 1920s.
There are, however, four publications assigned to MEPhI authors in the Web of Science that are listed as being on the literature of the British Isles, none of which seem to have anything to do with literature or the British Isles or any other isles, but which have a healthy handful of citations that would yield much higher values if they were classified under literature rather than physics.
Citations
Briefly, the essence of Thomson Reuters' counting of citations is that a university's citations are compared to the average for a field in a particular period after publication.
So if the average for a field is 10 citations per paper one year after publication, then 300 citations of a single paper one year after publication would yield a normalised score of 30. If the average for the field were one citation, the same 300 citations would yield a score of 300.
To get a high score in the Thomson Reuters research impact indicator, it helps to get citations soon after publication, preferably in a field where citations are low or middling, rather than simply getting many citations.
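The field normalisation described above can be sketched in a couple of lines, using the figures from the example. Again, this illustrates the published description rather than Thomson Reuters' actual code:

```python
def normalised_impact(citations, field_average):
    """Actual citations divided by the expected (field and year) average."""
    return citations / field_average

# 300 early citations in a field averaging 10 citations per paper
print(normalised_impact(300, 10))  # 30.0
# the same 300 citations in a field averaging 1 citation per paper
print(normalised_impact(300, 1))   # 300.0
```

The same raw citation count is worth ten times as much in the lowly cited field, which is why the timing and field of citations matter more than their number.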
The main cause of MEPhI's research impact supremacy would appear to be a biennial review that summarises research over two years in particle physics and is routinely referred to in the literature review of research papers in the field.
The life span of each review is short since it is superseded after two years by the next review so that the many citations are jammed into a two-year period, which could produce a massive score for ‘journal impact factor’.
It could also produce a massive score on the citations indicator in the THE rankings. In addition, Thomson Reuters gives a weighting to countries according to their number of citations: if citations are generally low in a country, its institutions get some added value.
The 2006 “Review of Particle Physics” published in Journal of Physics G, received a total of 3,662 citations, mostly in 2007 and 2008. The 2008 review published in Physics Letters B had 3,841 citations, mostly in 2009 and 2010, and the 2010 review, also published in Journal of Physics G, had 2,592 citations, nearly all in 2011. Someone from MEPhI was listed as co-author of the 2008 and 2010 reviews.
It is not the total number of citations that matters, but the number of citations that occur soon after publication. So the 2008 review received 1,278 citations in 2009, but the average number of citations in 2009 to other papers published in Physics Letters B for 2008 was 4.4.
So the 2008 review received nearly 300 times as many citations in the year after publication as the mean for that journal. Add the extra weighting for Russia and there is a very large boost to MEPhI's score from just a single publication. Note that these are reviews of research so it is likely that there had already been citations to the research publications that are reviewed. Among the highlights of the 2010 review are 551 new papers and 108 mostly revised or new reviews.
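The figures quoted above can be checked with a one-line calculation:

```python
review_citations_2009 = 1278  # citations to the 2008 review in 2009
journal_mean_2009 = 4.4       # mean 2009 citations to other 2008 Physics Letters B papers

ratio = review_citations_2009 / journal_mean_2009
print(round(ratio))  # roughly 290 -- nearly 300 times the journal mean
```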
If the publications had a single author or just a few authors from MEPhI then this would perhaps suggest that the institute had produced or made a major contribution to two publications of exceptional merit. The 2008 review in fact had 173 co-authors. The 2010 review listed 176 members of the Particle Data Group who are referred to as contributors.
It seems then that MEPhI was declared the joint best university for research impact largely because of two multi-author (or contributor) publications to which it had made a fractional contribution. Those four papers assigned to literature may also have helped.
As we go through the other anomalies in the indicator, we find that the reviews of particle physics contributed to other high research impact scores. Tokyo Metropolitan University, Royal Holloway London, the University of California at Santa Cruz, Santa Barbara and San Diego, Notre Dame, William and Mary, Carleton and Pisa also made contributions to the reviews.
This was not the whole story. Tokyo Metropolitan University benefited from many citations to a paper about new genetic analysis software and Santa Cruz had contributed to a massively cited multi-author human haplotype map.
Number of authors
This brings us to the total number of publications. There were more than 100 authors of, or contributors to, the reviews, but for some institutions the citations had no discernible effect on the indicator and for others not very much.
Why the difference? Here size really does matter and small really is beautiful.
MEPhI has relatively few publications overall. It only just managed to cross the 200 publications per year threshold to get into the rankings. This means that the massive and early citation of the reviews was averaged out over a small number of publications. For others the citations were absorbed by many thousands of publications.
These anomalies and others could have been avoided by a few simple and obvious measures. After the business with Alexandria in 2010 Thomson Reuters did tweak its system, but evidently this was not enough.
First, it would help if Thomson Reuters scrutinised the criteria by which specialised institutions are included in the rankings. If we are talking about how well universities spread knowledge and ideas, it is questionable whether we should count institutions that do research in one or only a few fields.
There are many methods by which research impact can be evaluated. The full menu can be found on the Leiden Ranking site. Use a variety of methods to calculate research impact, especially those like the h-index that are specifically designed to work around outliers and extreme cases.
It would be sensible to increase the threshold of publications for inclusion in the rankings; the Leiden Ranking excludes universities with fewer than 500 publications a year. If a publication has multiple authors, divide the number of citations by the number of authors. If that is too complex, then start dividing citations once the number of authors reaches 10 or a dozen.
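A minimal sketch of the fractional counting suggested here, using the co-author count quoted earlier for the 2008 review:

```python
def fractional_citations(citations, n_authors):
    # Divide a paper's citations equally among its contributing authors
    return citations / n_authors

# The 2008 review's 3,841 citations spread across its 173 co-authors
print(round(fractional_citations(3841, 173), 1))  # about 22.2 per author
```

Under fractional counting, a single institution's share of even a massively cited multi-author review becomes modest, which is why such anomalies do not appear in the Leiden Ranking.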
Do not count reviews, summaries, compilations or other publications that refer to papers that may have already been cited, or at least put them in a separate publication type. Do not count self-citations. Even better, do not count citations within the same institution or the same journal.
Most importantly, calculate the indicator for the six subject groups and then aggregate them. If you think that a fractional contribution to two publications justifies putting MEPhI at the top for research impact in physics, go ahead and give them 100 for natural sciences or physical sciences. But is it reasonable to give the institution any more than zero for arts and humanities, the social sciences and so on?
So we have a ranking indicator that has again yielded some very odd results.
In 2010 Thomson Reuters asserted that it had a method that was basically robust, transparent and sophisticated but which had a few outliers and statistical anomalies about which they would be happy to debate.
It is beginning to look as though outliers and anomalies are here to stay and there could well be more on the way.
It will be interesting to see if Thomson Reuters will try to defend this indicator again.
* Richard Holmes is a lecturer at Universiti Teknologi MARA in Malaysia and author of the University Ranking Watch blog.
Apology for factual error
In an earlier version of this article in University World News I made a claim that Times Higher Education, in their 2012 World University Rankings, had introduced a methodological change that substantially affected the overall ranking scores. I acknowledge that this claim was without factual foundation. I withdraw the claim and apologise without reservation to Phil Baty and Times Higher Education.
Richard Holmes
It is noticeable that this indicator, prepared for THE by Thomson Reuters, gives some universities implausibly high scores. I have calculated the world's top universities for research impact according to this indicator.
Since it accounts for 30% of the total ranking it clearly can have a significant effect on overall scores. The data are from the profiles, which can be accessed by clicking on the top 400 universities. Here are the top 20:
1. Rice University
1. Moscow (State) Engineering Physics Institute (MEPhI)
3. University of California Santa Cruz
3. MIT
5. Princeton
6. Caltech
7. University of California Santa Barbara
7. Stanford
9. University of California Berkeley
10. Harvard
11. Royal Holloway London
12. Chicago
13. Northwestern
14. Tokyo Metropolitan University
14. University of Colorado Boulder
16. University of Washington Seattle
16. Duke
18. University of California San Diego
18. University of Pennsylvania
18. Cambridge
There are some surprises here, such as Rice University in joint top place, second-tier University of California campuses at Santa Cruz (equal third with MIT) and Santa Barbara (equal seventh with Stanford) placed ahead of Berkeley and Los Angeles, Northwestern almost surpassing Chicago, and Tokyo Metropolitan University ahead of Tokyo and Kyoto universities and everywhere else in Asia.
It is not totally implausible that Duke and the University of Pennsylvania might be overtaking Cambridge and Oxford for research impact, but Royal Holloway and Tokyo Metropolitan University?
These are surprising, but Moscow State Engineering Physics Institute (MEPhI) as joint best in the world is a definite head-scratcher.
Other oddities
Going down a bit in this indicator we find more oddities.
According to Thomson Reuters, the top 200 universities in the world for research impact include Notre Dame, Carleton, William and Mary College, Gottingen, Boston College, University of East Anglia, Iceland, Crete, Koc University, Portsmouth, Florida Institute of Technology and the University of the Andes.
On the other hand, when we get down to the 300s we find that Tel Aviv, National Central University Taiwan, São Paulo, Texas A&M and Lomonosov Moscow State University are assigned surprisingly low places. The latter is actually in 400th place for research impact among the top 400.
It would be interesting to hear what academics in Russia think about an indicator that puts MEPhI in first place in the world for research impact and Lomonosov Moscow State University in 400th place.
I wonder too about the Russian reaction to MEPhI as overall second among Russian and Eastern European universities. See here, here and here for national university rankings and here and here for web-based rankings.
Déjà vu
We have been here or somewhere near here before.
In 2010 the first edition of the THE rankings placed Alexandria University in the world's top 200 and fourth for research impact. This was the result of a flawed methodology combined with diligent self-citation and cross-citation by a writer whose lack of scientific credibility has been confirmed by a British court.
Supposedly the methodology was fixed last year. But now we have an indicator as strange as in 2010, perhaps even more so.
So how did MEPhI end up as world's joint number one for research impact? It should be emphasised that this is something different from the case of Alexandria. MEPhI is, by all accounts, a leading institution in Russian science. It is, however, very specialised and fairly small and its scientific output is relatively modest.
First, let us take a look at another source, the Scimago World Report, which gives MEPhI a rank of 1,722 for total publications between 2006 and 2010, the same period that Thomson Reuters counts.
Admittedly, that includes a few non-university institutions. It has 25.9 % of its publication in the top quartile of journals. It has a score of 8.8 % for excellence – that is, the proportion of publications in the most highly cited 10% of publications. It gets a score of 1.0 for ‘normalised impact’, which means that it gets exactly the world average for citations adjusted by field, publication type and period of publication.
Moving on to Thomson Reuters’ database at the Web of Science, MEPhI has had only 930 publications listed under that name between 2006 and 2010, although there were some more under other name variants that pushed it over the 200 papers per year threshold to be included in the rankings.
It is true that MEPhI can claim three Nobel prize winners, but they received awards in 1958 and 1964 and one of them was for work done in the 1930s.
So how could anyone think that an institution that now has a modest and specialised output of publications and a citation record that, according to Scimago, does not seem significantly different from the international average using the somewhat larger Scopus database – Thomson Reuters uses the more selective ISI Web of Science – could emerge at the top of Thomson Reuters' research impact indicator?
Furthermore, MEPhI has no publications listed in the ISI Social Science Citation Index and exactly one (uncited) paper in the Arts and Humanities Index on oppositional politics in Central Russia in the 1920s.
There are, however, four publications assigned to MEPhI authors in the Web of Science that are listed as being on the literature of the British Isles, none of which seem to have anything to do with literature or the British Isles or any other isles, but which have a healthy handful of citations that would yield much higher values if they were classified under literature rather than physics.
Citations
Briefly, the essence of Thomson Reuters' counting of citations is that a university's citations are compared to the average for a field in a particular period after publication.
So if the average for a field is 10 citations per paper one year after publication, then 300 citations of a single paper one year after publication would count as 30 citations. If the average for the field was one citation it would count as 300 citations.
To get a high score in the Thomson Reuters research impact indicator, it helps to get citations soon after publication, preferably in a field where citations are low or middling, rather than simply getting many citations.
The main cause of MEPhI's research impact supremacy would appear to be a biennial review that summarises research over two years in particle physics and is routinely referred to in the literature review of research papers in the field.
The life span of each review is short since it is superseded after two years by the next review so that the many citations are jammed into a two-year period, which could produce a massive score for ‘journal impact factor’.
It could also produce a massive score on the citations indicator in the THE rankings. In addition, Thomson Reuters then gives a weighting to countries according to the number of citations. If citations are generally low in their countries, then institutions get some more value added.
The 2006 “Review of Particle Physics”, published in Journal of Physics G, received a total of 3,662 citations, mostly in 2007 and 2008. The 2008 review, published in Physics Letters B, had 3,841 citations, mostly in 2009 and 2010, and the 2010 review, also published in Journal of Physics G, had 2,592 citations, nearly all in 2011. Someone from MEPhI was listed as co-author of the 2008 and 2010 reviews.
It is not the total number of citations that matters, but the number of citations that occur soon after publication. So the 2008 review received 1,278 citations in 2009, but the average number of citations in 2009 to other papers published in Physics Letters B for 2008 was 4.4.
So the 2008 review received nearly 300 times as many citations in the year after publication as the mean for that journal. Add the extra weighting for Russia and there is a very large boost to MEPhI's score from just a single publication. Note that these are reviews of research so it is likely that there had already been citations to the research publications that are reviewed. Among the highlights of the 2010 review are 551 new papers and 108 mostly revised or new reviews.
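A quick check of the figures quoted above, using only the numbers given in the text:

```python
# Illustrative arithmetic only, using the figures quoted above.
review_citations_2009 = 1278   # citations to the 2008 review in 2009
journal_average_2009 = 4.4     # average 2009 citations to other 2008
                               # Physics Letters B papers

ratio = review_citations_2009 / journal_average_2009
print(round(ratio))            # about 290, i.e. nearly 300 times the mean
```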
If the publications had a single author or just a few authors from MEPhI then this would perhaps suggest that the institute had produced or made a major contribution to two publications of exceptional merit. The 2008 review in fact had 173 co-authors. The 2010 review listed 176 members of the Particle Data Group who are referred to as contributors.
It seems then that MEPhI was declared the joint best university for research impact largely because of two multi-author (or contributor) publications to which it had made a fractional contribution. Those four papers assigned to literature may also have helped.
As we go through the other anomalies in the indicator, we find that the reviews of particle physics contributed to other high research impact scores. Tokyo Metropolitan University, Royal Holloway London, the University of California at Santa Cruz, Santa Barbara and San Diego, Notre Dame, William and Mary, Carleton and Pisa also made contributions to the reviews.
This was not the whole story. Tokyo Metropolitan University benefited from many citations to a paper about new genetic analysis software and Santa Cruz had contributed to a massively cited multi-author human haplotype map.
Number of authors
This brings us to the total number of publications. More than 100 authors or contributors were listed on the reviews, yet for some of their institutions the citations had no discernible effect on the impact score, and for others only a modest one.
Why the difference? Here size really does matter and small really is beautiful.
MEPhI has relatively few publications overall. It only just managed to cross the 200 publications per year threshold to get into the rankings. This means that the massive and early citation of the reviews was averaged out over a small number of publications. For others the citations were absorbed by many thousands of publications.
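The dilution effect can be shown with invented numbers. The publication counts and impact values below are purely illustrative, chosen only to match the scale of the story above:

```python
# Sketch of why a small publication count amplifies one hugely cited
# paper. All figures here are invented for illustration.

def mean_normalised_impact(impacts):
    """Average of per-paper normalised impact values."""
    return sum(impacts) / len(impacts)

# A small institute: ~200 papers, one outlier with normalised
# impact 290, the rest at the world average of 1.0.
small = [290.0] + [1.0] * 199

# A large university: the same outlier among 20,000 papers.
large = [290.0] + [1.0] * 19999

print(mean_normalised_impact(small))   # ~2.4, a massive boost
print(mean_normalised_impact(large))   # ~1.01, barely noticeable
```

One anomalous publication more than doubles the small institute's average while leaving the large university's essentially unchanged.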
These anomalies and others could have been avoided by a few simple and obvious measures. After the business with Alexandria in 2010 Thomson Reuters did tweak its system, but evidently this was not enough.
First, it would help if Thomson Reuters scrutinised the criteria by which specialised institutions are included in the rankings. If we are talking about how well universities spread knowledge and ideas, it is questionable whether we should count institutions that do research in one or only a few fields.
There are many methods by which research impact can be evaluated. The full menu can be found on the Leiden Ranking site. Use a variety of methods to calculate research impact, especially those, like the h-index, that are specifically designed to damp the influence of outliers and extreme cases.
It would be sensible to increase the threshold of publications for inclusion in the rankings: the Leiden Ranking excludes universities with fewer than 500 publications a year. If a publication has multiple authors, divide the number of citations by the number of authors. If that is too complex, then start dividing citations once the number of authors reaches ten or a dozen.
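A simple version of this kind of fractional counting, applied to the figures mentioned earlier, shows how sharply it deflates multi-author outliers. This is a sketch of the general idea, not the exact Leiden formula (which fractionalises by university address rather than by author):

```python
# Sketch of fractional counting: a multi-author paper's citations are
# divided among its contributors rather than credited in full to each.

def fractional_citations(citations: float, n_contributors: int) -> float:
    """Each contributor's share of a paper's citations."""
    return citations / n_contributors

# 3,841 citations to a review with roughly 170 contributors leaves
# each with a modest share rather than the full count.
share = fractional_citations(3841, 170)
print(round(share, 1))   # about 22.6 citations per contributor
```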
Do not count reviews, summaries, compilations or other publications that refer to papers that may have already been cited, or at least put them in a separate publication type. Do not count self-citations. Even better, do not count citations within the same institution or the same journal.
Most importantly, calculate the indicator for the six subject groups and then aggregate them. If you think that a fractional contribution to two publications justifies putting MEPhI at the top for research impact in physics, go ahead and give them 100 for natural sciences or physical sciences. But is it reasonable to give the institution any more than zero for arts and humanities, the social sciences and so on?
So we have a ranking indicator that has again yielded some very odd results.
In 2010 Thomson Reuters asserted that it had a method that was basically robust, transparent and sophisticated but which had a few outliers and statistical anomalies about which they would be happy to debate.
It is beginning to look as though outliers and anomalies are here to stay and there could well be more on the way.
It will be interesting to see if Thomson Reuters will try to defend this indicator again.
* Richard Holmes is a lecturer at Universiti Teknologi MARA in Malaysia and author of the University Ranking Watch blog.
Apology for factual error
In an earlier version of this article in University World News I made a claim that Times Higher Education, in their 2012 World University Rankings, had introduced a methodological change that substantially affected the overall ranking scores. I acknowledge that this claim was without factual foundation. I withdraw the claim and apologise without reservation to Phil Baty and Times Higher Education.
Richard Holmes