I am sure that many people waited for the new Times Higher Education rankings in the hope that they would be a significant improvement over the old THE-QS rankings.
In some ways there have been improvements and if one single indicator had been left out the new rankings could have been considered a qualified success.
However, there is a problem, and it is a big problem: the citations indicator, which consists of the number of citations to articles published between 2004 and 2008 in ISI-indexed journals divided by the number of articles. It is therefore a measure of the average quality of articles, on the assumption that the more citations a paper receives, the better it is.
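To take a made-up example, a university whose 10,000 indexed articles from that period attracted 50,000 citations would score 5.0 citations per paper, however large or small its total output.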
Giving nearly a third of the total weighting to research impact is questionable. Giving nearly a third to just one of several possible indicators of research impact is dangerous. Apart from anything else, it means that any errors or methodological flaws might undermine the entire ranking.
THE have been at pains to suggest that one of the flaws in the old rankings was their failure to take account of different citation patterns in different disciplines: universities with strengths in disciplines such as medicine, where citation is frequent, did much better than those that are strong in disciplines such as philosophy, where citation is less common. We were told that the new data would be normalized by disciplinary group, so that a university with a small number of citations in the arts and humanities could still do well if that number was relatively high compared to the number of citations for the highest scorer in that disciplinary cluster.
I think we can assume that this means the following: in each of the six disciplinary groups, the number of citations per paper was calculated for each university; the mean for all universities in the group was calculated; the top-scoring university was given a score of 100; Z scores, that is the number of standard deviations from the mean, were calculated; and the score for the whole indicator was found by calculating the mean score across the six disciplinary groups.
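A minimal sketch of that assumed procedure, in Python with invented numbers; the group names, the rescaling so that the top university in each group gets 100, and the final averaging are all my reading of the description above, not Thomson Reuters' published method.

import statistics

# Hypothetical citations-per-paper figures for three universities in two of
# the six disciplinary groups (illustrative numbers only).
cpp = {
    "physical_sciences": {"Univ A": 16.3, "Univ B": 6.5, "Univ C": 10.8},
    "social_sciences":   {"Univ A": 8.0,  "Univ B": 4.2, "Univ C": 4.5},
}

def group_scores(values):
    """Z scores rescaled so that the top university in the group gets 100."""
    mean = statistics.mean(values.values())
    sd = statistics.stdev(values.values())
    z = {u: (v - mean) / sd for u, v in values.items()}
    top = max(z.values())
    return {u: 100 * zu / top for u, zu in z.items()}

# Overall citations score: the mean of the per-group scores.
per_group = {g: group_scores(v) for g, v in cpp.items()}
universities = next(iter(cpp.values())).keys()
overall = {u: statistics.mean(per_group[g][u] for g in cpp) for u in universities}
print(overall)

On this reading, no university can score more than 100 in any group, which is the crucial point made in the next paragraph.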
The crucial point here is the rather obvious one that no university can get more than 100 for any disciplinary group. If it were otherwise then Harvard, MIT and Caltech would be getting scores well in excess of 100.
So, let us look at some of the highest scores for citations per paper. First, the University of Alexandria, which is not listed in the ARWU top 500, is not ranked by QS, and is ranked 5,882nd in the world by Webometrics.
The new rankings put Alexandria in 4th place in the world for citations per paper. This meant that with the high weighting given to the citations indicator the university achieved a very respectable overall place of 147th.
How did this happen? For a start, I would like to compare Alexandria with Cornell, an Ivy League university with a score of 88.1 for citations, well below Alexandria's.
I have used data from the Web of Science to analyse citation patterns according to the disciplinary groups indicated by Thomson Reuters. These scores may not be exactly those calculated by TR, since I have made some quick decisions about allocating subjects to different groups and TR may well have done it differently. I doubt, though, that it would make any real difference if I put biomedical engineering in clinical and health subjects and TR put it in engineering and technology or life sciences. Still, I would welcome it if Thomson Reuters could show how their classification of disciplines into various groups produced the scores that they have published.
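For what it is worth, the kind of allocation I made looks roughly like the sketch below; the category names and group assignments are my own illustrative choices, not Thomson Reuters' scheme.

from collections import defaultdict

# Illustrative mapping from Web of Science subject categories to the six
# disciplinary groups; the assignments are my own guesses, not TR's.
GROUP = {
    "Physics, Mathematical": "physical sciences",
    "Chemistry, Physical": "physical sciences",
    "Engineering, Biomedical": "clinical and health",  # TR might place this elsewhere
    "Computer Science, Theory & Methods": "engineering and technology",
    "Philosophy": "arts and humanities",
    "Economics": "social sciences",
    "Cell Biology": "life sciences",
}

def citations_per_paper(records):
    """records: (subject_category, citation_count) pairs for one
    university's papers published 2004-2008."""
    papers = defaultdict(int)
    cites = defaultdict(int)
    for category, n_cites in records:
        group = GROUP.get(category)
        if group is None:
            continue  # category not mapped in this toy example
        papers[group] += 1
        cites[group] += n_cites
    return {g: cites[g] / papers[g] for g in papers}

# Example with made-up records:
print(citations_per_paper([
    ("Physics, Mathematical", 12),
    ("Physics, Mathematical", 3),
    ("Economics", 8),
]))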
So where did Alexandria's high score come from? It was not because Alexandria does well in the arts and humanities: Alexandria had an average of 0.44 citations per paper and Cornell 0.85.
It was not because Alexandria is brilliant in the social sciences: it had 4.21 citations per paper and Cornell 7.98.
Was it medicine and allied disciplines? No. Alexandria had 4.97 and Cornell 11.53.
Life sciences? No. Alexandra had 5.30 and Cornell 13.49.
Physical Sciences? No. Alexandria had 6.54 and Cornell 16.31.
Engineering, technology and computer science? No. Alexandria had 6.03 and Cornell 9.59.
In every single disciplinary group Cornell is well ahead of Alexandria. Possibly, TR did something differently. Maybe they counted citations to papers in conference proceedings but that would only affect papers published in 2008 and after. At the moment, I cannot think of anything that would substantially affect the relative scores.
Some further investigation showed that while Alexandria's citation record is less than stellar in most respects, there is precisely one discipline, or subdiscipline, or even subsubdiscipline, where it does very well. Looking at the disciplines one by one, I found that the one where Alexandria does seem to have an advantage is mathematical physics. Here it has 11.52 citations per paper, well ahead of Cornell with 6.36.
Phil Baty in THE states:
"Alexandria University is Egypt's only representative in the global top 200, in joint 147th place. Its position, rubbing shoulders with the world's elite, is down to an exceptional score of 99.8 in the 'research-influence' category, which is virtually on a par with Harvard University.

Alexandria, which counts Ahmed H. Zewail, winner of the 1999 Nobel Prize for Chemistry, among its alumni, clearly produces some strong research. But it is a cluster of highly cited papers in theoretical physics and mathematics - and more controversially, the high output from one scholar in one journal - that gives it such a high score.

Mr Pratt said: 'The citation rates for papers in these fields may not appear exceptional when looking at unmodified citation counts; however, they are as high as 40 times the benchmark for similar papers. The effect of this is particularly strong given the relatively low number of papers the university publishes overall.'"

This is not very convincing. Does Alexandria produce strong research? Overall, no. It is ranked 1014th in the world for total papers over a ten-year period by SCImago.
Let us assume, however, that Alexandria's citations per paper were such that it was the top scorer not just in mathematical or interdisciplinary physics, but also in physics in general and in the physical sciences, including maths (which, as we have seen, it was not anyway).
Even if the much cited papers in mathematical physics did give a maximum score of 100 for the physical sciences and maths group, how could that compensate for the low scores that the university should be getting for the other five groups? To attain a score of 99.8 Alexandria would have to be near the top for each of the six disciplinary groups. This is clearly not the case. I would therefore like to ask someone from Thomson Reuters to explain how they got from the citation and paper counts in the ISI database to an overall score.
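To put some purely hypothetical numbers on it: even if Alexandria scored the full 100 for the physical sciences group and, say, 30, 25, 20, 15 and 10 for the other five, the average would be (100 + 30 + 25 + 20 + 15 + 10) / 6 = 33.3, nowhere near 99.8.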
Similarly we find that Bilkent University in Turkey had a score for citations of 91.7, quite a bit ahead of Cornell.
The number of citations per paper in each disciplinary group is as follows:
Arts and Humanities: Bilkent 0.44, Cornell 0.85
Social Sciences: Bilkent 2.92, Cornell 7.98
Medicine etc: Bilkent 9.42, Cornell 11.53
Life Sciences: Bilkent 5.44, Cornell 13.49
Physical Sciences: Bilkent 8.75, Cornell 16.31
Engineering and Computer Science: Bilkent 6.15, Cornell 9.59
Again, it is difficult to see how Bilkent could have surpassed Cornell. I did notice that one single paper in Science had received over 600 citations. Would that be enough to give Bilkent such a high score?
It has occurred to me that, since this paper was listed under "multidisciplinary sciences", its citations may have been counted more than once. Again, it would be a good idea for TR to explain step by step exactly what they did.
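To see why a single paper matters so much, take some purely illustrative numbers: if a university's 300 papers in a group attracted 1,500 citations between them, that is 5.0 citations per paper; add one paper with 600 citations and the figure for 301 papers rises to about 7.0. And if that paper's citations were credited to more than one disciplinary group, it would lift several group scores at once.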
Now for Hong Kong Baptist University. It is surprising that this university should be in the top 200 since in all other rankings it has lagged well behind other Hong Kong universities. Indeed it lags behind on the other indicators in this ranking.
The number of citations per paper in the various disciplinary groups is as follows:
Arts and Humanities: HKBU 0.34, Cornell 0.85
Social Sciences: HKBU 4.50, Cornell 7.98
Medicine etc: HKBU 7.82, Cornell 11.53
Life Sciences: HKBU 10.11, Cornell 13.49
Physical Sciences: HKBU 10.78, Cornell 16.31
Engineering and Computer Science: HKBU 8.61, Cornell 9.59
Again, there seems to be a small group of prolific, highly accomplished and reputable researchers, especially in chemistry and engineering, who have boosted HKBU's citations. But again, how could this affect the overall score? Isn't this precisely what normalization by discipline was supposed to prevent?
There are other universities with suspiciously high scores for this indicator. One also wonders whether, among the universities that did not make it into the top 200, some were unfairly penalized. Now that the iPhone app is being downloaded across the world, this may soon become apparent.
Once again I would ask TR to go step by step through the process of calculating these scores and to assure us that they are not the result of an error or series of errors. If they can do this I would be very happy to start discussing more fundamental questions about these rankings.
My bet is that TR uses the Leiden "Crown indicator" since this is what is embodied in their product InCites.
To cut it short, each paper is linked to a subdiscipline, a type of publication (letter, review, ...) and a year of publication. With this data for the whole world, it is easy to calculate the expected number of citations for a paper of a given type, in a given discipline, in a given year.
For a set of papers (e.g. all the papers of Alexandria university), the indicator is calculated as Sum(received citations)/Sum(expected citations).
This number can become very high if you have a small number of papers or if you look only at recent papers (if, on average, you expect 0.1 citations for a recent paper in math, a single citation will give you a score of 10 for this paper!).
Note that Leiden has recently decided to change its favorite indicator to a mean(citations received/citations expected), which gives less weight to a few highly cited papers in a set. But it seems that TR has not yet implemented this new indicator.
Note also that, in order to avoid the excessive weight given to a few papers in a small set, Leiden publishes its own ranking of universities with thresholds on the total number of papers published.
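A minimal sketch of the two indicators described in the comment above, with invented expected-citation values; this is only my reading of that description, not TR's or Leiden's actual implementation.

# Toy set of papers: one heavily cited paper plus a few recent, lightly
# cited ones; the "expected" values are invented for illustration.
papers = [
    {"cites": 600, "expected": 12.0},
] + [
    {"cites": 1, "expected": 0.1} for _ in range(4)
]

# Old Leiden "crown indicator": sum of received over sum of expected citations.
crown = sum(p["cites"] for p in papers) / sum(p["expected"] for p in papers)

# Newer indicator: mean of each paper's received/expected ratio.
mncs = sum(p["cites"] / p["expected"] for p in papers) / len(papers)

print(round(crown, 1))  # ~48.7: dominated by the one heavily cited paper
print(round(mncs, 1))   # ~18.0: each paper counts once, but tiny expected
                        # values still inflate the score for a small set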
Hi Richard.. I asked Simon Pratt the same question about Alexandria, but haven't heard from him yet. He must be quite busy answering similar questions from thousands of people.
Simon is quite good at what he does (I know him personally), so I am very interested to see his explanation.
Dahman
@Richard~ Not sure what data you have used to run your analysis.. I was trying to do the same and asked THE some questions through Twitter.. you may find their answers useful.. here you go with Twitter:
dahman - Not sure how citations were calculated. Is it 2004-2008 citations to all articles published in WOS since 1900 up to 2008? #THEWUR
THEworldunirank - @dahman #THEWUR uses normalised citation impact. 25m citations for papers between 2004 to 2008
dahman - @THEworldunirank So both, publications & citations taken from period 2004-2008. means looking at recent impact of recent research! #THEWUR
THEworldunirank - @dahman citations during 2009 are included. #thewur
@Pablo ~ I guess you are right about TR using Leiden's crown indicator. The most important thing for Times/TR is that Leiden in their latest ranking http://www.cwts.nl uses a threshold of 400 publications and not the silly 50.