Right after putting up the post on Moscow State Engineering Physics Institute and its "achievement" in getting the maximum score for research impact in the latest THE - TR World University Rankings, I found this exchange on Facebook. See my comments at the end.
- Valery Adzhiev So, the best university in the world in the "citation" (i.e. "research influence") category is Moscow State Engineering Physics Institute with the maximum '100' score. This is a remarkable achievement by any standards. At the same time it scored just 10.6 (out of 100) in "research", which is a very, very low result. How on earth can that be?
- Times Higher Education World University Rankings Hi Valery,
Regarding MEPHI’s high citation impact, there are two causes: Firstly, they have a couple of extremely highly cited papers out of a very low volume of papers. The two extremely highly cited papers are skewing what would ordinarily be a very good normalized citation impact to an even higher level.
We also apply "regional modification" to the Normalized Citation Impact. This is an adjustment that we make to take into account the different citation cultures of each country (because of things like language and research policy). In the case of Russia, because the underlying citation impact of the country is low it means that Russian universities get a bit of a boost for the Normalized Citation Impact.
MEPHI is right on the boundary for meeting the minimum requirement for the THE World University Rankings, and for this reason was excluded from the rankings in previous years. There is still a big concern with the number of papers being so low and I think we may see MEPHI’s citation impact change considerably over time as the effect of the above-mentioned two papers goes out of the system (although there will probably be new ones coming in).
Hope this helps to explain things. THE
- Valery Adzhiev Thanks for your prompt reply. Unfortunately, a closer look at that case only adds rather awkward questions. The "couple of extremely highly cited papers" are actually not "papers": they are biennial volumes titled "The Review of Particle Physics" that ...
- Valery Adzhiev I continue. There are more than 200 authors (in fact, they are "editors") from more than 100 organisations from all over the world who produce those volumes. Look: just one of them happened to be affiliated with MEPhI - and that rather modest fact (tha...
- Valery Adzhiev Sorry, another addition: I just want to repeat that my point is not concerned only with MEPhI - I am talking about your methodology. Look at the "citation score" of some other universities. Royal Holloway, University of London, having just 27.7 in "res...
- Alvin See Great observations, Valery.
- Times Higher Education World University Rankings Hi Valery,
Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for...
- Andrei Rostovtsev This is in fact a rather philosophical point. There are also a number of very scandalous papers with a definitively negative scientific impact, but which make a lot of noise. Those also have a high contribution to the citation score, but a negative impact t...
It is true that two extremely highly cited publications combined with a low total number of publications skewed the results, but what is equally or perhaps more important is that these citations occur in the first year or two after publication, when citations are normally relatively infrequent, so the papers are measured against a very low expected baseline. The 2010 publication is a biennial review, like the 2008 publication, that will be cited copiously for two years, after which it will no doubt be superseded by the 2012 edition.
Also, we should note that in the ISI Web of Science the 2008 publication is classified as "physics, multidisciplinary". Papers listed as multidisciplinary generally get relatively few citations, so if the publication were compared to other multidisciplinary papers it would get an even larger weighting.
Valery has an excellent point when he notes that these publications have over 100 authors or contributors each (I am not sure whether they are actual researchers or administrators). Why then did not all the other contributors boost their institutions' scores to similar heights? Partly because they were not in Russia and therefore did not get the regional weighting, but also because their institutions were publishing many more papers overall than MEPhI.
So basically, A. Romaniouk, who contributed 1/173rd of one publication, was considered to have more research impact than hundreds of researchers at Harvard, MIT, Caltech and the like producing hundreds of papers cited hundreds of times. Sorry, but is this a ranking of research quality or a lottery?
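To make the mechanics concrete, here is a minimal sketch in Python with entirely invented numbers (the actual Thomson Reuters normalisation is more elaborate and uses field- and year-specific world baselines) of how one mega-cited review in a tiny portfolio can dwarf a large institution's consistently excellent output once a regional modifier is applied.

```python
# Rough illustration only: invented numbers, not the actual Thomson Reuters
# method, which normalises against field- and year-specific world baselines.

def normalised_impact(papers, regional_modifier=1.0):
    """papers is a list of (citations, expected_citations) pairs, where
    expected_citations is the world average for that field and year.
    Returns the mean actual/expected ratio, scaled by a regional modifier."""
    ratios = [cites / expected for cites, expected in papers]
    return regional_modifier * sum(ratios) / len(ratios)

# A small institution: 50 ordinary papers plus one review cited 2,000 times
# in a field/year where the expected count is only 5 citations per paper.
small_portfolio = [(4, 5)] * 50 + [(2000, 5)]

# A large institution: 5,000 papers, each cited at twice the world average.
large_portfolio = [(10, 5)] * 5000

print(normalised_impact(small_portfolio, regional_modifier=1.3))  # ~11.2
print(normalised_impact(large_portfolio))                         # 2.0
```

The point is not the particular numbers but the shape of the calculation: an unweighted mean of ratios is extremely sensitive to a single outlier when the number of papers in the denominator is small.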
The worst part of THE's reply is this:
Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for all to see (and indeed scrutinise, which everyone is entitled to do).
We welcome feedback, are constantly developing our system, and will definitely take your comments on board.
The system is not balanced. Citations have a weighting of 30%, much more than any other indicator; even the research reputation survey has a weighting of only 18%. And to describe as comprehensive an indicator that allows a fraction of one or two publications to surpass massive amounts of original and influential research is really plumbing the depths of absurdity.
I am just about to finish comparing the scores for research and research impact for the top 400 universities. There is a statistically significant correlation, but it is quite modest. When research reputation, volume of publications and research income show such a modest correlation with research impact, it is time to ask whether there is a serious problem with this indicator.
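The comparison itself is nothing more exotic than a Pearson correlation between the two columns of scores; a sketch of the calculation is below, with made-up scores rather than the actual THE data.

```python
from statistics import correlation  # Pearson's r; available in Python 3.10+

# Invented scores for illustration; the actual exercise uses the published
# "research" and "citations" columns for the top 400 universities.
research_scores = [10.6, 27.7, 55.0, 80.2, 91.3, 33.4, 47.8, 62.1]
citation_scores = [100.0, 98.0, 60.5, 75.4, 88.9, 40.2, 52.3, 70.7]

r = correlation(research_scores, citation_scores)
print(f"Pearson r = {r:.2f}")
```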
Here is some advice for THE and TR.
- First, and surely very obvious: if you are going to use field normalisation, then calculate the score for discipline groups (natural sciences, social sciences and so on) and aggregate the scores. So give MEPhI a 100 for physical or natural sciences if you think it deserves it, but not for the arts and humanities.
- Second, and also obvious: introduce fractional counting, that is, dividing the number of citations by the number of authors of the cited paper (see the sketch after this list).
- Do not count citations to summaries, reviews or compilations of research.
- Do not count citations of commercial material about computer programs. This would reduce the very high and implausible score for Göttingen, which is derived from a single publication.
- Do not assess research impact with only one indicator. See the Leiden ranking for the many ways of rating research.
- Consider whether it is appropriate to have a regional weighting. This is after all an international ranking.
- Reduce the weighting for this indicator.
- Do not count self-citations. Better yet, do not count citations from researchers at the same university.
- Strictly enforce your rule about not including single subject institutions in the general rankings.
- Increase the threshold number of publications for inclusion in the rankings from two hundred to four hundred.
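On the fractional counting suggestion above, the arithmetic is simple. The sketch below is illustrative only: the 173-author figure comes from the case discussed earlier, while the function name and citation counts are invented. It shows how splitting credit across authors deflates the contribution of a single multi-author review.

```python
# Fractional counting sketch: each cited paper's citations are divided by
# its number of authors before being credited to any one institution.
# The 173-author figure comes from the case above; the citation counts
# are invented for illustration.

def fractional_credit(citations, n_authors):
    """Citation credit per author when credit is split across all authors."""
    return citations / n_authors

review_credit = fractional_credit(2000, 173)  # one mega-review: ~11.6 per author
solo_credit = fractional_credit(40, 1)        # a sole-authored paper keeps all 40

print(f"Review contributor: {review_credit:.1f} citations of credit")
print(f"Sole author:        {solo_credit:.1f} citations of credit")
```

Under a scheme like this, a contributor of 1/173rd of one review no longer outscores a researcher with a respectable body of sole- or few-authored work.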