Saturday, November 03, 2012


Apology

In a recent article in University World News I made a claim that Times Higher Education, in their recent World University Rankings, had introduced a methodological change that substantially affected the overall ranking scores. I acknowledge that this claim was without factual foundation. I withdraw the claim and apologise without reservation to Phil Baty and Times Higher Education.

Saturday, October 27, 2012

More on MEPhI

Right after putting up the post on Moscow State Engineering Physics Institute and its "achievement" in getting the maximum score for research impact in the latest THE - TR World University Rankings, I found this exchange on Facebook.  See my comments at the end.

  • Valery Adzhiev So, the best university in the world in the "citation" (i.e. "research influence") category is Moscow State Engineering Physics Institute with the maximum '100' score. This is a remarkable achievement by any standards. At the same time it scored just 10.6 (out of 100) in "research", which is a very, very low result. How on earth can that be?
  • Times Higher Education World University Rankings Hi Valery,

    Regarding MEPHI’s high citation impact, there are two causes: Firstly they have a couple of extremely highly cited papers out of a very low volume of papers. The two extremely highly cited papers are skewing what would ordinarily be a very good normalized citation impact to an even higher level.

    We also apply "regional modification" to the Normalized Citation Impact. This is an adjustment that we make to take into account the different citation cultures of each country (because of things like language and research policy). In the case of Russia, because the underlying citation impact of the country is low it means that Russian universities get a bit of a boost for the Normalized Citation Impact.

    MEPHI is right on the boundary for meeting the minimum requirement for the THE World University Rankings, and for this reason was excluded from the rankings in previous years. There is still a big concern with the number of papers being so low and I think we may see MEPHI’s citation impact change considerably over time as the effect of the above mentioned 2 papers go out of the system (although there will probably be new ones come in).

    Hope this helps to explain things.
    THE
  • Valery Adzhiev Thanks for your prompt reply. Unfortunately, a closer look at that case only adds rather awkward questions. "A couple of extremely highly cited papers" are actually not "papers": they are biennial volumes titled "The Review of Particle Physics" that ...See More
  • Valery Adzhiev I continue. There are more than 200 authors (in fact, they are "editors") from more than 100 organisation from all over the world, who produce those volumes. Look: just one of them happened to be affiliated with MEPhI - and that rather modest fact (tha...See More
  • Valery Adzhiev Sorry, another addition: I'd just want to repeat that my point is not concerned only with MEPhI - I am talking about your methodology. Look at the "citation score" of some other universities. Royal Holloway, University of London having just 27.7 in "res...See More
  • Alvin See Great observations, Valery.
  • Times Higher Education World University Rankings Hi Valery,

    Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for
    ...See More
  • Andrei Rostovtsev This is in fact rather philosophical point. There are also a number of very scandalous papers with definitively negative scientific impact, but making a lot of noise around. Those have also high contribution to the citation score, but negative impact t...See More

    It is true that two extremely highly cited publications combined with a low total number of publications skewed the results, but what is equally or perhaps more important is that these citations occur in the year or two after publication, when citations tend to be relatively infrequent compared to later years. The 2010 publication is a biennial review, like the 2008 publication: it will be cited copiously for two years, after which it will no doubt be superseded by the 2012 edition.

    Also, we should note that in the ISI Web of Science, the 2008 publication is classified as "physics, multidisciplinary". Papers listed as multidisciplinary generally get relatively few citations so if the publication was compared to other multidisciplinary papers it would get an even larger weighting. 

    Valery has an excellent point when he notes that these publications have over 100 authors or contributors each (I am not sure whether they are actual researchers or administrators). Why, then, did all the other contributors not boost their institutions' scores to similar heights? Partly because they were not in Russia and therefore did not get the regional weighting, but also because they were publishing many more papers overall than MEPhI.

    So basically, A. Romaniouk, who contributed 1/173rd of one publication, was considered to have more research impact than hundreds of researchers at Harvard, MIT, Caltech etc. producing hundreds of papers cited hundreds of times. Sorry, but is this a ranking of research quality or a lottery?

    The worst part of THE's reply is this:

    Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for all to see (and indeed scrutinise, which everyone is entitled to do).

    We welcome feedback, are constantly developing our system, and will definitely take your comments on board.

    The system is not balanced. Citations have a weighting of 30%, much more than any other indicator; even the research reputation survey has a weighting of only 18%. And to describe as comprehensive an indicator that allows a fraction of one or two publications to surpass massive amounts of original and influential research is really plumbing the depths of absurdity.

    I am just about to finish comparing the scores for research and research impact for the top 400 universities. There is a statistically significant correlation but it is quite modest. When research reputation, volume of publications and research income show such a modest correlation with research impact, it is time to ask whether there is a serious problem with this indicator.

    Here is some advice for THE and TR.

    • First, and surely very obvious, if you are going to use field normalisation then calculate separate scores for discipline groups (natural sciences, social sciences and so on) and aggregate them. So give MEPhI a 100 for physical or natural sciences if you think it deserves it, but not for the arts and humanities.
    • Second, and also obvious, introduce fractional counting, that is, dividing the number of citations by the number of authors of the cited paper.
    • Do not count citations to summaries, reviews or compilations of research.
    • Do not count citations of commercial material about computer programs. This would reduce the very high and implausible score for Göttingen, which is derived from a single publication.
    • Do not assess research impact with only one indicator. See the Leiden ranking for the many ways of rating research.
    • Consider whether it is appropriate to have a regional weighting. This is after all an international ranking.
    • Reduce the weighting for this indicator.
    • Do not count self-citations. Better yet, do not count citations from researchers at the same university.
    • Strictly enforce your rule about not including single-subject institutions in the general rankings.
    • Increase the threshold number of publications for inclusion in the rankings from two hundred to four hundred.
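
The fractional-counting suggestion above is easy to illustrate. The sketch below reuses two figures from this post (1,627 citations in 2010 and 173 contributors); the function is a minimal illustration of the principle, not any ranker's actual formula.

```python
# Minimal sketch of fractional counting: a paper's citations are divided
# by its number of authors, so each contributor receives a per-author share.
# Figures taken from the post: 1627 citations, 173 contributors.

def fractional_citations(citations, n_authors):
    """Per-author share of a paper's citations."""
    return citations / n_authors

# Full counting: the whole review's 2010 citations accrue to one institution.
full = 1627

# Fractional counting: credit for one contributor out of 173.
share = fractional_citations(1627, 173)

print(full)             # 1627
print(round(share, 1))  # 9.4
```

Under fractional counting, a 1/173rd contribution to even a massively cited review adds only a handful of citations to an institution's tally, which is the point of the recommendation.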


Friday, October 26, 2012

Stellenbosch

Credit where due. Times Higher Education has now put Stellenbosch University in Africa.
Dancing in the Street in Moscow

"Jubilant crowds poured into the streets of Moscow when it was announced that Moscow State Engineering Physics Institute had been declared to be the joint top university in the world, along with Rice University in Texas, for research impact".

Just kidding about the celebrations.

But the Times Higher Education - Thomson Reuters World University Rankings have given the "Moscow State Engineering Physics Institute" a score of 100 for research impact, which is measured by the number of citations per paper normalised by field, year of publication and country.

There are a couple of odd things about this.

First, "Moscow State Engineering Physics Institute" was reorganised in 2009 and its official title is now National Research Nuclear University MEPhI. It still seems to be normal to refer to MEPhI or Moscow State Engineering Physics Institute, so I will not argue about this. But I wonder if there has been some confusion in TR's data collection.

Second, THE says that institutions are not ranked if they teach only a single narrow subject. Does the institution teach more than just physics?

So how did MEphI do it? The answer seems to be a couple of massively cited review articles. The first was by C. Amsler et (many, many) alia in Physics Letters B of September 2008, entitled Review of Particle Physics. It was cited 1278 times in 2009 and 1627 times in 2010 according to the Web of Science, and even more according to Google Scholar.

Here is the abstract.

"Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2778 new measurements from 645 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, top quark, muon anomalous magnetic moment, extra dimensions, particle detectors, cosmic background radiation, dark matter, cosmological parameters, and big bang cosmology".

I have not counted the number of authors, but there are 113 institutional affiliations, of which MEPhI is 84th.

The second paper is by K. Nakamura et alia. It is also entitled Review of Particle Physics and was published in the Journal of Physics G: Nuclear and Particle Physics in July 2010. It was cited 1240 times in 2011. This is the abstract.
 
"This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2158 new measurements from 551 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on neutrino mass, mixing, and oscillations, QCD, top quark, CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, fragmentation functions, particle detectors for accelerator and non-accelerator physics, magnetic monopoles, cosmological parameters, and big bang cosmology".

There are 119 affiliations, of which MEPhI is 91st.

Let me stress that there is nothing improper here. It is normal for papers in the physical sciences to cite summaries or reviews of research at the beginning of a literature review. I also assume that the similarity in the wording of the two abstracts would be considered appropriate standardisation within the discipline rather than plagiarism.

TR's method compares the number of citations a paper receives with the average for that field, in that year, in that country. MEPhI would not get very much credit simply for publishing in physics, which is a quite highly cited discipline, but it would get some for being in Russia, where citations in English are relatively sparse, and a massive boost for exceeding the average number of citations within one or two years of publication many times over.
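The mechanism can be sketched in a few lines of Python. The baseline values and the simple ratio form of the regional adjustment are assumptions for illustration; TR does not publish its parameters at this level of detail.

```python
# Illustrative sketch of field/year normalization plus a regional
# modification. The baselines and the ratio form of the adjustment are
# invented for illustration, not Thomson Reuters' published method.

def normalized_impact(cites, field_year_avg):
    """Citations divided by the world average for the same field and year."""
    return cites / field_year_avg

def regional_modification(impact, country_avg_impact):
    """Adjust for national citation culture: a country whose papers average
    half the world norm sees its universities' scores doubled."""
    return impact / country_avg_impact

# A review cited 1278 times in its first full year, in a field/year cell
# where the average paper has been cited 15 times (invented baseline):
raw = normalized_impact(1278, 15.0)

# Assume Russian papers average half the world's normalized impact:
adjusted = regional_modification(raw, 0.5)

print(round(raw, 1))       # 85.2
print(round(adjusted, 1))  # 170.4
```

Note how the two effects multiply: a paper already dozens of times above its field/year baseline is amplified further by a low national average, which is how one or two outliers can dominate an institution's score.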

There is one other factor. MEPhI was only one of more than 100 institutions contributing to each of these papers, but it got such an unusually massive score because its citations, already magnified by region and period of publication, were divided by a comparatively small number of publications.

This is not as bad as Alexandria University being declared the fourth best university in the world for research impact in 2010. MEPhI is a genuinely excellent institution, which Alexandria, despite a solitary Nobel laureate and an historic library, was not. But does it really deserve to be number one for research impact, or even in the top 100? TR's methods are in need of very thorough revision.

And I haven't heard about any celebrations in Houston either.