A very interesting document from the top 11 Japanese research universities has appeared. They are unhappy with the citations indicator in last year's Times Higher Education -- Thomson Reuters World University Rankings.
"The purpose of analyzing academic research data, particularly publication and citation trends is to provide diverse objective information on universities and other academic institutions that can be used by researchers and institutions for various evaluations and the setting of objectives. The 2010 Thomson Reuters / THE World University Rankings, however, do not give sufficient consideration to the unique characteristics of universities in different countries or the differing research needs and demands from society based on country, culture and academic field. As a result, those rankings are likely to lead to an unbalanced misleading and misuse of the citation index.
RU11 strongly requests, therefore, that Thomson Reuters / THE endeavors to contribute to academic society by providing objective and impartial data, rather than imposing a simplistic and trivialized form of university assessment."
It is a tactical mistake to go on about uniqueness. This is an excuse that has been used too often by institutions whose flaws have been revealed by international rankings.
Still, they do have a point. They go on to show that leading Japanese universities do badly, while Chinese, Korean and other Asian universities do very well, when their positions on the citations indicator in the THE-TR rankings are compared with three other measures: the citations per paper indicator in the 2010 QS Asian University Rankings, citations per paper over an eleven-year period from TR's Essential Science Indicators, and the citations per faculty indicator in the 2010 QS World University Rankings. (I assume they mean citations per faculty here, since the QS World University Rankings do not have a citations per paper indicator.)
They complain that the THE-TR rankings emphasise "home run papers" and research that produces immediate results, and that regional modification (normalisation) discriminates against Japanese universities.
This is no doubt a large part of the story, but I suspect that the distortions of the 2010 THE-TR indicator also arise from differences in the practice of self-citation and intra-university citation, from a methodology that actually favors universities that publish relatively few papers, and from its bias towards low-cited disciplines.
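To see why these complaints are plausible, consider a simplified sketch of field-normalised citation impact, the general kind of calculation behind the THE-TR citations indicator. All the numbers and names below are invented for illustration; the actual TR formula is not public in this detail:

```python
# Hypothetical illustration of field/region-normalised citation impact.
# All figures are invented; this is not the actual THE-TR calculation.

# name: (papers published, total citations received) in one field
universities = {
    "Univ A (large output)": (5000, 60000),  # 12.0 citations per paper
    "Univ B (small output)": (200, 3600),    # 18.0 citations per paper
}

WORLD_AVERAGE = 10.0     # hypothetical world average citations per paper
REGIONAL_AVERAGE = 6.0   # hypothetical lower regional baseline

for name, (papers, citations) in universities.items():
    cpp = citations / papers
    # Normalised impact: how far above or below the baseline a university sits
    vs_world = cpp / WORLD_AVERAGE
    vs_region = cpp / REGIONAL_AVERAGE
    print(f"{name}: {cpp:.1f} cites/paper; "
          f"vs world {vs_world:.2f}; vs region {vs_region:.2f}")
```

Two effects fall out of even this toy version: a university with few papers but a handful of "home run" papers gets a much higher normalised score than a large producer of solidly cited work, and dividing by a lower regional average inflates the scores of universities in low-citation regions relative to those benchmarked against the world average.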
The document continues:
"1. The ranking of citations based on either citations per author (or faculty) or citations per paper represent two fundamentally different ways of thinking with regards to academic institutions: are the institutions to be viewed as an aggregation of their researchers, or as an aggregation of the papers they have produced? We believe that the correct approach is to base the citations ranking on citations per faculty as has been the practice in the past.
2. We request a revision of the method used for regional modification.
3. We request the disclosure of the raw numerical data used to calculate the citation impact score for the various research fields at each university."
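The distinction drawn in point 1 is not pedantic: the two denominators can rank the same pair of universities in opposite order. A toy example with invented figures makes the point:

```python
# Toy illustration (invented numbers) of why citations per faculty and
# citations per paper are genuinely different measures.

data = {
    # name: (faculty, papers, citations)
    "Univ X": (2000, 10000, 120000),  # large, highly productive
    "Univ Y": (500, 1000, 20000),     # small, selective output
}

for name, (faculty, papers, citations) in data.items():
    print(f"{name}: {citations / faculty:.0f} citations per faculty, "
          f"{citations / papers:.0f} citations per paper")

# Univ X: 60 citations per faculty, but only 12 citations per paper.
# Univ Y: 40 citations per faculty, but 20 citations per paper.
```

On citations per faculty, X (the institution as an aggregation of its researchers) ranks above Y; on citations per paper, Y ranks above X. Which measure a ranker chooses therefore decides which kind of university looks strong.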
I suspect that TR and THE would reply that their methodology identifies pockets of excellence (which for some reason cannot be found anywhere in the Japanese RU11), that the RU11 are just poor losers, and that they are right and QS is wrong.
This question might be resolved by looking at other measures of citations such as those produced by HEEACT, Scimago and ARWU.
It could be that this complaint, if it was sent to TR, was the reason why TR and THE announced that they were changing the regional weighting process this year. If that turns out to be the case, and TR is perceived as changing its methodology to suit powerful vested interests, then we can expect many academic eyebrows to be raised.
If the RU11 are still unhappy, then THE and TR might see a repeat of the demise of the Asiaweek rankings, brought on in part by a mass abstention by Japanese and other universities.