Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Saturday, October 23, 2010
Monday, October 18, 2010
See here for a draft of an article on the THE-QS rankings.
Sunday, October 17, 2010
Times Higher Education and Thomson Reuters have said that they wish to engage and that they will be happy to debate their new rankings methodology. So far we have not seen much sign of a debate although I will admit that perhaps more things were said at the recent seminars in London and Spain than got into print. In particular, they have been rather reticent about defending the citations indicator which gives the whole ranking a very distinctive cast and which is likely to drag down what could have been a promising development in ranking methodology.
First, let me comment on the few attempts to defend this indicator, which accounts for nearly a third of the total weighting and for more in some of the subject rankings. It has been pointed out that David Willetts, British Minister for Universities and Science, has congratulated THE on its new methodology.
“I congratulate THE for reviewing the methodology to produce this new picture of the best in higher education worldwide. It should prompt all of us who care about our universities to see how we can improve the range and quality of the data on offer. Prospective students — in all countries — should have good information to hand when deciding which course to study, and where. With the world to choose from, it is in the interests of universities themselves to publish figures on graduate destinations as well as details of degree programmes.”
Willetts has praised THE for reviewing its methodology. So have many of us, but that is not quite the same as endorsing what has emerged from that review.
Steve Smith, President of Universities UK and Vice-Chancellor of Exeter University is explicit in supporting the new rankings, especially the citations component.
But, as we shall see in a moment, there are serious issues with the robustness of citations as a measure of research impact and, if used inappropriately, they can become indistinguishable from a subjective measure of reputation.
The President of the University of Toronto makes a similar point and praises the new rankings’ reduced emphasis on subjective reputational surveys and refers to the citations (knowledge transfer?) indicator.
It might be argued that this indicator is noteworthy for revealing that some universities possess hitherto unsuspected centres of research excellence. An article by Phil Baty in THE of the 16th of September refers to the most conspicuous case, a remarkably high score for citations by Alexandria University, which according to the THE rankings has had a greater research impact than any university in the world except Caltech, MIT and Princeton. Baty suggests that there is some substance to Alexandria University’s extraordinary score. He refers to Ahmed Zewail, a Nobel prize winner who left Alexandria with a master’s degree some four decades ago. Then he mentions some frequently cited papers by a single author in one journal.
The author in question is Mohamed El Naschie, who writes on mathematical physics and the journals – there are two that should be given the credit for Alexandria’s performance, not one – are Chaos, Solitons and Fractals and the International Journal of Nonlinear Sciences and Numerical Simulation. The first is published by Elsevier and was until recently edited by El Naschie. It has published a large number of papers by El Naschie and these have been cited many times by himself and by some other writers in CSF and IJNSNS.
The second journal is edited by Ji-Huan He of Donghua University in Shanghai, China with El Naschie as co-editor and is published by the Israeli publishing company, Freund Publishing House Ltd of Tel Aviv.
An amusing digression. In the instructions for authors in the journal the title is given as International Journal of Nonlinear Sciences and Numerical Stimulation. This could perhaps be described as a Freundian slip.
Although El Naschie has written a large number of papers and these have been cited many times, his publication and citation record is far from unique. He is not, for example, found in the ISI list of highly cited researchers. His publications and citations were perhaps necessary to push Alexandria into THE’s top 200 universities but they were not enough by themselves. This required a number of flaws in TR’s methodology.
First, TR assigned a citation impact score that compares actual citations of a paper with a benchmark score based on the expected number of citations for a specific subject in a specific year. Mathematics is a field where citations are relatively infrequent and usually occur a few years after publication. Since El Naschie published in a field in which citations are relatively scarce and published quite recently this boosted the impact score of his papers. The reason for using this approach is clear and sensible, to overcome the distorting effects of varying citation practices in different disciplines when comparing individual researchers or departments. But there are problems if this method is used to compare whole universities. A great deal depends on when the cited and citing articles are published and in which subject they were classified by TR.
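A minimal sketch of this kind of normalisation may make the mechanism clearer. The benchmark figures below are invented for illustration; TR’s actual expected-citation tables are proprietary.

```python
# Sketch of field- and year-normalised citation impact. The expected
# counts below are invented for illustration; TR's benchmark tables
# are proprietary.
EXPECTED = {
    # (subject, publication year) -> expected citations per paper
    ("mathematics", 2008): 2.0,
    ("medicine", 2008): 20.0,
}

def normalised_impact(citations, subject, year):
    """Ratio of actual citations to the field/year benchmark."""
    return citations / EXPECTED[(subject, year)]

# Ten citations in a low-citation field count for far more than ten
# citations in a high-citation one.
maths_paper = normalised_impact(10, "mathematics", 2008)   # 5.0
medical_paper = normalised_impact(10, "medicine", 2008)    # 0.5
```

The same ten citations score twenty times higher in the low-citation field, which is sensible when comparing researchers within their disciplines but, as argued above, behaves oddly when averaged over a whole university.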
A question for TR. How are articles classified? Is it possible to influence the category in which they are placed by the use of key words or the wording of the title?
Next, note that TR were measuring average citation impact. A consequence of this is that the publication of large numbers of papers that are cited less frequently than the high fliers could drag down the score. This explains an apparent oddity of the citation scores in the 2010 THE rankings. El Naschie listed nine universities as his affiliation in varying combinations between 2004 and 2008, yet it was only Alexandria that managed to leave the Ivy League and Oxbridge standing in the research impact dust. Recently, El Naschie’s list of affiliations has consisted of Alexandria, Cairo, Frankfurt University and Shanghai Jiao Tong University.
What happened was quite simply that all the others were producing so many papers that El Naschie’s made little or no difference. For once, it would be quite correct if El Naschie announced that he could not have done it without the support of his colleagues. Alexandria University owes its success not only to El Naschie and his citers but also to all those researchers who refrained from submitting articles to ISI–indexed journals or conference proceedings.
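The arithmetic is easy to illustrate with invented figures: under an average-based score, a handful of hyper-cited papers dominates a short publication list, while a large output dilutes them.

```python
# Why an average-based impact score favours small producers (the
# figures are invented): a few hyper-cited papers dominate a short
# publication list, while a large output dilutes them.
def mean_impact(paper_scores):
    """Average of per-paper normalised impact scores."""
    return sum(paper_scores) / len(paper_scores)

small_producer = [50.0, 40.0, 1.0]        # few papers, two hyper-cited
large_producer = [2.0] * 500 + [50.0]     # many ordinary papers plus one star

# The small producer's average is roughly 30; the large producer's,
# despite containing the same star paper, is barely above 2.
```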
TR have some explaining to do here. If an author lists more than one affiliation, are they all counted? Or are fractions awarded for each paper? Is there any limit on the number of affiliations that an author may have? I think that it is two but would welcome clarification.
As for the claim that Alexandria is strong in research, a quick look at the Scimago rankings is enough to dispose of that. It is ranked 1,047th in the 2010 rankings, which admittedly include many non-university organizations, for total publications over a decade. Also, one must ask how much of El Naschie’s writing was actually done in Alexandria, seeing that he had eight other affiliations between 2004 and 2008.
It has to be said that even if El Naschie is, as has been claimed in comments on Phil’s THE article and elsewhere, one of the most original thinkers of our time, it is strange that THE and TR should use a method that totally undermines their claim that the new methodology is based on evidence rather than reputation. By giving any sort of credence to the Alexandria score, THE are asking us to believe that Alexandria is strong in research because precisely one writer is highly reputed by himself and a few others. Incidentally, will TR tell us what score Alexandria got in the research reputation survey?
I am not qualified to comment on the scientific merits of El Naschie’s work. At the moment it appears, judging from the comments in various physics blogs, that among physicists and mathematicians there are more detractors than supporters. There are also few documented signs of conventional academic merit in recent years, such as permanent full-time appointments or research grants. None of his papers between 2004 and 2008 in ISI-indexed journals, for example, apparently received external funding. His affiliations, where documented, turn out to be honorary, advisory or visiting. To be fair, readers might wish to visit El Naschie’s site. I will also publish any comments of a non-libellous nature that support or dispute the scientific merits of his writings.
Incidentally, it is unlikely that Alexandria’s score of 19.3 for internationalisation was faked. TR use a logarithm. If there were zero international staff and students a university would get a score of 1 and a score of 19.3 actually represents a small percentage. On the other hand, I do wonder whether Alexandria counted those students in the branch campuses in Lebanon, Sudan and Chad.
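One plausible shape for such a scale is a logarithmic mapping; this is an assumption on my part, since TR do not publish the exact transform, but the point is the shape of the curve: small percentages already earn visible scores, and the top of the scale is compressed.

```python
import math

# One plausible logarithmic indicator scale (an assumed form; TR do
# not publish the exact transform). Small percentages already earn
# visible scores, and the top of the scale is compressed.
def log_score(pct, max_pct=100.0):
    """Map a percentage (0-100) onto a 0-100 logarithmic scale."""
    return 100.0 * math.log1p(pct) / math.log1p(max_pct)

# On this particular curve, around 2% international staff and students
# already scores in the low twenties.
```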
Finally, TR did not take the very simple and obvious step of not counting individual self-citations. Had they done so, they would have saved everybody, including themselves a lot of trouble. It would have been even better if they had excluded intra-institutional and intra-journal citation. See here for the role of citations among the editorial board of IJNSNS in creating an extraordinarily high Journal Impact Factor.
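The filter proposed here is straightforward to sketch (the data model and names are illustrative): before counting, drop any citing paper that shares an author with the cited paper. Extending the same shared-attribute test to institutions or journals would exclude intra-institutional and intra-journal citation in the same way.

```python
# Sketch of the proposed filter (the data model is illustrative):
# drop any citing paper that shares an author with the cited paper
# before counting citations.
def non_self_citations(cited_authors, citing_papers):
    """Count citing papers sharing no author with the cited paper."""
    cited = set(cited_authors)
    return sum(1 for authors in citing_papers if cited.isdisjoint(authors))

citing = [
    {"El Naschie"},           # self-citation: excluded
    {"He", "El Naschie"},     # co-authored self-citation: excluded
    {"Smith"},                # independent citation: counted
]
# Of the three citing papers above, only one survives the filter.
```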
THE and TR have done everyone a great service by highlighting the corrosive effect of self citation on the citations tracking industry. It has become apparent that there are enormous variations in the prevalence of self citation in its various forms and that these have a strong influence on the citation impact score.
Professor Dirk Van Damme is reported to have said at the London seminar that the world’s elite universities were facing a challenge from universities in the bottom half of the top 200. If this were the case then THE could perhaps claim that their innovative methodology had uncovered reserves of talent ignored by previous rankings. But what exactly was the nature of the challenge? It seems that it was the efficiency with which the challengers turned research income into citations. And how did they do that?
I have taken the simple step of dividing the score for citations by the score for the research indicator (which includes research income) and then sorting the resulting values. The top ten are Alexandria, Hong Kong Baptist University, Barcelona, Bilkent, William and Mary, ENS de Lyon, Royal Holloway, Pompeu Fabra, University College Dublin, the University of Adelaide.
Seriously, these are a threat to the world’s elite?
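The calculation behind that list can be sketched as follows; the scores are invented placeholders, not the actual THE indicator values.

```python
# Divide each university's citations score by its research score and
# sort descending. The scores here are invented placeholders, not the
# actual THE indicator values.
scores = {
    # name: (citations score, research score) -- illustrative only
    "Alexandria": (99.8, 10.0),
    "Hong Kong Baptist": (85.0, 15.0),
    "Caltech": (99.9, 98.0),
}

ratio_ranked = sorted(scores,
                      key=lambda u: scores[u][0] / scores[u][1],
                      reverse=True)
# A university with a modest research score but a huge citations score
# floats straight to the top; a university strong on both sinks.
```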
The high scores for citations relative to research were the result of a large number of citations or a small number of total publications or both. It is of interest to note that in some cases the number of citations was the result of assiduous self-citation.
This section of the post contained comments about comparative rates of self citation among various universities. The method used was not correct and I am recalculating.
As noted already, using the THE iPad app to change the importance attached to various indicators can produce very different results. This is a list of universities that rise more than a hundred places when the citations indicator is set to ‘not important’. They have suffered perhaps because of a lack of super-cited papers, perhaps also because they just produced too many papers.
Loughborough
Kyushu
Sung Kyun Kwan
Texas A and M
Surrey
Shanghai Jiao Tong University
Delft University of Technology
National Chiao Tung University (Taiwan)
Royal Institute of Technology Sweden
Tokushima
Hokkaido
Here is a list of universities that fall more than 100 places when the citations indicator is set to ‘not important’. They have benefitted from a few highly cited papers or low publication counts or a combination of the two.
Boston College
University of California Santa Cruz
Royal Holloway, University of London
Pompeu Fabra
Bilkent
Kent State University
Hong Kong Baptist University
Alexandria
Barcelona
Victoria University Wellington
Tokyo Metropolitan University
University of Warsaw
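The re-weighting exercise behind these lists can be sketched like this; the indicator scores and the universities are invented, but the weight of 32.5% for citations is the one used in the overall rankings.

```python
# Sketch of the re-weighting exercise (all scores are invented, not
# THE's actual figures): recompute the overall score with the citations
# weight set to zero and see who moves.
def overall(scores, w_citations):
    """Weighted overall score; the rest of the weight goes to the
    combined non-citation indicators."""
    w_other = 1.0 - w_citations
    return {u: w_citations * c + w_other * o for u, (c, o) in scores.items()}

def rank(overall_scores):
    """Map each university to its 1-based rank, best first."""
    ordered = sorted(overall_scores, key=overall_scores.get, reverse=True)
    return {u: i + 1 for i, u in enumerate(ordered)}

scores = {
    # name: (citations score, all other indicators combined) -- invented
    "U. Few-Papers": (99.0, 45.0),
    "U. Many-Papers": (40.0, 70.0),
    "U. Balanced": (60.0, 60.0),
}

with_citations = rank(overall(scores, 0.325))     # citations at full weight
without_citations = rank(overall(scores, 0.0))    # citations 'not important'
# U. Few-Papers, carried by its citations score, falls from first to
# last once the indicator is switched off.
```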
There are many others that rise or fall seventy, eighty, ninety places when citations are taken out of the equation. This is not a case of a few anomalies. The whole indicator is one big anomaly.
Earlier, Jonathan Adams, in a column that has attracted one comment, said:
"Disciplinary diversity is an important factor, as is international diversity. How would you show the emerging excellence of a really good university in a less well known country such as Indonesia? This is where we would be most controversial, and most at risk, in using the logic of field-normalisation to add a small weighting in favour of relatively good institutions in countries with small research communities. Some may feel that we got that one only partially right."
The rankings do not include universities in Indonesia, really good or otherwise. The problem is with good, mediocre and not very good universities in the US, UK, Spain, Turkey, Egypt, New Zealand, Poland and elsewhere. It is a huge weighting, not a small one; the universities concerned range from relatively good to relatively bad; in one case the research community seems to consist of one person; and many are convinced that TR got that one totally wrong.
Indonesia may be less well known to TR but it is very well known to itself and neighbouring countries.
I will publish any comments by anyone who wishes to defend the citation indicator of the new rankings. Here are some questions they might wish to consider.
Was it a good idea to give such a heavy weighting to research impact: 32.5% in the overall rankings and 37.5% in at least two subject rankings? Is it possible that commercial considerations, citations data being a lucrative business for TR, had something to do with it?
Are citations such a robust indicator? Is there not enough evidence now to suggest that manipulation of citations, including self-citation, intra-institutional citation and intra-journal citation, is so pervasive that the robustness of this measure is very slight?
Since there are several ways to measure research impact, would it not have been a good idea to have used several methods? After all, Leiden University has several different ways of assessing impact. Why use only one?
Why set the threshold for inclusion so low at 50 papers per year?
Monday, October 11, 2010
The Chronicle of Higher Education reports on a meeting of the International Rankings Expert Group's (IREG) Observatory on Academic Ranking and Excellence.
The Organisation has set up a mechanism to audit the various university rankings.
"The audit project, which he [Gero Federkeil] is helping to manage, will be based closely on IREG's principles, which emphasize clarity and openness in the purposes and goals of rankings, the design and weighting of indicators, the collection and processing of data, and the presentation of results.
"We all say that rankings should aim at delivering transparency about higher-education institutions, but we think there should be transparency about rankings too," Mr. Federkeil said. The audit process could eventually give rise to an IREG quality label, which would amount to an identification of trustworthy rankings, thereby enhancing the credibility of rankings and improving their quality, Mr. Federkeil said.
At the Berlin meeting last week, Mr. Federkeil and Ying Cheng, of the Center for World-Class Universities at Shanghai Jiao Tong University, which produces the best-known and most influential global ranking of universities, outlined the proposed methodology and procedure for the audit. The IREG executive committee will nominate audit teams consisting of three to five people. The chair of each team must not have any formal affiliation with a ranking organization, and at least one member of the audit team must be a member of the IREG executive committee. Audits will be based on self-reported data as well as possible on-site visits, and each full audit is expected to take about five months to complete."
The executive committee of IREG includes Liu Nian Cai from Shanghai Rankings Consultancy, Bob Morse from US News & World Report and Gero Federkeil from CHE, the German ranking body.
Members of the Observatory include HEEACT (Taiwan), the Kazakhstan and Slovak ranking agencies and QS Intelligence Agency.
Would anybody like to guess who will be the first to be audited?