
Sunday, March 18, 2012

The THE Reputation Rankings

Times Higher Education has produced its third reputation ranking, based on a survey of researchers who have published in ISI-indexed journals. The top ten are:

1.  Harvard
2.  MIT
3.  Cambridge
4.  Stanford
5.  UC Berkeley
6.  Oxford
7.  Princeton
8.  Tokyo
9.  UCLA
10. Yale

This does not look all that dissimilar to the academic survey indicator in the  2011 QS World University Rankings. The top ten there is as follows:

1.  Harvard
2.  Cambridge
3.  Oxford
4.  UC Berkeley
5.  Stanford
6.  MIT
7.  Tokyo
8.  UCLA
9. Princeton
10. Yale

Once we step outside the top ten there are some differences. The National University of Singapore is 23rd in these rankings but 11th in the QS academic survey, possibly because QS still has respondents on its list from the time when it used World Scientific, a Singapore-based publishing company.

The Middle East Technical University in Ankara is in the top 100 (in the QS academic survey it is not even in the top 300), sharing the 90-100 band with Ecole Polytechnique, Bristol and Rutgers. At first glance this seems surprising, since its research output is exceeded by that of other universities in the Middle East. But the technical excellence of the University Ranking by Academic Performance, which is produced at METU, suggests that its research might be of high quality.

Wednesday, March 06, 2013

The THE Reputation Rankings

Times Higher Education has published its reputation rankings, based on data collected for the 2012 World University Rankings.

They are not very interesting. Which is exactly what they should be. When rankings show massive changes from one year to another a certain amount of scepticism is required.

The same six, Harvard, MIT, Stanford, Berkeley, Oxford and Cambridge, are well ahead of everybody else, as they were in 2012 and in 2011.

Taking a quick look at the top fifty, there is little movement between 2011 and 2013. Four universities, from the US, Japan, the Netherlands and Germany, have dropped out. In their place there is one more each from Korea and the UK and two more from Australia.

I was under the impression that Australian universities were facing savage cuts in research funding and were going to be deserted by international students and researchers.

Maybe it is the other universities that are being cut. Or maybe a bit of bloodletting is good for the health.

I also noticed that the number of respondents went down a bit in 2012. It could be that the academic world is beginning to suffer from ranking fatigue.

Thursday, October 02, 2014

Which universities have the greatest research influence?

Times Higher Education (THE) claims that its Citations: Research Influence indicator, prepared by Thomson Reuters (TR), is the flagship of its World University Rankings. It is strange, then, that the magazine has never published a research influence ranking, although that ought to be just as interesting as its Young Universities Ranking, Reputation Rankings or gender index.

So let's have a look at the top 25 universities in the world this year ranked for research influence, measured by field- and year-normalised citations, as calculated by Thomson Reuters.

Santa Cruz and Tokyo Metropolitan have the same impact as MIT. Federico Santa Maria Technical University is ahead of Princeton. Florida Institute of Technology beats Harvard. Bogazici University and Scuola Normale Superiore do better than Oxford and Cambridge.

Are they serious?

Apparently. There will be an explanation in the next post. Meanwhile go and check if you don't believe me. And let me know if there's any dancing in the streets of Valparaiso, Pisa, Golden or Istanbul.


Rank and Score for Citations: Research Influence 2014-15 THE World Rankings

Rank University Score
1= University of California Santa Cruz 100
1= MIT 100
1= Tokyo Metropolitan University 100
4 Rice University 99.9
5= Caltech 99.7
5= Federico Santa Maria Technical University, Chile  99.7
7 Princeton University 99.6
8= Florida Institute of Technology 99.2
8= University of California Santa Barbara 99.2
10= Stanford University 99.1
10= University of California Berkeley 99.1
12= Harvard University 98.9
12= Royal Holloway University of London 98.9
14 University of Colorado Boulder  97.4
15 University of Chicago 97.3
16= Washington University in St Louis 97.1
16= Colorado School of Mines 97.1
18 Northwestern University 96.9
19 Bogazici University, Turkey  96.8
20 Duke University  96.6
21= Scuola Normale Superiore Pisa, Italy 96.4
21= University of California San Diego 96.4
23 Boston College 95.9
24 Oxford University 95.5
25= Brandeis University  95.3
25= UCLA 95.3

Tuesday, February 20, 2018

Is Erdogan Destroying Turkish Universities?


An article by Andrew Wilks in The National claims that the position of Turkish universities in the Times Higher Education (THE) world rankings, especially that of Middle East Technical University (METU), has been declining as a result of the crackdown by President Erdogan following the unsuccessful coup of July 2016.

He claims that Turkish universities are now sliding down the international rankings and that this is because of the decline of academic freedom, the dismissal or emigration of many academics and a decline in the nation's academic reputation.


'Turkish universities were once seen as a benchmark of the country’s progress, steadily climbing international rankings to compete with the world’s elite.
But since the introduction of emergency powers following a failed coup against President Recep Tayyip Erdogan in July 2016, the government’s grip on academic freedom has tightened.
A slide in the nation's academic reputation is now indisputable. Three years ago, six Turkish institutions [actually five] were in the Times Higher Education’s global top 300. Ankara's Middle East Technical University was ranked 85th. Now, with Oxford and Cambridge leading the standings, no Turkish university sits in the top 300.
Experts say at least part of the reason is that since the coup attempt more than 5,800 academics have been dismissed from their jobs. Mr Erdogan has also increased his leeway in selecting university rectors.
Gulcin Ozkan, formerly of Middle East Technical University but now teaching economics at York University in Britain, said the wave of dismissals and arrests has "forced some of the best brains out of the country".'
I have no great regard for Erdogan but in this case he is entirely innocent.

There has been a massive decline in METU's position in the THE rankings since 2014 but that is entirely the fault of THE's methodology. 

In the world rankings of 2014-15, published in 2014, METU was 85th in the world, with a whopping score of 92.0 for citations, an indicator that carries an official weighting of 30%. That score was the result of METU's participation in the Large Hadron Collider (LHC) project, which produces papers with hundreds or thousands of authors and hundreds or thousands of citations. In 2014 THE counted every single contributor as receiving all of the citations. Added to this was a regional modification that boosted the scores of universities located in countries with a low citation impact.

In 2015, THE revamped its methodology by not counting the citations to these mega-papers and by applying the regional modification to only half of the research impact score.

As a result, in the 2015-16 rankings METU crashed to the 501-600 band, with a score for citations of only 28.8. Other Turkish universities had also been involved in the LHC project and benefited from the citations bonus and they too plummeted. There was now only one Turkish university in the THE top 300.
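
To see how the arithmetic works, here is a minimal sketch with entirely invented numbers (the real indicator is field- and year-normalised and benchmarked by Thomson Reuters, which this ignores):

```python
# Toy illustration, invented numbers: a few hyper-cited kilo-author papers,
# each credited in full to every contributor, can dominate the citation
# average of a university with a modest output. Excluding them, as THE did
# from 2015, collapses that average.

ordinary_papers = 2000                      # hypothetical ordinary output
ordinary_citations = 3 * ordinary_papers    # about 3 citations each

mega_papers = 40                            # hypothetical LHC-style papers
mega_citations = 2000 * mega_papers         # about 2,000 citations each

with_mega = (ordinary_citations + mega_citations) / (ordinary_papers + mega_papers)
without_mega = ordinary_citations / ordinary_papers

print(round(with_mega, 1))     # 42.2 citations per paper
print(round(without_mega, 1))  # 3.0 citations per paper
```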

The exalted position of METU in the THE 2014-15 rankings was the result of THE's odd methodology, and its spectacular tumble was the result of changing that methodology. In other popular rankings METU seems to be slipping a bit, but it never goes as high as it did in THE in 2014 or as low as in 2015.

In the QS world rankings for 2014-15 METU was in the 401-410 band and by 2017-18 it had fallen to 471-480.

The Russian Round University Rankings had it at 375 in 2014 and 407 in 2017. The US News Best Global Universities placed it 314th last year.

Erdogan had nothing to do with it.

Saturday, September 07, 2019

Finer and finer rankings prove anything you want

If you take a single metric from a single ranking and do a bit of slicing by country, region, subject, field and/or age there is a good chance that you can prove almost anything, for example that the University of the Philippines is a world beater for medical research. Here is another example from the Financial Times.

An article by John O'Hagan, Emeritus Professor at Trinity College Dublin, claims that German universities are doing well for research impact in the QS economics world rankings. Supposedly, "no German university appears in the top 50 economics departments in the world using the overall QS rankings. However, when just research impact is used, the picture changes dramatically, with three German universities, Bonn, Mannheim and Munich, in the top 50, all above Cambridge and Oxford on this ranking."

This is a response to Frederick Studemann's claim that German universities are about to move up the rankings. O'Hagan is saying that is already happening.

I am not sure what this is about. I had a look at the most recent QS economics rankings and found that in fact Mannheim is in the top fifty overall for that subject. The QS subject rankings do not have a research impact indicator. They have academic reputation, citations per paper, and h-index, which might be considered proxies for research impact, but on none of these are all three universities in the top fifty. Two of the three are in the top fifty for academic reputation, one for citations per paper and two for h-index.

So it seems that the article isn't referring to the QS economics subject ranking. Maybe it is the overall ranking that Professor O'Hagan is thinking of? There are no German universities in the overall top fifty there, but there are also none in the top fifty for the citations per faculty indicator.

I will assume that the article is based on an actual ranking somewhere, maybe an earlier edition of the QS subject rankings or the THE world rankings or from one of the many spin-offs. 

But it seems a stretch to talk about German universities moving up the rankings just because they did well in one metric in one of the 40 plus international rankings in one year.


Tuesday, August 13, 2019

University of the Philippines beats Oxford, Cambridge, Yale, Harvard, Tsinghua, Peking etc etc

Rankings can do some good sometimes. They can also do a lot of harm and that harm is multiplied when they are sliced more and more thinly to produce rankings by age, by size, by mission, by region, by indicator, by subject. When this happens minor defects in the overall rankings are amplified.

That would not be so bad if universities, political leaders and the media were to treat the tables and the graphs with a healthy scepticism. Unfortunately, they treat the rankings, especially those of THE, with obsequious deference as long as they are provided with occasional bits of publicity fodder.

Recently, the Philippine media have proclaimed that the University of the Philippines (UP) has beaten Harvard, Oxford and Stanford for health research citations. It was seventh in the THE Clinical, Pre-clinical and Health category, behind Tokyo Metropolitan University, Auckland University of Technology, Metropolitan Autonomous University Mexico, Jordan University of Science and Technology, the University of Canberra and Anglia Ruskin University.

The Inquirer is very helpful and provides an explanation from the Philippine Council for Health Research and Development that citation scores “indicate the number of times a research has been cited in other research outputs” and that the score "serves as an indicator of the impact or influence of a research project which other researchers use as reference from which they can build on succeeding breakthroughs or innovations.” 

Fair enough, but how can UP, which has a miserable score of 13.4 for research in the same subject ranking, have such a massive research influence? How can it have an extremely low output of papers, a poor reputation for research and very little funding and still be a world beater for research impact?

It is in fact nothing to do with UP, nothing to do with everyone working as a team, decisive leadership or recruiting international talent.

It is the result of a bizarre and ludicrous methodology. First, THE does not use fractional counting for papers with fewer than a thousand authors. UP, along with many other universities, has taken part in the Global Burden of Disease project funded by the Bill and Melinda Gates Foundation. This has produced a succession of papers, many of them in the Lancet, with hundreds of contributing institutions and researchers, whose names are all listed as authors, and with hundreds or thousands of citations. As long as the number of authors does not reach 1,000, each author is counted as though he or she were the recipient of all the citations. So UP gets the credit for a massive number of citations, which is then divided by a relatively small number of papers.

Why not just use fractional counting, dividing the citations among the contributors or the institutions, as Leiden Ranking does? Probably because it might add a little to costs, or perhaps because THE doesn't like to admit it made a mistake.

Then we have the country bonus or regional modification, applied to half the indicator, which increases the score for universities in countries with low impact.
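
For what it is worth, here is a minimal sketch of the difference between the two counting methods, again with invented numbers (Leiden Ranking's actual implementation has more refinements):

```python
# One hypothetical mega-paper: full counting versus fractional counting.

citations = 3000       # citations received by the paper
institutions = 300     # contributing institutions listed on it

# Full counting (THE's approach for papers with fewer than 1,000 authors):
# every institution is credited with all of the citations.
full_credit = citations                       # 3000 per institution

# Fractional counting (Leiden-style): the credit is shared out.
fractional_credit = citations / institutions  # 10 per institution

print(full_credit, fractional_credit)
```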

The result of all this is that UP, surrounded by low scoring universities, not producing very much research but with a role in a citation rich mega project, gets a score for this indicator that puts it ahead of the Ivy League, the Group of Eight and the leading universities of East Asia.

If nobody took this seriously, then no great harm would be done. Unfortunately it seems that large numbers of academics, bureaucrats and journalists do take the THE rankings very seriously or pretend to do so in public. 

And so committee addicts get bonuses and promotions, talented researchers spend their days in unending ranking-inspired transformational seminars, funds go to the mediocre and the sub-mediocre, students and stakeholders base their careers on misleading data, and the problems of higher education are covered up or ignored.



Saturday, December 16, 2006

Open Letter to the Times Higher Education Supplement

This letter has been sent to THES

Dear John O’Leary
The Times Higher Education Supplement (THES) world university rankings have acquired remarkable influence in a very short period. It has, for example, become very common for institutions to include their ranks in advertising or on web sites. It is also likely that many decisions to apply for university courses are now based on these rankings.

Furthermore, careers of prominent administrators have suffered or have been endangered because of a fall in the rankings. A recent example is that of the president of Yonsei University, Korea, who has been criticised for the decline of that university in the THES rankings compared to Korea University (1) although it still does better on the Shanghai Jiao Tong University index (2). Ironically, the President of Korea University seems to have got into trouble for trying too hard and has been attacked for changes designed to promote the international standing, and therefore the position in the rankings, of the university. (3) Another case is the Vice-Chancellor of Universiti Malaya, Malaysia, whose departure is widely believed to have been linked to a fall in the rankings between 2004 and 2005, which turned out to be the result of the rectification of a research error.

In many countries, administrative decisions and policies are shaped by the perception of their potential effect on places in the rankings. Universities are stepping up efforts to recruit international students or to pressure staff to produce more citable research. Also, ranking scores are used as ammunition for or against administrative reforms. Recently, we saw a claim that Oxford's performance renders any proposed administrative change unnecessary (4).

It would then be unfortunate for THES to produce data that is in any way misleading, incomplete or affected by errors. I note that the publishers of the forthcoming book that will include data on 500+ universities include a comment by Gordon Gee, Chancellor of Vanderbilt University, that the THES rankings are “the gold standard” of university evaluation (5). I also note that on the website of your consultants, QS Quacquarelli Symonds, readers are told that your index is the best (6).

It is therefore very desirable that the THES rankings should be as valid and as reliable as possible and that they should adhere to standard social science research procedures. We should not expect errors that affect the standing of institutions and mislead students, teachers, researchers, administrators and the general public.

I would therefore like to ask a few questions concerning three components of the rankings that add up to 65% of the overall evaluation.

Faculty-student ratio
In 2005 there were a number of obvious, although apparently universally ignored, errors in the faculty-student ratio section. These include ascribing inflated faculty numbers to Ecole Polytechnique in Paris, Ecole Normale Superieure in Paris, Ecole Polytechnique Federale in Lausanne, Peking (Beijing) University and Duke University, USA. Thus, Ecole Polytechnique was reported on the site of QS Quacquarelli Symonds (7), your consultants, to have 1,900 faculty and 2,468 students, a ratio of 1.30 students per faculty; Ecole Normale Superieure 900 faculty and 1,800 students, a ratio of two students per faculty; Ecole Polytechnique Federale 3,210 faculty and 6,530 students, a ratio of 2.03; Peking University 15,558 faculty and 76,572 students, a ratio of 4.92; and Duke 6,244 faculty and 12,223 students, a ratio of 1.96.

In 2006 the worst errors seem to have been corrected although I have not noticed any acknowledgement that any error had occurred or explanation that dramatic fluctuations in the faculty-student ratio or the overall score were not the result of any achievement or failing on the part of the universities concerned.

However, there still appear to be problems. I will deal with the case of Duke University, which this year is supposed to have the best score for faculty-student ratio. In 2005 Duke, according to the QS Topgraduates site, had, as I have just noted, 6,244 faculty and 12,223 students, giving it a ratio of about one faculty to 2 students. This is quite implausible and most probably resulted from a data entry error with an assistant or intern confusing the number of undergraduates listed on the Duke site, 6,244 in the fall of 2005, with the number of faculty. (8)

This year the data provided are not so implausible but they are still highly problematical. In 2006 Duke according to QS has 11,106 students but the Duke site refers to 13,088. True, the site may be in need of updating but it is difficult to believe that a university could reduce its total enrollment by about a sixth in the space of a year. Also, the QS site would have us believe that in 2006 Duke has 3,192 faculty members. But the Duke site refers to 1,595 tenure and tenure track faculty. Even if you count other faculty, including research professors, clinical professors and medical associates the total of 2,518 is still much less than the QS figure. I cannot see how QS could arrive at such a low figure for students and such a high figure for faculty. Counting part timers would not make up the difference, even if this were a legitimate procedure, since, according to the US News & World Report (America’s Best Colleges 2007 Edition), only three percent of Duke faculty are part time. My incredulity is increased by the surprise expressed by a senior Duke administrator (9) and by Duke's being surpassed by several other US institutions on this measure, according to the USNWR.

There are of course genuine problems about how to calculate this measure, including the question of part-time and temporary staff, visiting professors, research staff and so on. However, it is rather difficult to see how any consistently applied conventions could have produced your data for Duke.

I am afraid that I cannot help but wonder whether what happened was that data for 2005 and 2006 were entered in adjacent rows in a database for all three years and that the top score of 100 for Ecole Polytechnique in 2005 was entered into the data for Duke in 2006 – Duke was immediately below the Ecole in the 2005 rankings – and the numbers of faculty and students worked out backwards. I hope that this is not the case.

-- Could you please indicate the procedures that were employed for counting part-timers, visiting lecturers, research faculty and so on?
-- Could you also indicate when, how and from whom the figures for faculty and students at Duke were obtained?
-- I would like to point out that if the faculty-student ratio for Duke is incorrect then so are all the scores for this component, since they are indexed against the top scorer, and therefore all the overall scores are affected as well. Also, if the ratio for Duke is based on an incorrect figure for faculty, then Duke's score for citations per faculty is also incorrect. If the Duke score does turn out to be incorrect, would you consider recalculating the rankings and issuing a revised and corrected version?


International faculty
This year the university with the top score for international faculty is Macquarie, in Australia. On this measure it has made a giant leap forward from 55 to 100 (10).

This is not, I admit, totally unbelievable. THES has noted that in 2004 and 2005 it was not possible to get data for Australian universities about international faculty. The figures for Australian universities for these years therefore simply represent an estimate for Australian universities as a whole with every Australian university getting the same, or almost the same, score. This year the scores are different suggesting that data has now been obtained for specific universities.

I would like to digress a little here. On the QS Topgraduate website the data for 2005 gives the number of international faculty at each Australian university. I suspect that most visitors to the site would assume that these represent authentic data and not an estimate derived from applying a percentage to the total number of faculty. The failure to indicate that these data are estimates is perhaps a little misleading.

Also, I note that in the 2005 rankings the international faculty score for the Australian National University is 52, for Monash 54, for Curtin University of Technology 54 and for the University of Technology Sydney 33. For the other thirteen Australian and New Zealand universities it is 53. It is most unlikely that if data for these four universities were not estimates they would all differ from the general Australasian score by just one digit. It is likely then that in four out of seventeen cases there have been data entry errors or rounding errors. This suggests that it is possible that there have been other errors, perhaps more serious. The probability that errors have occurred is also increased by the claim, uncorrected for several weeks at the time of writing, on the QS Topuniversities site that in 2006 190,000 e-mails were sent out for the peer review.

This year the Australian and New Zealand universities have different scores for international faculty. I am wondering how they were obtained. I have spent several hours scouring the Internet, including annual reports and academic papers, but have been unable to find any information about the numbers of international faculty in any Australian university.

-- Can you please describe how you obtained this information? Was it from verifiable administrative or government sources? It is crucially important that the information for Macquarie is correct because if not then, once again, all the scores for this section are wrong.

Peer Review
This is not really a peer review in the conventional academic sense but I will use the term to avoid distracting arguments. My first concern with this section is that the results are wildly at variance with data that you yourselves have provided and with data from other sources. East Asian and Australian and some European universities do spectacularly better on the peer review, either overall or in specific disciplinary groups, than they do on any other criteria. I shall, first of all, look at Peking University (which you usually call Beijing University) and the Australian National University (ANU).

According to your rankings, Peking is in 2006 the 14th best university in the world (11). It is 11th on the general peer review, which according to your consultants explicitly assesses research accomplishment, and twelfth for science, twentieth for technology, eighth for biomedicine, 17th for social science and tenth for arts and humanities.

This is impressive, all the more so because it appears to be contradicted by the data provided by THES itself. On citations per paper Peking is 77th for science and 76th for technology. This measure is an indicator of how a research paper is regarded by other researchers. One that is frequently cited has aroused the interest of other researchers. It is difficult to see how Peking University could be so highly regarded when its research has such a modest impact. For biomedicine and social sciences Peking did not even do enough research for the citations to be counted.

If we compare overall research achievements with the peer review we find some extraordinary contrasts. Peking does much better on the peer review than California Institute of Technology (Caltech), with a score of 70 to 53 but for citations per faculty Peking’s score is only 2 compared to 100.

We find similar contrasts when we look at ANU. It was 16th overall and had an outstanding score on the peer review, ranking 7th on this criterion. It was also 16th for science, 24th for technology, 26th for biomedicine, 6th for social science and 6th for arts and humanities.

However, the scores for citations per paper are distinctly less impressive. On this measure, ANU ranks 35th for science, 62nd for technology and 56th for social science. It does not produce enough research to be counted for biomedicine.

Like Peking, ANU does much better than Caltech on the peer review with a score of 72 but its research record is less distinguished with a score of 13.

I should also like to look at the relative position of Cambridge and Harvard. According to the peer review Cambridge is more highly regarded than Harvard. Not only that, but its advantage increased appreciably in 2006. But Cambridge lags behind Harvard on other criteria, in particular citations per faculty and citations per paper in specific disciplinary groups. Cambridge is also decidedly inferior to Harvard and a few other US universities on most components of the Shanghai Jiao Tong index (12).

How can a university that has such an outstanding reputation perform so consistently less well on every other measure? Moreover, how can its reputation improve so dramatically in the course of two years?

I see no alternative but to conclude that much of the remarkable performance of Peking University, ANU and Cambridge is nothing more than an artifact of the research design. If you assign one third of your survey to Europe and one third to Asia on economic rather than academic grounds and then allow or encourage respondents to nominate universities in those areas, then you are going to have large numbers of universities nominated simply because they are the best of a mediocre bunch. Is ANU really the sixth best university in the world for social science and Peking the tenth best for arts and humanities, or is it just that there are so few competitors in those disciplines in their regions?

There may be more. The performance on the peer review of Australian and Chinese universities suggests that a disproportionate number of e-mails were sent to and received from these places even within the Asia-Pacific region. The remarkable improvement of Cambridge between 2004 and 2006 also suggests that a disproportionate number of responses were received from Europe or the UK in 2006 compared to 2005 and 2004.

Perhaps there are other explanations for the discrepancy between the peer review scores for these universities and their performance on other measures. One is that citation counts favour English speaking researchers and universities but the peer review does not. This might explain the scores of Peking University but not Cambridge and ANU. Perhaps, Cambridge has a fine reputation based on past glories but this would not apply to ANU and why should there be such a wave of nostalgia sweeping the academic world between 2004 and 2006? Perhaps citation counts favour the natural sciences and do not reflect accomplishments in the humanities but the imbalances here seem to apply across the board in all disciplines.

There are also references to some very suspicious procedures. These include soliciting more responses to get more universities from certain areas in 2004. In 2006, there is a reference to weighting responses from certain regions. Also puzzling is the remarkable closing of the gap between high and low scoring institutions between 2004 and 2005. Thus in 2004 the mean peer review score of all universities in the top 200 was 105.69 compared to a top score of 665, while in 2005 it was 32.82 compared to a top score of 100.

I would therefore like to ask these questions.

-- Can you indicate the university affiliation of your respondents in 2004, 2005 and 2006?
-- What was the exact question asked in each year?
-- How exactly were the respondents selected?
-- Were any precautions taken to ensure that those and only those to whom it was sent completed the survey?
-- How do you explain the general inflation of peer review scores between 2004 and 2005?
-- What exactly was the weighting given to certain regions in 2006 and to whom exactly was it given?
-- Would you consider publishing raw data to show the number of nominations that universities received from outside their regions and therefore the genuine extent of their international reputations?

The reputation of the THES rankings would be enormously increased if there were satisfactory answers to these questions. Even if errors have occurred it would surely be to THES’s long-term advantage to admit and to correct them.

Yours sincerely
Richard Holmes
Malaysia


Notes
(1) http://times.hankooki.com/lpage/nation/200611/kt2006110620382111990.htm
(2) http://ed.sjtu.edu.cn/ranking.htm
(3) http://english.chosun.com/w21data/html/news/200611/200611150020.html
(4) http://www.timesonline.co.uk/article/0,,3284-2452314,00.html
(5) http://www.blackwellpublishing.com/more_reviews.asp?ref=9781405163125&site=1
(6) http://www.topuniversities.com/worlduniversityrankings/2006/faqs/
(7) www.topgraduate.com
(8) http://www.dukenews.duke.edu/resources/quickfacts.html
(9) www.dukechronicle.com
(10) www.thes.co.uk
(11) www.thes.co.uk
(12) http://ed.sjtu.edu.cn/ranking.htm

Thursday, January 28, 2010

Opinion Surveys in University Rankings

In this week's Times Higher Education, Phil Baty discusses the role of reputational surveys in university rankings. It was a distinctive feature of the THE-QS rankings that they devoted 40% of the weighting to a survey of academic opinion about the research excellence of universities. Baty points out that "The reputation survey used in the now-defunct Times Higher Education-QS World University Rankings was one of its most controversial elements: a survey of a tiny number of academics should not determine 40 per cent of a university's score".


It was not so much that a tiny number of academics was surveyed but that a tiny number responded and that this (relatively) tiny number was heavily biased towards particular countries and regions. A very obvious effect of the survey was to boost the position of Oxford and Cambridge well beyond anything they would have attained on indicators based on other more objective factors.

Whether THE can produce a better survey remains to be seen. But at least they have at last stopped calling it a peer review.

Friday, December 16, 2016

A new Super-University for Ireland?

University rankings have become extremely influential over the last few years. This is not entirely a bad thing. The initial publication of the Shanghai rankings in 2003, for example, exposed the pretensions of many European universities, revealing just how far behind they had fallen in scientific research. It also showed China how far it had to go to achieve scientific parity with the West.

Unfortunately, rankings have also had malign effects. The THE and QS world rankings have acquired a great deal of respect, trust, even reverence that may not be entirely deserved. Both introduced significant methodological changes in 2015 and THE has made further changes in 2016 and the consequence of this is that there have been some remarkable rises and falls within the rankings that have had a lot of publicity but have little to do with any real change in quality.

In addition, both QS and THE have increased the number of ranked universities, which can affect the mean scores for indicators from which the processed scores given to the public are derived. Both have surveys that can be biased and subjective. Both are unbalanced: QS with a 50% weighting for its academic and employer surveys, and THE with field- and year-normalised citations plus a partial regional modification, carrying an official weighting of 30% (the modification means that everybody except the top scorer gets a bonus for citations). The remarkable rise of Anglia Ruskin University to parity with Oxford and Princeton in this year's THE research impact (citations) indicator and the high placing of the Pontifical Catholic University of Chile and the National University of Colombia in QS's employer survey are evidence that these rankings continue to be implausible and unstable. To make higher education policy dependent on their fluctuations is very unwise.

This is particularly true of the two leading Irish universities, Trinity College Dublin (TCD)  and University College Dublin (UCD), which have in fact been advancing in the Round University Rankings produced by a Russian organisation and ShanghaiRanking’s Academic Ranking of World Universities. These two global rankings have methodologies that are generally stable and transparent.

I pointed out in 2015 that TCD had been steadily rising in the Shanghai ARWU  since 2004, especially in the Publications indicator (papers in the Science Citation Index - Expanded and the Social Science Citation Index) and PCP (productivity per capita, that is the combined indicator scores divided by the number of faculty). This year, to repeat an earlier post, TCD’s publication score again went up very slightly from 31 to 31.1 (27.1 in 2004) and the PCP quite significantly from 19 to 20.8 (13.9 in 2004), compared to top scores of 100 for Harvard and Caltech respectively.

UCD has also continued to do well in the Shanghai rankings, with the publications score rising this year from 34.1 to 34.2 (27.3 in 2004) and PCP from 18.0 to 18.1 (8.1 in 2004).

The Shanghai rankings are, of course, famous for not counting the arts and humanities and for not trying to measure anything related to teaching. The RUR rankings from Russia are based on Thomson Reuters data, also used by THE until two years ago, and they do include publications in the humanities and teaching-related metrics. They have 12 of the 13 indicators in the THE World University Rankings, plus eight others, but with a sensible weighting, for example 8% instead of 30% for field-normalised citations.

The RUR rankings show that TCD rose from 174th overall in 2010 to 102nd in 2016. (193rd to 67th for research). UCD rose from 213th overall to 195th (157th to 69th for research) although some Irish universities such as NUI Galway, NUI Maynooth, University College Cork, and Dublin City University have fallen.

It is thoroughly disingenuous for Irish academics to claim that academic standards are declining because of a lack of funds. Perhaps standards will decline in the future. But so far everything suggests that the two leading Irish universities are making steady progress, especially in research.

The fall of UCD in this year’s THE rankings this year and TCD’s fall in 2015 and the fall of both in the QS rankings mean very little. When there are such large methodological changes it is pointless to discuss how to improve in the rankings. Methodological changes can be made and unmade and universities made and unmade as the Middle East Technical University found in 2015 when it fell from 85th place in the THE world rankings to below 501st.

The Irish Times of November 8th  had an article by Philip O’Kane that proposed that Irish universities should combine in some ways to boost their position in the global rankings.

He suggested that:
“The only feasible course of action for Ireland to avert continued sinking in the world rankings is to create a new “International University of Ireland”.

This could be a world-class research university that consists exclusively of the internationally-visible parts of all our existing institutions, and to do so at marginal cost using joint academic appointments, joint facilities and joint student registration, in a highly flexible and dynamic manner.

Those parts that are not internationally visible would be excluded from this International University of Ireland.”

It sounds like he is proposing that universities maintain their separate identities for some purposes but present a united front for international matters. This was an idea that was proposed in India a while ago but was quickly shot down by Phil Baty of THE. It is most unlikely that universities could separate out data on faculty, students, income and publications for their international bits and send the data to the rankers.

The idea of a full merger is more practical but could be pointless or even counter-productive. In 2012 a group of experts, headed by European Commissioner Frans  Van Vught, suggested that UCD and TCD be merged to become a single world class university.

The ironic thing about this idea is that a merger would help with the Shanghai rankings that university bosses are studiously pretending do not exist but would be of little or no use with the rankings that the bureaucrats and politicians do care about.

The Shanghai rankings are known for being as much about quantity as quality. A merger of TCD and UCD would produce a significant gain by combining the numbers of publications, papers in Nature and Science, and highly cited researchers. It would do no good for Nobel and Fields awards, since Trinity has two now and UCD none, so the new institution would still only have two (ShanghaiRanking does not count Peace and Literature). Overall, it is likely that the new Irish super-university would rise about a dozen places in the Shanghai rankings, perhaps even getting into the top 150 (TCD is currently 162nd).

But it would probably not help with the rankings that university heads are so excited about. Many of the indicators in the QS and THE rankings are scaled in some way. You might get more citations by adding together those of TCD and UCD, for instance, but QS divides them by the number of faculty, which would also be combined if there were a merger. You could combine the incomes of TCD and UCD, but then the combined income would be divided by the combined staff numbers.
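
A toy calculation with invented figures shows why the scaled indicators would barely move:

```python
# Invented figures: size-dependent numbers add up under a merger, but a
# size-normalised indicator ends up between the two original values.

tcd = {"citations": 100_000, "faculty": 1_500}
ucd = {"citations": 140_000, "faculty": 2_500}

merged_citations = tcd["citations"] + ucd["citations"]   # 240,000 - looks bigger
merged_faculty = tcd["faculty"] + ucd["faculty"]          # 4,000 - also bigger

print(tcd["citations"] / tcd["faculty"])   # 66.7 citations per faculty member
print(ucd["citations"] / ucd["faculty"])   # 56.0
print(merged_citations / merged_faculty)   # 60.0 - no gain from merging
```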

The only place where a merger would make any difference is in the survey criteria, 50% in QS and 33% in THE, but the problem here is that the reputation of a new University of Dublin or Ireland or whatever it is called is likely to be inferior to that of TCD and UCD for some years to come. There are places where merging universities is a sensible way of pooling the strengths of a multitude of small specialist schools and research centres, for example France and Russia. But for Ireland there is little point if the idea is to get ahead in the QS and THE rankings.


It would make more sense for Irish universities to focus on the Shanghai rankings where, if present trends continue, TCD will catch up with Harvard in about 240 years although by then the peaks of the intellectual world will probably be in Seoul, Shanghai, Moscow, Warsaw and Tallinn. 

Tuesday, July 12, 2011

This WUR had such promise

The new Times Higher Education World University Rankings of 2010 promised much: new indicators based on income, a reformed survey that included questions on postgraduate teaching, and a reduction in the weighting given to international students.

But the actual rankings that came out in September were less than impressive. Dividing the year's intake of undergraduate students by the total number of academic faculty looked rather odd. Counting the ratio of doctoral students to undergraduates, while omitting masters programmes, was an invitation to the herding of marginal students into substandard doctoral programmes.

The biggest problem, though, was the insistence on giving a high weighting, somewhat higher than originally proposed, to citations. Nearly a third of the total weighting was assigned to the average citations per paper, normalised by field and year. The collection of statistics about citations is the bread and butter of Thomson Reuters (TR), THE's data collector, and one of their key products is the InCites system, which apparently was the basis for their procedure during the 2010 ranking exercise. This compares the citation records of academics with international benchmark scores for year and field. Of course, those who want to find out exactly where they stand have to know what the benchmark scores are, and that is something that cannot easily be calculated without Thomson Reuters.

Over the last two or three decades the number of citations received by papers, along with the amount of money attracted from funding agencies, has become an essential sign of scholarly merit. Things have now reached the point where, in many universities, research is simply invisible unless it has been funded by an external agency and then published in a journal noted for being cited frequently by writers who contribute to journals that are frequently cited. The boom in citations has begun to resemble classical share and housing bubbles as citations acquire an inflated value increasingly detached from any objective reality.

It has become clear that citations can be manipulated as much as, perhaps more than, any other indicator used by international rankings. Writers can cite themselves, they can cite co-authors, they can cite those who cite them. Journal editors and reviewers can make suggestions to submitters about whom to cite. And so on.

Nobody, however, realized quite how unrobust citations might become until the unplanned intersection of THE’s indicator and a bit of self citation and mutual citation by two peripheral scientific figures raised questions about the whole business.

One of these two was Mohamed El Naschie, who comes from a wealthy Egyptian family. He studied in Germany and took a PhD in engineering at University College London. Then he taught in Saudi Arabia while writing several papers that appear to have been of an acceptable academic standard, although not very remarkable.

But this was not enough. In 1993 he started a new journal dealing with applied mathematics and theoretical physics called Chaos, Solitons and Fractals (CSF), published by the leading academic publisher Elsevier. El Naschie's journal published many papers written by himself. He has, to his credit, avoided exploiting junior researchers or insinuating himself into research projects to which he has contributed little. Most of his papers do not appear to be research but rather theoretical speculations, many of which concern the disparity between the mathematics that describes the universe and that which describes subatomic space, and suggestions for reconciling the two.

Over the years El Naschie has listed a number of universities as affiliations. The University of Alexandria was among the most recent of them. It was not clear, however, what he did at or for the university, and it was only recently, after the publication of the 2010 THE World University Rankings, that any documentation of an official connection appeared.

El Naschie does not appear to be highly regarded by physicists and mathematicians, as noted earlier in this blog, and he has been criticized severely in the physics and mathematics blogosphere. He has, it is true, received some very vocal support, but he is not really helped by the extreme enthusiasm and uniformity of style of his admirers. Here is a fairly typical example, from the comments in Times Higher Education:
“As for Mohamed El Naschie, he is one of the most original thinkers of our time. He mastered science, philosophy, literature and art like very few people. Although he is an engineer, he is self taught in almost everything, including politics. Now I can understand that a man with his charisma and vast knowledge must be the object of envy but what is written here goes beyond that. My comment here will be only about what I found out regarding a major breakthrough in quantum mechanics. This breakthrough was brought about by the work of Prof. Dr. Mohamed El Naschie”
Later, a professor at Donghua University, China, Ji-Huan He, an editor at El Naschie's journal, started a similar publication, the International Journal of Nonlinear Sciences and Numerical Simulation (IJNSNS), whose editorial board included El Naschie. This journal was published by the respectable and unpretentious Israeli company Freund of Tel Aviv. Ji-Huan He's journal has published 29 of his own papers and 19 by El Naschie. The two journals have contained articles that cite and are cited by articles in the other. Since they deal with similar topics some degree of cross-citation is to be expected, but here it seems to be unusually large.

Let us look at how El Naschie worked. An example is his paper, 'The theory of Cantorian spacetime and high energy particle physics (an informal review)', published in Chaos, Solitons and Fractals, 41/5, 2635-2646, in September 2009.

There are 58 citations in the bibliography. El Naschie cites himself 24 times: 20 times to papers in Chaos, Solitons and Fractals and 4 to papers in IJNSNS. Ji-Huan He is cited twice, along with four other authors from CSF. This paper has been cited 11 times, ten of them in CSF in issues of the journal published later in the year.

Articles in mathematics and theoretical physics do not get cited very much. Scholars in those fields prefer to spend time thinking about an interesting paper before settling down to comment. Hardly any papers get even a single citation in the same year. Here we have 10 for one paper. That might easily be 100 times the average for that discipline and that year.

The object of this exercise had nothing to do with the THE rankings. What it did do was push El Naschie's journal into the top ranks of scientific journals as measured by the Journal Impact Factor, that is, the number of citations per paper within a two-year period. It also meant that for a brief period El Naschie was listed by Thomson Reuters' Science Watch as a rising star of research.
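
For readers unfamiliar with the metric, here is a minimal sketch of the standard two-year calculation, with invented numbers:

```python
# The two-year Journal Impact Factor for year Y: citations received in Y by
# items the journal published in Y-1 and Y-2, divided by the number of
# citable items published in Y-1 and Y-2. Figures below are invented.

def impact_factor(citations_to_prev_two_years, items_prev_two_years):
    return citations_to_prev_two_years / items_prev_two_years

# A journal whose recent papers cite each other heavily lifts this figure
# quickly, because the citation window is short and the denominator small.
print(impact_factor(citations_to_prev_two_years=900, items_prev_two_years=300))  # 3.0
```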

Eventually, Elsevier appointed a new editorial board at CSF that did not include El Naschie. The journal did however continue to refer to him as the founding editor. Since then the number of citations has declined sharply.

Meanwhile, Ji-Huan He was also accumulating a large number of citations, many of them from conference proceedings that he had organized. He was launched into the exalted ranks of the ISI Highly Cited Researchers and his journal topped the citation charts in mathematics. Unfortunately, early this year Freund sold off its journals to the reputable German publisher De Gruyter, which appointed a new editorial board that did not include either him or El Naschie.

El Naschie, He and a few others have been closely scrutinized by Jason Rush, a mathematician formerly of the University of Washington. Rush was apparently infuriated by El Naschie's unsubstantiated claims to have held senior positions at a variety of universities including Cambridge, Frankfurt, Surrey and Cornell. Since 2009 he has, perhaps a little obsessively, maintained a blog, El Naschie Watch, that chronicles the activities of El Naschie and those associated with him. Most of what is known about El Naschie and He was unearthed by this blog.

Meanwhile, Thomson Reuters were preparing their analysis of citations for the THE rankings. They used the InCites system and compared the number of citations with benchmark scores representing the average for year and field.
This meant that for this criterion a high score did not necessarily represent a large number of citations. It could simply represent more citations than normal in a short period of time in fields where citation was infrequent and, perhaps more significantly since we are talking about averages here, a small total number of publications. Thus, Alexandria, with only a few publications but listed as the affiliation of an author who was cited much more frequently than usual in theoretical physics or applied mathematics, did spectacularly well.
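
Here is a minimal sketch of the sort of calculation involved, with invented papers and baselines (the real InCites benchmarks are proprietary and far more finely grained):

```python
# Field- and year-normalised citation impact, invented numbers: each paper's
# citations are divided by the world average for its field and year, and the
# university's score is the mean of those ratios.

baseline = {  # hypothetical world-average citations per paper
    ("theoretical physics", 2009): 0.8,
    ("engineering", 2009): 2.0,
}

def normalised_impact(papers):
    ratios = [cites / baseline[(field, year)] for field, year, cites in papers]
    return sum(ratios) / len(ratios)

# A big university with thousands of papers around the world average:
big = [("engineering", 2009, 2)] * 5000
# A small output plus a handful of hyper-cited papers in a low-citation field:
small = [("engineering", 2009, 1)] * 90 + [("theoretical physics", 2009, 80)] * 10

print(normalised_impact(big))    # 1.0
print(normalised_impact(small))  # 10.45 - a few papers dominate the average
```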


This is rather like declaring Norfolk (very flat, according to Noël Coward) the most mountainous county in England because of a few hillocks that are nonetheless relatively much higher than the surrounding plains.

Thomson Reuters would have done themselves a lot of good if they had taken the sensible course of using several indicators of research impact, such as total citations, citations per faculty, the h-index or references in social media; or if they had allocated a smaller weighting to the indicator; or if they had imposed a reasonable threshold number of publications instead of just 50; or if they had not counted self-citations or citations within journals; or if they had figured out a formula to detect mutual citation.

So, in September THE published its rankings with the University of Alexandria in the top 200 overall and in fourth place for research impact, ahead of Oxford, Cambridge and most of the Ivy League. Not bad for a university that had not even been counted by HEEACT, QS or the Shanghai rankings and that in 2010 had lagged behind two other institutions in Alexandria itself in Webometrics.

When the rankings were published THE pointed out that Alexandria had once had a famous library and that a former student had gone on to the USA and eventually won a Nobel prize decades later. Still, they did concede that the success of Alexandria was mainly due to one "controversial" author.

Anyone with access to the Web of Science could determine in a minute precisely who the controversial author was. For a while it was unclear exactly how a few dozen papers and a few hundred citations could put Alexandria among the world's elite. Some observers wasted time wondering if Thomson Reuters had been counting papers from a community college in Virginia or Minnesota, a branch of Louisiana State University or federal government offices in the Greater Washington area. Eventually, it was clear that El Naschie could not, as he himself asserted, have done it by himself: he needed the help of the very distinctive features of Thomson Reuters' methodology.

There were other oddities in the 2010 rankings. Some might have accepted a high placing for Bilkent University in Turkey. It was well known for its academic English programmes. It also had one much-cited article whose apparent impact was increased because it was classified as multidisciplinary, usually a low-citation category, so that it scored well above the world benchmark. However, when regional patterns were analyzed, the rankings began to look rather strange, especially the research impact indicator. In Australia, the Middle East, Hong Kong and Taiwan the order of universities looked rather different from what local experts expected. Hong Kong Baptist University the third best in the SAR? Pohang University of Science and Technology so much better than Yonsei or KAIST? Adelaide the fourth best Australian university?

In the UK or the US these placings might seem plausible or at least not worth bothering about. But in the Middle East the idea of Alexandria as top university even in Egypt is a joke and the places awarded to the others look very dubious.

THE and Thomson Reuters tried to shrug off the complaints by saying that there were just a few outliers, which they were prepared to debate, and that anyone who criticized them had a vested interest in the old THE-QS rankings, which had been discredited. They dropped hints that the citations indicator would be reviewed but so far nothing specific has emerged.

A few days ago, however,  Phil Baty of THE seemed to imply that there was nothing wrong with the citations indicator.
Normalised data allow fairer comparisons, and that is why Times Higher Education will employ it for more indicators in its 2011-12 rankings, says Phil Baty.
One of the most important features of the Times Higher Education World University Rankings is that all our research citations data are normalised to take account of the dramatic variations in citation habits between different academic fields.
Treating citations data in an “absolute manner”, as some university rankings do, was condemned earlier this year as a “mortal sin” by one of the world’s leading experts in bibliometrics, Anthony van Raan of the Centre for Science and Technology Studies at Leiden University. In its rankings, Times Higher Education gives most weight to the “research influence” indicator – for our 2010-11 exercise, this drew on 25 million citations from 5 million articles published over five years. The importance of normalising these data has been highlighted by our rankings data supplier, Thomson Reuters: in the field of molecular biology and genetics, there were more than 1.6 million citations for the 145,939 papers published between 2005 and 2009; in mathematics, however, there were just 211,268 citations for a similar number of papers (140,219) published in the same period.
To ignore this would be to give a large and unfair advantage to institutions that happen to have more provision in molecular biology, say, than in maths. It is for this crucial reason that Times Higher Education’s World University Rankings examine a university’s citations in each field against the global average for that subject.
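
Taking the figures quoted above at face value, the gap between the two baselines is roughly sevenfold:

```python
# Average citations per paper, 2005-2009, using the numbers in the passage above.
molecular_biology = 1_600_000 / 145_939   # about 11.0
mathematics = 211_268 / 140_219           # about 1.5
print(molecular_biology / mathematics)    # roughly 7.3
```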

Yes, but when we are assessing hundreds of universities in very narrowly defined fields we start running into quite small samples that can be affected by deliberate manipulation or by random fluctuations.

Another point is that if there are many more journals, papers, citations and grants in oncology or genetic engineering than in the spatialization of gender performativity or the influence of Semitic syntax on Old Irish then perhaps society is telling us something about what it values and that is something that should not be dismissed so easily.

So it could be that we are going to get the University of Alexandria in the top 200 again, perhaps joined by Donghua University.

At the risk of being repetitive, there are a few simple things that Times Higher and TR could do to make the citations indicator more credible. There are also more ways of measuring research excellence. Possibly they are thinking about them but so far there is no sign of this.

The credibility of last year's rankings has declined further with the decisions of the judge presiding over the libel case brought by El Naschie against Nature (see here for commentary). Until now it could be claimed that El Naschie was a well-known scientist, by virtue of the large number of citations that he had received, or at least an interesting and controversial maverick.

El Naschie is pursuing a case against Nature for publishing an article that suggested his writings were not of a high quality and that those published in his journal did not appear to be properly peer reviewed.

The judge has recently ruled that El Naschie cannot proceed with a claim for specific damages since he has not brought any evidence for this. He can only go ahead with a claim for general damages for loss of reputation and hurt feelings. Even here, it looks like it will be tough going. El Naschie seems to be unwilling or unable to find expert witnesses to testify to the scientific merits of his papers.

"The Claimant is somewhat dismissive of the relevance of expert evidence in this case, largely on the basis that his field of special scientific knowledge is so narrow and fluid that it is difficult for him to conceive of anyone qualifying as having sufficient "expert" knowledge of the field. Nevertheless, permission has been obtained to introduce such evidence and it is not right that the Defendants should be hindered in their preparations."

He also seems to have problems with locating records that would demonstrate that his many articles published in Chaos, Solitons and Fractals were adequately reviewed.
  1. The first subject concerns the issue of peer-review of those papers authored by the Claimant and published in CSF. It appears that there were 58 articles published in 2008. The Claimant should identify the referees for each article because their qualifications, and the regularity with which they reviewed such articles, are issues upon which the Defendants' experts will need to comment. Furthermore, it will be necessary for the Defendants' counsel to cross-examine such reviewers as are being called by the Claimant as to why alleged faults or defects in those articles survived the relevant reviews.

  2. Secondly, further information is sought as to the place or places where CSF was administered between 2006 and 2008. This is relevant, first, to the issue of whether the Claimant has complied with his disclosure obligations. The Defendants' advisers are not in a position to judge whether a proportionate search has been carried out unless they are properly informed as to how many addresses and/or locations were involved. Secondly, the Defendants' proposed expert witnesses will need to know exactly how the CSF journal was run. This information should be provided.
It would therefore seem to be getting more and more difficult for anyone to argue that TR's methodology has uncovered a pocket of excellence in Alexandria.

Unfortunately, it is beginning to look as though THE will not only use much the same method as last time but will apply normalisation to other indicators as well.
But what about the other performance indicators used to compare institutions? Our rankings examine the amount of research income a university attracts and the number of PhDs it awards. For 2011-12, they will also look at the number of papers a university has published that are co-authored by an international colleague.
Don’t subject factors come into play here, too? Shouldn’t these also be normalised? We think so. So I am pleased to confirm that for the 2011-12 World University Rankings, Times Higher Education will introduce subject normalisation to a range of other ranking indicators.
This is proving very challenging. It makes huge additional demands on the data analysts at Thomson Reuters and, of course, on the institutions themselves, which have had to provide more and richer data for the rankings project. But we are committed to constantly improving and refining our methodology, and these latest steps to normalise more indicators evidence our desire to provide the most comprehensive and rigorous tables we can.
What this might mean is that universities that spend modest amounts of money in fields where little money is usually spent would get a huge score. So what would happen if an eccentric millionaire left millions to establish a lavishly funded research chair in continental philosophy at Middlesex University? There are no doubt precautions that Thomson Reuters could take, but will they? The El Naschie business does not inspire very much confidence that they will.
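To see how the continental philosophy chair might blow up the numbers, here is a rough sketch with entirely invented world averages; the calculation is a guess at what subject normalisation of income would involve, not a description of what Thomson Reuters actually does.

# Entirely invented figures: if research income were normalised by subject,
# a university's income in a field would be divided by the world average for
# that field, so a modest sum in a low-spending field could dwarf a fortune
# in a high-spending one.

world_avg_income = {            # hypothetical world averages per institution
    "oncology": 50_000_000,
    "continental philosophy": 50_000,
}


def normalised_income(income: float, field: str) -> float:
    """Income relative to the (invented) world average for the field."""
    return income / world_avg_income[field]


# $100m of cancer research scores 2.0; a $5m philosophy chair scores 100.0.
print(normalised_income(100_000_000, "oncology"))
print(normalised_income(5_000_000, "continental philosophy"))

The smaller the baseline for a field, the easier it is for one windfall to produce a spectacular normalised score.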

The reception of the 2010 THE World University Rankings suggests that many in the academic world have doubts about the wisdom of using normalised citation data without considering the potential for gaming or statistical anomalies. But the problem may run deeper and involve citations as such. QS, THE's rival and former partner, have produced a series of subject rankings based on data from 2010. The overall results for each subject are based on varying combinations of the scores for academic opinion, employer opinion and citations per paper (not per faculty as in the general rankings).

The results are interesting. Looking at citations per paper alone, we see that Boston College and Munich are jointly first in sociology. Rutgers is third for politics and international studies. MIT is third for philosophy (presumably Chomsky and co). Stellenbosch is first for geography and area studies. Padua is first for linguistics. Arizona State University is first for biological sciences and Tokyo Metropolitan University second.


Pockets of excellence or statistical anomalies? These results may not be quite as incredible as Alexandria in the THE rankings but they are not a very good advertisement for the validity of citations as a measure of research excellence.

It appears that THE have not made up their minds yet. There is still time to produce a believable and rigorous ranking system. But whatever happens, it is unlikely that citations, normalised or unnormalised, will continue to be the unquestionable gold standard of academic and scientific research.


Friday, October 02, 2015

Very Interesting Rankings from Times Higher Education

The latest edition of the Times Higher Education (THE) World University Rankings has just been published, along with a big dose of self-flattery and congratulations to the winners of what is beginning to look more like a lottery than an objective exercise in comparative assessment.

The background to the story is that at the end of last year THE broke with their data suppliers Thomson Reuters (TR) and announced the dawn of a new era of transparency and accountability.

There were quite a few things wrong with the THE rankings, especially with the citations indicator, which supposedly measured research impact and was given nearly a third of the total weighting. This meant that THE was faced with a serious dilemma. Keeping the old methodology would be a problem, but radical reform would raise the question of why THE would want to change what they claimed was a uniquely trusted and sophisticated methodology with carefully calibrated indicators.

It seems that THE have decided to make a limited number of changes but to postpone making a decision about other issues.

They have broadened the academic reputation survey, sending out forms in more languages and getting more responses from outside the USA. Respondents are now drawn from authors with publications in the Scopus database, which is much larger than the Web of Science and is now also the source of the data on publications and citations. In addition, THE have excluded 649 “freakish” multi-author papers from their calculations and diluted the effect of the regional modification that boosted the scores of low-performing countries in the citations indicator.
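For what it is worth, the regional modification has usually been described as dividing a university's normalised citation score by the square root of its country's average score, and the dilution is said to mean applying that adjustment to only half of the score. Neither detail is confirmed in the published tables, but on those assumptions the effect would look roughly like this:

import math

# An illustration of the regional modification and its dilution, on two
# assumptions the rankings do not spell out: that the modification divides a
# university's normalised citation score by the square root of its country's
# average score, and that the 2015 dilution applies the modification to only
# half of the score.


def regional_modification(university_score: float, country_avg: float) -> float:
    """Full modification: boosts universities in low-scoring countries."""
    return university_score / math.sqrt(country_avg)


def diluted_score(university_score: float, country_avg: float,
                  modified_share: float = 0.5) -> float:
    """Blend of modified and unmodified scores."""
    modified = regional_modification(university_score, country_avg)
    return modified_share * modified + (1 - modified_share) * university_score


# A university at 0.6 of the world average in a country averaging 0.25:
print(round(regional_modification(0.6, 0.25), 2))  # 1.2 with the full boost
print(round(diluted_score(0.6, 0.25), 2))          # 0.9 once the boost is halved

If that is broadly right, it would explain why universities in countries with a low average citation impact keep some, but much less, of the boost they enjoyed in 2014.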

These changes have led to implausible fluctuations, with some institutions rising or falling dozens or hundreds of places. Fortunately for THE, the latest winners are happy to trumpet their success, and the losers so far seem to have lapsed into an embarrassed silence.

When they were published on the 30th of September, the rankings provided lots of headline fodder about who was up or down.

The Irish Times announced that the rankings showed Trinity College Dublin had fallen while University College Dublin was rising.

In the Netherlands the University of Twente bragged about its “sensationally higher scores”.

Study International asserted that “Asia Falters” and that Britain and the US were still dominant in higher education.

The London Daily Telegraph claimed that European universities were matching the US.

The Hindu found something to boast about by noting that India was at last the equal of co-BRICS member Brazil.

Russian media celebrated the remarkable achievement of Lomonosov Moscow State University in rising 35 places.

And, of course, the standard THE narrative was trotted out again: British universities are wonderful, but they will only go on being wonderful if they are given as much money as they want and are allowed to admit as many overseas students as they want.

The latest rankings support this narrative of British excellence by showing Oxford and Cambridge overtaking Harvard, which was pushed into sixth place. But is such a claim believable? Has anything happened in the labs or lecture halls at any of those places between 2014 and 2015 to cause such a shift?

In reality, what probably happened was not that the Oxbridge duo were actually doing anything better this year but that Harvard’s eclipse came from a large drop, from 92.9 to 83.6 points, in THE’s composite teaching indicator. Did Harvard’s teaching really deteriorate over twelve months? It is more likely that there were relatively fewer American respondents in the THE survey, but one cannot be sure because there are four other statistics bundled into the indicator.

While British universities appeared to do well, French ones appeared to perform disastrously. The École Normale Supérieure recorded a substantial gain, going from 78th to 54th place, but every other French institution in the rankings fell, sometimes by dozens of places. École Polytechnique went from 61st place to 101st, Université Paris-Sud from 120th to 188th, and the University of Strasbourg from the 201-225 band to 301-350, in every case because of a substantial fall in the citations indicator. If switching to Scopus was intended to help non-English-speaking countries, it did not do France any good.

Meanwhile, the advance of Asia has apparently come to an end or gone into screeching reverse. Many Asian universities slipped down the ladder, although the top Chinese schools held their ground. Some Japanese and Korean universities fell dozens of places. The University of Tokyo went from 23rd to 43rd place, largely because of a fall in the citations indicator from 74.7 points to 60.9, and the University of Kyoto from 59th to 88th, with another drop in the score for citations. Among the casualties was Tokyo Metropolitan University, which used to advertise its perfect score of 100 for citations on the THE website. This year, stripped of the citations for mega-papers in physics, its citation score dropped to a rather tepid 72.2.

The Korean flagships have also foundered. Seoul National University fell 35 places and the Korea Advanced Institute of Science and Technology (KAIST) 66, largely because of a decline in the scores for teaching and research. Pohang University of Science and Technology (POSTECH) fell 50 places, losing points in all indicators except income from industry.

The most catastrophic falls were in Turkey. There were four Turkish universities in the top 200 last year; all of them have dropped out. Several Turkish universities contributed to the Large Hadron Collider project, with its multiple authors and multiple citations, and they also benefited from producing comparatively few research papers and from the regional modification, all of which gave them artificially high scores for the citations indicator in 2014 but not this year.

The worst case was Middle East Technical University, which held 85th place in 2014, helped by an outstanding score of 92 for citations and reasonable scores for the other indicators. This year it was in the 501-600 band, with reduced scores for everything except industry income and a very low score of 28.8 for citations.

The new rankings appear to have restored the privilege given to medical research. In the upper reaches we find St George’s, University of London, a medical school which, according to THE, is the world's leading university for research impact; Charité - Universitätsmedizin Berlin, a teaching hospital affiliated to Humboldt University and the Free University of Berlin; and Oregon Health and Science University.

It also appears that THE's methodology continues to give an undeserved advantage to small or specialized institutions such as Scuola Superiore Sant’Anna in Pisa, which does not appear to be a truly independent university, the Copenhagen Business School, and Rush University in Chicago, the academic branch of a private hospital.

These rankings appear so far to have got a good reception in the mainstream press, although it is likely that before long we will hear some negative reactions from independent experts and from Japan, Korea, France, Italy and the Middle East.

THE, however, have merely postponed the hard decisions that they will eventually have to make.