Wednesday, July 19, 2017

Comments on an Article by Brian Leiter

Global university rankings are now nearly a decade and a half old. The Shanghai rankings (Academic Ranking of World Universities or ARWU) began in 2003, followed a year later by Webometrics and the THES-QS rankings which, after an unpleasant divorce, became the Times Higher Education (THE) and the Quacquarelli Symonds (QS) world rankings. Since then the number of rankings, serving a variety of audiences and using a variety of methodologies, has expanded.

We now have several research-based rankings, University Ranking by Academic Performance (URAP) from Turkey, the National Taiwan University Rankings, Best Global Universities from US News and the Leiden Ranking, as well as rankings that include some attempt to assess and compare something other than research, such as the Round University Rankings from Russia and U-Multirank from the European Union. And, of course, we also have subject rankings, regional rankings, even age group rankings.

It is interesting that some of these rankings have developed beyond what the original founders of global rankings offer. The Leiden Ranking is now the gold standard for the analysis of publications and citations. The Russian rankings use the same Web of Science database that THE did until 2014, and they include 12 of the 13 indicators used by THE plus another eight, in a more sensible and transparent arrangement. However, both of these receive only a fraction of the attention given to the THE rankings.

The research rankings from Turkey and Taiwan are similar to the Shanghai rankings but without the elderly or long departed Fields and Nobel award winners and with a more coherent methodology. U-Multirank is almost alone in trying to get at things that might be of interest to prospective undergraduate students.

It is regrettable that an article by Professor Brian Leiter of the University of Chicago in the Chronicle of Higher Education, 'Academic Ethics: To Rank or Not to Rank', ignores such developments and mentions only the original “Big Three”: Shanghai, QS and THE. This is perhaps forgivable since the establishment media, including THE and the Chronicle, and leading state and academic bureaucrats have until recently paid very little attention to innovative developments in university ranking. Leiter attacks the QS rankings and proposes that they should be boycotted while efforts are made to improve the THE rankings.

It is a little odd that Leiter should be so caustic, not entirely without justification, about QS while apparently being unaware of similar or greater problems with THE.

He begins by saying that QS stands for “quirky silliness”. I would not disagree with that although in recent years QS has been getting less silly. I have been as sarcastic as anyone about the failings of QS: see here and here for an amusing commentary.

But the suggestion that QS is uniquely bad in contrast to THE is wide of the mark. There are many issues with the QS methodology, especially with its employer and academic surveys, and it has often announced placings that seem very questionable, such as Nanyang Technological University (NTU) ahead of Princeton and Yale, or the University of Buenos Aires in the world top 100, largely as a result of a suspiciously good performance in the survey indicators. The oddities of the QS rankings are, however, no worse than some of the absurdities that THE has served up in its world and regional rankings. We have seen places like Cadi Ayyad University in Marrakesh, Morocco, Middle East Technical University in Turkey, Federico Santa Maria Technical University in Chile, Alexandria University and Veltech University in India rise to ludicrously high places, sometimes just for a year or two, as the result of a few papers or even a single highly cited author.
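To see how a handful of papers can do this, consider the arithmetic of field-normalised citation averages. The sketch below is a toy illustration in Python, with invented numbers and a simplified mean-of-ratios formula, not THE's actual data or exact methodology.

```python
# Toy illustration of how a single highly cited paper can dominate a
# field-normalised citation average when an institution publishes very little.
# Numbers and the simple mean-of-ratios formula are assumptions for
# illustration only, not THE's real data or exact method.

def field_normalised_impact(papers):
    """Average of (citations / world average citations for the field)."""
    return sum(cites / world_avg for cites, world_avg in papers) / len(papers)

# A large university: 10,000 papers cited at roughly the world average.
big_university = [(22, 20)] * 10_000             # (citations, field world average)

# A small institution: 150 papers, mostly below average, plus one paper
# (say, a massively cited multi-author collaboration) at 200x the field norm.
small_institution = [(10, 20)] * 149 + [(4_000, 20)]

print(f"Large university:  {field_normalised_impact(big_university):.2f}")    # ~1.10
print(f"Small institution: {field_normalised_impact(small_institution):.2f}") # ~1.83
```

With a large denominator the odd blockbuster paper barely moves the average; with a tiny one it dominates, which is how institutions with minimal research output can appear to outperform Chicago or Tsinghua for a year or two.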

I am not entirely persuaded that NTU deserves its top-12 placing in the QS rankings. You can see here QS’s unconvincing reply to a question that I put to them. QS claims that NTU's excellence is shown by its success in attracting foreign faculty, students and collaborators, but when you are in a country where people show their passports to drive to the dentist, being international is no great accomplishment. Even so, it is evidently world class as far as engineering and computer science are concerned and it is not impossible that it could reach an undisputed overall top ten or twenty ranking in the next decade.

While the THE top ten or twenty or even fifty looks quite reasonable, apart from Oxford in first place, there are many anomalies as soon as we start breaking the rankings apart by country or indicator, and THE has pushed some very weird data in recent years. Look at these places supposed to be regional or international centres of across-the-board research excellence as measured by citations: St George's, University of London, Brandeis University, the Free University of Bozen-Bolzano, King Abdulaziz University, the University of Iceland, Veltech University. If QS is silly, what are we to call a ranking where Anglia Ruskin University is supposed to have a greater research impact than Chicago, Cambridge or Tsinghua?

Leiter starts his article by pointing out that the QS academic survey is largely driven by the geographical distribution of its respondents and by the halo effect. This is very probably true, and to that I would add that a lot of the responses to academic surveys of this kind are likely driven by simple self-interest, academics voting for their alma mater or current employer. QS does not allow respondents to vote for the latter but they can vote for the former, and also for grant providers or collaborators.

He says that “QS does not, however, disclose the geographic distribution of its survey respondents, so the extent of the distorting effect cannot be determined". This is not true of the overall survey. QS does in fact give very detailed figures about the origin of its respondents and there is good evidence here of probable distorting effects. There are, for example, more responses from Taiwan than from Mainland China, and almost as many from Malaysia as from Russia. QS does not, however, go down to subject level when listing geographic distribution.

He then refers to the case of University College Cork (UCC) asking faculty to solicit friends in other institutions to vote for UCC. This is definitely a bad practice, but it was in violation of QS guidelines and QS have investigated. I do not know what came of the investigation but it is worth noting that the message would not have been an issue if it had referred to the THE survey.

On balance, I would agree that THE's survey methodology is less dubious than QS's and less likely to be influenced by energetic PR campaigns. It would certainly be a good idea if the weighting of the QS survey were reduced and if there were more rigorous screening and classification of potential respondents.

But I think we also have to bear in mind that QS does prohibit respondents from voting for their own universities and it does average results out over a five-year period (formerly three years).
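As a rough back-of-the-envelope illustration (the vote counts here are invented, not QS data), averaging over five years dilutes the effect of a one-off solicitation campaign like this:

```python
# Illustration of how a five-year rolling average blunts a one-year spike
# in survey nominations. Vote counts are invented for the example.
baseline = [100, 100, 100, 100]   # four ordinary years of nominations
campaign_year = 300               # one year inflated by an email campaign

single_year_score = campaign_year
five_year_average = (sum(baseline) + campaign_year) / 5

print(single_year_score)   # 300 - what an unaveraged survey would register
print(five_year_average)   # 140 - the spike is diluted to a 40% bump
```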

It is interesting that, while THE does not usually combine and average survey results, it did so in the 2016-17 world rankings, combining the 2015 and 2016 survey results. This was, I suspect, because of a substantial drop in 2016 in the percentage of respondents from the arts and humanities, which would, if unadjusted, have caused a serious problem for UK universities, especially those in the Russell Group.

Leiter then goes on to condemn QS for its dubious business practices. He reports that THE dropped QS because of those practices. That is what THE says, but it is widely rumoured within the rankings industry that THE was also interested in the financial advantages of a direct partnership with Thomson Reuters rather than getting data from QS.

He also refers to QS’s hosting of a series of “World Class events” where world university leaders pay $950 for “seminar, dinners, coffee breaks” and “learn best practice for branding and marketing your institution through case studies and expert knowledge”, and to the QS Stars plan, where universities pay to be audited by QS in return for stars that they can use for promotion and advertising. I would add to his criticism that the Stars program has apparently undergone a typical “grade inflation”, with the number of five-star universities increasing all the time.

Also, QS offers specific consulting services and it has a large number of clients from around the world, although there are many more from Australia and Indonesia than from Canada and the US. Of the three from the US, one is MIT, which has been number one in the QS world rankings since 2012, a position it probably achieved after a change in the way in which faculty were classified.

It would, however, be misleading to suggest that THE is any better in this respect. Since 2014 it has launched a serious and unapologetic “monetisation of data” program.

There are events such as the forthcoming world "academic summit" where for 1,199 GBP (standard university) or 2,200 GBP (corporate), delegates can get “Exclusive insight into the 2017 Times Higher Education World University Rankings at the official launch and rankings masterclass”, plus a “prestigious gala dinner, drinks reception and other networking events”. THE also provides a variety of benchmarking and performance analysis services, branding, advertising and reputation management campaigns and a range of silver and gold profiles, including adverts and sponsored supplements. THE’s data clients include some illustrious names like the National University of Singapore and Trinity College Dublin plus some less well-known places such as Federico Santa Maria Technical University, Orebro University, King Abdulaziz University, National Research Nuclear University MEPhI Moscow, and Charles Darwin University.

Among THE’s activities are regional events that promise “partnership opportunities for global thought leaders” and where rankings like “the WUR are presented at these events with our award-winning data team on hand to explain them, allowing institutions better understanding of their findings”.

At some of these summits the rankings presented are trimmed and tweaked and somehow the hosts emerge in a favourable light. In February 2015, for example, THE held a Middle East and North Africa (MENA) summit that included a “snapshot ranking” that put Texas A and M University Qatar, a branch campus that offers nothing but engineering courses, in first place and Qatar University in fourth. The ranking consisted of precisely one indicator out of the 13 that make up THE’s world university rankings, field and year normalised citations. United Arab Emirates University (UAEU) was 11th and the American University of Sharjah in the UAE 14th.  

The next MENA summit was held in January 2016 in Al Ain in the UAE. There was no snapshot this time and the methodology for the MENA ranking included all 13 indicators used in THE’s world rankings. Host country universities were now in fifth (UAEU) and eighth place (American University of Sharjah). Texas A and M Qatar was not ranked and Qatar University fell to sixth place.

Something similar happened in Africa. In 2015, THE went to the University of Johannesburg for a summit that brought together “outstanding global thought leaders from industry, government, higher education and research” and which unveiled THE’s Africa ranking, based on citations (with the innovation of fractional counting), that put the host university in ninth place and the University of Ghana in twelfth.

In 2016 the show moved on to the University of Ghana where another ranking was produced based on all the 13 world ranking indicators. This time the University of Johannesburg did not take part and the University of Ghana went from 12th place to 7th.

I may have missed something but so far I see no sign of THE Africa or MENA summits planned for 2017. If so, then African and MENA university leaders are to be congratulated for a very healthy scepticism.

To be fair, THE does not seem to have done any methodological tweaking for this year’s Asian, Asia Pacific and Latin American rankings.

Leiter concludes that American academics should boycott the QS survey but not THE’s, and that they should lobby THE to improve its survey practices. That, I suspect, is pretty much a nonstarter. QS has never had much of a presence in the US anyway, and THE is unlikely to change significantly as long as its commercial dominance goes unchallenged and as long as scholars and administrators fail to see through its PR wizardry. It would be better for everybody to start looking beyond the "Big Three" rankings.





Monday, July 03, 2017

Proving anything you want from rankings

It seems that university rankings can be used to prove almost anything that journalists want to prove.

Ever since the Brexit referendum experts and pundits of various kinds have been muttering about the dread disease that is undermining or about to undermine the research prowess of British universities. The malignity of Brexit is so great that it can send its evil rays back from the future.

Last year, as several British universities tumbled down the Quacquarelli Symonds (QS) world rankings, the Independent claimed that “[p]ost-Brexit uncertainty and long-term funding issues have seen storm clouds gather over UK higher education in this year’s QS World University Rankings”.

It is difficult to figure out how anxiety about a vote held on June 23rd 2016 could affect a ranking based on institutional data for 2014 and bibliometric data from the previous five years.

It is just about possible that some academics or employers might have woken up on June 24th to see that their intellectual inferiors had joined the orcs to raze the ivory towers of Baggins University and Bree Poly and then rushed to send a late response to the QS opinion survey. But QS, to their credit, have taken steps to deal with that sort of thing by averaging out survey responses over a period of five years.

European and American universities have been complaining for a long time that they do not get enough money from the state and that their performance in the global rankings is undermined because they do not get enough international students or researchers. That is a bit more plausible. After all, income accounts for three separate indicators in the Times Higher Education (THE) world rankings, so reduced income would obviously cause universities to fall a bit. The scandal over Trinity College Dublin’s botched rankings data submission showed precisely how much a given increase in reported total income (with research and industry income in a constant proportion) means for the THE world rankings. International metrics account for 10% of the QS rankings and 7.5% of the THE world rankings. Whether a decline in income or in the number of international students has a direct effect, or indeed any effect at all, on research output or the quality of teaching is quite another matter.
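To see why the effect is real but modest, recall that the headline score is essentially a weighted sum of indicator scores. The sketch below uses the weightings cited here (10.75% for THE's income-related indicators, 7.5% for its international metrics) with an invented ten-point indicator drop; the real calculation also standardises scores, so this is only an approximation.

```python
# How much a drop in one indicator can move an overall ranking score,
# given the indicator weightings cited above. The 10-point drop is an
# invented example; real scores are also rescaled by the rankers.
def overall_change(indicator_drop, weight):
    return indicator_drop * weight

income_weight = 0.1075          # THE's three income-related indicators combined
international_weight = 0.075    # THE's international metrics

drop = 10  # a sizeable 10-point fall in the indicator score (out of 100)

print(f"Via income indicators:     -{overall_change(drop, income_weight):.2f} overall points")
print(f"Via international metrics: -{overall_change(drop, international_weight):.2f} overall points")
# Roughly a one-point change in the overall score - enough to slip a few
# places in a crowded part of the table, but hardly a collapse.
```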

The problem with claims like this is that the QS and THE rankings are very blunt instruments that should not be used to make year by year analyses or to influence government or university policy. There have been several changes in methodology, there are fluctuations in the distribution of survey responses by region and subject and the average scores for indicators may go up and down as the number of participants changes. All of these mean that it is very unwise to make extravagant assertions about university quality based on what happens in those rankings.

Before making any claim based on ranking changes it would be a good idea to wait a few years until the impact of any methodological change has passed through the system.

Another variation in this genre is the recent claim in the Daily Telegraph that “British universities are slipping down the world rankings, with experts blaming the decline on pressure to admit more disadvantaged students.”

Among the experts is Alan Smithers of the University of Buckingham who is reported as saying “universities are no longer free to take their own decisions and recruit the most talented students which would ensure top positions in league tables”.

There is certainly good evidence that British university courses are becoming much less rigorous. Every year reports come in about declining standards everywhere. The latest is the proposal at Oxford to allow students to sit take-home rather than timed exams.

But it is unlikely that this could show up in the QS or THE rankings. None of the global rankings has a metric that measures the attributes of graduates, except perhaps the QS employer survey. It is probable that a decline in the cognitive skills of admitted undergraduates would eventually trickle up to the quality of research students and then to the output and quality of research, but that is not something that could happen in a single year, especially when there is so much noise generated by methodological changes.

The cold reality is that university rankings can tell us some things about universities and how they change over perhaps half a decade, and some metrics are better than others, but it is an exercise in futility to use overall rankings, or indicators subject to methodological tweaking, to argue about how political or economic changes are affecting western universities.

The latest improbable claim about rankings is that Oxford’s achieving parity with Cambridge in the THE reputation rankings was the result of a positive image created by the appointment of its first female Vice-Chancellor.

Phil Baty, THE’s editor, is reported as saying that ‘Oxford University’s move to appoint its first female Vice Chancellor sent a “symbolic” wave around the world which created a positive image for the institution among academics.’

There is a bit of a problem here. Louise Richardson took office as Vice-Chancellor in January 2016. The polling for the 2016 THE reputation rankings took place between January and March 2016. One would expect that if the appointment of Richardson had any effect on academic opinion at all then it would be in those months. It certainly seems more likely than an impact that was delayed for more than a year. If the appointment did affect the reputation rankings then the effect was apparently a negative one, for Oxford’s score fell massively from 80.4 in 2015 to 67.6 in 2016 (compared to 100 for Harvard in both years).

So, did Oxford suffer in 2016 because spiteful curmudgeons were infuriated by an upstart intruding into the dreaming spires?

The collapse of Oxford in the 2016 reputation rankings and its slight recovery in 2017 almost certainly had nothing to do with the new Vice-Chancellor.

Take a look at the table below. Oxford’s reputation score tracks the percentage of THE survey responses from the arts and humanities. It goes up when there are more respondents from those subjects and goes down when there are fewer. This is the case for British universities in general and also for Cambridge except for this year.

The general trend since 2011 has been for the gap between Cambridge and Oxford to fall steadily and that trend happened before Oxford acquired a new Vice-Chancellor although it accelerated and finally erased the gap this year.

What is unusual about this year’s reputation ranking is not that Oxford recovered as the number of arts and humanities respondents increased but that Cambridge continued to fall.

I wonder if it has something to do with Cambridge’s “disastrous” performance in the THE research impact (citations) indicator in recent years.  In the 2014-15 world rankings Cambridge was 28th behind places like Federico Santa Maria Technical University and Bogazici University. In 2015-16 it was 27th behind St Petersburg Polytechnic University. But a greater humiliation came in the 2016-17 rankings. Cambridge fell to 31st in the world for research impact. Even worse it was well behind Anglia Ruskin University, a former art school. For research impact Cambridge University wasn’t the best university in Europe or England. It wasn’t even the best in Cambridge, at least if you trusted the sophisticated THE rankings.

Rankings are not entirely worthless and if they did not exist no doubt they would somehow be invented. But it is doing nobody any good to use them to promote the special interests of university bureaucrats and insecure senior academics.

Table: Scores in THE reputation rankings


Year    Oxford    Cambridge    Gap     % responses arts and humanities
2011    68.6      80.7         12.1    --
2012    71.2      80.7         9.5     7%
2013    73.0      81.3         8.3     10.5%
2014    67.8      74.3         6.5     9%
2015    80.4      84.3         3.9     16%
2016    67.6      72.2         4.6     9%
2017    69.1      69.1         0       12.5%
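Using the figures in the table above for 2012 to 2017 (the years with a reported arts and humanities share), a quick correlation check illustrates the point. This is a rough sketch over six noisy data points, not a rigorous analysis.

```python
# Correlation between Oxford's THE reputation score and the share of survey
# responses from the arts and humanities, using the table above (2012-2017).
from statistics import correlation  # Python 3.10+

years          = [2012, 2013, 2014, 2015, 2016, 2017]
oxford_scores  = [71.2, 73.0, 67.8, 80.4, 67.6, 69.1]
arts_hum_share = [7.0, 10.5, 9.0, 16.0, 9.0, 12.5]   # % of survey responses

print(f"r = {correlation(arts_hum_share, oxford_scores):.2f}")
# With these six points r comes out around 0.7: years with more arts and
# humanities respondents tend to be years when Oxford's score is higher.
```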




Sunday, June 18, 2017

Comparing the THE and QS Academic Reputation Surveys

Times Higher Education (THE) has just published its 2017 reputation rankings which include 100 universities. These are based on a survey distributed between January and March of this year and will be included, after standardisation, in the 2017-18 (or 2018) World University Rankings scheduled for publication in a few months. In the forthcoming world rankings the reputation survey will be divided into two metrics in the research and teaching indicator groups, with a combined weighting of 33 percent. The survey asked about research and postgraduate teaching but since the correlation between these two questions is very high there is effectively only one indicator.

The QS world rankings released last week included scores derived from two surveys, one of academics with a 40% weighting and one of employers with 10%. The academic survey was concerned only with research.

The methodology of the THE survey is relatively simple. The respondents are drawn from the database of researchers with publications in Scopus indexed journals, in other words those who get to be listed as corresponding author. THE claims that this makes them experienced senior researchers although in many parts of the world being a member or leader of a research team often has more to do with politics than merit.

In contrast, the QS methodology has changed quite a lot over the last few years. It began with scouring the mailing lists of World Scientific, a Singapore-based academic publisher with links to Imperial College London, and then added various other channels, including lists supplied by institutions and sign-up facilities for potential respondents. The result is a survey that appears more inclusive than THE's, with more respondents from outside the elite, but one whose validity may be rather suspect.

The THE ranking found that there were six super-brand universities that stood out from everyone else: Harvard, MIT, Stanford, Cambridge, Oxford, and Berkeley. There was a big gap between Berkeley and number seven, Princeton, after which the long smooth slope continued.

After that, the ranking is dominated by English speaking universities, with the USA contributing 42, the UK 10, Canada 3 and Australia 3.  East Asia and the Chinese diaspora (Hong Kong, Taiwan and Singapore) are fairly well represented, while South and Central Asia, the Middle East and Africa are absent.

For any survey a great deal depends on how the forms are distributed. Last year, the THE survey had a lot more responses from the social sciences, including economics and business studies, and fewer from the arts and the humanities, and that contributed to some Asian universities rising and some British ones falling.

Such falls are typically attributed in the education establishment media to anxiety about the looming horrors of Brexit, the vicious snatching of research funds and the rising tide of hostility to international students.

This year British universities did a bit better in the THE reputation ranking, with five going up, three staying put and three going down. No doubt we will soon hear about the invigorating effects of Brexit and the benefits of austerity. Perhaps it might also have something to do with the number of survey responses from the arts and humanities going up from 9% to 12.5%, something that would surely benefit UK universities.

The QS reputation indicator has the same universities in the top six but not in quite the same order: Cambridge, fourth in THE, is second in the QS indicator. After that it starts looking very different. Number seven is the University of Tokyo, which THE puts in 11th place for academic reputation. Other Asian universities do much better in the QS indicator. The National University of Singapore is 11th (27th in THE), Nanyang Technological University Singapore is 50th (THE 81-90 band), Peking University is 14th (THE 17th) and Chulalongkorn University Thailand is 99th (not in the THE top 100).

It is noticeable that Latin American universities such as the University of Sao Paulo, the University of Buenos Aires and the Pontifical Catholic University of Chile get a higher placing in the QS indicator than they do in the THE ranking, as do some Southern European universities such as Barcelona, Sapienza and Bologna.

The THE reputation ranking gives us a snapshot of the current views of the world's academic elite and probably underestimates the rising universities of Greater China and Korea. QS cast their nets further and have probably caught a few of tomorrow's world class institutions although I suspect that the Latin American high fliers, apart from Sao Paulo, are very overrated.




Thursday, June 15, 2017

The Abuse and Use of Rankings

International university rankings have become a substantial industry since the first appearance of the Shanghai rankings (Academic Ranking of World Universities or ARWU) back in 2003. The various rankings are now watched closely by governments and media, and for some students they play a significant role in choosing universities. They have become a factor in national higher education policies and are an important element in the race to enter and dominate the lucrative transnational higher education market. In Malaysia a local newspaper, Utusan Malaysia, recently had a full page on the latest QS world rankings, including a half page of congratulations from the Malaysian Qualifications Agency for nine universities that are part of a state-backed export drive.

Reaction to international rankings often goes to one of two extremes, either outright rejection or uncritical praise, sometimes descending into grovelling flattery that would make Uriah Heep ashamed (the revered QS rankings, Phil Baty a thought leader). The problem with the first, which is certainly very understandable, is that it is unrealistic. If every international ranking suddenly stopped publication we would just have, as we did before, an informal ranking system based largely on reputation, stereotypes and prejudice. 

On the other hand, many academics and bureaucrats find rankings very useful. It is striking that university administrators, the media and national governments have been so tolerant of some of the absurdities that Times Higher Education (THE) has announced in recent years. Recently, THE’s Asian rankings had Veltech University as the third best university in India and the best in Asia for research impact, the result of exactly one researcher assiduously citing himself. This passed almost unnoticed in the Indian press and seems to have aroused no great interest among Indian academics apart from a couple of blog posts. Equally, when Universiti Tunku Abdul Rahman (UTAR), a private Malaysian university, was declared to be the second best university in the country and the best for research impact, on the strength of a single researcher’s participation in a high-profile global medical project, there was no apparent response from anyone.

International rankings have also become a weapon in the drive by universities to maintain or increase their access to public funds. British and Irish universities often complain that their fall in the rankings is all the fault of the government for not providing enough money. Almost any result in the better known rankings can be used to prop up the narrative of western universities starved of funds and international researchers and students.

Neither of these two views is really valid. Rankings can tell us a great deal about the way that higher education and research are going. The early Shanghai rankings indicated that China was a long way behind the West and that research in continental Europe was inferior to that in the USA. A recent analysis by Nature Index shows that American research is declining and that the decline is concentrated in diverse Democrat-voting states such as California, Massachusetts, Illinois and New York.

But if university rankings are useful, they are not equally so, and neither are the various indicators from which they are constructed.

Ranking indicators that rely on self-submitted information should be mistrusted. Even if everybody concerned is fanatically honest, there are many ways in which data can be manipulated, massaged, refined, defined and redefined, analysed and distorted as it makes its way from branch campuses, affiliated colleges and research institutes through central administration to the number-munching programs of the rankers.

Then of course there are the questionable validation processes within the ranking organisations. There was a much publicised case concerning Trinity College Dublin where for two years in a row the rankers missed an error of orders of magnitude in the data submitted for three income indicators.

Any metric that measures inputs rather than outputs should be approached with caution, including THE's measures of income, which amount to a total weighting of 10.75%. THE and QS both have indicators that count staff resources. It is interesting to have this sort of information but there is no guarantee that having loads of money or staff will lead to quality, whether of research, teaching or anything else.

Reputation survey data is also problematic. It is obviously subjective, although that is not necessarily a bad thing, and everything depends on the distribution of responses between countries, disciplines, subjects and levels of seniority. Take a look at the latest QS rankings and the percentages of respondents from various countries.

Canada has 3.5% of survey respondents and China has 1.7%.
Australia has 4% and Russia 4.2%.
Kazakhstan has 2.1% and India 2.3%.

There ought to be a sensible middle road between rejecting rankings altogether and passively accepting the errors, anomalies and biases of the popular rankers.

Universities and governments should abide by a self-denying ordinance and reject ranking results that challenge common sense or contradict accepted national rankings. I remember a few years ago someone at Duke University saying that they were puzzled why the THES-QS rankings put the school in first place for faculty-student ratio when this contradicted data in the US News rankings. Few, if any, major universities or higher education ministers seem to have done anything like this lately.

It would also be a good idea if universities and governments stopped looking at rankings holistically and started setting targets according to specific indicators. High-flying research universities could refer to the Leiden Ranking, the Nature Index or the Nature and Science and Publications indicators in ARWU. New universities could target a place in the Excellence indicator of the Webometrics rankings, which lists 5,777 institutions as having some sort of research presence.

As for the teaching mission, the most directly relevant indicators are the QS employer survey in the world rankings, the QS Graduate Employability Rankings, and the Global University Employability Ranking published by THE.


Governments and universities would be advised not to get too excited about a strong performance in the rankings. What the rankings have given, the rankings can take away.


            

Monday, May 29, 2017

Ten Universities with a Surprisingly Large Research Impact

Every so often newspapers produce lists of universities that excel in or are noteworthy for something. Here is a list of ten universities that, according to Times Higher Education (THE), have achieved remarkable success in the world of global research. In a time of austerity when the wells of patronage are running dry, they should be an example to us all: they have achieved a massive global research impact, measured by field-normalised citations, despite limited funding, minimal reputations and few or very few publications. The source is the THE World and Asian rankings citations indicator.

1. First on the list is Alexandria University in Egypt, 4th in the world for research impact with a near perfect score in 2010-11.

2. In the same year Hong Kong Baptist University was tenth for research impact, ahead of the University of Chicago and the University of Hong Kong.

3. In 2011-12 Royal Holloway, University of London, was in 12th place, ahead of any other British or European institution.

4. The National Research Nuclear University MEPhI, in Moscow, a specialist  institution, was top of the table for citations in 2012-13.

5. In 2013-14 and 2014-15 Tokyo Metropolitan University had a perfect score of 100 for citations, a distinction shared only with MIT.

6. In 2014-15 Federico Santa Maria Technical University was sixth in the world for research impact and first in Latin America with a near perfect score of 99.7.

7. In the same year Bogazici University in Turkey reached the top twenty for research impact.

8. St George's, University of London, was the top institution in the world for research impact in 2016-17.

9. In that year Anglia Ruskin University, a former art school, was tenth for this metric, equal to Oxford and well ahead of the other university in Cambridge.

10. Last year's THE Asian rankings saw Vel Tech University in Chennai achieve the highest impact of any Asian university. 

Sunday, May 28, 2017

The View from Leiden

Ranking experts are constantly warning about the grim fate that awaits the universities of the West if they are not provided with all the money that they want and given complete freedom to hire staff and recruit students from anywhere that they want. If this does not happen they will be swamped by those famously international Asian universities dripping with funds from indulgent patrons.

The threat, if we are to believe the prominent rankers of Times Higher Education (THE), QS and Shanghai Ranking Consultancy, is always looming but somehow never quite arrives. The best Asian performer in the THE world rankings  is the National University of Singapore (NUS) in 24th place followed by Peking University in 29th. The QS World University Rankings have NUS 12th, Nanyang Technological University 13th and Tsinghua University 24th.  The Academic Ranking of World Universities published in Shanghai puts the University of Tokyo in 20th place and Peking University in 71st.

These rankings are in one way or another significantly biased towards Western European and North American institutions and against Asia. THE has three separate indicators that measure income, adding up to a combined weighting of 10.75%. Both QS and THE have reputation surveys. ARWU gives a 30% weighting to Nobel and Fields award winners, some of them from several decades ago.

Let's take a look at a set of rankings that is technically excellent, namely the Leiden Ranking. The producers do not provide an overall score. Instead it is possible to create a variety of rankings: total publications, publications by subject group, publications in the top 50%, 10% and 1% of journals. Users can also select fractional or full counting and change the minimum threshold for the number of publications.
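For readers unfamiliar with the counting options, the sketch below shows the basic difference between full and fractional counting for an invented pair of papers; Leiden's actual implementation assigns fractions at the author-address level, so this is only a simplified illustration.

```python
# Full vs fractional counting of publications, illustrated with invented
# papers. Leiden's real method works at the author-address level; this is
# a simplified sketch of the basic idea.
from collections import Counter

papers = [
    {"title": "Paper A", "institutions": ["Tsinghua", "MIT", "Harvard"]},
    {"title": "Paper B", "institutions": ["Tsinghua"]},
]

full = Counter()
fractional = Counter()

for paper in papers:
    insts = paper["institutions"]
    for inst in insts:
        full[inst] += 1                      # full counting: each institution gets 1
        fractional[inst] += 1 / len(insts)   # fractional: the credit is shared

print(dict(full))        # {'Tsinghua': 2, 'MIT': 1, 'Harvard': 1}
print(dict(fractional))  # {'Tsinghua': 1.33, 'MIT': 0.33, 'Harvard': 0.33} (approx.)
```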

Here is the top ten, using the default settings: publications 2012-15, fractional counting, minimum threshold of 100 papers. Positions for the 2006-09 period are in brackets.

1. Harvard (1)
2. Toronto (2)
3. Zhejiang (14)
4. Michigan (3)
5. Shanghai Jiao Tong (37)
6. Johns Hopkins (5)
7. Sao Paulo (8)
8. Stanford (9)
9. Seoul National University (23)
10. Tokyo (4)

Tsinghua University is 11th, up from 32nd in 2006-09 and Peking University is 15th, up from 54th. What is interesting about this is not just that East Asian universities are moving into the highest level of research universities but how rapidly they are doing so.

No doubt there are many who will say that this is a matter of quantity and that what really counts is not the number of papers but their reception by other researchers. There is something to this. If we look at publications in the top 1% of journals (by frequency of citation), the top ten include six US universities headed by Harvard, three British and one Canadian.

Tsinghua is 28th, Zhejiang is 50th, Peking 62nd, Shanghai Jiao Tong 80th and Seoul National University 85th. Right now it looks like publication in the most reputed journals is dominated by English-speaking universities. But in the last few years Chinese and Korean universities have advanced rapidly: Peking from 119th to 62nd, Zhejiang 118th to 50th, Shanghai Jiao Tong 112th to 80th, Tsinghua 101st to 28th, Seoul National University 107th to 85th.

It seems that in a few years East Asia will dominate the elite journals and will take the lead for quality as well as quantity.

Moving on to subject group rankings, Tsinghua University is in first place for mathematics and computer sciences. The top ten consists of nine Chinese and one Singaporean university. The best US performer is MIT in 16th place, the best British Imperial College London in 48th.

When we look at the top 1% of journals, Tsinghua is still on top, although MIT moves up to 4th place and Stanford is 5th.

The Asian tsunami has already arrived. East Asian, mainly Chinese and Chinese diaspora, universities, are dominant or becoming dominant in the STEM subjects, leaving the humanities and social sciences to the US.

There will of course be debate about what happened. Maybe money had something to do with it. But it also seems that western universities are becoming much less selective about student admissions and faculty appointments. If you admit students who write #BlackLivesMatter 100 times on their application forms or impose ideological tests for faculty appointment and promotion, you may have succeeded in imposing political uniformity but you will have serious problems trying to compete with the Gaokao-hardened students and researchers of Chinese universities.

Monday, May 22, 2017

Arab University Rankings: Another Snapshot from Times Higher Education

Times Higher Education (THE) has produced a "snapshot" ranking of Arab universities extracted from its World University Rankings. There has been no change in the indicators or their weighting. Only 28 universities are included, which raises questions about how suitable THE's methodology is for regions like the Middle East and North Africa.

This is an improvement over a remarkable MENA snapshot that THE did in 2015 which put Texas A and M University Qatar in first place by virtue of one half-time faculty member who was listed as a contributor to a multi-author, multi-cited CERN paper.

The top five universities this time are King Abdulaziz University (KAU), Saudi Arabia, King Fahd University of Petroleum and Minerals, Saudi Arabia, King Saud University, Saudi Arabia, Khalifa University of Science, Technology and Research, UAE, and Qatar University.

The top three places are held by Saudi institutions. So how did they do it? According to an article by a THE editor for the World Economic Forum it was all due to money and internationalisation. 

Up to a point that is correct. The sad story of Trinity College Dublin's botched data submission shows roughly how much a given reported increase can affect a university's overall rank: about 5 million Euro in reported total income (with proportionate increases for research income and income from industry) can move a university in the middle reaches of the table up a spot in the overall rankings.

But does that explain KAU in top place? It did get a high score, 92.1, for international orientation, but five other Arab universities did better. For teaching it was third, for industry income third, and for research seventh. What actually made the difference was the citations indicator. KAU had a score of 93.3, far ahead of the next contender, Jordan University of Science and Technology with 50.2.

KAU's research impact is, according to THE, second in Asia only to the shooting star of India, Vel Tech University, whose single self-citing prodigy supposedly had a greater impact than the whole of any other Asian university. KAU's citations score was the result of the massive recruitment of adjunct faculty, 40 at the last count, who list KAU as a second affiliation. How much time they put in at KAU is uncertain, but the Shanghai rankings calculated that highly cited researchers spent an average of 16% of their time at the university of their second affiliation.

It is bad enough that THE puts so much emphasis on income and internationalisation in its methodology, promoting the diversion of resources from things like primary and pre-school education, adult literacy and alternatives to oil exports. To encourage universities to rise in the rankings by hiring adjunct faculty whose contribution is uncertain is very irresponsible. It would be a good thing if this snapshot were ignored.