Times Higher Education (THE) has just published its 2017 reputation rankings, which include 100 universities. These are based on a survey distributed between January and March of this year and will be included, after standardisation, in the 2017-18 (or 2018) World University Rankings scheduled for publication in a few months. In the forthcoming world rankings the reputation survey will be divided into two metrics, one in the research indicator group and one in the teaching group, with a combined weighting of 33 percent. The survey asked about research and postgraduate teaching, but since the correlation between the answers to these two questions is very high there is effectively only one indicator.
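To see why two highly correlated survey questions behave like a single indicator, here is a minimal sketch in Python. The 15%/18% split of the 33 percent and the scores below are assumptions for illustration, not figures taken from THE's published data.

```python
# Illustrative sketch only: two reputation metrics with a combined 33% weight.
# When the two scores track each other closely, the pair contributes to the
# overall ranking almost exactly as one 33% indicator would.

teaching_rep_weight = 0.15   # assumed share of the overall ranking
research_rep_weight = 0.18   # assumed share of the overall ranking

# Hypothetical standardised survey scores (0-100) for one university;
# nearly identical because the two survey answers correlate strongly.
teaching_rep_score = 72.0
research_rep_score = 74.0

reputation_contribution = (teaching_rep_weight * teaching_rep_score +
                           research_rep_weight * research_rep_score)
print(f"Reputation contribution: {reputation_contribution:.2f} of 33 possible points")
```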
The QS world rankings released last week included scores derived from two surveys, one of academics with a 40% weighting and one of employers with 10%. The academic survey was concerned only with research.
The methodology of the THE survey is relatively simple. The respondents are drawn from the database of researchers with publications in Scopus-indexed journals, in other words those who get to be listed as corresponding authors. THE claims that this makes them experienced senior researchers, although in many parts of the world being a member or leader of a research team often has more to do with politics than merit.
In contrast, the QS methodology has changed quite a lot over the last few years. It began with scouring the mailing lists of World Scientific, a Singapore-based academic publisher with links to Imperial College London, and then added various other channels, including lists supplied by institutions and sign-up facilities for potential respondents. The result is a survey that appears more inclusive than THE's, with more respondents from outside the elite, but one whose validity may be rather suspect.
The THE ranking found that there were six super-brand universities that stood out from everyone else: Harvard, MIT, Stanford, Cambridge, Oxford and Berkeley. There was a big gap between Berkeley and number seven, Princeton, and after that the long smooth slope continues.
After that, the ranking is dominated by English-speaking universities, with the USA contributing 42, the UK 10, Canada 3 and Australia 3. East Asia and the Chinese diaspora (Hong Kong, Taiwan and Singapore) are fairly well represented, while South and Central Asia, the Middle East and Africa are absent.
For any survey a great deal depends on how the forms are distributed. Last year, the THE survey had a lot more responses from the social sciences, including economics and business studies, and fewer from the arts and the humanities, and that contributed to some Asian universities rising and some British ones falling.
Such falls are typically attributed in the education establishment media to anxiety about the looming horrors of Brexit, the vicious snatching of research funds and the rising tide of hostility to international students.
This year British universities did a bit better in the THE reputation ranking, with five going up, three staying put and three going down. No doubt we will soon hear about the invigorating effects of Brexit and the benefits of austerity. Perhaps it might also have something to do with the proportion of survey responses from the arts and humanities going up from 9% to 12.5%, something that would surely benefit UK universities.
The QS reputation indicator has the same universities in the top six but not in quite the same order: Cambridge, fourth in THE, is second in the QS indicator. After that it starts looking very different. Number seven is the University of Tokyo, which THE puts in 11th place for academic reputation. Other Asian universities also do much better in the QS indicator: the National University of Singapore is 11th (27th in THE), Nanyang Technological University Singapore is 50th (THE 81-90 band), Peking University is 14th (THE 17th) and Chulalongkorn University Thailand is 99th (not in the THE top 100).
It is noticeable that Latin American universities such as the University of Sao Paulo, the University of Buenos Aires and the Pontifical Catholic University of Chile get a higher placing in the QS indicator than they do in the THE ranking, as do some Southern European universities such as Barcelona, Sapienza and Bologna.
The THE reputation ranking gives us a snapshot of the current views of the world's academic elite and probably underestimates the rising universities of Greater China and Korea. QS casts its net wider and has probably caught a few of tomorrow's world-class institutions, although I suspect that the Latin American high fliers, apart from Sao Paulo, are very overrated.
Thursday, June 15, 2017
The Abuse and Use of Rankings
International university rankings have become a substantial industry since the first appearance of the Shanghai rankings (Academic Ranking of World Universities or ARWU) back in 2003. The various rankings are now watched closely by governments and the media, and for some students they play a significant role in choosing a university. They have become a factor in national higher education policies and are an important element in the race to enter and dominate the lucrative transnational higher education market. In Malaysia a local newspaper, Utusan Malaysia, recently had a full page on the latest QS world rankings, including a half page of congratulations from the Malaysian Qualifications Agency for nine universities that are part of a state-backed export drive.
Reaction to international rankings often goes to one of two extremes: either outright rejection or uncritical praise, sometimes descending into grovelling flattery that would make Uriah Heep ashamed (the revered QS rankings, Phil Baty a thought leader). The problem with the first, which is certainly very understandable, is that it is unrealistic. If every international ranking suddenly stopped publication we would just have, as we did before, an informal ranking system based largely on reputation, stereotypes and prejudice.
On the other hand, many academics and bureaucrats find rankings very useful. It is striking that university administrators, the media and national governments have been so tolerant of some of the absurdities that Times Higher Education (THE) has announced in recent years. Recently, THE's Asian rankings had Veltech University as the third best university in India and the best in Asia for research impact, the result of exactly one researcher assiduously citing himself. This passed almost unnoticed in the Indian press and seems to have aroused no great interest among Indian academics apart from a couple of blog posts. Equally, when Universiti Tunku Abdul Rahman (UTAR), a private Malaysian university, was declared to be the second best university in the country and the best for research impact, on the strength of a single researcher's participation in a high-profile global medical project, there was no apparent response from anyone.
International rankings have also become a weapon in the drive by universities to maintain or increase their access to public funds. British and Irish universities often complain that their fall in the rankings is all the fault of the government for not providing enough money. Almost any result in the better known rankings can be used to prop up the narrative of western universities starved of funds and of international researchers and students.
Neither of these two views is really valid. Rankings can tell us a great deal about the way that higher education and research are going. The early Shanghai rankings indicated that China was a long way behind the West and that research in continental Europe was inferior to that in the USA. A recent analysis by Nature Index shows that American research is declining and that the decline is concentrated in diverse Democrat-voting states such as California, Massachusetts, Illinois and New York.
But if university rankings are useful they are not equally so, and neither are the various indicators from which they are constructed.
Ranking indicators that rely on self-submitted information should be mistrusted. Even if everybody concerned is fanatically honest, there are many ways in which data can be manipulated, massaged, refined, defined and redefined, analysed and distorted as it makes its way from branch campuses, affiliated colleges and research institutes through central administration to the number-munching programs of the rankers.
Then of course there are the questionable validation processes within the ranking organisations. There was a much-publicised case concerning Trinity College Dublin, where for two years in a row the rankers missed an error of orders of magnitude in the data submitted for three income indicators.
Any metric that measures inputs rather than outputs should be approached with caution, including THE's measures of income, which amount to a total weighting of 10.75%. THE and QS both have indicators that count staff resources. It is interesting to have this sort of information, but there is no guarantee that having loads of money or staff will lead to quality, whether of research, teaching or anything else.
Reputation survey data is also problematic. It is obviously subjective, although that is not necessarily a bad thing, and everything depends on the distribution of responses between countries, disciplines, subjects and levels of seniority. Take a look at the latest QS rankings and the percentages of respondents from various countries: Canada has 3.5% of survey respondents and China has 1.7%; Australia has 4% and Russia 4.2%; Kazakhstan has 2.1% and India 2.3%.
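To see why that distribution matters, here is a minimal sketch with invented numbers. The country shares echo the kind of figures quoted above, but the ratings are entirely hypothetical; the point is only that the aggregate depends on who answers the survey, not just on what they think.

```python
# Minimal sketch (made-up numbers): how the mix of survey respondents
# can tilt a reputation score when different countries rate the same
# university differently.

respondent_share = {"Country A": 0.042, "Country B": 0.017}   # hypothetical shares of all responses
mean_rating      = {"Country A": 55.0,  "Country B": 80.0}    # hypothetical average ratings given

# Score when each country's opinion counts in proportion to its respondents
weighted = sum(respondent_share[c] * mean_rating[c] for c in respondent_share)
weighted /= sum(respondent_share.values())

# Score when each country's opinion counts equally
unweighted = sum(mean_rating.values()) / len(mean_rating)

print(f"Respondent-weighted score: {weighted:.1f}")
print(f"Equal-weighted score:      {unweighted:.1f}")
```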
There ought to be a sensible middle road between rejecting rankings altogether and passively accepting the errors, anomalies and biases of the popular rankers.
Universities and governments should abide by a self-denying ordinance and reject ranking results that challenge common sense or contradict accepted national rankings. I remember, a few years ago, someone at Duke University saying that they were puzzled why the THES-QS rankings put the school in first place for faculty-student ratio when this contradicted data in the US News rankings. Few, if any, major universities or higher education ministers seem to have done anything like this lately.
It would also be a good idea if universities and governments stopped looking at rankings holistically and started setting targets according to specific indicators. High-flying research universities could refer to the Leiden Ranking, the Nature Index or the Nature and Science and publications indicators in ARWU. New universities could target a place in the Excellence indicator of the Webometrics rankings, which lists 5,777 institutions as having some sort of research presence.
As for the teaching mission, the most directly relevant indicators are the QS employer survey in the world rankings, the QS Graduate Employability Rankings, and the Global University Employability Ranking published by THE.
Governments and universities would be well advised not to get too excited about a strong performance in the rankings. What the rankings have given, the rankings can take away.