The Latest Rankings
The latest Times Higher Education (THE) world rankings
have just been announced at a summit in New York. Around the world, political leaders, mass media, and academics have been proclaiming their delight at their universities’ rise in the rankings. Australian universities are especially
fascinated by them, sometimes to the point of unhealthy obsession.
Study Australia reports that "Australia shines again." Insider Guides finds it "particularly exciting" that six Australian universities in the top 200 have climbed the charts. Monash University is celebrating how it has "skyrocketed" 13 places, further proof of its world-leading status.
It is unfortunate that Australian media and administrators
are so concerned with these rankings. They are not the only global rankings and
certainly not the most reliable, although they are apparently approved by
universities in the traditional elite or their imitators. They are not totally without value, but they do
need a lot of deconstruction to get to any sort of meaningful insight.
Transparency
One problem with the THE rankings, to be painfully repetitive,
is that they are far from transparent. Three of their five current “pillars”
consist of more than one indicator, so we cannot be sure exactly what is
contributing to a rise or fall. If, for example, a university suddenly improves
for THE’s teaching pillar that might be because its income has increased, or
the number of faculty has increased, or the number of students has decreased,
or it has awarded more doctorates or fewer bachelor’s degrees, or it has got
more votes in THE’s reputation survey, or a combination of two or more of
these.
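To make that ambiguity concrete, here is a minimal sketch in Python of how a composite pillar score behaves. The sub-indicators and weights are invented for illustration, not THE’s published ones; the point is only that quite different underlying changes can produce an identical movement in the single published number.

```python
# Hypothetical sub-indicators and weights, invented for illustration only;
# not THE's published methodology.
def teaching_pillar(reputation, staff_student, doctorates_awarded,
                    doctorate_bachelor, income_per_staff,
                    weights=(0.5, 0.15, 0.2, 0.075, 0.075)):
    """Weighted blend of sub-indicator scores, each already scaled 0-100."""
    scores = (reputation, staff_student, doctorates_awarded,
              doctorate_bachelor, income_per_staff)
    return sum(w * s for w, s in zip(weights, scores))

print(teaching_pillar(60, 40, 50, 55, 45))  # baseline: 53.5
print(teaching_pillar(62, 40, 50, 55, 45))  # reputation votes rise: 54.5
print(teaching_pillar(60, 40, 55, 55, 45))  # more doctorates instead: also 54.5
```

From the published pillar score alone, those two one-point rises are indistinguishable.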
THE's citations indicator, which purportedly measures
research impact or research quality, stands alone but it is also extremely
opaque. To calculate a university’s score for citations you have to work out
the number of citations in 8,000 “boxes” (300-plus fields multiplied by five
years of publication multiplied by five types of documents) and compare them to
the world average. Add them up and then apply the country bonus, the square
root of the national impact score, to half of the university’s score. Then
calculate Z scores. For practical purposes this indicator is a black box into
which masses of data disappear, are chopped up, shuffled around, processed, reconstituted
and then turned into numbers and ranks that are, to say the least, somewhat
counter-intuitive.
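As a rough way of seeing what that involves, here is a minimal sketch in Python following the description above. The cell structure, the example data, and the exact form of the country adjustment are simplifying assumptions, not THE’s actual procedure; it only shows the shape of the calculation: field-normalised ratios, a country bonus applied to half the score, then standardisation.

```python
import math
from statistics import mean, pstdev

def normalised_impact(uni_cites_per_paper, world_cites_per_paper):
    """Average, over the cells (field x publication year x document type),
    of the university's citations per paper relative to the world average."""
    ratios = [uni_cites_per_paper[cell] / world_cites_per_paper[cell]
              for cell in uni_cites_per_paper
              if world_cites_per_paper.get(cell, 0) > 0]
    return mean(ratios)

def apply_country_bonus(uni_impact, country_impact):
    """Half the score is kept as-is; the other half is divided by the square
    root of the national impact, lifting universities in low-impact systems."""
    return 0.5 * uni_impact + 0.5 * uni_impact / math.sqrt(country_impact)

def standardise(adjusted_impacts):
    """Convert adjusted impacts into Z scores before they become ranks."""
    mu, sigma = mean(adjusted_impacts), pstdev(adjusted_impacts)
    return [(x - mu) / sigma for x in adjusted_impacts]

# Two invented cells for one university:
uni = {("oncology", 2021, "article"): 12.0, ("physics", 2020, "review"): 6.0}
world = {("oncology", 2021, "article"): 4.0, ("physics", 2020, "review"): 3.0}
impact = normalised_impact(uni, world)                   # (3.0 + 2.0) / 2 = 2.5
print(apply_country_bonus(impact, country_impact=0.64))  # 1.25 + 1.25 / 0.8 = 2.8125
```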
This indicator, which accounts for a 30% weighting, has
produced some remarkable results over the last decade, with a succession of
improbable institutions soaring into the upper reaches of this metric. This
year’s tables are no exception. The world leader is Arak University of Medical
Sciences, Iran, followed by Cankaya University, Turkey, Duy Tan University, Vietnam,
Golestan University of Medical Sciences, Iran, and Jimma University, Ethiopia.
Another two Iranian medical universities are in the top 25. They may not last
long. Over the last few years quite a few universities have appeared briefly at the top before slumping to a much lower position.
One of the more interesting things about the current success
of the THE rankings is the apparent suspension of critical thought among the superlatively
credentialed and accredited leaders of the academic world. One wonders how
those professors, rectors and deans who gather at the various summits,
seminars, webinars, and masterclasses would react to a graduate student who
wrote a research paper that claimed that Arak University of Medical Sciences
leads the world for “research quality”, Istanbul Technical University for
“knowledge transfer”, or Macau University of Science and Technology for
“international outlook”.
Volatility
Not only do the rankings lack transparency, they are also
extremely volatile. The top fifty list, or even the top one hundred, is
reasonably stable but after that THE has seen some quite remarkable and
puzzling ascents and descents. There have been methodological changes, and a big one is coming next year, but that alone does not explain why there should be
such dramatic changes. One cause of instability in the rankings is the
citations indicator which is constructed so that one or a few researchers,
often those working on the Gates-funded Global Burden of Disease Study (GBDS),
can have a massively disproportionate impact.
Another possible cause of volatility is that the number of
ranked institutions is not fixed. If the rankings expand, new universities will usually enter at the lower end of the scale. This lowers the mean score for each indicator and so affects the final score of every institution, since the standardised scores that appear in the published tables are based on means and standard deviations.
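A tiny worked example, with invented numbers, shows the mechanism: a standardised (Z) score depends on the mean and spread of every institution that is ranked, so a raw score that has not changed at all can still move once new, mostly lower-scoring entrants join the table.

```python
from statistics import mean, pstdev

def z_score(value, field):
    """Standardised score of one raw value relative to all ranked institutions."""
    return (value - mean(field)) / pstdev(field)

established = [70, 65, 60, 55, 50]           # raw scores before the ranking expands
expanded = established + [20, 18, 15, 12]    # new entrants, mostly low scorers

print(round(z_score(60, established), 2))    # 0.0 -- sits exactly on the old mean
print(round(z_score(60, expanded), 2))       # now positive: the mean has dropped
```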
There may be other reasons for the volatility of this year’s
rankings. Fluctuating exchange rates may have affected reported income data,
international student numbers may have fallen or recovered. Some
universities might have performed better in the surveys of teaching or
research.
Australian universities rising and falling
Some Australian universities appear to have been quite mobile
this year. In some cases, this has a lot to do with the citations indicator. Two years ago, Bond University was ranked in the 501-600 band and 26th in
Australia. Now it is tenth in Australia and in the world top 300, driven
largely by a remarkable rise in the citations score from 56.4 to 99.7. A lot of
that seems to have come from a small number of publications relating to the
2020 PRISMA statement which amassed a massive number of citations in 2021 and
2022.
Another example is Australian Catholic University. In 2018 it
was in the world 501-600 band and this year it is in the 251-300 band. This is mainly
due to an improvement in its citations score from 59.5 to 98.5, the result of a
series of articles between 2017 and 2020 related to the multi-author and
massively cited GBDS.
The problem with relying on citations to get ahead in the THE
rankings is that if the researchers who have been racking up the citations move
on or retire, the scores will eventually decline as their papers pass outside the period for counting publications. This might have happened with the University of Canberra, which has benefitted from GBDS papers published between 2015 and
2018. This year, however, the 2015 and 2016 papers no longer count, and the
result is that Canberra’s citation score has fallen from 98.6 to 92.6 and its
world rank from 170th to 250-300. A university might even start
falling just because its peers have started racking up scores of 99 plus for
citations.
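The mechanism can be sketched with invented figures: the window of publication years that counts slides forward each year, so a citation score propped up by a few very highly cited older papers falls once those papers drop out of scope.

```python
# Invented figures: (publication year, citations) for one university's papers.
papers = [(2015, 9000), (2016, 8500),   # a few massively cited GBDS-style papers
          (2019, 300), (2020, 250), (2021, 200)]

def counted_citations(papers, first_year, last_year):
    """Only papers published inside the counting window contribute."""
    return sum(c for year, c in papers if first_year <= year <= last_year)

print(counted_citations(papers, 2015, 2019))  # 17800: the big papers still count
print(counted_citations(papers, 2017, 2021))  # 750: the window has slid past them
```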
This is similar to the trajectory of quite a few
international universities that have risen and fallen in the wake of a few
highly cited papers such as Babol Noshirvani University of Technology, Iran,
the Indian Institute of Technology Ropar, the University of Occupational and
Environmental Health, Japan, Durban University of Technology, South Africa, and
Nova Southeastern University, USA.
Citations have a lot to do with Australia’s success in the
THE rankings. All the Australian universities in the world rankings have a higher score for citations than for research (which is measured by publications, reputation, and research income), and six have citation scores in the 90s. Compare that with Japan, where the highest citation score is 82.8 and leading universities do better for research than for citations. If THE had taken some
of the weight from citations and given it to research, Australian universities
might be in a different position.
Are the THE rankings any use?
Over the long term the THE rankings might have some value in
charting the general health of an institution or a national system. Should a
university fall steadily across several indicators despite changes in
methodology and despite proclaimed excellence initiatives, then that might be a
sign of systemic decline.
The success of Australian universities in the THE rankings might represent genuine progress but it is necessary to identify exactly why they are rising and how sustainable that progress is.
The rankings certainly should not be used to punish or reward researchers and
teachers for “success” or “failure” in the rankings, to allocate funds, or to attract
talented faculty or wealthy students.
Other rankings
The THE rankings are not the only game in town or in the
world. In fact, for most purposes there are several rankings that are no worse
and probably a lot better than THE. It would be a good idea for Australian universities, students and stakeholders to shop around a bit,
For a serious analysis of research quantity and quality there are straightforward research rankings produced by universities or research centres, such as the Shanghai Ranking, the CWTS Leiden Ranking, University Ranking by Academic Performance, or the National Taiwan University ranking. They can be a bit boring since they do not change very much from
year to year, but they are at least reasonably competent technically and they rely on
data that is fairly objective and transparent.
For prospective graduate and professional students, the
research-based rankings might be helpful since the quality of research is
likely to have an effect, even if an unpredictable one, on the quality of
postgraduate and professional instruction.
For undergraduate students there is not much that is directly relevant to their needs. The QS employability rankings, the employer opinion survey in the QS world rankings, the Emerging/Trendence employability rankings, and the student quality section in the Center for World University Rankings tables, now based in the Emirates, can all provide some useful insights.
Next year?
It seems that THE has finally steeled itself to introduce a
set of changes. The precise effect is unclear except that the world rankings
look to be getting even more complex and even more burdensome for the underpaid
drones toiling away to collect, process and transmit the data THE requires of
its “customers”. It is not clear exactly how this will affect Australian universities.
No doubt Australian deans and rectors will be wondering what lies ahead for them in the 2024 rankings coming next year. But not to worry. THE is offering
“bespoke” shadow rankings that will tell them how they would have done if the new methodology had been applied this year.