Wednesday, October 23, 2024

Are Australian universities really on a precipice?

Times Higher Education (THE) recently published the latest edition of its World University Rankings (WUR), which contained bad news for Australian higher education. The country’s leading universities have fallen down the rankings, apparently because of declining scores for research and teaching reputation and for international outlook, that is, international students, staff, and collaboration.

THE has reported that Angel Calderon of RMIT said the “downturn had mainly been driven by declining scores in THE’s reputation surveys” and warned that there was worse to come.

Australian universities have responded by demanding that the cap on international students be lifted to avoid financial disaster. Nobody seems to consider how the universities got to the point where they could not survive without recruiting researchers and students from abroad.

It is, however, a mistake to predict catastrophe from a single year’s ranking. Universities have thousands of faculty, employees, and students and produce thousands of patents, articles, books, and other outputs. If a ranking produces large-scale fluctuations over the course of a year, that might well be due to deficiencies in the methodology rather than any sudden change in institutional quality.

There are now several global university rankings that attempt to assess universities' performance in one way or another. THE's is not the only one, nor is it the best; in some ways it is the worst, or nearly so. For universities to tie their public image to a single ranking, or even a single indicator, especially one as flawed as THE's, is quite risky.

To start with, THE is very opaque. Unlike QS, US News, National Taiwan University, Shanghai Ranking, Webometrics, and other rankings, THE does not provide ranks or scores for each of the metrics that it uses to construct the composite or overall score. Instead, they are bundled together in five “pillars”. It is consequently difficult to determine exactly what causes a university to rise or fall in any of these pillars. For example, an improvement in the teaching pillar might be due to increased institutional income, fewer students, fewer faculty, an improved reputation for teaching, more doctorates, fewer bachelor degrees awarded, or some combination of these.
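To see why this matters, consider a toy calculation. The sub-metrics and weights below are purely illustrative, not THE's published ones; the point is only that very different underlying changes can produce identical movement in a bundled pillar score.

```python
# Illustrative only: hypothetical sub-metric weights for a "teaching"-style
# pillar. These are NOT THE's actual weights; the point is that bundling
# hides the cause of a change.

WEIGHTS = {
    "teaching_reputation": 0.60,
    "staff_student_ratio": 0.20,
    "doctorates_per_staff": 0.10,
    "institutional_income": 0.10,
}

def pillar_score(metrics: dict) -> float:
    """Weighted sum of normalised (0-100) sub-metric scores."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

# Two quite different institutional profiles...
scenario_a = {"teaching_reputation": 70, "staff_student_ratio": 50,
              "doctorates_per_staff": 40, "institutional_income": 60}
scenario_b = {"teaching_reputation": 55, "staff_student_ratio": 80,
              "doctorates_per_staff": 70, "institutional_income": 60}

# ...produce exactly the same published pillar score.
print(pillar_score(scenario_a))  # 62.0
print(pillar_score(scenario_b))  # 62.0
```

Since only the 62.0 is published, an outside observer cannot tell which profile, or which change in a profile, lies behind it.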

Added to this are some very dubious results from the THE world and regional rankings over the years. Alexandria University, Aswan University, Brighton and Sussex Medical School, Anglia Ruskin University, Panjab University, Federico Santa Maria Technical University, Kurdistan University of Medical Sciences, and the University of Peradeniya have all, at one time or another, been among the supposed world leaders for research quality as measured by citations. Leaders for industry income, which is claimed to reflect knowledge transfer, have included Anadolu University, Asia University, Taiwan, the Federal University of Itajubá, and Makerere University.

The citations indicator has been reformed and is now the research quality indicator, but there are still some oddities at its upper level, such as Humanitas University, Vita-Salute San Raffaele University, Australian Catholic University, and St George’s, University of London, probably because they participated in a few highly cited multi-author medical or physics projects.

It now seems that the reputation indicators in the THE WUR are producing results that are similarly lacking in validity. Altogether, reputation counts for 33% of the overall score, divided between the research and teaching pillars. A truncated version of the survey results, listing the top 200 universities but giving exact scores for only fifty of them, was published earlier this year, and the full results were incorporated into the recent world rankings.

Until 2021, THE used the results of a survey conducted by Elsevier among researchers who had published in journals in the Scopus database. After that, THE brought the survey in-house and ran it itself. That may have been a mistake. THE is brilliant at convincing journalists and administrators that it is a trustworthy judge of university quality, but it is not so good at actually assessing that quality, as the examples above demonstrate.

After bringing the survey in-house, THE increased the number of respondents from 10,963 in 2021 to 29,606 in 2022, 38,796 in 2023, and 55,689 in 2024. This is, in effect, a different kind of survey, since the new influx of respondents is likely to include proportionally fewer researchers from countries like Australia. One might also ask how such a significant increase was achieved.

Another issue is the distribution of survey responses by subject. In 2021, a THE post on the reputation ranking methodology gave the distribution of responses among academic fields and the benchmarks by which the responses were rebalanced. So, while 9.8% of responses came from computer science, their weight was reduced to match the subject's 4.2% share of international researchers. It seems that this information has not been provided for the 2022 or 2023 reputation surveys.
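What such rebalancing might look like in practice is sketched below. The ratio-based mechanics are my assumption, since THE has not published its exact procedure, but the two shares are the 2021 figures just quoted.

```python
# A minimal sketch of rebalancing survey responses by subject, assuming a
# simple ratio-based reweighting. THE has not published its exact procedure;
# the two shares below are the 2021 figures quoted above.

respondent_share = {"computer_science": 0.098}  # share of survey responses
target_share     = {"computer_science": 0.042}  # share of world researchers

def response_weight(field: str) -> float:
    """Weight per response so that fields match the target mix."""
    return target_share[field] / respondent_share[field]

w = response_weight("computer_science")
print(f"Each computer science response counts as {w:.2f} of a vote")
# -> about 0.43, i.e. computer science votes are heavily downweighted
```

Without the published distributions, there is no way to check how heavily any field's votes were scaled up or down in later surveys.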

In 2017, I noted that Oxford’s reputation score tracked the percentage of THE survey responses from the arts and humanities, rising when there were more respondents from those fields and falling when there were fewer. The withholding of information about the distribution of responses by subject is therefore significant, since shifts in that distribution could affect the ranking of Australian universities.

Then there is the issue of the geographical distribution of responses. THE has a long-standing policy of recalibrating survey responses to align with each country's number of researchers, according to data submitted to and published by UNESCO.

There are good reasons to be suspicious of data emanating from UNESCO, some of which have been presented by Sasha Alyson.

But even if the data were totally accurate, there would still be a problem: a university’s rise or fall in reputation might simply be due to a change in the relative number of researchers reported by government departments to the data-crunching machines at THE.

According to UNESCO, the number of researchers per million inhabitants in Australia and New Zealand fell somewhat between 2016 and 2021. On the other hand, the number rose for Western Asia, Southern Asia, Eastern Asia, Latin America and the Caribbean, and Northern Africa.

If these changes are accurate, it means that some of Australia's declining research reputation is due to the increase in researchers in other parts of the world and not necessarily to any decline in the quality or quantity of its research.
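A minimal sketch of that mechanism follows, under two stated assumptions: that a country's total voting power is scaled to its reported researcher count, and that an Australian university's votes come mostly from Australian respondents. Even with identical voting from one year to the next, the weighted share falls when researcher numbers rise elsewhere.

```python
# A sketch of country recalibration, assuming each country's voting power is
# scaled to its UNESCO researcher count and that an Australian university's
# votes come mostly from Australian respondents. All numbers are invented;
# only the direction of the effect matters.

def weighted_votes(raw_votes: int, researchers: int, respondents: int) -> float:
    """Scale a country's raw votes by researchers per survey respondent."""
    return raw_votes * (researchers / respondents)

# Year 1: Australia casts 100 votes, the rest of the world 900.
au_y1   = weighted_votes(100, researchers=100_000, respondents=1_000)
rest_y1 = weighted_votes(900, researchers=900_000, respondents=9_000)
print(au_y1 / (au_y1 + rest_y1))    # 0.10

# Year 2: identical voting, but other countries report more researchers.
au_y2   = weighted_votes(100, researchers=100_000, respondents=1_000)
rest_y2 = weighted_votes(900, researchers=1_200_000, respondents=9_000)
print(au_y2 / (au_y2 + rest_y2))    # ~0.077 -- Australia's share falls anyway
```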

Looking at some of the universities that did well in the recent reputation survey raises further concerns about THE's reputation indicators.

Earlier this year, THE announced that nine Arab universities had achieved the distinction of reaching the top 200 of the reputation rankings, although none reached the top 50, where exact scores and ranks were given. THE admitted that the reputation of these universities was regional rather than global. In fact, as some observers noted at the time, it was probably less than regional and primarily national.

It was not the rise of Arab universities in the reputation rankings that was disconcerting in itself. Quite a few leading universities from that region have begun to produce significant numbers of papers, citations, and patents and to attract the attention of international researchers, but they were not among those doing so well in THE’s reputation rankings.

Then, last May, THE announced that it had detected signs of “possible relationships being agreed between universities” and that steps would be taken, although not, it would seem, in time for the recent WUR.

More recently, a LinkedIn post by Egor Yablonsky, CEO of E-Quadratic Science & Education, reported that a few European universities had reputation ranks significantly higher than their positions in the overall world rankings.

Another reason for Australia to be cautious about the THE rankings and their reputation metrics is that Australian universities rank much lower in the THE reputation rankings than they do for Global Research Reputation in the US News (USN) Best Global Universities or for Academic Reputation in the QS world rankings, as the table below shows.

In contrast, some French, Chinese, and Emirati universities do noticeably better in the THE reputation rankings than they do in QS or USN.


Table: Ranks of leading Australian universities

| University  | THE reputation 2023 | USN global research reputation 2024-2025 | QS academic reputation 2025 |
|-------------|---------------------|------------------------------------------|-----------------------------|
| Melbourne   | 51-60               | 43                                       | 21                          |
| Sydney      | 61-70               | 53                                       | 30                          |
| ANU         | 81-90               | 77                                       | 36                          |
| Monash      | 81-90               | 75                                       | 78                          |
| Queensland  | 91-100              | 81                                       | 50                          |
| UNSW Sydney | 126-150             | 88                                       | 43                          |

It would be unwise to put too much trust in the THE reputation survey or in the world rankings where it has nearly a one-third weighting. There are some implausible results this year, and it stretches credibility that the American University of the Middle East has a better reputation among researchers than the University of Bologna, National Taiwan University, the Technical University of Berlin, or even UNSW Sydney. THE has admitted that some of these results may be anomalous, and it is likely that some universities will fall after THE takes appropriate measures.

Moreover, the reputation scores and ranks for the leading Australian universities are significantly lower than those published by US News and QS. It seems very odd that Australian universities are embracing a narrative that comes from such a dubious source and is at odds with other rankings. It is undeniable that universities in Australia are facing problems. But it is no help to anyone to let dubious data guide public policy.

So, please, will all the Aussie academics and journalists having nervous breakdowns relax a bit and read some of the other rankings, or just wait until next year, when THE will probably revamp its reputation metrics?
