Recently, Universiti Tunku Abdul Rahman (UTAR), a private
Malaysian university, welcomed what appeared to be an outstanding performance
in the Times Higher Education (THE) Asia University Rankings, followed by a good score in the magazine’s Young
University Rankings. This has been interpreted as a remarkable achievement not
just for UTAR but also for Malaysian higher education in general.
In the Asian rankings, UTAR is ranked in the top 120 and
second in Malaysia behind Universiti Malaya (UM) and ahead of the major
research universities, Universiti Sains Malaysia, Universiti Kebangsaan Malaysia
and Universiti Putra Malaysia.
This is in sharp contrast to other rankings. The research-based University Ranking by Academic Performance (URAP), published by Middle East Technical University, puts UTAR 12th in Malaysia and 589th in Asia. The Webometrics ranking, which is mainly web-based with a single research indicator, has it 17th in Malaysia and 651st in Asia.
The QS rankings, known to be kind to South East Asian universities, put UTAR in the 251-300 band for Asia and joint 14th in Malaysia, behind places like Taylor’s University and Multimedia University and in the same band as Universiti Malaysia Perlis and Universiti Malaysia Terengganu.
UTAR does not appear in the Shanghai rankings or the Russian Round University
Rankings.
Clearly, THE is the odd man out among rankings in its assessment of UTAR. I suspect that, if challenged, a spokesperson for THE might say that this is because it measures things other than research. That is very debatable.
Bahram Bekhradnia of the Higher Education Policy Institute has argued in a widely-cited report that these rankings are of little value
because they are almost entirely research-orientated.
In fact, UTAR did not perform so well in the THE Asian
rankings because of teaching, internationalisation or links with industry. It
did not even do well in research. It did well because of an “outstanding” score
for research impact and it got that score because of the combination of a
single obviously talented researcher with a technically defective methodology.
Just take a look at UTAR’s scores for the various components of the THE Asian rankings. For Research, UTAR got a very low score of 9.6, the lowest of the nine Malaysian universities featured in these rankings (100 represents the top score in all the indicators). For Teaching, it got a score of 15.9, also the lowest of the ranked Malaysian universities.
For International Orientation, it got a score of 33.2. This was not quite the worst in Malaysia: Universiti Teknologi MARA (UiTM), which does not admit non-bumiputra Malaysians, let alone international students, did worse.
For Industry Income UTAR’s score was 32.9, again surpassed by
every Malaysian university except UiTM.
So how on earth did UTAR manage to get into the top 120 in Asia
and second in Malaysia?
The answer is that it got an “excellent” score of 56.7 for Research
Impact, measured by field-normalised citations, higher than every other Malaysian
university, including UM, in these rankings.
That score is also higher than those of several major international research universities such as National Taiwan University, the Indian Institute of Technology Bombay, Kyoto University and Tel Aviv University. That alone should make the research impact score look very suspicious. Also, compare it with the low score for Research, which combines three metrics: research reputation, research income and publications. Somehow UTAR has managed to have a huge impact on the research world even though it receives little money for research, does not have much of a reputation for research, and does not publish very much.
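To see why a small publication base makes this indicator so volatile, note that field normalisation roughly means dividing each paper’s citations by the world average for comparable papers (same field, year and document type) and then averaging the ratios. A simplified sketch, with hypothetical numbers of my own invention rather than THE’s actual data, shows how a single heavily cited paper can dominate the average:

```python
# A simplified sketch (hypothetical numbers, not THE's actual data) of
# field-normalised citation impact: each paper's citations are divided by
# the world average for comparable papers, and the ratios are averaged.
def normalised_impact(papers):
    """papers: list of (citations, world_average_for_comparable_papers)."""
    return sum(cites / world_avg for cites, world_avg in papers) / len(papers)

# One mega-collaboration paper among nine modestly cited ones:
small_output = [(1036, 20.0)] + [(2, 8.0)] * 9
print(round(normalised_impact(small_output), 1))  # 5.4, i.e. over 5x world average
```

On numbers like these, one outlier paper lifts a ten-paper portfolio to more than five times the world average, which is exactly the pattern a small institution with one prolific collaborator would show.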
The THE research impact (citations) indicator is very problematic in several ways. It regularly produces utterly absurd results, such as Alexandria University in Egypt in fourth place in the world for research impact in 2010, St George’s, University of London (a medical school), in first place last year, and Anglia Ruskin University, a former art school, equal to Oxford and well ahead of Cambridge University.
In addition, to flog a horse that should have decomposed by now, Veltech University in Chennai, India, has, according to THE, the biggest research impact in Asia and perhaps, if it qualified for the World Rankings, in the world. This was achieved through massive self-citation by exactly one researcher, with a little bit of help from a few friends.
Second in Asia for research impact, THE would have us believe, is King Abdulaziz University of Jeddah, which has been on a recruiting spree of adjunct faculty whose duties might include visiting the university but certainly do require listing it as a secondary affiliation in research papers.
To rely on the THE rankings as a measure of excellence is unwise.
There were methodological changes in 2011, 2015 and 2016, which have
contributed to universities moving up or down many places even if there has
been no objective change. Middle East Technical University in Ankara, for example, fell from 85th place in 2014-15 to the 501-600 band in 2015-16 and then to the 601-800 band in 2016-17. Furthermore, adding new universities means
that the average scores from which the final scores are calculated are likely
to fluctuate.
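One reason for this fluctuation: THE converts each indicator’s raw data into scores by standardising against the full cohort of ranked institutions (Z-scores mapped to cumulative probabilities), so a university’s published score moves whenever the cohort changes. A toy sketch, using made-up numbers and assuming that general approach:

```python
# Toy illustration (made-up numbers): a cumulative-probability score
# depends on every other ranked institution's data, not just your own.
from statistics import mean, stdev
from math import erf, sqrt

def the_style_score(raw, cohort):
    """Score (0-100) of `raw` as a cumulative probability within `cohort`."""
    z = (raw - mean(cohort)) / stdev(cohort)
    return 100 * 0.5 * (1 + erf(z / sqrt(2)))

old_cohort = [20, 35, 50, 65, 80]         # hypothetical raw indicator values
new_cohort = old_cohort + [5, 8, 10, 12]  # newly added, weaker entrants

print(the_style_score(50, old_cohort))  # 50.0
print(the_style_score(50, new_cohort))  # ~74.7: higher, though the raw value is unchanged
```

The same raw performance scores around 75 instead of 50 once weaker institutions join the pool, without the university doing anything differently.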
In addition, THE has been known to recalibrate the weights given to its indicators in its regional rankings, and this has sometimes worked to the advantage of whoever is hosting THE’s latest exciting and prestigious summit. In 2016, THE’s Asian
rankings featured an increased weight for research income from industry and a
reduced one for teaching and research reputation. This was to the disadvantage
of Japan and to the benefit of Hong Kong where the Asian summit was held.
So, is UTAR really more influential among international researchers than Kyoto University or National Taiwan University?
What actually happened is that UTAR has an outstanding medical researcher who is involved in a massive international medical project with hundreds of collaborators from hundreds of institutions, one that produces papers that have been cited hundreds of times and will in the next few years be cited thousands of times. One of these papers had, by my count, 720 contributors from 470 universities and research centres and has so far received 1,036 citations, 695 in 2016 alone.
There is absolutely nothing wrong with such projects, but it is ridiculous to treat every one of those 720 contributors as though they were the sole author of the paper, with credit for all the citations, which is what THE does. This could have been avoided simply by using fractional counting and dividing the number of citations by the number of authors or the number of affiliated institutions. This is an option available in the Leiden Ranking, which is the most technically expert of the various rankings. THE already does this for publications with over 1,000 contributors, but that is obviously not enough.
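A minimal sketch, using the figures from the paper described above, of what whole counting credits each contributor with versus the fractional counting available in the Leiden Ranking:

```python
# Whole vs. fractional citation counting for the paper described above
# (720 contributors, 1,036 citations to date).
def whole_count(citations: int, n_contributors: int) -> float:
    # Every contributor is credited with every citation, as THE does for
    # papers below its 1,000-author threshold.
    return float(citations)

def fractional_count(citations: int, n_contributors: int) -> float:
    # Credit is divided evenly among contributors (it could equally be
    # divided among affiliated institutions).
    return citations / n_contributors

print(whole_count(1036, 720))       # 1036.0 citations credited to each contributor
print(fractional_count(1036, 720))  # ~1.44 citations credited to each contributor
```

Under whole counting, one mega-collaboration paper hands UTAR the full 1,036 citations; under fractional counting its share shrinks to about a citation and a half, which is far closer to its actual contribution.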
I would not go as far as Bahram Bekhradnia and other higher
education experts and suggest that universities should ignore rankings
altogether. But if THE are going to continue to peddle such a questionable
product then Malaysian universities would be well advised to keep their
distance. There are now several other rankings on the market that could be
used for benchmarking and marketing.
It is not a good idea for UTAR to celebrate its achievement
in the THE rankings. It is quite possible that the researcher concerned will one day go
elsewhere or that THE will tweak its methodology again. If either happens, the university will suffer a precipitous fall in the rankings along with a decline in its public esteem. UTAR and other Malaysian universities would be
wise to treat the THE rankings with a great deal of caution and scepticism.