The University of New South Wales (UNSW) has produced an aggregate ranking of global universities, known as ARTU. This is based on the "Big Three" rankers: QS, Times Higher Education (THE), and the Shanghai ARWU. The scores are not an average but an aggregate of the three ranks, which is then inverted. Not surprisingly, Australian universities do well, and the University of Melbourne is the best in Australia.
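To make the method concrete, here is a minimal sketch of rank aggregation of this kind: sum each university's positions in the three rankings, then order by that total so the lowest sum comes first. The universities and rank values below are invented for illustration, and this is an assumption about the general approach, not ARTU's actual data or exact formula.

```python
# Hypothetical ranks: (QS, THE, ARWU) positions for three made-up universities.
ranks = {
    "University A": (5, 8, 12),
    "University B": (20, 15, 9),
    "University C": (11, 30, 25),
}

# Aggregate score: the sum of the three ranks (lower is better).
totals = {name: sum(r) for name, r in ranks.items()}

# "Inverting" the aggregate: the smallest total receives aggregate rank 1.
aggregate = sorted(totals, key=totals.get)
for position, name in enumerate(aggregate, start=1):
    print(position, name, totals[name])
```

Note that a sum of ranks treats a ten-place gap near the top of a ranking as equivalent to a ten-place gap near the bottom, which is one reason an aggregate of ranks can behave quite differently from an average of underlying scores.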
Nicholas Fisk, Deputy Vice Chancellor of Research, hopes that this ranking will become "the international scoreboard, like the ATP tennis rankings" and "the indisputable scoreboard for where people fit in on the academic rankings."
This is not a new idea. I had a go at producing an aggregate ranking a few years ago, called the Global Ranking of Academic Performance, or GRAPE. It was originally going to be the Comparative Ranking of Academic Performance: maybe I was right the first time. It was justifiably criticised by Ben Sowter of QS. I still think, though, that it was quite right to note that some of the rankings of the time underrated the top Japanese universities and overrated British and Australian schools.
The ARTU is another example of the emergence of a cartel, or near cartel, of three global rankings that are apparently considered the only ones worthy of attention by academic administrators and the official media.
There are in fact many more rankings than these, and the Big Three are not even the best three, far from it. A pilot study, Rating the Rankers, conducted by the International Network of Research Management Societies (INORMS), has found that on four significant dimensions, transparency, governance, measuring what matters, and rigour, the performance of six well-known rankings is variable and that of the Big Three is generally unimpressive. That of THE is especially deficient.
Seriously, should we consider indisputable a ranking that includes indicators proclaiming Anglia Ruskin University a world leader for research impact and Anadolu University tops for innovation, another that counts long-dead winners of Nobel prizes and Fields medals, and another that gives disproportionate weight to a survey with more respondents from Australia than from China?
There does seem to be a new mood of ranking scepticism emerging in many parts of the international research community. Rating the Rankers has been announced in an article in Nature. Critical analysis of the rankings will, I hope, do more to create fair and valid systems of comparative assessment than simply adding up a bunch of flawed and opaque indicators.