I have commented before on Indian responses to the rankings, and many times on the problems with the better-known rankings, so I apologize for repeating myself.
The influence of global university rankings continues to expand. There seem to be few areas of higher education or research where they are not consulted or used for appointments, promotion, admissions, project evaluation, grant approval, assessment, publicity and so on.
The latest example is from India. The Indian Express reports that the Minister of Education has announced that the progress of "institutions of eminence" [IoEs] will be charted using the "renowned" QS and THE rankings. Apparently, "an incentive mechanism will be developed for those institutes which are performing well." That is definitely not a good idea: it will reward behaviour that leads to improved ranking performance, not improved output or quality.
Recently, some of the leading Indian Institutes of Technology (IITs), four of which are on the list of IoEs, announced that they would be boycotting the THE rankings. I am not sure whether this means that there is now a split within the higher education sector in India or whether the IITs are rethinking their opposition to the rankings.
There is nothing wrong with evaluating and comparing universities, research centers, researchers, or departments. Indeed, it would seem essential if a country is going to maintain an effective higher education system. But it is questionable whether these rankings are the best way, or even a good way, of doing it. Research might be evaluated by panels of peer researchers (provided these are unbiased and fair), by international experts, by surveys, or by bibliometric and scientometric indicators. The quality of teaching and learning is more problematic, but national rankings around the world have used several measures that, although not very satisfactory, might provide a rough and imperfect assessment.
There is now a broad range of international rankings covering publications, citations, innovation, web presence, and other metrics. The IREG Inventory of International Rankings identified 17 global rankings in addition to regional and specialist ones, and there are now more. If the Indian government wanted to use a ranking to measure research output and quality, it would probably be better to refer to the Leiden Ranking, produced by the CWTS at Leiden University, or to other straightforward research-based rankings with generally stable and transparent methodologies, such as URAP, published by the Middle East Technical University in Ankara, the Shanghai Rankings, or the National Taiwan University Rankings. Another possibility is the Scimago Institution Rankings, which include indicators measuring web activity and altmetrics. The Round University Rankings use several metrics that might have a relationship to teaching quality.
It is, however, debatable whether the THE rankings are useful for evaluating Indian universities. There are several serious problems, which I have been talking about since 2011. I will discuss just three of them.
The THE world rankings lack transparency. Eleven of its 13 indicators are bundled into three super-indicators, so it is impossible to figure out exactly what is doing what. If a university gets an improved score for Teaching: The Learning Environment, for example, this could be because of a better score on the teaching reputation survey, a rise in the number of staff or a fall in the number of students (the staff-student ratio), an increase in the number of doctorates awarded relative to bachelor degrees or to academic staff, an increase in institutional income, or a combination of two or more of these.
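A toy calculation shows why this matters. In the minimal sketch below (Python, with invented scores; the weights approximate those THE has published for the teaching pillar, but none of the numbers describe any real university), a drop in teaching reputation is exactly offset by an improved staff-student score, leaving the published super-indicator unchanged:

```python
# Illustrative sketch only, not THE's actual calculation: two different
# sub-indicator profiles can produce an identical "Teaching" pillar score.

# Weights approximate THE's published teaching pillar (reputation 15,
# staff-student ratio 4.5, doctorate-bachelor ratio 2.25, doctorates per
# staff 6, institutional income 2.25, out of 30 overall).
WEIGHTS = {
    "teaching_reputation": 15.0,
    "staff_student_ratio": 4.5,
    "doctorate_bachelor_ratio": 2.25,
    "doctorates_per_staff": 6.0,
    "institutional_income": 2.25,
}

def teaching_pillar(scores: dict) -> float:
    """Weighted average of sub-indicator scores (0-100 scale)."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS) / total_weight

# Invented profiles: reputation falls in year 2, but a better
# staff-student score masks the decline completely.
year_1 = {"teaching_reputation": 60, "staff_student_ratio": 40,
          "doctorate_bachelor_ratio": 50, "doctorates_per_staff": 55,
          "institutional_income": 45}
year_2 = {"teaching_reputation": 55, "staff_student_ratio": 40 + 50 / 3,
          "doctorate_bachelor_ratio": 50, "doctorates_per_staff": 55,
          "institutional_income": 45}

print(teaching_pillar(year_1))  # 54.125
print(teaching_pillar(year_2))  # 54.125 -- indistinguishable from outside
```

Since only the combined pillar score is published, a reader cannot tell which of these movements, if any, actually occurred.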
THE does make disaggregated data available to universities, but that is of little use to students or other stakeholders.
Another problem is the face validity of the two stand-alone indicators. Take a look at the citations indicator results for 2020-2021, which are supposed to measure research impact or research quality. These are the top universities: Anglia Ruskin University, Babol Noshirvani University of Technology, Brighton and Sussex Medical School, Cankaya University, the Indian Institute of Technology Ropar, and Kurdistan University of Medical Sciences.
It is the same with the industry income indicator, which is presented as a measure of innovation. At the top of this indicator are Anadolu University, Asia University Taiwan, the University of Freiburg, Istanbul Technical University, Khalifa University, LMU Munich, the Korea Advanced Institute of Science and Technology, and Makerere University. The German and Korean universities seem plausible, but one wonders about the others.
THE has not discovered a brilliant new method of finding diamonds in the rough. It is just using a flawed and eccentric methodology, one that it has repeatedly promised to reform but somehow has never quite got round to doing.
Third, the THE rankings include indicators that dramatically favour high-status Western universities with money, prestige, and large numbers of postgraduate students. There are three measures of income, two reputation surveys accounting for a third of the total weighting, and two measures counting doctoral students.
The QS rankings are somewhat better, but there are issues here as well. There is only one indicator for research, citations per faculty, and only one directly related to teaching quality, the employer reputation indicator, which has a ten per cent weighting.
The QS rankings are heavily overweighted on research reputation, which has a 40% weighting and is hugely biased towards certain countries: there are more survey respondents from Malaysia than from China, and more from Kazakhstan than from India.
Using either of these rankings opens the way to attempts to manipulate the system. It is possible to get a high score in the THE rankings by recruiting somebody involved in the Global Burden of Disease Study, and doing well in the QS rankings might be helped by signing up for a reputation management program.
It seems, however, that there is a new mood of scepticism about rankings in academia. One sign is the Rating the Rankings project by the Research Evaluation Working Group of the International Network of Research Management Societies (INORMS). This is a rating, definitely not a ranking, of six rankings by an international team of expert reviewers, who evaluated them against four criteria: good governance, transparency, measure what matters, and rigour.
The results are interesting. No ranking is perfect, but it seems that the famous brands are more likely to fall short of the criteria.
The Indian government and others would be wise to take note of the analysis and criticism that is available before committing themselves to using rankings for the assessment of research or higher education.