The New THE Ranking Methodology
Times Higher Education has given some information about the proposed structure and methodology of its forthcoming World University Rankings. At first sight, the new rankings look as though they might be an improvement on the THE-QS rankings of 2004-2009, but there are still unanswered questions and it is possible that the new rankings will have some defects of their own.
The proposed methodology will feature 13 indicators, possibly rising to 16 next year. Here we have the first problem. Frequent changes of method bedevilled the THE-QS rankings, producing, along with a series of errors, implausible rises and falls. If the new rankings are going to see further changes, not just in the fine detail of data collection but in the actual indicators themselves, then we are going to see more spurious celebration or lamentation as universities bounce up and down the rankings. Still, if THE are going to standardise the indicator scores from the beginning, it is unlikely that their rankings will ever be as interesting as the THE-QS rankings used to be.
The largest component of the proposed ranking is "research indicators", which accounts for 55% of the weighting. These include academic papers, citation impact, research income, research income from public sources and industry, and a reputational survey of research.
Another category is "institutional indicators", which together get 25%: the number of undergraduate entrants, the number of PhDs awarded, a reputational survey of teaching, and institutional income.
Ten per cent will go to "international diversity", divided equally, as in the THE-QS rankings, between international students and international faculty.
Another ten per cent goes to economic activity/innovation. At the moment this consists entirely of research income from industry, although there are apparently plans to add two other measures next year.
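THE has not said exactly how the indicator scores will be combined. Purely as an illustration, and assuming the standardisation mentioned earlier means something like z-scores, a composite could be computed as in the sketch below. Only the category weights come from the proposal; the university names, raw scores and the aggregation itself are invented for the example.

```python
# A minimal sketch of a weighted composite built from standardised scores.
# Only the category weights reflect the THE proposal; everything else
# (names, raw figures, z-score aggregation) is a hypothetical assumption.
from statistics import mean, stdev

# Proposed category weights: research 55%, institutional 25%,
# international diversity 10%, economic activity/innovation 10%.
WEIGHTS = {
    "research": 0.55,
    "institutional": 0.25,
    "international": 0.10,
    "economic": 0.10,
}

# Invented raw category scores for three imaginary universities.
raw = {
    "Univ A": {"research": 82, "institutional": 70, "international": 40, "economic": 55},
    "Univ B": {"research": 60, "institutional": 85, "international": 70, "economic": 30},
    "Univ C": {"research": 75, "institutional": 60, "international": 90, "economic": 65},
}

def z_scores(values):
    """Standardise values to mean 0, standard deviation 1."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

names = list(raw)
composite = {name: 0.0 for name in names}
for category, weight in WEIGHTS.items():
    standardised = z_scores([raw[name][category] for name in names])
    for name, z in zip(names, standardised):
        composite[name] += weight * z

for name, score in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:+.3f}")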
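```

One consequence of standardising in this way is that a university's score depends on how far it sits from the mean of the whole field, which damps the dramatic year-to-year swings that raw scores produced in the THE-QS era.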
There are some obvious rough edges in the proposals. The economic activity/innovation category consists entirely of research income from industry, yet research income from public sources and industry also appears under the research indicators, so the same income appears to be counted twice. In the institutional indicators, universities will get credit for admitting undergraduate students and for PhD students but nothing for anyone in between. I doubt that this will go unchanged. If undergraduates and PhD students are to be institutional indicators, then we will see seriously negative backwash effects, with masters programs being phased out and marginal students being herded into doctoral programs.
The new methodology is less diverse than a simple count of the indicators suggests. It is heavily research-orientated. As noted, more than half of the weighting goes to a bundle of research indicators, and economic activity/innovation, for this year at least, is nothing more than research income.
Adding to the emphasis on research, the institutional indicators include the number of doctorates awarded and the ratio of doctorates to bachelor degrees awarded. The institutional indicators also include a survey of teaching, but the respondents are largely selected on the basis of being authors of academic articles published in ISI-indexed journals. There seems to be no evidence that the respondents do very much teaching, and if Thomson Reuters include researchers with non-university affiliations, of whom there are many in medicine and engineering, then it is likely that many of those called upon to evaluate teaching have never done any teaching at all. Meanwhile the student-faculty ratio, admittedly a crude measure of teaching quality, has been removed.
It is regrettable that THE has apparently decided to keep the international students indicator. This has caused demonstrable harm to universities in several countries by encouraging the recruitment of students with inadequate linguistic and cognitive skills. One modification that THE should consider, if they want to keep this measure, is declaring the EU a single entity. That was supposed to be the point of the Bologna process.
The proposed rankings include several indicators related to university income, including research income. This is not a bad idea. After all, the provision of adequate funds is a necessary, although far from sufficient, condition for the attainment of a reasonable level of quality. The inclusion of research income will, however, be detrimental to the interests of institutions like LSE that focus on the humanities and social sciences.
There are still unanswered questions. Some of these indicators will be scaled by dividing by the number of faculty. There will be many raised eyebrows if universities are required to include teaching staff who do no research in the measures of research output, or research-only staff in the other indicators. Whatever decision is made, there is bound to be acrimonious wrangling.
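Purely to illustrate why the denominator matters, the hypothetical sketch below shows how the same publication count yields very different per-faculty figures depending on which staff are counted; all numbers are invented.

```python
# Hypothetical illustration of how the choice of denominator changes a
# per-faculty research measure. All figures are invented.
papers = 4000              # publications over the counting period
research_staff = 1200      # staff who actually produce the papers
teaching_only_staff = 800  # staff with no research role

per_research_staff = papers / research_staff
per_all_staff = papers / (research_staff + teaching_only_staff)

print(f"Per research-active staff member: {per_research_staff:.2f}")  # 3.33
print(f"Per all academic staff:           {per_all_staff:.2f}")       # 2.00
```

A university with a large teaching-only workforce would see its research output per head fall by forty per cent in this example, which is exactly the kind of gap over which the wrangling is likely to occur.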
Also unstated is the period from which the data for publications and citations will be drawn. The further back the data collectors go, the better for traditional elite universities. It is also not stated whether they will count self-citations or publications in conference proceedings that are not rigorously reviewed.
So, if you want rankings that emphasise research and funding, then THE and Thomson Reuters may be heading, somewhat uncertainly, in the right direction, but perhaps at the price of neglecting other aspects of university quality.