Reviewing the THE Rankings
An article by Phil Baty in Times Higher Education looks at the various components of last year's THE World University Rankings and gives some hints about changes to come this year. Some good points, but also some problems. My comments follow each quoted passage.
We look at research in a number of different ways, examining reputation, income and volume (through publication in leading academic journals indexed by Thomson Reuters). But we give the highest weighting to an indicator of “research influence”, measured by the number of times published research is cited by academics across the globe.
We looked at more than 25 million citations over a five-year period from more than five million articles.
Yes, but when you normalise by field and by year you get very low benchmark figures, and a few hundred citations to a few dozen articles can acquire disproportionate influence.
All the data were normalised to reflect variations in citation volume between different subject areas, so universities with strong research in fields with lower global citation rates were not penalised.
The lower the global citation rate, the more effect a strategically timed and placed citation can have, and the greater the possibility of gaming the system.
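Thomson Reuters do not publish the exact formula, but a minimal sketch along the assumed lines of citations divided by a field-and-year benchmark shows why tiny benchmarks magnify a handful of citations:

```python
# Minimal sketch of field-and-year citation normalisation. Assumed formula:
# score = citations / benchmark, where the benchmark is the world-average
# citation count for articles of the same field and publication year.
# (Thomson Reuters do not publish their exact method.)

def normalised_impact(citations, benchmark):
    """Citations to an article divided by the expected citations for
    articles of the same field and year."""
    return citations / benchmark

# Heavily cited field: a high benchmark, so ten citations look unremarkable.
print(normalised_impact(10, benchmark=20.0))  # 0.5

# Lightly cited field: the benchmark can be a fraction of a citation, so the
# same ten citations register as a huge multiple of the world average.
print(normalised_impact(10, benchmark=0.1))   # 100.0
```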
We also sought to acknowledge excellence in research from institutions in developing nations, where there are less-established research networks and lower innate citation rates, by normalising the data to reflect variations in citation volume between regions. We are proud to have done this, but accept that more discussion is needed to refine this modification.
In principle this sounds like a good idea, but it could just mean that universities in Singapore, Israel, South Africa and the south of Brazil are rewarded for being located in under-achieving regions of which they are not really a part.
The “research influence” indicator has proved controversial, as it has shaken up the established order, giving high scores to smaller institutions with clear pockets of research excellence and boosting those in the developing world, often at the expense of larger, more established research-intensive universities.
Here is a list of universities that benefited disproportionately from high scores for the "research influence" indicator. Are they really smaller? Are they really in the developing world? And as for those clear pockets of excellence, that would certainly be the case for Bilkent (you can find out who he is in five minutes), but for Alexandria...?
Boston College
University of California Santa Cruz
Royal Holloway, University of London
Pompeu Fabra
Bilkent
Kent State University
Hong Kong Baptist University
Alexandria
Barcelona
Victoria University Wellington
Tokyo Metropolitan University
University of Warsaw
Something else about this indicator that nobody seems to have noticed is that, even if the methodology remains completely unchanged, it is capable of producing dramatic changes from year to year. The benchmark for a given field and year rises as articles age, because more time has passed for citations to accumulate. Suppose an article in a little-cited field like applied mathematics was cited ten times in its first year of publication. That could easily be 100 times the benchmark figure. But by the second year the same ten citations might be only ten times the benchmark. So if the clear pocket of research excellence stops doing research and becomes a newspaper columnist or something like that, the research influence score will go tumbling down.
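To put illustrative numbers on that (the real benchmarks are not published, so these are assumptions):

```python
# Illustrative only: assume a little-cited field where the benchmark
# (expected citations to date) is 0.1 after one year and 1.0 after two.

citations = 10           # total citations to the article, unchanged

benchmark_year_1 = 0.1   # assumed benchmark after one year
benchmark_year_2 = 1.0   # assumed benchmark after two years

print(citations / benchmark_year_1)  # 100.0 -> 100 times the benchmark
print(citations / benchmark_year_2)  # 10.0  -> only 10 times a year later
```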
We judge knowledge transfer with just one indicator – research income earned from industry – but plan to enhance this category with other indicators.
This is a good idea since it represents, however indirectly, an external assessment of universities.
Internationalisation is recognised through data on the proportion of international staff and students attracted to each institution.
Enough has been said about the abuses involved in recruiting international students. Elsewhere THE have said that they are adding more measures of internationalisation.
The flagship – and most dramatic – innovation is the set of five indicators used to give proper credit to the role of teaching in universities, with a collective weighting of 30 per cent.
But I should make one thing very clear: the indicators do not measure teaching “quality”. There is no recognised, globally comparative data on teaching outputs at present. What the THE rankings do is look at the teaching “environment” to give a sense of the kind of learning milieu in which students are likely to find themselves.
The key indicator for this category draws on the results of a reputational survey on teaching. Thomson Reuters carried out its Academic Reputation Survey – a worldwide, invitation-only poll of 13,388 experienced scholars, statistically representative of global subject mix and geography – in early 2010.
It examined the perceived prestige of institutions in both research and teaching. Respondents were asked only to pass judgement within their narrow area of expertise, and we asked them “action-based” questions (such as: “Where would you send your best graduates for the most stimulating postgraduate learning environment?”) to elicit more meaningful responses.
In some ways the survey is an improvement on the THE-QS "peer review", but the number of responses was lower than the target and we still do not know how many survey forms were sent out. Without knowing the response rate we cannot judge how representative, and therefore how valid, the survey is.
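For illustration only, here is what the response rate would look like under a few hypothetical invitation counts; the published figure of 13,388 responses tells us nothing until we know the denominator:

```python
# Hypothetical denominators: 13,388 responses is the published number, but
# the number of invitations actually sent out has not been released.

responses = 13388

for invitations_sent in (50_000, 150_000, 500_000):
    rate = responses / invitations_sent
    print(f"{invitations_sent:>7,} invitations -> {rate:.1%} response rate")
```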
The rankings also measure staff-to-student ratios. This is admittedly a relatively crude proxy for teaching quality, hinting at the level of personal attention students may receive from faculty, so it receives a relatively low weighting of just 4.5 per cent.
Wait a minute. This means the measure is the number of faculty or staff per student. But the THE website says "undergraduates admitted per academic", which is the complete opposite. An explanation is needed.
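The two descriptions are reciprocals of each other, which is easy to see with made-up headcounts (the figures below are purely illustrative):

```python
# Made-up headcounts, for illustration only: "staff per student" and
# "undergraduates admitted per academic" are reciprocals, so a university
# that scores well on one scores badly on the other.

students = 20000
academic_staff = 1000

print(academic_staff / students)   # 0.05 staff per student
print(students / academic_staff)   # 20.0 students per academic
```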
We also look at the ratio of PhD to bachelor’s degrees awarded, to give a sense of how knowledge-intensive the environment is, as well as the number of doctorates awarded, scaled for size, to indicate how committed institutions are to nurturing the next generation of academics and providing strong supervision.
Counting the proportion of postgraduate students is not a bad idea. If nothing else, it is a crude measure of the maturity of the student body. However, counting doctoral students may well have serious backwash effects, as students who would be quite happy in professional or master's programmes are coerced or cajoled into PhD courses that they may never finish and which will lead to a life of ill-paid drudgery if they do.
The last of our teaching indicators is a simple measure of institutional income scaled against academic staff numbers. This figure, adjusted for purchasing-power parity so that all nations compete on a level playing field, gives a broad sense of the general infrastructure and facilities available.
Yes, this is important and it's time someone started counting it.
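As a rough sketch of what the calculation involves, assuming a simple PPP conversion factor (THE has not specified the exact adjustment, and the figures below are invented for illustration):

```python
# Rough sketch: institutional income per academic, adjusted for
# purchasing-power parity. The PPP factor and headcounts are assumptions.

def income_per_academic_ppp(total_income_local, academic_staff, ppp_factor):
    """Income per member of academic staff, converted to a common currency
    using a PPP conversion factor (local units per international dollar)."""
    return (total_income_local / ppp_factor) / academic_staff

# Example: 800 million local currency units, 2,000 academics, and a PPP
# factor of 4 local units per international dollar.
print(income_per_academic_ppp(800_000_000, 2_000, 4.0))  # 100000.0
```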