Henk F. Moed, Sapienza University of Rome
ABSTRACT
To provide users with insight into the value and limits of world university rankings, a comparative analysis is conducted of five ranking systems: ARWU, Leiden, THE, QS and U-Multirank. It links these systems with one another at the level of individual institutions, and analyses the overlap in institutional coverage, geographical coverage, how indicators are calculated from raw data, the skewness of indicator distributions, and statistical correlations between indicators. Four secondary analyses are presented, investigating national academic systems and selected pairs of indicators. It is argued that current systems are still one-dimensional in the sense that they provide finalized, seemingly unrelated indicator values rather than offering a data set and tools to observe patterns in multi-faceted data. By systematically comparing different systems, more insight is provided into how their institutional coverage, rating methods, selection of indicators and normalizations influence the ranking positions of given institutions.
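As a rough illustration of the indicator-level statistics the abstract refers to (skewness of indicator distributions and correlations between indicators), here is a minimal sketch in Python. The data are hypothetical, not taken from any of the five ranking systems, and the two indicator names are placeholders.

# Minimal sketch with hypothetical data: skewness of one indicator's
# distribution and its rank correlation with a second indicator.
from scipy.stats import skew, spearmanr

# Placeholder per-institution values; the real systems cover hundreds of institutions.
citations_per_paper = [1.2, 1.5, 1.8, 2.0, 2.3, 2.9, 3.4, 4.8, 7.5, 12.0]
reputation_score = [35, 40, 42, 50, 55, 58, 61, 70, 85, 95]

print("skewness of citations per paper:", skew(citations_per_paper))
rho, p = spearmanr(citations_per_paper, reputation_score)
print("Spearman rank correlation: rho=%.2f, p=%.3f" % (rho, p))

A strongly right-skewed distribution means a few institutions score far above the rest, which is one reason the abstract stresses how the chosen normalizations influence ranking positions.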
" Discussion and conclusions
The overlap analysis clearly illustrates that there
is no such set as ‘the’ top 100 universities in terms of excellence: it depends
on the ranking system one uses which universities constitute the top 100. Only
35 institutions appear in the top 100 lists of all 5 systems, and the number of
overlapping institutions per pair of systems ranges between 49 and 75. An
implication is that national governments executing a science policy aimed to
increase the number of academic institutions in the ‘top’ of the ranking of
world universities, should not only indicate the range of the top segment
(e.g., the top 100), but also specify which ranking(s) are used as a standard,
and argue why these were selected from the wider pool of candidate world
university rankings."
Scientometrics, DOI: 10.1007/s11192-016-2212-y
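The overlap figures quoted above (35 institutions common to all five top-100 lists, pairwise overlaps between 49 and 75) boil down to simple set intersections. The sketch below, using empty placeholder lists rather than the study's actual data, shows how such overlaps could be computed.

# Minimal sketch (placeholder data, not the study's): overlap between top-100 lists.
from itertools import combinations

# Each set would hold the 100 institutions ranked highest by that system.
top100 = {
    "ARWU": set(),
    "Leiden": set(),
    "THE": set(),
    "QS": set(),
    "U-Multirank": set(),
}

# Institutions present in every system's top 100 (the paper reports 35).
common_to_all = set.intersection(*top100.values())
print("in all five top-100 lists:", len(common_to_all))

# Pairwise overlaps (the paper reports values between 49 and 75).
for (name_a, set_a), (name_b, set_b) in combinations(top100.items(), 2):
    print(name_a, "and", name_b, "share", len(set_a & set_b), "institutions")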
1 comment:
Since the university administration is the most powerful and influential faction in the university, its quality should be an important component of any comprehensive ranking. Indicators such as operational/cost efficiency and effectiveness, student and faculty well-being and satisfaction, an accountability index, etc. are worth considering.