I am reproducing Phil Baty's column from Times Higher Education in its entirety:
One of the things that I have been keen to do as editor of the Times Higher Education World University Rankings is to engage as much as possible with our harshest critics.
Our editorial board was trenchant in its criticism of our old rankings. In particular, Ian Diamond, principal of the University of Aberdeen and former chief executive of the Economic and Social Research Council, was scathing about our use of research citations.
The old system failed to normalise data to take account of the dramatically different citation volumes between different disciplines, he said - unfairly hitting strong work in fields with lower average figures. We listened, learned and have corrected this weakness for the 2010 rankings.
Another strong critic is blogger Richard Holmes, an academic at the Universiti Teknologi MARA in Malaysia. Through his University Ranking Watch blog, he has perhaps done more than anyone to highlight the weaknesses in existing systems: indeed, he highlighted many of the problems that helped convince us to develop a new methodology with a new data provider, Thomson Reuters.
He has given us many helpful suggestions as we develop our improved methodology. For example, he advised that we should reduce the weighting given to the proportion of international students on campus, and we agreed. He added that we should increase the weighting given to our new teaching indicators, and again we concurred.
Of course, there are many elements that he and others will continue to disagree with us on, and we welcome that. We are not seeking anyone's endorsement. We simply ask for open engagement - including criticism - and we expect that process will continue long after the new tables are published.
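A quick aside on the field normalisation that Diamond called for, since the column takes it for granted: the standard approach is to divide each paper's citation count by the world average for papers in the same field and year, so that strong work in a low-citation field is not swamped by ordinary work in a high-citation one. Thomson Reuters has not published its exact formula, so the sketch below, including the function name and the baseline figures, is purely my own illustration of the general idea.

    # A minimal sketch of field-normalised citation impact (illustrative
    # only; not Thomson Reuters' actual method or data).
    # Each paper is (citations, field, year); world_avg maps (field, year)
    # to the average citations of all papers worldwide in that field/year.
    def normalised_impact(papers, world_avg):
        ratios = [cites / world_avg[(field, year)]
                  for cites, field, year in papers]
        return sum(ratios) / len(ratios)

    papers = [(12, "medicine", 2008), (3, "mathematics", 2008)]
    world_avg = {("medicine", 2008): 10.0, ("mathematics", 2008): 2.0}
    print(normalised_impact(papers, world_avg))  # (1.2 + 1.5) / 2 = 1.35

A score of 1.0 means an institution's papers are cited exactly at the world average for their fields and years, which puts mathematics and medicine on the same scale.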
There are still issues to be resolved, but it does appear that the new THE rankings are making progress on several fronts. There is a group of indicators that attempts to measure teaching effectiveness. The weighting given to international students, an indicator that is easily manipulable and that has had very negative backwash effects, has been reduced. The inclusion of funding as a criterion, while obviously favouring wealthy regions, does measure an important input. The weighting assigned to the subjective academic survey has been reduced, and the survey is now drawn from a clearly defined and at least moderately qualified set of respondents.
There are still areas where questions remain. I am not sure that citations per paper should be the only way to measure impact. At the very least, the h-index could be included, which would bring another ingredient to the mix.
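For anyone unfamiliar with it, the h-index is the largest number h such that the author (or institution) has h papers with at least h citations each. Here is a minimal sketch of the computation; the function name and the sample figures are mine, purely for illustration:

    def h_index(citation_counts):
        # Rank papers from most to least cited, then find the largest
        # rank h at which the h-th paper still has at least h citations.
        ranked = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times: there are four papers
    # with at least 4 citations, but not five with at least 5, so h = 4.
    print(h_index([10, 8, 5, 4, 3]))  # 4

Unlike citations per paper, the h-index rewards a sustained body of well-cited work rather than a handful of outliers, which is why it would add a genuinely different ingredient.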
Also, there are details that need to be sorted out. Exactly what sort of faculty will be counted in the various scalings? Will self-citation be counted? I also suspect that not everybody will be enthusiastic about using statistics from UNESCO to weight the results of the reputational survey. That is not exactly the most efficient organization in the world. There is also a need for a lot more information about the workings of the reputational survey. What was the response rate, and exactly how many responses were there from individual countries?
Something that may well cause problems in the future is the proposed indicator of the ratio of doctoral degrees to undergraduate degrees. If this is retained, it is easy to predict that universities everywhere will be encouraging or coercing applicants to master's programs to switch to doctoral programs.
Still, it does seem that THE is being more open and honest about the creation of the new rankings than other ranking organizations and that the final result will be a significant improvement.