Thursday, January 26, 2017

Comments on the HEPI Report

The higher education industry tends to respond to global rankings in two ways. University bureaucrats and academics either get overexcited, celebrating when they are up and wallowing in self-pity when they are down, or they reject the idea of rankings altogether.

Bahram Bekhradnia of the Higher Education Policy Institute in the UK has published a report on international rankings which adopts the second option. University World News has several comments, including a summary of the report by Bekhradnia.

To start off, his choice of rankings deserves comment. He refers to the "four main rankings": the Academic Ranking of World Universities (ARWU) from Shanghai, Quacquarelli Symonds (QS), Times Higher Education (THE) and U-Multirank. It is true that the first three are those best known to the public, QS and Shanghai by virtue of their longevity and THE because of its skilled branding and assiduous cultivation of the great, the good and the greedy of the academic world. U-Multirank is presumably chosen because of its attempts to address, perhaps not very successfully, some of the issues that the author discusses.

But focusing on these four gives a misleading picture of the current global ranking scene. There are now several rankings that are mainly concerned with research -- Leiden, Scimago, URAP, National Taiwan University, US News -- and that redress some of the problems with the Shanghai ranking by giving due weight to the social sciences and humanities, leaving out decades-old Nobel and Fields laureates and including more rigorous markers of quality. In addition, there are rankings that measure web activity, environmental sustainability, employability and innovation. Admittedly, they do not do any of these very well, but the attempts should at least be noted and they could perhaps lead to better things.

In particular, there is now an international ranking from Russia, the Round University Ranking (RUR), which could be regarded as an improved version of the THE world rankings and which tries to give more weight to teaching. It uses almost the same array of metrics as THE, plus some others, but with more rational and sensible weightings: 8% for field-normalised citations, for example, rather than 30%.
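
To see what the difference in weightings means in practice, here is a minimal sketch with hypothetical indicator scores (not actual THE or RUR figures) of how far a gap on the citations indicator alone can move a composite score at a 30% weight compared with an 8% weight:

    # Hypothetical example: two universities identical on every other indicator,
    # differing only on field-normalised citations (scores out of 100).
    citations_gap = 95.0 - 55.0  # a 40-point gap on a single indicator

    def composite_shift(gap, weight):
        """Change in the overall score produced by a gap on one indicator."""
        return gap * weight

    for weight in (0.30, 0.08):
        print(f"weight {weight:.0%}: composite moves by "
              f"{composite_shift(citations_gap, weight):.1f} points")

At a 30% weight that single indicator moves the composite by 12 points, enough to swamp everything else; at 8% the same gap moves it by just over three.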

Bekhradnia has several comments on the defects of current rankings. First, he says that they are concerned entirely or almost entirely with research. He claims that there are indicators in the QS and THE rankings that are actually, although not explicitly, about research. International faculty are probably recruited more for their research reputation than for anything else. Income from industry (THE) is of course a measure of reported funding for applied research. The QS academic reputation survey is officially about research, and THE's reputation survey of teaching is about postgraduate supervision.

Bekhradnia is being a little unfair to THE. He asserts that if universities add research-only staff to their faculty this will boost their faculty-student metric, supposedly a proxy for teaching quality, thus turning the indicator into a measure of research. This is true of QS, but THE does appear to require universities to list research staff separately and excludes them from some indicators as appropriate. In any case, the number of research-only staff is quite small at most universities outside the top hundred or so.

It is true that most rankings are heavily, perhaps excessively, research-orientated but it would be a mistake to conclude that this renders them totally useless for evaluating teaching and learning. Other things being equal, a good record for research is likely to be associated with positive student and graduate outcomes such as satisfaction with teaching, completion of courses and employment.

For English universities, the Research Excellence Framework (REF) score is more predictive of student success and satisfaction, according to indicators in the Guardian rankings and the recent THE Teaching Excellence Framework simulation, than the percentage of staff with educational training or certification, faculty salaries or institutional spending, although it is matched by staff-student ratio.

If you are applying to English universities and you want to know how likely you are to complete your course or be employed after graduation, probably the most useful things to know are average entry tariff (A levels), staff-student ratio and faculty scores in the latest REF. There are of course intervening variables, and the arrows of causation do not always fly in the same direction, but scores for research indicators are not irrelevant to comparisons of teaching effectiveness and learning outcomes.

Next, the report deals with the issue of data, noting that the internal data checks by THE and QS do not seem to be adequate. He refers to the case of Trinity College Dublin, where a misplaced decimal point caused the university to drop several places in the THE world rankings. He then goes on to criticise QS for "data scraping", that is, getting information from any available source. He notes that this caused Sultan Qaboos University (SQU) to drop 150 places in the QS world rankings, apparently because QS took data from the SQU website that identified non-teaching staff as teaching staff. I assume that the staff in question were administrators: if they were researchers then it would not have made any difference.

Bekhradnia is correct to point out that data from websites is often incorrect or subject to misinterpretation. But to assume that such data is necessarily inferior to that reported by institutions to the rankers is debatable. QS has no need to be apologetic about resorting to data scraping. On balance, information about universities is more likely to be correct if it comes from one of several similar and competing sources, if it is from a source independent of the ranking organisation and the university, if it has been collected for reasons other than submission to the rankers, or if there are serious penalties for submitting incorrect data.

The best data for university evaluation and comparison is likely to come from third-party databases that collect masses of information or from government agencies that require accuracy and honesty. After that, institutional data from websites and the like is unlikely to be significantly worse than data submitted specifically for ranking purposes.

There was an article in University World News in which Ben Sowter of QS took a rather defensive position with regard to data scraping. He need not have done so. In fact, it would not be a bad idea for QS and others to do a bit more of it.

Bekhradnia goes on to criticise the reputation surveys. He notes that recycling unchanged responses over a period of five years, originally three, means that QS may be counting the votes of dead or retired academics. He also points out that the response rate to the surveys is very low. All this is correct, although it is nothing new. But it should be pointed out that what is significant is not how many respondents there are but how representative they are of the group being investigated. The weighting given to surveys in the THE and QS rankings is clearly too high, and QS's methods of selecting respondents are rather incoherent and can produce counter-intuitive results such as extremely high scores for some Asian and Latin American universities.

However, it is going too far to suggest that surveys should have no role. First, reputation and perceptions are far from insignificant. Many students would, I suspect, prefer to go to a university that is overrated by employers and professional schools than to one that provides excellent instruction and facilities but has failed to communicate this to the rest of the world.

In addition, surveys can provide a reality check when a university does a bit of gaming. For example, King Abdulaziz University (KAU) has been diligently offering adjunct contracts to dozens of highly cited researchers around the world, contracts that require them to list the university as a secondary affiliation and thus allow it to collect huge numbers of citations. The US News Arab Region rankings have KAU in the top five among Arab universities for a range of research indicators: publications, cited publications, citations, field-weighted citation impact, and publications in the top 10% and the top 25%. But its academic reputation rank was only 26, definitely a big thumbs down.

Bekhradnia then refers to the advantage that universities get in the ARWU rankings simply by being big. This is certainly a valid point. However, it could be argued that quantity is a necessary prerequisite to quality and enables the achievement of economies of scale. 

He also suggests that the practice of presenting lists in rank order is misleading, since a trivial difference in the raw data can mean a substantial difference in the published ranking. He proposes that it would be better to group universities into bands. The problem with this is that when rankers do resort to banding, it is fairly easy to calculate an overall score by adding up the published components, as the sketch below shows. Bloggers and analysts do it all the time.
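
By way of illustration, here is a minimal sketch, with made-up component scores and weights rather than any ranker's actual figures, of the sort of arithmetic bloggers and analysts use to turn banded results back into an ordered list:

    # Made-up published component scores for three banded universities;
    # a real reconstruction would use the ranker's own published weights.
    weights = {"teaching": 0.3, "research": 0.3, "citations": 0.3, "income": 0.1}

    universities = {
        "University A": {"teaching": 62.1, "research": 58.4, "citations": 71.0, "income": 40.2},
        "University B": {"teaching": 64.8, "research": 55.9, "citations": 69.5, "income": 43.7},
        "University C": {"teaching": 59.3, "research": 60.2, "citations": 70.1, "income": 38.5},
    }

    def overall(scores):
        """Weighted sum of the published indicator scores."""
        return sum(weights[k] * v for k, v in scores.items())

    # Sorting by the reconstructed overall score turns the band back into a ranking.
    for name, scores in sorted(universities.items(), key=lambda kv: overall(kv[1]), reverse=True):
        print(f"{name}: {overall(scores):.1f}")
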

Bekhradnia concludes:

"The international surveys of reputation should be dropped – methodologically they are flawed, effectively they only measure research performance and they skew the results in favour of a small number of institutions."

This is ultimately self-defeating. The need and the demand for some sort of ranking are too widespread to be set aside. Abandon explicit rankings and we will probably get implicit rankings in the form of recommendations by self-declared experts.

There is much to be done to make rankings better. The priority should be finding objective and internationally comparable measures of student attributes and attainment. That is still some distance in the future. For the moment, universities should focus not on composite rankings but on the more reasonable and reliable indicators within specific rankings.

Bekhradnia does have a very good point at the end:

"Finally, universities and governments should discount therankings when deciding their priorities, policies and actions.In particular, governing bodies should resist holding seniormanagement to account for performance in flawed rankings.Institutions and governments should do what they do becauseit is right, not because it will improve their position in therankings."

I would add that universities should stop celebrating when they do well in the rankings. The grim fate of Middle East Technical University should be present in the mind of every university head.
