Three years ago the administration at Universiti Malaya (UM) was celebrating the university's entry into the world's top 100 in the THES-QS rankings. A year later it turned out that this was just the result of one of many errors by QS Quacquarelli Symonds, the consultants who produce the rankings.
Now it looks as though the same thing is happening all over again, but in the opposite direction.
This year four Malaysian universities have fallen in the rankings. UM and Universiti Kebangsaan Malaysia (UKM) went right out of the top 200.
Commentators have listed several factors responsible for the apparent slide and proposed remedies. Tony Pua at Education in Malaysia says that
our universities are not competitive, are not rigourous in nature, do not promote and encourage merit and the total lack of transparency in admission and recruitment exercises served the perfect recipe for continual decline in global recognition and quality
According to the Malaysian opposition leader, Lim Kit Siang
JUST as Vice Chancellors must be held responsible for the poor rankings of their universities, the Higher Education Minister, Datuk Mustapha Mohamad must bear personal responsibility for the dismal international ranking of Malaysian universities - particularly for Malaysia falling completely out of the list of the world’s Top 200 Universities this year in the 2007 Times Higher Education Supplement (THES)-Quacquarelli Symonds (QS) World University Rankings.

An article in the Singapore Straits Times reported that
eminent academician Khoo Kay Kim felt there was too much emphasis on increasing the number of PhD holders, instead of producing quality doctorate graduates. 'If this goes on, next year I expect the rankings to slip further,' he said.
Everyone seems to assume that the decline in the rankings reflects a real decline or at least a lack of quality that the rankings have finally exposed.
But is this in fact the case?
To understand what really happened it is necessary to look at the methodological changes that have been introduced this year.
QS have done four things. They have stopped respondents to their "peer review" from selecting their own institutions. They are using full-time equivalent (FTE) numbers for staff and students instead of counting heads. They now use the Scopus database instead of ISI. And they use Z scores, which means that the mean of all scores is subtracted from each raw score and the result divided by the standard deviation; the resulting figures are then normalised so that the mean score is converted to 50.
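For readers who like to see the arithmetic, here is a minimal sketch in Python of the kind of standardisation QS describes. The final rescaling step is my assumption: I have mapped the mean to 50 and the top scorer to 100, which is consistent with what QS say but not necessarily their exact procedure.

```python
import statistics

def z_scale(raw_scores):
    """Standardise a list of raw component scores, then rescale them.

    Sketch only: the Z-score step (subtract the mean, divide by the
    standard deviation) is as QS describe; mapping the mean to 50 and
    the top scorer to 100 is my assumption about the final step.
    """
    mean = statistics.mean(raw_scores)
    sd = statistics.pstdev(raw_scores)
    z = [(x - mean) / sd for x in raw_scores]   # distance from the mean in SDs
    top = max(z)
    return [50 + 50 * zi / top for zi in z]     # mean -> 50, best score -> 100
```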
The prohibition on voting for one's own university would seem like a good idea if it were also extended to voting for one's alma mater. I suspect that Oxford and Cambridge are getting, and will continue to get, many votes from their graduates in Asia and Australia, which seems just as bad as picking one's current employer.
Using FTEs is not a bad idea in principle. But note that QS are still apparently counting research staff as faculty, giving a large and undeserved boost to places like Imperial College London.
I am sceptical about the shift to Scopus. This database includes a lot of conference papers, reviews and so on, and therefore not all of the items included would have been subject to rigorous peer review. It might therefore include research of a lower quality than that in the ISI database. There are also some strange things in the citations section this year. The fifth best university for citations is the University of Alabama. (You have to look at the details for the top 400 to see this because it is not in the overall top 200.) According to QS's data, the Ecole Normale Superieure in Paris is better than Harvard. Oxford is down at number 85, which seems too low. It could be that the database is measuring quantity more than quality, or perhaps there have been a number of errors.
Using Z scores is a standard practice among other rankers but it does cause huge fluctuations when introduced for the first time. What Z scores do is, in effect, to compress scores at the top and stretch them out lower down the rankings. They make it easier to distinguish among the mediocre universities at the price of blurring differences among the better ones.
So how did Malaysian universities do in 2007?
There is no point in looking at the scores for the various criteria. The introduction of Z scores means that scores in 2006 and 2007 cannot be compared. What we have to do is to work out the relative position of the universities on each component.
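In other words, the comparison below is between ordinal positions, not scores. A trivial sketch of the idea, with invented figures:

```python
def positions(component_scores):
    """Turn a {university: score} mapping into ordinal positions,
    highest score first."""
    ranked = sorted(component_scores, key=component_scores.get, reverse=True)
    return {name: place for place, name in enumerate(ranked, start=1)}

# Invented component scores, purely to show the shape of the comparison
print(positions({"UM": 38.0, "NUS": 71.0, "Peking": 55.0}))
# -> {'NUS': 1, 'Peking': 2, 'UM': 3}
```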
[Two days ago the scores for the six components of the rankings were available (registration required) at the QS topuniversities site for the top 400 universities. They could not be accessed today, which might mean that the details for the 400-500 universities are being prepared or, just possibly, that errors are being corrected.]
Peer review
In 2006 UM was 90th for peer review among the top 400 in that year. In 2007 it fell to 131st position among the top 400.
Recruiter rating
In 2006 it was 238th for recruiter rating. In 2007 it rose to 159th place.
Student faculty ratio
In 2006 it was in 274th place for student faculty ratio. In 2007 it rose to 261st place.
International faculty
In 2006 it was 245th for international faculty. In 2007 it rose to 146th place.
International students
In 2006 it was 308th for international students. In 2007 it rose to 241st place.
Citations per faculty
In 2006 it was 342nd for citations per faculty. In 2007 it fell to 377th place.
This means that UM did much better compared to other universities on the following measures:
- Recruiter rating
- Student faculty ratio
- International students
- International faculty
It did somewhat worse on two items, peer review and citations. But notice that the number of places by which it fell is much smaller than the number of places by which it rose on every component except student faculty ratio.
The peer review was given a weighting of forty per cent and this meant that the modest fall here cancelled out the greater rises on the other sections.
It was, however, the citations part that scuppered UM this year. Without this, it would have remained roughly where it was. Basically, falling from position 342 to 377 meant losing a bit more than 30 points on this section, or about six points on the total score, sufficient to eject UM from the top two hundred.
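The back-of-envelope arithmetic behind that six-point figure, assuming the roughly 20 per cent weighting that citations per faculty carried in the 2007 scheme (the peer review's 40 per cent is stated above):

```python
# Rough arithmetic behind the "about six points" figure.
citations_weight = 0.20      # approximate 2007 weighting for citations per faculty
component_points_lost = 30   # roughly what sliding from 342nd to 377th cost UM
print(component_points_lost * citations_weight)   # ~6 points off the overall score
```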
Why should such a modest fall have such dire consequences?
Basically, what happened is that Z scores, as noted earlier, compress scores at the top and stretch them out over the middle. Since the mean for the whole group is normalised at fifty and the maximum score is a hundred, an institution like Caltech will never get more than twice as many points as a university scoring around the mean, even if it produces, as in fact it does, ten times as much research.
So, in 2006 Caltech scored 100, Harvard 55, the National University of Singapore (NUS) 8, Peking University 2 and UM 1.
Now in 2007, Caltech gets 100, Harvard 96, NUS 84, Peking 53 and UM 14.
The scores have been stretched out at the bottom and compressed at the top. But there has almost certainly been no change in the underlying reality.
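To see why, try running an invented but similarly skewed set of citation figures through both procedures: the 2006-style scaling (top scorer gets 100, everyone else in proportion) and the mean-to-50 scaling sketched earlier. The numbers below are made up purely to illustrate the mechanism.

```python
import statistics

# Invented citations-per-faculty figures: one extreme outlier and a long tail
raw = [200, 30, 25, 20, 18, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2]

mean = statistics.mean(raw)   # about 21
sd = statistics.pstdev(raw)

# 2006-style scaling: top scorer = 100, everyone else proportional
old_style = {x: 100 * x / max(raw) for x in raw}

# 2007-style scaling (as sketched earlier): mean -> 50, top -> 100
z_top = (max(raw) - mean) / sd
new_style = {x: 50 + 50 * ((x - mean) / sd) / z_top for x in raw}

typical = 20   # a university producing roughly the group-average amount of research
print(old_style[max(raw)] / old_style[typical])   # ~10: the outlier scores ten times as much
print(new_style[max(raw)] / new_style[typical])   # ~2: now it scores only about twice as much
```

The underlying gap between the outlier and the typical university has not changed at all; only the presentation has.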
So what is the real position? UM, it seems, has, relative to other universities, recruited more international staff and admitted more international students. Its faculty student ratio has improved very slightly relative to other universities. The employers contacted by QS think more highly of its graduates this year.
This was all cancelled out by the fall in the "peer review", which may in part have been caused by the prohibition on voting for the respondent's own institution.
The real killer for UM, however, was the introduction of Z scores. I'll leave it to readers to decide whether a procedure that represents Caltech as only slightly better than Pohang University of Science and Technology and the University of Helsinki is superior to one that gives Peking only twice as many points as UM.
The pattern for the other Malaysian universities was similar, although less pronounced. It is also unfortunately noticeable that UKM got a very high score for international faculty, suggesting that an error similar to that of 2004 has occurred.
What is the real situation with regard to Malaysian universities? Frankly, I consider the peer review a dubious exercise and the components relating to student faculty ratio and internationalisation little better.
Counts of research and citations produced by third parties are, however, fairly reliable. Looking at the Scopus database (don't trust me -- get a 30-day free trial), I found that 1,226 research papers (defined broadly to include things like reviews and conference papers) by researchers affiliated to Malaysian universities and other institutions were published in 2001 and 3,372 in 2006. This is an increase of 175% over five years.
For Singapore the figures are 5,274 and 9,630, an increase of 83%.
For Indonesia they are 511 and 958, an increase of 87%.
For Australia they are 25,939 and 38,852, an increase of 56%.
For Japan they are 89,264 and 103,428, an increase of 16%.
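The percentages are just simple arithmetic on the paper counts; for example, for the two countries most relevant to the comparison:

```python
def growth(papers_start, papers_end):
    """Percentage increase between two paper counts."""
    return 100 * (papers_end - papers_start) / papers_start

print(growth(1226, 3372))   # Malaysia, 2001 -> 2006: about 175%
print(growth(5274, 9630))   # Singapore, 2001 -> 2006: about 83%
```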
It is of course easy to grow rapidly when you start from a low base and these statistics say nothing about the quality of the research. Nor do they distinguish between a conference paper with a graduate student as sixth author and an article in a leading indexed journal.
Still the picture is clear. The amount of research done in Malaysia has increased rapidly over the last few years and has increased more rapidly than in Singapore, Japan and Australia. Maybe Malaysia is not improving fast enough but it is clear that there has been no decline, either relative or absolute, and that the THES-QS rankings have, once again, given a false impression of what has happened.