He then continues:
"Those who have used our rankings to cast judgment on the state of Malaysian higher education (and many, in very senior positions have done so) must be told that the annual tables had some serious flaws — flaws which I have a responsibility to put right."
He is absolutely right about the flawed rankings of 2004-2009 and about the use of ranking data for political purposes. It is particularly noticeable that any fall by Malaysian universities in the rankings is treated by some writers as the consequence of serious problems in the Malaysian education system. I remember at the end of 2007 receiving a request from a Singaporean newspaper to comment on the latest rankings, in which Universiti Malaya (UM) had suffered a serious decline. I replied in detail that it was highly likely that the apparent fall in UM's position was due to changes in methodology and nothing else. This was confirmed a few days later when detailed indicator scores were published showing that UM's fall between 2006 and 2007 was almost entirely the result of the introduction of Z scores, which substantially boosted the research scores of moderately productive research universities like Peking while only slightly lifting those of relatively less productive ones like UM. The newspaper article, however, simply asserted that the decline was the result of deficiencies in UM and in Malaysian higher education in general.
I do not dispute that Malaysian universities have problems. It is also obvious that in many years they tumbled down the QS rankings. The two just did not have anything to do with each other.
Equally it is true that the quantity of research in Malaysian universities has expanded greatly in recent years and that in some years some Malaysian universities rose. But again these two things were quite unrelated.
In the critique of the old rankings the focus is on the survey of academic opinion, which accounted for 40% of the overall score. Baty points out that a relatively small number of responses were collected from academics worldwide: 563 from the UK, 180 from Malaysia and 201 from the Philippines.
It is true that the old THE-QS rankings collected a small number of responses, but size alone is not the crux of the matter. What matters is whether the sample is an adequate representation of the population from which it is drawn. It is arguable that subscribers to World Scientific (THE-QS) are less representative of international academic opinion than published researchers in peer-reviewed journals (THE and Thomson Reuters). The actual number is less important. The new rankings will be vindicated not so much by the number of responses received as by how representative and qualified the respondents are.
The article suggests that Malaysian scholars will have a greater opportunity to participate in the ranking than before. I am wondering about that. I know a few people in Malaysia who took part in the 2008 survey but have not received a form this year (perhaps they do not deserve to). It will be interesting to see the exact number of respondents from Malaysia and elsewhere when the polls close.