
Wednesday, July 19, 2017

Comments on an Article by Brian Leiter

Global university rankings are now nearly a decade and a half old. The Shanghai rankings (Academic Ranking of World Universities or ARWU) began in 2003, followed a year later by Webometrics and the THES-QS rankings which, after an unpleasant divorce, became the Times Higher Education (THE) and the Quacquarelli Symonds (QS) world rankings. Since then the number of rankings with a variety of audiences and methodologies has expanded.

We now have several research-based rankings: University Ranking by Academic Performance (URAP) from Turkey, the National Taiwan University Rankings, Best Global Universities from US News, and the Leiden Ranking, as well as rankings that include some attempt to assess and compare something other than research, such as the Round University Rankings from Russia and U-Multirank from the European Union. And, of course, we also have subject rankings, regional rankings, even age group rankings.

It is interesting that some of these rankings have developed beyond the original founders of global rankings. Leiden Ranking is now the gold standard for the analysis of publications and citations. The Russian rankings use the same Web of Science database that THE did until 2014 and include 12 of the 13 indicators used by THE, plus another eight, in a more sensible and transparent arrangement. However, both of these receive only a fraction of the attention given to the THE rankings.

The research rankings from Turkey and Taiwan are similar to the Shanghai rankings but without the elderly or long departed Fields and Nobel award winners and with a more coherent methodology. U-Multirank is almost alone in trying to get at things that might be of interest to prospective undergraduate students.

It is regrettable that an article by Professor Brian Leiter of the University of Chicago in the Chronicle of Higher Education, 'Academic Ethics: To Rank or Not to Rank', ignores such developments and mentions only the original “Big Three”: Shanghai, QS and THE. This is perhaps forgivable, since the establishment media, including THE and the Chronicle, and leading state and academic bureaucrats have until recently paid very little attention to innovative developments in university ranking. Leiter attacks the QS rankings and proposes that they should be boycotted while trying to improve the THE rankings.

It is a little odd that Leiter should be so caustic, not entirely without justification, about QS while apparently being unaware of similar or greater problems with THE.

He begins by saying that QS stands for “quirky silliness”. I would not disagree with that although in recent years QS has been getting less silly. I have been as sarcastic as anyone about the failings of QS: see here and here for an amusing commentary.

But the suggestion that QS is uniquely bad in contrast to THE is way off target. There are many issues with the QS methodology, especially with its employer and academic surveys, and it has often announced placings that seem very questionable, such as Nanyang Technological University (NTU) ahead of Princeton and Yale or the University of Buenos Aires in the world top 100, largely as a result of a suspiciously good performance in the survey indicators. The oddities of the QS rankings are, however, no worse than some of the absurdities that THE has served up in its world and regional rankings. We have had places like Cadi Ayyad University in Marrakesh, Morocco, Middle East Technical University in Turkey, Federico Santa Maria Technical University in Chile, Alexandria University in Egypt and Veltech University in India rise to ludicrously high places, sometimes just for a year or two, as the result of a few papers or even a single highly cited author.

I am not entirely persuaded that NTU deserves its top 12 placing in the QS rankings. You can see here QS’s unconvincing reply to a question that I submitted. QS claims that NTU's excellence is shown by its success in attracting foreign faculty, students and collaborators, but when you are in a country where people show their passports to drive to the dentist, being international is no great accomplishment. Even so, NTU is evidently world class as far as engineering and computer science are concerned, and it is not impossible that it could reach an undisputed overall top ten or twenty ranking within the next decade.

While the THE top ten or twenty or even fifty looks quite reasonable, apart from Oxford in first place, there are many anomalies as soon as we start breaking the rankings apart by country or indicator, and THE has pushed some very weird data in recent years. Look at these places supposed to be regional or international centres of across-the-board research excellence as measured by citations: St George's University of London, Brandeis University, the Free University of Bozen-Bolzano, King Abdulaziz University, the University of Iceland, Veltech University. If QS is silly, what are we to call a ranking where Anglia Ruskin University is supposed to have a greater research impact than Chicago, Cambridge or Tsinghua?

Leiter starts his article by pointing out that the QS academic survey is largely driven by the geographical distribution of its respondents and by the halo effect. This is very probably true, and to that I would add that a lot of the responses to academic surveys of this kind are likely driven by simple self-interest: academics voting for their alma mater or current employer. QS does not allow respondents to vote for the latter, but they can vote for the former and also for grant providers or collaborators.

He says that “QS does not, however, disclose the geographic distribution of its survey respondents, so the extent of the distorting effect cannot be determined". This is not true of the overall survey. QS does in fact give very detailed figures about the origin of its respondents and there is good evidence here of probable distorting effects. There are, for example, more responses from Taiwan than from Mainland China, and almost as many from Malaysia as from Russia. QS does not, however, go down to subject level when listing geographic distribution.

He then refers to the case of University College Cork (UCC) asking faculty to solicit friends in other institutions to vote for UCC. This is definitely a bad practice, but it was in violation of QS guidelines and QS have investigated. I do not know what came of the investigation but it is worth noting that the message would not have been an issue if it had referred to the THE survey.

On balance, I would agree that THE’s survey methodology is less dubious than QS’s and less likely to be influenced by energetic PR campaigns. It would certainly be a good idea if the weighting of the QS survey were reduced and if there were more rigorous screening and classification of potential respondents.

But I think we also have to bear in mind that QS does prohibit respondents from voting for their own universities and it does average results out over a five-year period (formerly three years).

It is interesting that while THE does not usually combine and average survey results, it did so in the 2016-17 world rankings, combining the 2015 and 2016 survey results. This was, I suspect, because of a substantial drop in 2016 in the percentage of respondents from the arts and humanities that would, if unadjusted, have caused a serious problem for UK universities, especially those in the Russell Group.

Leiter then goes on to condemn QS for its dubious business practices. He reports that THE dropped QS because of its dubious practices. That is what THE says but it is widely rumoured within the rankings industry that THE was also interested in the financial advantages of a direct partnership with Thomson Reuters rather than getting data from QS.

He also refers to QS’s hosting of a series of “World Class” events, where university leaders pay $950 for “seminar, dinners, coffee breaks” and “learn best practice for branding and marketing your institution through case studies and expert knowledge”, and to the QS Stars plan, where universities pay to be audited by QS in return for stars that they can use for promotion and advertising. I would add to his criticism that the Stars program has apparently undergone a typical “grade inflation”, with the number of five-star universities increasing all the time.

QS also offers specific consulting services, and it has a large number of clients from around the world, although there are many more from Australia and Indonesia than from Canada and the US. Of the three from the US, one is MIT, which has been number one in the QS world rankings since 2012, a position it probably achieved after a change in the way in which faculty were classified.

It would, however, be misleading to suggest that THE is any better in this respect. Since 2014 it has launched a serious and unapologetic “monetisation of data” program.

There are events such as the forthcoming world "academic summit" where, for 1,199 GBP (standard university) or 2,200 GBP (corporate), delegates can get "Exclusive insight into the 2017 Times Higher Education World University Rankings at the official launch and rankings masterclass", plus a "prestigious gala dinner, drinks reception and other networking events". THE also provides a variety of benchmarking and performance analysis services, branding, advertising and reputation management campaigns, and a range of silver and gold profiles, including adverts and sponsored supplements. THE’s data clients include some illustrious names like the National University of Singapore and Trinity College Dublin plus some less well-known places such as Federico Santa Maria Technical University, Orebro University, King Abdulaziz University, National Research Nuclear University MEPhI Moscow, and Charles Darwin University.

Among THE’s activities are regional events that promise “partnership opportunities for global thought leaders” and where rankings like “the WUR are presented at these events with our award-winning data team on hand to explain them, allowing institutions better understanding of their findings”.

At some of these summits the rankings presented are trimmed and tweaked and somehow the hosts emerge in a favourable light. In February 2015, for example, THE held a Middle East and North Africa (MENA) summit that included a “snapshot ranking” that put Texas A and M University Qatar, a branch campus that offers nothing but engineering courses, in first place and Qatar University in fourth. The ranking consisted of precisely one indicator out of the 13 that make up THE’s world university rankings, field and year normalised citations. United Arab Emirates University (UAEU) was 11th and the American University of Sharjah in the UAE 14th.  

The next MENA summit was held in January 2016 in Al Ain in the UAE. There was no snapshot this time, and the methodology for the MENA rankings included all 13 indicators of THE’s world rankings. Host country universities were now in fifth (UAEU) and eighth place (American University of Sharjah). Texas A and M Qatar was not ranked and Qatar University fell to sixth place.

Something similar happened in Africa. In 2015, THE went to the University of Johannesburg for a summit that brought together “outstanding global thought leaders from industry, government, higher education and research” and which unveiled THE’s Africa ranking, based on citations (with the innovation of fractional counting), that put the host university in ninth place and the University of Ghana in twelfth.

In 2016 the show moved on to the University of Ghana where another ranking was produced based on all the 13 world ranking indicators. This time the University of Johannesburg did not take part and the University of Ghana went from 12th place to 7th.

I may have missed something, but so far I see no sign of THE Africa or MENA summits planned for 2017. If so, then African and MENA university leaders are to be congratulated for a very healthy scepticism.

To be fair, THE does not seem to have done any methodological tweaking for this year’s Asian, Asia Pacific and Latin American rankings.

Leiter concludes that American academics should boycott the QS survey but not THE’s, and that they should lobby THE to improve its survey practices. That, I suspect, is pretty much a nonstarter. QS has never had much of a presence in the US anyway, and THE is unlikely to change significantly as long as its commercial dominance goes unchallenged and as long as scholars and administrators fail to see through its PR wizardry. It would be better for everybody to start looking beyond the "Big Three" rankings.





Tuesday, June 24, 2008

Resumption of Posting

Teaching and family affairs have kept me away from this blog for a few months. I hope to start posting regularly again soon.


QS’s Greatest Hits: Part One

For the moment, it might be interesting to review some of the more spectacular errors of QS Quacquarelli Symonds Ltd (QS), the consultants who collect the data for the Times Higher Education Supplement’s (THES) World University Rankings.

During its short venture into the ranking business QS has shown a remarkable flair for error. In terms of quantity and variety they have no peers. All rankers make mistakes now and then but so far there has been nobody quite like QS.

Here is a list of what I think are QS’s ten best errors, mainly from the rankings and their book, Guide to the World’s Top Universities (2007). Most of them have been discussed in earlier posts on this blog; the date of these posts is given in brackets. There is one error, relating to Washington University in St. Louis, from last year’s rankings.

It has to be admitted that QS seem to be doing better recently. Or perhaps I have not been looking as hard as I used to. I hope that another ten errors will follow shortly.


One: Faculty Student Ratio in Guide to the World’s Top Universities (2007). (July 27, 2007; May 11, 2007)

This is a beautiful example of the butterfly effect, with a single slip of the mouse leading to literally hundreds of mistakes.

QS’s book, Guide to the World’s Top Universities, was produced at the end of 2006, after the publication of the rankings for that year, and contained data about the student faculty ratios of over 500 ranked universities. It should have been obvious immediately that there was something wrong with this data. Yale is given a ratio of 34.1, Harvard 18, Cambridge 18.9 and Pretoria 590.3. On the other hand, there are some ridiculously low figures, such as 3.5 for Dublin Institute of Technology and 6.1 for the University of Santo Tomas (Philippines).

Sometimes the ratios given flatly contradict information given on the same page. So, on page 127 in the FACTFILE, we are told that Yale has a student faculty ratio of 34.3. Over on the left we are informed that Yale has around 10,000 students and 3,333 faculty.

There is also no relationship between the ratios and the scores out of 100 in the THES QS rankings for student faculty ratio, something that Matt Rayner asked about, without ever receiving a reply, on QS’s topuniversities web site.

So what happened? It’s very simple. Someone slipped three rows when copying and pasting data, and every single student faculty ratio in the book, over 500 of them, is wrong. Dublin Institute of Technology was given Duke’s ratio (more about that later), Pretoria got Pune’s, Aachen RWT got Aberystwyth’s (Wales). And so on. Altogether over 500 errors.
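The mechanics of this kind of error are easy to reproduce. The sketch below uses invented universities and ratios (not the book's actual figures) to show how a single paste that lands three rows out of step corrupts every row at once:

```python
# Illustration only: hypothetical rows standing in for the book's spreadsheet.
rows = ["Univ A", "Univ B", "Univ C", "Univ D", "Univ E", "Univ F"]
ratios = [8.1, 12.4, 15.2, 3.5, 22.1, 59.3]  # correct ratio for each row

# Pasting the ratio column three rows out of step gives every
# university the value belonging to a school three rows away:
shift = 3
pasted = {rows[i]: ratios[(i + shift) % len(ratios)] for i in range(len(rows))}

for i, name in enumerate(rows):
    print(f"{name}: printed {pasted[name]}, correct {ratios[i]}")
```

One slip, N rows, N errors: with roughly 500 universities in the table, the single mis-paste produces the several hundred wrong ratios described above.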


Two: International Students and Faculty in Malaysian Universities.

In 2004 there was great jubilation at Universiti Malaya (UM) in Malaysia. The university had reached 89th place in the THES-QS world rankings. Universiti Sains Malaysia (USM) also did very well. Then in 2005 came disaster. UM crashed 100 places, seriously damaging the Vice-Chancellor’s career, and USM disappeared from the top 200 altogether. The Malaysian political opposition had a field day blasting away at the supposed incompetence of the university leadership.

The dramatic decline should have been no surprise at all. A Malaysian blogger had already noticed that the figures for international students and faculty in 2004 were unrelated to reality. What happened was that in 2004 QS were under the impression that large numbers of foreigners were studying and teaching at the two Malaysian universities. Actually, these were mostly Malaysian citizens of Indian and Chinese descent. In 2005 the error was corrected, causing the scores for international faculty and students to fall precipitously.

Later, THES referred to this as “a clarification of data”, a piece of elegant British establishment obfuscation that is almost as good as “being economical with the truth”.


Three: Duke’s student faculty ratio 2005 (October 30, 2006)

Between 2004 and 2005 Duke rose dramatically in the rankings. It did so mainly because it had been given an incredibly low student faculty ratio in the latter year, fewer than two students per faculty member. This was not the best ratio in the rankings; that supposedly belonged to the Ecole Polytechnique in Paris (more of that later). But it was favourable enough to give Duke a powerful boost in the rankings.

The ratio was the result of a laughable error. QS listed Duke as having 6,244 faculty, well in excess of anything claimed on the university’s web site. Oddly enough, this was exactly the number of undergraduate students enrolled at Duke in the fall of 2005. Somebody evidently had copied down the figure for undergraduate students and counted them as faculty, giving Duke four times the number of faculty it should have.


Four: Duke’s student faculty ratio 2006 (December 16, 2006)

Having made a mess of Duke’s student faculty ratio in 2005, QS pulled off a truly spectacular feat in 2006 by making an even bigger mess. The problem, I suspect, was that Duke’s public relations office had its hands full with the lacrosse rape hoax and that the web site had not been fully updated since the fall of 2005. For students, QS apparently took undergraduate student enrollment in the fall of 2005, subtracted the number of undergraduate degrees awarded and added the 2005 intake. This is a bit crude, because some students would leave without taking a degree, Reade Seligmann and Colin Finnerty for example, but probably not too inaccurate. Then there was a bit of a problem: while the number of postgraduate degrees awarded was indicated on the site, there was no reference to postgraduate admissions. So QS seem to have deducted the degrees awarded and added what they thought was the number of postgraduate students admitted, 300 of them, to the Pratt School of Engineering, which is an undergraduate, not a graduate, school. Then, in a final flourish, they calculated the number of faculty by doubling the figure on the Duke site, apparently because Duke listed the same faculty twice, classified first by department and then by status.

The result was that the number of students was undercounted and the number of faculty seriously overcounted, giving Duke the best student faculty ratio for the year. Although the ratio was higher than in 2005, Duke was now in first place for this section because QS had calculated more realistic ratios for the Ecole Polytechnique and the Ecole Normale Superieure.


Five: Omission of Kenan Flagler from the Fortune business school rankings. (March 05, 2007)

On the surface this was a trivial error compared to some that QS has committed: they mixed up the business school at the University of North Carolina with that of North Carolina State University. What made it significant is that, while most American universities seem unconcerned about the things that QS writes or does not write about them, business schools evidently feel that more is at stake, and they also have considerable influence over the magazines and newspapers that publish rankings. Kenan-Flagler protested vociferously over its omission, Fortune pulled the ranking off its site, and Nunzio Quacquarelli, director of QS, explained that it was the result of a lapse by a junior employee and stated that this sort of thing had never happened before and would never happen again.


Six: "Beijing University"

China’s best or second best university is Peking University. The name has not been changed to Beijing University apparently to avoid confusion with Beijing Normal University. There are also over twenty specialist universities in Beijing: Traditional Chinese Medicine, Foreign Languages, Aeronautics and so on.

In 2004 and 2005 THES and QS referred to Beijing University, finally correcting it to Peking University in 2006.

This was perhaps not too serious an error except that it revealed something about QS’s knowledge of its own sources and procedures.

In November 2005, Nunzio Quacquarelli went to a meeting in Kuala Lumpur, Malaysia. Much of the meeting was about the international students and faculty at UM and USM. There was apparently also a question about how Beijing University could have got such a magnificent score on the peer review while apparently producing almost no research. The correct answer would have been that QS was trying to find research written by scholars at "Beijing University", an institution which does not exist. Quacquarelli, however, answered that “we just couldn’t find the research” because Beijing University academics published in Mandarin (Kuala Lumpur New Straits Times, 20/11/05).

This is revealing because QS’s “peer review” is actually nothing more than a survey of the subscribers to World Scientific, a Singapore-based company that publishes academic books and journals, many of them Asia-orientated and mostly written in English. World Scientific has very close ties with Peking University. If Quacquarelli knew very much about the company that produces his company’s survey he would surely have known that it had a cozy relationship with Peking University and that Chinese researchers, in the physical sciences at least, do quite a lot of publishing in English.


Seven: Student faculty ratios at Yonsei and Korea universities (November 08, 2006)

Another distinguished university administrator whose career suffered because of a QS error was Jung Chang-Young of Yonsei University. Yonsei is a rival of Korea University and was on most measures its equal or superior. But in the THES-QS rankings it was way behind, largely because of a poor student faculty ratio. As it happened, the figure given for Korea University was far too favourable, and much better even than the ratio admitted by the university itself. This did not, however, help Jung, who had to resign.


Eight: Omission of SUNY – Binghamton, Buffalo and Albany

THES and QS have apologized for omitting the British universities of Lancaster, Essex and Royal Holloway. More serious is the omission of the State University of New York’s (SUNY) university centres at Buffalo, Albany and Binghamton. SUNY has four autonomous university centres which are normally treated as independent and are now often referred to as the Universities of Buffalo and Albany and Binghamton University. THES-QS does refer to one university centre, Stony Brook University, probably being under the impression that this is the entirety of the SUNY system. Binghamton is ranked 82nd by the USNWR and 37th among public national universities (2008). It can boast several internationally known scholars, such as Melvin Dubofsky in labour history and Immanuel Wallerstein in sociology. To exclude it from the rankings while including the likes of Dublin Institute of Technology and the University of Pune is ridiculous.


Nine: Student faculty ratio at Ecole Polytechnique (September 08, 2006)

In 2005 the Ecole Polytechnique went zooming up the rankings to become the best university in continental Europe. Then in 2006 it went zooming down again. All this was because of extraordinary fluctuations in the student faculty ratio. What happened could be determined by looking at the data on QS’s topgraduate site. Clicking on the rankings for 2005 led to the data that was used for that year (it is no longer available). There were two sets of data for students and faculty for that year, evidently one containing part-time faculty and another with only full-time faculty. It seems that in 2005 part-time faculty were counted but not in 2006.


Ten: Washington University in St Louis (November 11, 2007)

This is a leading university in every respect. Yet in 2007 QS gave it a score of precisely one for citations per faculty, behind Universitas Gadjah Mada, the Dublin Institute of Technology and Politecnico di Milano, and sent it falling from 48th to 161st in the overall rankings. What happened was that QS got it mixed up with the University of Washington (in Seattle) and gave all of WUSTL’s citations to the latter school.

Sunday, November 05, 2017

Ranking debate: What should Malaysia do about the rankings?


A complicated relationship

Malaysia has had a complicated relationship with global university rankings. There was a moment back in 2004 when the first Times Higher Education Supplement-Quacquarelli Symonds (THES-QS) world rankings put the country's flagship, Universiti Malaya (UM), in the top 100. That was the result of an error, one of several QS made in its early days. Over the following years UM went down and up in the rankings, but generally trended upwards, with other Malaysian universities following behind. This year it is 114th in the QS world rankings and the top 100 seems in sight once again.

There has been a lot of debate about the quality of the various ranking systems, but it does seem that UM and some other universities have been steadily improving, especially with regard to research, although, as the recent Universitas 21 report shows, output and quality are still lagging behind the provision of resources.  

There is, however, an unfortunate tendency in many places, including Malaysia, for university rankings to get mixed up with local politics. A good ranking performance is proclaimed a triumph by the government and a poor one is deemed by the opposition to be punishment for failed policies.

QS rankings criticised

Recently Ong Kian Ming, a Malaysian opposition MP, said that it was a mistake for the government to use the QS world rankings as a benchmark to measure the quality of Malaysian universities and that the ranking performance of UM and other universities is not a valid measure of quality.

"Serdang MP Ong Kian Ming today slammed the higher education ministry for using the QS World University Rankings as a benchmark for Malaysian universities.
In a statement today, the DAP leader called the decision “short-sighted” and “faulty”, pointing out that the QS rankings do not put much emphasis on the criteria of research output.

According to the QS World University Rankings  for 2018, released on June 8, five Malaysian varsities were ranked in the top 300, with Universiti Malaya (UM) occupying 114th position."

The article went on to say that:


"However, Ong pointed to the Times Higher Education (THE) World University Rankings for 2018, which he said painted Malaysian universities in a different light.

According to the THE rankings, which were released earlier this week, none of Malaysia’s universities made it into the top 300."



Ong suggests that they should rely on locally developed measures.

“Instead of being “obsessed” with the ranking game, he added, the ministry should work to improve the existing academic indicators and measures which have been developed locally by the ministry and the Malaysian Qualifications Agency to assess the quality of local public and private universities”

Multiplication of rankings

It is certainly not a good idea for anyone to rely on any single ranking. There are now over a dozen global rankings and several regional ones that assess universities according to a variety of criteria. Universities in Malaysia and elsewhere could make more use of these rankings, some of which are technically much better than the well-known big three or four: QS, THE, the Shanghai Academic Ranking of World Universities (ARWU) and sometimes the US News Best Global Universities.

Dr. Ong is also quite right to point out that the QS rankings have methodological flaws. However, the THE rankings are not really any better, and they are certainly not superior in the measurement of research quality. They also have the distinctive attribute that 11 of their 13 indicators are not presented separately but are bundled into three groups, so that the public cannot, for example, tell whether a good score for research is the result of an increase in research income, more publications, an improvement in reputation for research, or a reduction in the number of faculty.
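The bundling problem is easy to illustrate with a little arithmetic. Using made-up sub-indicators and weights (not THE's published figures), two very different underlying profiles can produce an identical pillar score, so the published number reveals nothing about which component moved:

```python
# Hypothetical pillar score: a weighted mix of three sub-indicators.
# The weights here are illustrative, not THE's actual internal weights.
def pillar(reputation, income, productivity, w=(0.6, 0.2, 0.2)):
    return w[0] * reputation + w[1] * income + w[2] * productivity

# Two quite different stories yield the same headline number:
a = pillar(reputation=70, income=40, productivity=40)  # strong reputation
b = pillar(reputation=50, income=70, productivity=70)  # strong funding/output
print(a, b)  # both 58.0
```

A reader who sees only the bundled score of 58 cannot tell which of the two universities improved its reputation and which simply gained research income, which is exactly the opacity described above.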

The important difference between the QS and THE rankings is not that the latter are focussed on research. QS's academic survey is specifically about research, and its faculty student ratio, unlike THE's, includes research-only staff. The salient difference is that the THE academic survey is restricted to published researchers while QS's allows universities to nominate potential respondents, something that gives an advantage to upwardly mobile institutions in Asia and Latin America.


Ranking vulnerabilities

All three of the well-known rankings, THE, QS and ARWU, now have vulnerabilities: metrics that can be influenced by institutions and where a modest investment of resources can produce a disproportionate and implausible rise in the rankings.

In the Shanghai rankings the loss or gain of a single highly cited researcher can move a university up or down dozens of places in the top 500. In addition, the recruitment of scientists whose work is frequently cited, even for adjunct positions, can help universities excel in ARWU’s publications and Nature and Science indicators.

The THE citations indicator has allowed a succession of institutions to over-perform in the world or regional rankings: Alexandria University, Anglia Ruskin University in Cambridge, Moscow Engineering Physics Institute, Federico Santa Maria Technical University in Chile, Middle East Technical University, Tokyo Metropolitan University, Veltech University in India, and Universiti Tunku Abdul Rahman (UTAR) in Malaysia. The indicator officially has a 30% weighting, but in reality it is even greater because of THE’s “regional modification”, which gives a boost to every university except those in the top scoring country. The modification used to apply to all of the citations but now covers half.
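For readers who want to see roughly what the modification does, here is a sketch. THE does not publish the formula in full detail; this assumes the commonly described form, in which half of a university's citation score is left as-is and half is divided by the square root of its country's average score (all numbers here are invented):

```python
import math

def regional_modification(raw_score, country_avg, blend=0.5):
    """Sketch of THE-style 'regional modification' (assumed form):
    half the citation score is kept raw, half is divided by the
    square root of the country's average citation score."""
    adjusted = raw_score / math.sqrt(country_avg)
    return (1 - blend) * raw_score + blend * adjusted

# A university in a low-scoring country (average well below the world
# norm of 1.0) gains ground on one with the same raw score in a
# country at the world norm:
low_country = regional_modification(40.0, country_avg=0.25)
high_country = regional_modification(40.0, country_avg=1.0)
print(low_country, high_country)  # 60.0 40.0
```

Under this assumed form, a university's citation score is inflated simply because its national neighbours score poorly, which is why the effective weight of the indicator exceeds its nominal 30%.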

The vulnerability of the QS rankings is the two survey indicators, accounting for 50% of the total weighting, which allow universities to propose their own respondents. In recent years some Asian and Latin American universities, such as Kyoto University, Nanyang Technological University (NTU), the University of Buenos Aires, the Pontifical Catholic University of Chile and the National University of Colombia, have received scores for academic and employer reputation that are out of line with their performance on any other indicator.

QS may have discovered a future high flyer in NTU, but I have my doubts about the Latin American places. It is also most unlikely that Anglia Ruskin, UTAR and Veltech would do so well in the THE rankings if they lost their highly cited researchers.

Consequently, there are limits to the reliability of the popular rankings and none of them should be considered the only sign of excellence. Ong is quite correct to point out the problems of the QS rankings but the other well known ones also have defects.


Beyond the Big Four


Ong points out that if we look at "the big four" then the high position of UM in the QS rankings is anomalous. It is in 114th place in the QS world rankings (24th in the Asian rankings), 351-400 in THE, 356 in the US News global rankings and 401-500 in ARWU.

The situation looks a little different when you consider all of the global rankings. Below is UM's position in 14 global rankings. The QS world rankings are still where UM does best, but they sit at the end of a continuum rather than standing apart as an outlier. UM is 135th for publications in the Leiden Ranking, generally considered by experts to be the most technically sound, although it is lower for high-quality publications, 168th in the Scimago Institutions Rankings, which combine research and innovation, and 201-250 in the QS graduate employability rankings.

The worst performance is in the uniRank rankings (formerly 4icu), based on web activity, where UM is 697th.

The Shanghai rankings are probably a better guide to research prowess than either QS or THE since they deal only with research and, with one important exception, have a generally stable methodology. UM is 402nd overall, having fallen from 353rd in 2015 because of changes in the list of highly cited researchers used by the Shanghai rankers.  UM does better for publications, 143rd this year and 142nd in 2015.

QS World University Rankings: 114 [general, mainly research]
CWTS Leiden Ranking:  publications 135,  top 10% of journals 195 [research]
Scimago Institutions Rankings:  168 [research and innovation]
QS Graduate Employability Rankings: 201-250 [graduate outcomes]
Round University Ranking: 268 [general]
THE World University Rankings: 351-400 [general, mainly research]
US News Best Global Universities: 356 [research]
Shanghai ARWU: 402 [research]
Webometrics: overall 418 (excellence 228) [mainly web activity]
Center for World University Rankings: 539 [general, quality of graduates]
Nature Index: below 500 [high impact research]
uniRank: 697 [web activity]


The QS rankings are not such an outlier. Looking at indicators in other rankings devoted to research gives fairly similar results. Malaysian universities would, however, be wise to avoid concentrating on any single ranking; they should look at the specific indicators that measure the features they consider important.


Universities with an interest in technology and innovation could look at the Scimago rankings which include patents. Those with strengths in global medical studies might find it beneficial to go for the THE rankings but should always watch out for changes in methodology. 

Using local benchmarks is not a bad idea, and it can be valuable for those institutions that are not so concerned with research. But many Malaysian institutions are now competing on the global stage and are subject to international assessment, and that, whether they like it or not, means assessment by rankings. It would be an improvement if benchmarks and targets were expressed as reaching a certain level in two or three rankings, not just one. Institutions should also focus on specific indicators rather than the overall score, and different rankings and indicators should be used to assess and compare different places.


For example, the Round University Rankings from Russia, which include five of the six metrics in the QS rankings plus others but with sensible weightings, could be used to supplement the QS world rankings.


For measuring the research output and quality of universities, the Leiden Ranking might be a better alternative to either the QS or the THE rankings. Those universities with an innovation mission could refer to the innovation knowledge metric in the Scimago Institutions Rankings.

When we come to measuring teaching and the quality of graduates there is little of value from the current range of global rankings. There have been some interesting initiatives such as the OECD's AHELO project and U-Multirank but these have yet to be widely accepted. The only international metric that even attempts to directly assess graduate quality is QS's employer survey.

So, universities, governments and stakeholders need to stop thinking about using one ranking as a benchmark for everyone and also to stop looking at the overall rankings. 

Tuesday, May 07, 2013

Unsolicited Advice



There has been a lot of debate recently about the reputation survey component in the QS World University Rankings.

The president of University College Cork asked faculty to find friends at other universities who "understand the importance of UCC improving its university world ranking". The reason for the reference to other universities is that the QS survey very sensibly does not permit respondents to vote for their own universities, those that they list as their affiliation.  

This request appears to violate QS's guidelines which permit universities to inform staff about the survey but not to encourage them to nominate or refrain from nominating any particular university. According to an article in Inside Higher Ed QS are considering whether it is necessary to take any action.

This report has given Ben Sowter of QS sufficient concern to argue that it is not possible to effectively manipulate the survey.  He has set out a reasonable case why it is unlikely that any institution could succeed in marching graduate students up to their desktops to vote for favoured institutions to avoid being sent to a reeducation camp or to teach at a community college.

However, some of his reasons sound a little unconvincing: signing up, screening, an advisory board with years of experience. It would help if he were a little more specific, especially about the sophisticated anomaly detection algorithm, which sounds rather intimidating.

The problem with the academic survey is not that an institution like University College Cork is going to push its way into the global  top twenty or top one hundred  but that there could be a systematic bias towards those who are ambitious or from certain regions. It is noticeable that some universities in East and Southeast Asia do very much better on the academic survey than on other indicators. 

The QS academic survey is getting overly complicated and incoherent. It began as a fairly simple exercise. Its respondents were at first drawn from the subscription lists of World Scientific, an academic publishing company based in Singapore. Not surprisingly, the first academic survey produced a strong, perhaps too strong, showing for Southeast and East Asia and Berkeley.

The survey turned out to be unsatisfactory, not least because of an extremely small response rate. In succeeding years QS has added respondents drawn from the subscription lists of Mardev, an academic database, largely replacing those from World Scientific, lists supplied by universities, academics nominated by respondents to the survey and those joining the online sign up facility. It is not clear how many academics are included in these groups or what the various response rates are. In addition, counting responses for three years unless overwritten by the respondent might enhance the stability of the indicator but it also means that some of the responses might be from people who have died or retired.

The reputation survey does not have a good reputation and it is time for QS to think about revamping the methodology. But changing the methodology means that rankings cannot be used to chart the progress or decline of universities over time. The solution to this dilemma might be to launch a new ranking and keep the old one, perhaps issuing it later in the year or giving it less prominence.

My suggestion to QS is that they keep the current methodology but call it the Original QS Rankings or the QS Classic Rankings. Then they could introduce the  QS Plus or New QS rankings or something similar which would address the issues about the academic survey and introduce some other changes. Since QS are now offering a wide range of products, Latin American Rankings, Asian Rankings, subject rankings, best student cities and probably more to come, this should  not impose an undue burden.

First, starting with the academic survey, 40 per cent is too much for any indicator. It should be reduced to 20 per cent.

Next, the respondents should be divided into clearly defined categories, presented with appropriate questions and appropriately verified.

It should be recognised that subscribing to an online database or being recommended by another faculty member is not really a qualification for judging international research excellence. Neither is getting one’s name listed as corresponding author. These days that  can have as much to do with faculty politics as with ability.  I suggest that the academic survey should be sent to:

(a) highly cited researchers  or those with a high h-index who should be asked about international research excellence;
(b) researchers drawn from the Scopus database who should be asked to rate the regional or national research standing of universities.

Responses should be weighted according to the number of researchers per country.
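A hypothetical sketch of such a weighting scheme (the country names and numbers are illustrative, and this is not QS's actual procedure): each country's responses are scaled so that the country's total voting weight equals its share of the world's researchers, however many respondents it happens to supply.

```python
def response_weights(responses_by_country: dict[str, int],
                     researchers_by_country: dict[str, int]) -> dict[str, float]:
    """Weight each country's survey responses by its share of researchers.

    A country that sends many respondents relative to its research
    population gets a small per-response weight, and vice versa, so no
    country can inflate its influence by flooding the survey.
    """
    total_researchers = sum(researchers_by_country.values())
    weights = {}
    for country, n_responses in responses_by_country.items():
        researcher_share = researchers_by_country[country] / total_researchers
        weights[country] = researcher_share / n_responses  # weight per response
    return weights
```

Two countries with equal numbers of researchers would then carry equal total weight even if one supplied ten times as many respondents.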

This could be supplemented with a survey of student satisfaction with teaching based on a student version of the sign up facility and requiring a valid academic address with verification.

Also, a sign up facility could be established for anyone interested and asking a question about general perceived quality.

If QS ever do change the academic survey they might as well review the other indicators. Starting with the employer review, this should be kept since, whatever its flaws, it is an external check on universities. But it might be easier to manipulate than the academic survey. Something was clearly going on in the 2011 ranking when there appeared to be a disproportionate number of respondents from some Latin American countries, leading QS to impose caps on universities exceeding the national average by a significant amount. 

"QS received a dramatic level of response from Latin America in 2011, these counts and all subsequent analysis have been adjusted by applying a weighting to responses from countries with a distinctly disproportionate level of response."

It seems that this problem was sorted out in 2012. Even so, QS might consider giving half the weighting for this survey to an invited panel of employers. Perhaps they could also broaden their database by asking NGOs and non-profit groups about their preferences.

There is little evidence that overall the number of international students has anything to do with any measure of quality, and it may also have undesirable backwash effects as universities import large numbers of less able students. The problem is that QS are doing good business moving graduate students across international borders, so it is unlikely that they will ever consider doing away with this indicator.

Staff student ratio is by all accounts a very crude indicator of teaching quality. Unfortunately, at the moment there does not appear to be any practical alternative.
One thing that QS could do is to remove research staff from the faculty side of the equation. At the moment a university that hires an army of underpaid research assistants and sacks a few teaching staff, or packs them off to a branch campus, would be recorded as having brought about a great improvement in teaching quality.

Citations are a notoriously problematical way of measuring research influence or quality. The Leiden Ranking shows that there are many ways of measuring research output and influence. It would be a good idea to combine several different ways of counting citations. QS have already started to use the h-index in their subject rankings starting this year and have used citations per paper in the Asian University Rankings.
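The h-index that QS has begun using is easy to compute: an institution (or author) has index h if h of its papers have been cited at least h times each. A minimal sketch, alongside the citations-per-paper figure used in the Asian rankings:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h

def citations_per_paper(citations: list[int]) -> float:
    """Simple mean citations per paper, as in the QS Asian rankings."""
    return sum(citations) / len(citations) if citations else 0.0
```

Combining several such measures, as suggested above, would damp the distortions that any single citation metric produces.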

With the 20 per cent left over from reducing the weighting for the academic survey, QS might consider introducing a measure of research output rather than quality, since this would help distinguish among universities outside the elite, and perhaps use internet data from Webometrics as in the Latin American rankings.

Thursday, March 28, 2019

Global University Rankings and Southeast Asia


Global University Rankings and Southeast Asia
Paper presented at Asia-Pacific Association for International Education, Kuala Lumpur 26 March 2019

Background
Global rankings began in a small way in 2003 with the publication of the first edition of the Shanghai Rankings. These were quite simple, comprising six indicators that measured scientific research. Their purpose was to show how far Chinese universities had to go to reach world class status. Public interest was limited although some European universities were shocked to find how far they were behind English-speaking institutions.
Then came the Times Higher Education Supplement (THES) – Quacquarelli Symonds (QS) World University Rankings. Their methodology was very different from that of Shanghai, relying heavily on a survey of academic opinion. In most parts of the world interest was limited and the rankings received little respect, but Malaysia was different. The country’s flagship university, Universiti Malaya (UM), reached the top one hundred, an achievement that was cause for great if brief celebration. That achievement was the result of an error on the part of the rankers, QS, and in 2005 UM crashed out of the top 100.

Current Ranking Scene
International rankings have made substantial progress over the last decade and a half. In 2003 there was one, Shanghai. Now, according to the IREG Inventory, there are 45 international rankings, of which 17 are global, plus subject, regional, system, business school and sub-rankings.
They cover a broad range of data that could be of interest to students, researchers, policy makers and other stakeholders. They include metrics like number of faculty and students, income, patents, web activity, publications, books, conferences, reputation surveys, and contributions to environmental sustainability.

Rankings and Southeast Asia
For Malaysia the publication of the THES-QS rankings in 2004 was the beginning of years of interest, perhaps obsession, with the rankings. The country has devoted resources and support to gain favourable places in the QS rankings.
Singapore has emphasised both the QS and THE rankings since that unpleasant divorce in 2009. It has hosted the THE academic summit and has performed well in nearly all rankings especially in the THE and QS world rankings.
A few universities in Thailand, Indonesia and the Philippines have been included at the lower levels of rankings such as those published by the University of Leiden, National Taiwan University, Scimago, THE and QS.
Other countries have shown less interest. Myanmar and Cambodia are included only in the Webometrics and uniRank rankings, which include thousands of places with the slightest pretension of being a university or college.

Inclusion and Performance
There is considerable variation in the inclusiveness of the rankings. There are five Southeast Asian universities in the Shanghai Rankings and 3,192 in Webometrics.
Among Southeast Asian countries Singapore is clearly the best performer, followed by Malaysia, while Myanmar is the worst.

Targets
The declaration of targets with regard to rankings is a common strategy across the world. Malaysia has a specific policy of getting universities into the QS rankings: four in the top 200, two in the top 100 and one in the top 25.
In Thailand the 20-year national strategy aims at getting at least five Thai universities into the top 100 of the world rankings.
Indonesia wants to get five specified universities into the QS top 500 by 2019 and a further six by 2024.

The Dangers of Rankings
The cruel reality is that we cannot escape rankings. If all the current rankings were banned and thrown into an Orwellian memory hole then we would simply revert to informal and subjective rankings that prevailed before.
If we must have formal rankings then they should be as valid and accurate as possible; they should take account of the varying missions of universities, their size and their clientele; and they should be as comprehensive as possible.
To ignore the data that rankings can provide is to seriously limit public awareness. At the moment Southeast Asian universities and governments seem interested mainly or only in the QS rankings or perhaps the THE rankings.
To focus on any single ranking could be self-defeating. Take a look at Malaysia’s position in the QS rankings. It is obvious that UM, Malaysia’s leading university in most rankings, does very much better in the QS rankings than in any other ranking, except the GreenMetric rankings.
Why is this? The QS rankings allot a 40% weighting to a survey of academic opinion supposedly about research, more than any other ranking. They allow universities to influence the composition of survey respondents, by submitting names or by alerting researchers to the sign-up facility where they can take part in the survey.
To their credit, QS have published the number of survey respondents by country. The largest number is from the USA with almost as many from the UK. The third largest number of respondents is from Malaysia, more than China and India combined. Malaysian universities do much better in the academic survey than they do for citations.
It is problematical to present UM as a top 100 university. It has a good reputation among local and regional researchers but is not doing so well in the other metrics, especially research of the highest quality.
There is also a serious risk that this performance in the QS rankings is precarious. Already countries like Russia, Colombia, Iraq and Kazakhstan are increasing their representation in the QS survey. More will join them. The top Chinese universities are targeting the Shanghai rankings, but one day the second tier may try out for the QS rankings.
Also, any university that relies too much on the QS rankings could easily be a victim of methodological changes. QS has, with good reason, revamped its methodology several times and this can easily affect the scores of universities through no fault or credit of their own. This may have happened again during the collection of data for this year’s rankings. QS recently announced that universities can either submit names of potential respondents or alert researchers to the sign-up facility but not, as in previous years, both. Universities that have not responded to this change may well suffer a reduced score in the survey indicators.
If not QS, should another ranking be used for benchmarking and targets? Some observers claim that Asian universities should opt for the THE rankings which are alleged to be more rigorous and sophisticated and certainly more prestigious.
That would be a mistake. The value of the THE rankings, but not their price, is drastically reduced by their lack of transparency so that it is impossible, for example, to tell whether a change in the score for research results from an increase in publications, a decline in the number of staff, an improved reputation or an increase in research income.
Then there is the THE citations indicator. This can only be described as bizarre and ridiculous.
Here are some of the universities that appeared in the top 50 of last year’s citations indicator, which supposedly measures research influence or quality: Babol Noshirvani University of Technology, Brighton and Sussex Medical School, Reykjavik University, Anglia Ruskin University, Jordan University of Science and Technology, Vita-Salute San Raffaele University.

Proposals
1. It is not a good idea to use any single ranking, but if one is to be used then it should be one that is methodologically stable and technically competent and does not emphasise a single indicator. For research, probably the best bet would be the Leiden Ranking. If a ranking is needed that includes metrics that might be related to teaching and learning then the Round University Rankings would be helpful.
2. Another approach would be to encourage universities to target more than one ranking.
3. A regional database should be created that would provide information about ranks and scores in all relevant rankings and data about faculty, students, income, publications, citations and so on.
4. Regional universities should work to develop measures of the effectiveness of teaching and learning.
