The Japan Times recently published an article by Takamitsu Sawa, President and Distinguished Professor at Shiga University, discussing the apparent decline of Japan's universities in the global rankings.
He notes that in 2014 there were five Japanese universities in the top 200 of the Times Higher Education (THE) world rankings but only two in 2016. He attributes Japan's poor performance to the bias of the citations indicator towards English language publications and the inability or reluctance of Japanese academics to write in English. Professor Sawa seems to be under the impression that THE does not count research papers not written in English, which is incorrect. It is, however, true that the failure of Japanese scholars to write in English prevents their universities doing better in the rankings. He also blames lack of funding from the government and the Euro-American bias of the THE reputation survey.
The most noticeable thing about this article is that the author talks about exactly one table, the THE World University Rankings. This is unfortunately very common, especially among Asian academics. There are now over a dozen global rankings of varying quality and some of them tell a different, and perhaps more accurate, story than THE's. There are, for example, several well known international rankings in which more Japanese universities appear in the world top 200 than in THE's.
There are currently two in the THE top 200 but seven in the Shanghai Academic Ranking of World Universities (ARWU), ten in the QS World University Rankings, ten in the Russian Round University Rankings, seven in the CWTS Leiden Ranking total publications indicator and ten in the Nature Index.
Let's now take a look at the University of Tokyo (Todai), the country's best known university, and its position in these rankings. Currently it is 46th in the world in THE, but in ARWU it is 23rd, in QS 28th, in the Leiden Ranking tenth for publications and in the Nature Index tenth. RUR puts the university in 43rd place, still a little better than THE. It is very odd that Professor Sawa should focus on the ranking that puts Japanese universities in the worst possible light and ignore the others.
As noted in an earlier post, Tokyo's tumble in the THE rankings came suddenly in 2015 when THE made some drastic changes in its methodology, including switching to Scopus as data supplier, excluding papers with very large numbers of authors such as those derived from the CERN projects, and applying the country adjustment to half of the citations indicator instead of all of it. Then in 2016 THE made further changes for its Asian rankings that further lowered the scores of Japanese universities.
It is true that the scores of leading Japanese universities in most rankings have drifted downwards over the last few years, but this is a relative trend caused mainly by the rise of a few Chinese and Korean universities. Japan's weakest point, as indicated by the RUR and THE rankings, is internationalisation. These rankings show that the major Japanese universities still have strong reputations for postgraduate teaching and research, while the Nature Index and the Leiden Ranking point to an excellent performance in research in the natural sciences at the highest levels.
Nobody should rely on a single ranking and changes caused mainly by methodological tweaking should be taken with a large bucket of salt.
Saturday, December 16, 2017
Rankings in Hong Kong
My previous post on the City University of Hong Kong has been republished in the Hong Kong Standard.
So far I can find no reference to anyone asking about the City University of Hong Kong's submission of student data to THE or data about faculty numbers for any Hong Kong university.
I also noticed that the Hong Kong University of Science and Technology is not on the list of 500 universities in the QS Employability Rankings although it is 12th in the one published in THE. Is there a dot here?
Measuring graduate employability: two rankings
Global university rankings are now well into their second decade. Since 2003, when the first Shanghai rankings appeared, there has been a steady growth of global and regional rankings. At the moment most global rankings are of two kinds: those that focus entirely or almost entirely on research, and those, such as the Russian Round Rankings, Times Higher Education (THE) and Quacquarelli Symonds (QS), that claim to also measure teaching, learning or graduate quality in some way, although even those are biased towards research when you scratch the surface a little.
The ranking industry has become adept at measuring research productivity and quality in various ways. But the assessment of undergraduate teaching and learning is another matter.
Several ranking organisations use faculty student ratio as a proxy for quality of teaching, which in turn is assumed to have some connection with something that happens to students during their programmes. THE also count institutional income, research income and income from industry, again assuming that there is a significant association with academic excellence. Indicators like these are usually based on data supplied by institutions. For examples of the problems here see an article by Alex Usher and a reply by Phil Baty.
An attempt to get at student quality is provided by the CWUR rankings, now based in the UAE, which count alumni who win international awards or who are CEOs of major companies. But obviously this is relevant only for a very small number of universities. A new pilot ranking from Moscow also counts international awards.
The only attempt to measure student quality by the well known rankers that is relevant to most institutions is the survey of employers in the QS world and regional rankings. There are some obvious difficulties here. QS gets respondents from a variety of channels and this may allow some universities to influence the survey. In recent years some Latin American universities have done much better on this indicator than on any other.
THE now publish a global employability ranking which is conducted by two European firms, Trendence and Emerging. This is based on two surveys of recruiters in Argentina, Australia, Austria, Brazil, Canada, China, Germany, France, India, Israel, Italy, Japan, Mexico, Netherlands, Singapore, Spain, South Africa, South Korea, Turkey, UAE, UK, and USA. There were two panels with a total of over 6,000 respondents.
A global survey that does not include Chile, Sweden, Egypt, Nigeria, Saudi Arabia, Russia, Pakistan, Indonesia, Bangladesh, Poland, Malaysia or Taiwan can hardly claim to be representative of international employers. This limited representation may explain some oddities of the rankings, such as the high places of the American University of Dubai and the National Autonomous University of Mexico.
The first five places in these rankings are quite similar to the THE world rankings: Caltech, Harvard, Columbia, MIT, Cambridge. But there are some significant differences after that and some substantial changes since last year. Here Columbia, 14th in the world rankings, is in third place, up from 12th last year. Boston University is 6th here but 70th in the world rankings. Tokyo Institute of Technology, in 19th place here, is in the 251-300 band in the world rankings. CentraleSupelec is 41st here but in the 401-500 group in the world rankings.
These rankings are useful only for a small minority of universities, stakeholders and students. Only 150 schools are ranked and only a small proportion of the world's employers consulted.
QS have also released their global employability rankings with 500 universities. These combine the employer reputation survey used in their world rankings with other indicators: alumni outcomes, based on lists of high achievers; partnership with employers, that is research collaboration noted in the Scopus database; employer-student connections, that is employers actively present on campus; and graduate employment rate. There seems to be a close association, at least at the top, between overall scores, employer reputation and alumni outcomes. Overall the top three are Stanford, UCLA and Harvard. For employer reputation they are Cambridge, Oxford and Harvard, and for alumni outcomes Harvard, Stanford and Oxford.
The other indicators are a different matter. For employer-student connections the top three are Huazhong University of Science and Technology, Arizona State University, and New York University; in fact seven out of the top ten on this measure are Chinese. For graduate employment rate they are Politecnico di Torino, Moscow State Institute of International Relations, and Sungkyunkwan University, and for partnership with employers Stanford, Surrey and Politecnico di Milano. When the front runners on these indicators are so different, one has to wonder about their validity.
There are some very substantial differences in the ranks given to various universities in these rankings. Caltech is first in the Emerging-Trendence rankings and 73rd in QS. Hong Kong University of Science and Technology is 12th in Emerging-Trendence but not ranked at all by QS. The University of Sydney is 4th in QS and 48th in Emerging-Trendence. The American University of Dubai is in QS's 301-500 band but 138th for Emerging-Trendence.
The rankings published by THE could be of some value to those students contemplating careers with the leading companies in the richest countries.
The QS rankings may be more helpful for those students or stakeholders looking at universities outside the very top of the global elite. Even so QS have ranked only a fraction of the world's universities.
It still seems that the way forward in the assessment of graduate outcomes and employability is through standardised testing along the lines of AHELO or the Collegiate Learning Assessment.
Monday, December 11, 2017
Rankings Calendar
The Times Higher Education (THE) Asian Universities Summit will be held at the Southern University of Science and Technology, Shenzhen, China, 5th-7th February, 2018. The 2018 THE Asian university rankings will be announced there.
Monday, November 27, 2017
Rankings Uproar in Hong Kong
There is a controversy brewing in Hong Kong about the submission of data to the QS World University Rankings. It seems that the City University of Hong Kong (CityU) has submitted a smaller figure for the total number of its students than that presented by the SAR's University Grants Committee (UGC). The objective of this was presumably to boost the score for faculty student ratio, which accounts for 20% of the total score in the QS rankings. The complaints apparently began with two other local universities and were reported in the Chinese language Apple Daily.
There is nothing new about this sort of thing. Back in 2006 I commented on the difference between the number of students at "Beijing University" on the university web site and that declared by QS. Ong Kian Ming has noted discrepancies between the number of students at Malaysian universities reported on web sites and that published by QS, and there have been questions about the number of international students at Singapore universities.
The first thing that strikes an outside observer about the affair is that the complaint seems to be just about QS and does not mention the THE rankings although exactly the same number of students, 9,240, appears on both the QS and THE pages. The original article in Chinese apparently makes no mention of THE.
This suggests that there might be a bit of politics going on here. THE seems to have a good relationship with some of the leading universities in Hong Kong such as the University of Hong Kong (UHK) and the Hong Kong University of Science and Technology (HKUST). In 2015 THE held a prestigious summit at HKUST where it announced after "feedback from the region" that it was introducing methodological changes that would dethrone the University of Tokyo from the number one spot in the Asian rankings and send it down to seventh place behind HKUST and UHK. It looks as though whoever is complaining about CityU is diverting their eyes from THE.
There is certainly a noticeable difference between the number of students submitted to QS and THE by CityU and that published by the UGC. This is not, however, necessarily nefarious. There are many ways in which a university could massage or trim data in ways compliant with the rankers' guidelines: using a specific definition of Full Time Equivalent, omitting or including branch campuses, research centres, affiliated institutions, counting students at the beginning or the end of the semester, counting or not counting exchange students or those in certificate, diploma, transitional or preparatory programmes. It is also not totally impossible that the government data may not be 100% accurate.
Other Hong Kong universities have also submitted student data that differs from that available at the UGC site but to a lesser extent.
The UGC's data refers to 13,725 full time equivalent students in 2014-15. It is possible that City University has found legitimate ways of whittling down this number. If nothing else, they could claim that they had to use data from earlier years because of uncertainty about the validity of current data.
The real problem here is that it is possible that some universities have learned that success in the rankings is sometimes as much a matter of careful reading of statistics and guidelines as it is of improved teaching or research.
Another thing that has so far gone unnoticed is that CityU has also been reducing the number of faculty. The UGC reports a total of 2,380 full time equivalent faculty while QS reports 1,349. If the university had just used the raw UGC figures it would have a faculty student ratio of 5.77. The QS figure is 6.85. So by modifying the UGC data, if that is where the university started, CityU actually got a worse result on this indicator. They would, however, have done a bit better on the citations per faculty indicator.
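As a rough check of the arithmetic above, here is a minimal sketch, not anything CityU or QS actually runs, using the figures quoted in this post:

```python
# Students-per-staff ratios implied by the figures quoted above:
# raw UGC 2014-15 FTE data versus the numbers on CityU's QS profile.
def students_per_staff(students, staff):
    return round(students / staff, 2)

print(students_per_staff(13725, 2380))  # UGC figures: about 5.77
print(students_per_staff(9240, 1349))   # figures reported to QS: about 6.85
```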
This leads on to what the Hong Kong universities did with their faculty numbers.
For the University of Hong Kong the UGC reports a total of 5,093 FTE staff but the QS site has 3,012. THE does not give a figure for the number of faculty but it is possible to calculate this from the number of students and the faculty student ratio, which are provided. The current THE profile of UHK has 18,364 students and 18 students per staff, which gives us 1,020 staff.
For HKUST the UGC number of staff is 2,398. The number calculated from THE data is 442. QS has a total of 1,150.
For the Chinese University of Hong Kong (CUHK) we have these numbers: UGC 5,070, QS 2,208, THE 1,044.
For the Polytechnic University of Hong Kong (PUHK): UGC, 3,356, QS 2,447, THE 809.
The UGC gives 2,380 FTE staff for CityU, QS 1,349, and THE 825.
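The THE staff figures quoted above are not published directly; they are back-calculated in the way described for UHK. A minimal sketch of that calculation, using the UHK numbers as the worked example:

```python
# THE publishes the number of students and a students-per-staff ratio,
# so an implied staff count is simply students divided by the ratio.
def implied_staff(students, students_per_staff_ratio):
    return round(students / students_per_staff_ratio)

# University of Hong Kong: 18,364 students at 18 students per staff member.
print(implied_staff(18364, 18))  # roughly 1,020 staff
```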
The UGC also provides the number of faculty wholly funded by the UGC and this number is always much lower than the total faculty. The QS faculty numbers are generally quite similar to these although I do not know if there was a decision to exclude non-funded faculty. The calculated THE faculty numbers are much lower than those provided by the UGC and lower than the QS numbers.
I suspect that what is going on is that the leading Hong Kong universities have adopted the strategy of aiming for the THE rankings where their income, resources and international connections can yield maximum advantage. They presumably know that the weighting of the staff/student indicator, where it is better to have more faculty, is only 4.5% but the indicators where fewer total staff are better (international faculty, research income, research productivity, industry income, doctorates awarded, institutional income) have a combined weighting of 25.25%.
CityU in contrast has focussed on the QS rankings and looked for ways of reducing the number of students submitted.
It is possible that HKUST and UHK could justify the data they submitted to the rankers while CityU might not. It does, however, seem rather strange and unfair that City University's student data has come under such intense scrutiny while the faculty data of the other universities is so far unquestioned.
Ranking organisations should heed the suggestion by the International Rankings Experts Group (IREG) that indicators measure outcomes rather than inputs such as staff, facilities or income. They also should think about how much they should use data submitted by institutions. This may have been a good idea when they were ranking 200 or 300 places mainly in North America and Western Europe but now they are approaching 1,000 universities, sometimes very decentralised, and data collection is becoming more complicated and difficult.
QS used to talk about its "validation hierarchy" with central agencies such as HESA and NCES at the top, followed by direct contact with institutions, websites, and ending with "smart" averages. Perhaps this could be revived but with institutional data further down the hierarchy. The lesson of the latest arguments in Hong Kong and elsewhere is that data submitted by universities can often be problematical and unreliable.
Friday, November 17, 2017
Another global ranking?
In response to a suggestion by Hee Kim Poh of Nanyang Technological University, I have had a look at the Worldwide Professional University Rankings, which appear to be linked to "Global World Communicator" and the "International Council of Scientists" and may be based in Latvia.
There is a methodology page but it does not include essential information. One indicator is "number of publications to number of academic staff" but there is nothing about how either of these are calculated or where the data comes from. There is a reference to a survey of members of the International Council of Scientists but nothing about the wording of the survey, date of survey, distribution of respondents or the response rate.
Anyway, here is the introduction to the methodology:
"The methodology of the professional ranking of universities is based on comparing universities and professional evaluation by level of proposed training programs (degrees), availability and completeness of information on activities of a university, its capacity and reputation on a national and international levels. Main task is to determine parameters and ratios needed to assess quality of the learning process and obtained specialist knowledge. Professional formalized ranking system based on a mathematical calculation of the relation of parameters of the learning process characterizing quality of education and learning environment. Professional evaluation criteria are developed and ranking is carried out by experts of the highest professional qualification in relevant fields - professors of universities, specialists of the highest level of education, who have enough experience in teaching and scientific activities. Professional rating of universities consists of three components.. "
The top five universities are 1. Caltech, 2. Harvard, 3. MIT, 4. Stanford, 5. ETH Zurich.
Without further information, I do not think that this ranking is worth further attention.
http://www.cicerobook.com/en/ranks
Wednesday, November 15, 2017
Rankings Calendar: QS BRICS University Rankings
The QS BRICS (Brazil, Russia, India, China, South Africa) university rankings will be announced on November 23 at the QS-APPLE conference in Taiwan.
Tuesday, November 14, 2017
China overtakes USA in supercomputing
The website TOP500 keeps track of the world's most powerful computers. Six months ago the USA had 169 supercomputers in the top 500 and China 160. Now China has 202 and the USA 143.
They are followed by Japan with 35, Germany 20, France 18 and the UK 15.
There are four supercomputers in India, four in the Middle East (all in Saudi Arabia), one in Latin America (Mexico), and one in Africa (South Africa).
The closing gap: When will China overtake the USA in research output?
According to the Scopus database, China produced 387,475 articles in 2016 and the USA 409,364, a gap of 21,889.
To be precise, there were 387,475 articles with at least one author affiliated to a Chinese university or research center and 409,364 with at least one author affiliated to an American university or research center.
So far this year there have been 346,425 articles with Chinese affiliations and 352,275 with US affiliations.
The gap is now 5,850 articles.
I think it safe to say that at some point early next year the gap will close and that China will then pull ahead of the USA.
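For what it is worth, here is the simple arithmetic behind those gaps, using the Scopus counts quoted above (the 2017 figures are, of course, only year-to-date):

```python
# Scopus article counts quoted above: papers with at least one author
# affiliated to a Chinese or an American university or research center.
china_2016, usa_2016 = 387_475, 409_364
china_2017_ytd, usa_2017_ytd = 346_425, 352_275

print(usa_2016 - china_2016)          # 21,889: the gap for the whole of 2016
print(usa_2017_ytd - china_2017_ytd)  # 5,850: the gap so far in 2017
```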
Some caveats. A lot of those articles are just routine stuff and not very significant. For a while, the US may do better in high impact research as measured by citations. Also, US universities contribute more heads of research projects.
On the other hand, I suspect that many of the researchers listed as having American affiliations did their undergraduate degrees or secondary education in China.
And if we counted Hong Kong as part of China, then the gap would already have been closed.
Sunday, November 05, 2017
Ranking debate: What should Malaysia do about the rankings?
A complicated relationship
Malaysia has had a complicated relationship with global university rankings. There was a moment back in 2004 when the first Times Higher Education Supplement-Quacquarelli Symonds (THES-QS) world rankings put the country's flagship, Universiti Malaya (UM), in the top 100. That was the result of an error, one of several QS made in its early days. Over the following years UM went down and up in the rankings, but generally trended upwards, with other Malaysian universities following behind. This year it is 114th in the QS world rankings and the top 100 seems in sight once again.
There has been a lot of debate about the quality of the various ranking systems, but it does seem that UM and some other universities have been steadily improving, especially with regard to research, although, as the recent Universitas 21 report shows, output and quality are still lagging behind the provision of resources.
There is, however, an unfortunate tendency in many places, including Malaysia, for university rankings to get mixed up with local politics. A good ranking performance is proclaimed a triumph by the government and a poor one is deemed by the opposition to be punishment for failed policies.
QS rankings criticised
Recently Ong Kian Ming, a Malaysian opposition MP, said that it was a mistake for the government to use the QS world rankings as a benchmark to measure the quality of Malaysian universities and that the ranking performance of UM and other universities is not a valid measure of quality.
"Serdang MP Ong Kian Ming
today slammed the higher education ministry for using the QS World University
Rankings as a benchmark for Malaysian universities.
In a statement today, the DAP
leader called the decision “short-sighted” and “faulty”, pointing out that the
QS rankings do not put much emphasis on the criteria of research output.
According to the QS World
University Rankings for 2018, released
on June 8, five Malaysian varsities were ranked in the top 300, with Universiti
Malaya (UM) occupying 114th position."
The article went on to say that:
"However, Ong pointed to the Times Higher Education (THE) World University Rankings for 2018, which he said painted Malaysian universities in a different light.
According to the THE rankings, which were released earlier this week, none of Malaysia’s universities made it into the top 300.
“Instead of being “obsessed” with the ranking game, he added, the ministry should work to improve the existing academic indicators and measures which have been developed locally by the ministry and the Malaysian Qualifications Agency to assess the quality of local public and private universities”
Ong suggests that they should rely on locally developed measures.
Multiplication of rankings
It is certainly not a good idea for anyone to rely on any single ranking. There are now over a dozen global rankings and several regional ones that assess universities according to a variety of criteria. Universities in Malaysia and elsewhere could make more use of these rankings, some of which are technically much better than the well known big three or four: QS, THE, the Shanghai Academic Ranking of World Universities (ARWU) and sometimes the US News Best Global Universities.
Dr. Ong is also quite right to point out that the QS rankings have methodological flaws. However, the THE rankings are not really any better, and they are certainly not superior in the measurement of research quality. They also have the distinctive attribute that 11 of their 13 indicators are not presented separately but bundled into three groups of indicators, so that the public cannot, for example, tell whether a good score for research is the result of an increase in research income, more publications, an improvement in reputation for research, or a reduction in the number of faculty.
The important difference between the QS and THE rankings is not that the latter are focussed on research. QS's academic survey is specifically about research and its faculty student ratio, unlike THE's, includes research-only staff. The salient difference is that the THE academic survey is restricted to published researchers while QS's allows universities to nominate potential respondents, something that gives an advantage to upwardly mobile institutions in Asia and Latin America.
Ranking vulnerabilities
All of the three well known rankings, THE, QS and ARWU, now have vulnerabilities: metrics that can be influenced by institutions and where a modest investment of resources can produce a disproportionate and implausible rise in the rankings.
In the Shanghai rankings the loss or gain of a single highly cited researcher can make a university go up or down dozens of places in the top 500. In addition, the recruitment of scientists whose work is frequently cited, even for adjunct positions, can help universities excel in ARWU's publications and Nature and Science indicators.
The THE citations indicator has allowed a succession of institutions to over-perform in the world or regional rankings: Alexandria University, Anglia Ruskin University in Cambridge, Moscow Engineering Physics Institute, Federico Santa Maria Technical University in Chile, Middle East Technical University, Tokyo Metropolitan University, Veltech University in India, and Universiti Tunku Abdul Rahman (UTAR) in Malaysia. The indicator officially has a 30% weighting but in reality it is even greater because of THE's “regional modification” that gives a boost to every university except those in the top scoring country. The modification used to apply to all of the citations but now covers half.
The vulnerability of the QS rankings is the two survey indicators, accounting for 50% of the total weighting, which allow universities to propose their own respondents. In recent years some Asian and Latin American universities such as Kyoto University, Nanyang Technological University (NTU), the University of Buenos Aires, the Pontifical Catholic University of Chile and the National University of Colombia have received scores for research and employer reputation that are out of line with their performance on any other indicator.
QS may have discovered a future high flyer in NTU but I have my doubts about the Latin American places. It is also most unlikely that Anglia Ruskin, UTAR and Veltech will do so well in the THE rankings if they lose their highly cited researchers.
Consequently, there are limits to the reliability of the popular rankings and none of them should be considered the only sign of excellence. Ong is quite correct to point out the problems of the QS rankings but the other well known ones also have defects.
Beyond the Big Four
Ong points out that if we look at "the big four" then the high position of UM in the QS rankings is anomalous. It is in 114th place in the QS world rankings (24th in the Asian rankings), 351-400 in THE, 356 in US News global rankings and 401-500 in ARWU.
The situation looks a little different when you consider all of the global rankings. Below is UM's position in 14 global rankings. The QS world rankings are still where UM does best but here it is at the end of a curve. UM is 135th for publications in the Leiden Ranking, generally considered by experts to be the best technically, although it is lower for high quality publications, 168th in the Scimago Institutions Rankings, which combine research and innovation, and 201-250 in the QS graduate employability rankings.

The worst performance is in the uniRank rankings (formerly 4icu), based on web activity, where UM is 697th.
The Shanghai rankings are probably a better guide to research prowess than either QS or THE since they deal only with research and, with one important exception, have a generally stable methodology. UM is 402nd overall, having fallen from 353rd in 2015 because of changes in the list of highly cited researchers used by the Shanghai rankers. UM does better for publications, 143rd this year and 142nd in 2015.
QS World University Rankings: 114 [general, mainly research]
CWTS Leiden Ranking: publications 135, top 10% of journals 195 [research]
Scimago Institutions Rankings: 168 [research and innovation]
QS Graduate Employability Rankings: 201-250 [graduate outcomes]
University Ranking by Academic Performance: 192 [research]
Round University Ranking: 268 [general]
National Taiwan University Rankings: 323 [research]
THE World University Rankings: 351-400 [general, mainly research]
US News Best Global Universities: 356 [research]
Shanghai ARWU: 402 [research]
Webometrics: overall 418 (excellence 228) [mainly web activity]
Center for World University Rankings: 539 [general, quality of graduates]
Nature Index: below 500 [high impact research]
uniRank: 697 [web activity]
The QS rankings are not such an outlier. Looking at indicators in other rankings devoted to research gives us results that are fairly similar. Malaysian universities would, however, be wise to avoid concentrating on any single ranking and they should look at the specific indicators that measure features that are considered important.
Universities with an interest in technology and innovation could look at the Scimago rankings which include patents. Those with strengths in global medical studies might find it beneficial to go for the THE rankings but should always watch out for changes in methodology.
Using local benchmarks is not a bad idea and it can be valuable for those institutions that are not so concerned with research. But many Malaysian institutions are now competing on the global stage and are subject to international assessment, and that, whether they like it or not, means assessment by rankings. It would be an improvement if benchmarks and targets were expressed as reaching a certain level in two or three rankings, not just one. Also, they should focus on specific indicators rather than the overall score, and different rankings and indicators should be used to assess and compare different places.
For example, the Round University Rankings from Russia, which include five of the six metrics in the QS rankings plus others but with sensible weightings, could be used to supplement the QS world rankings.
For measuring the research output and quality of universities, the Leiden Ranking might be a better alternative to either the QS or the THE rankings. Those universities with an innovation mission could refer to the innovation knowledge metric in the Scimago Institutions Rankings.
When we come to measuring teaching and the quality of graduates there is little of value from the current range of global rankings. There have been some interesting initiatives such as the OECD's AHELO project and U-Multirank but these have yet to be widely accepted. The only international metric that even attempts to directly assess graduate quality is QS's employer survey.
So, universities, governments and stakeholders need to stop thinking about using one ranking as a benchmark for everyone and also to stop looking at the overall rankings.
Friday, November 03, 2017
Ranking Calendar
Over on the right there will be a list of events such as conferences, workshops, and announcements of rankings.
First is the 7th World-Class Universities Conference in Shanghai starting next Monday, November 6th.
Resuming Posting
I have been busy with family and work matters recently but I shall resume posting tomorrow.
I shall be adding some features that I hope will make the blog more of a useful resource.
Sunday, September 17, 2017
Criticism of rankings from India
Some parts of the world seem to be increasingly sceptical of international rankings, or at least those produced by Times Higher Education (THE). MENA (Middle East and North Africa) and Africa did not seem to be very enthusiastic about THE's snapshot or pilot rankings. Many Latin American universities have chosen not to participate in the world and regional rankings.
India also seems to be suspicious of the rankings. An article by Vyasa Shastri in the E-paper, livemint, details some of the ways in which universities might attempt to manipulate rankings to their advantage.
It is well worth reading although I have one quibble. The article refers to King Abdulaziz University recruiting faculty who would list the university as their secondary affiliation (now 41) when publishing papers. The original idea was to get top marks in the Shanghai Ranking's highly cited researchers indicator. The article correctly notes that the Shanghai rankings no longer count secondary affiliations but they can still help in the Nature and Science and publications indicators and in citations and publications metrics in other rankings.
Also, other Saudi universities do not recruit large numbers of secondary affiliations. There are only four for the rest of Saudi Arabia although I notice that there are now quite a few for Chinese and Australian universities, including five for the University of Melbourne.
Last word, I hope, on Babol Noshirvani University of Technology
If you type 'Babol University of Technology' rather than 'Babol Noshirvani University of Technology' into the Scopus search box then the university does have enough publications to meet THE's criteria for inclusion in the world rankings.
So it seems that it was those highly cited researchers in engineering that propelled the university into the research impact stratosphere. That, and a rather eccentric methodology.
Saturday, September 09, 2017
More on Babol Noshirvani University of Technology
To answer the question in the previous post, how did Babol Noshirvani University of Technology in Iran do so well in the latest THE rankings, part of the answer is that it has two highly cited researchers in engineering, Davood Domiri Ganji and Mohsen Sheikholeslami. I see no reason to question the quality of their research.
But I still have a couple of questions. First THE say that they exclude universities whose research output is less than 1,000 articles between 2012 and 2016. But checking with Scopus indicates that the university had 468 articles over that period, or 591 documents of all kinds including conference papers, book chapters and reviews, which seems way below the threshold level for inclusion. Is it possible that THE have included the Babol University of Medical Sciences in the count of publications or citations?
Those documents have been cited a total of 2,601 times, which is respectable but not quite on a scale that would rival Oxford and Chicago. It is possible that one or more of those articles have, for some reason, received an unusual number of citations compared with the world average and that this has distorted the indicator score. If so, then we have yet another example of a defective methodology producing absurd results.
Friday, September 08, 2017
Why did Babol Noshirvani University of Technology do so well in the THE rankings?
The THE world rankings and their regional offshoots have always been a source of entertainment mixed with a little bit of bewilderment. Every year a succession of improbable places jumps into the upper reaches of the citations indicator, which is supposed to measure global research impact. Usually it is possible to tell what happened. Often it is because of participation in a massive international physics project, although not so much over the last couple of years, contribution to a global medical or genetics survey, or even assiduous self-citation.
However, after checking with Scopus and the Web of Science, I still cannot see exactly how Babol Noshirvani University of Technology got into 14th place for this metric, equal to Oxford and ahead of Yale and Johns Hopkins, in the latest world rankings, and 301-350 overall, well ahead of every other Iranian university.
Can anybody help with an explanation?
Tuesday, September 05, 2017
Highlights from THE citations indicator
The latest THE world rankings were published yesterday. As always, the most interesting part is the field- and year- normalised citations indicator that supposedly measures research impact.
Over the last few years, an array of implausible places have zoomed into the top ranks of this metric, sometimes disappearing as rapidly as they arrived.
The first place for citations this year goes to MIT. I don't think anyone would find that very controversial.
Here are some of the institutions that feature in the top 100 of THE's most important indicator which has a weighting of 30 per cent.
2nd St. George's, University of London
3rd= University of California Santa Cruz, ahead of Berkeley and UCLA
6th = Brandeis University, equal to Harvard
11th= Anglia Ruskin University, UK, equal to Chicago
14th= Babol Noshirvani University of Technology, Iran, equal to Oxford
16th= Oregon Health and Science University
31st King Abdulaziz University, Saudi Arabia
34th= Brighton and Sussex Medical School, UK, equal to Edinburgh
44th Vita-Salute San Raffaele University, Italy, ahead of the University of Michigan
45th= Ulsan National Institute of Science and Technology, best in South Korea
58th= University of Kiel, best in Germany and equal to King's College London
67th= University of Iceland
77th= University of Luxembourg, equal to University of Amsterdam
Thursday, August 24, 2017
Comment by Christian Scholz
This comment is by Christian Schulz of the University of Hamburg. He points out that the University of Hamburg's rise in the Shanghai rankings was not the result of highly cited researchers moving from other institutions but of the improvement of research within the university.
If this is something that applies to other German universities, then it could be that Germany has a policy of growing its own researchers rather than importing talent from around the world. It seems to have worked very well for football, so perhaps the obsession of British universities with importing international researchers is not such a good idea.
I just wanted to share with you, that we did not acquire two researchers to get on the HCR List to get a higher rank in the Shanghai Ranking. Those two researchers are Prof. Büchel and Prof. Ravens-Sieberer. Prof. Büchel is working at our university for over a decade now and Prof. Ravens-Sieberer is at our university since 2008.
Please also aknowledge, that our place in the Shanghai Ranking was very stable from 2010-2015. We were very unpleasent, when they decided to only use the one-year list of HCR, because in 2015 none of our researchers made it on the 2015-list, which caused the descend from 2015 to 2016.
Guest Post by Pablo Achard
This post is by Pablo Achard of the University of Geneva. It refers to the Shanghai subject rankings. However, the problem of outliers in subject and regional rankings is one that affects all the well known rankings and will probably become more important over the next few years.
How a single article is worth 60 places
We can’t repeat it enough: an indicator is bad when a small variation in the input is overly amplified in the output. This is the case when indicators are based on very few events.
I recently came across this issue (again) with Shanghai’s subject ranking of universities. The universities of Geneva and Lausanne (Switzerland) share the same School of Pharmacy and a huge share of published articles in this discipline are signed under the name of both institutions. But in the “Pharmacy and pharmaceutical sciences” ranking, one is ranked between the 101st and 150th position while the other is 40th. Where does this difference come from?
Comparing the scores obtained under each category gives a clue:

                Geneva    Lausanne    Weight in the final score
PUB             46        44.3        1
CNCI            63.2      65.6        1
IC              83.6      79.5        0.2
TOP             0         40.8        1
AWARD           0         0           1
Weighted sum    125.9     166.6
So the main difference between the two institutions is the score in “TOP”. Actually, the difference in the weighted sum (40.7) is almost equal to the value of this score (40.8). If Geneva and Lausanne had the same TOP score, they would be 40th and 41st.
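A minimal sketch, not ShanghaiRanking's own code, that reproduces the weighted sums in the table from the indicator scores and weights shown there:

```python
# Indicator scores and weights as given in the table above.
weights  = {"PUB": 1, "CNCI": 1, "IC": 0.2, "TOP": 1, "AWARD": 1}
geneva   = {"PUB": 46,   "CNCI": 63.2, "IC": 83.6, "TOP": 0,    "AWARD": 0}
lausanne = {"PUB": 44.3, "CNCI": 65.6, "IC": 79.5, "TOP": 40.8, "AWARD": 0}

def weighted_sum(scores):
    return sum(weights[k] * v for k, v in scores.items())

print(weighted_sum(geneva))    # about 125.9, as in the table
print(weighted_sum(lausanne))  # about 166.6
```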
Surprisingly, a look at other institutions for that TOP indicator shows only 5 different values: 0, 40.8, 57.7, 70.7 and 100. According to the methodology page of the ranking, “TOP is the number of papers published in Top Journals in an Academic Subject for an institution during the period of 2011-2015. Top Journals are identified through ShanghaiRanking’s Academic Excellence Survey […] The list of the top journals can be found here […] Only papers of ‘Article’ type are considered.”
Looking deeper, there is just one journal in this list for Pharmacy: NATURE REVIEWS DRUG DISCOVERY. As its name indicates, this recognized journal mainly publishes ‘reviews’. A search on Web of Knowledge shows that in the period 2011-2015, only 63 ‘articles’ were published in this journal. That means a small variation in the input is overly amplified.
I searched for several institutions and rapidly found this rule: Harvard published 4 articles during these five years and got a score of 100; MIT published 3 articles and got a score of 70.7; 10 institutions published 2 articles and got a 57.7; and finally about 50 institutions published 1 article and got a 40.8.
I still don’t get why this score is so nonlinear. But Lausanne published one single article in NATURE REVIEWS DRUG DISCOVERY and Geneva none (they published ‘reviews’ and ‘letters’ but no ‘articles’), and that small difference led to at least a 60-place gap between the two institutions.
This is of course just one example of what happens too often: rankers want to publish sub-rankings and end up with indicators where outliers can’t be absorbed into large distributions. One article, one prize or one co-author in a large and productive collaboration all of a sudden makes very large differences in final scores and ranks.