Times Higher Education (THE) has just published its 2017 reputation rankings which include 100 universities. These are based on a survey distributed between January and March of this year and will be included, after standardisation, in the 2017-18 (or 2018) World University Rankings scheduled for publication in a few months. In the forthcoming world rankings the reputation survey will be divided into two metrics in the research and teaching indicator groups, with a combined weighting of 33 percent. The survey asked about research and postgraduate teaching but since the correlation between these two questions is very high there is effectively only one indicator.
The QS world rankings released last week included scores derived from two surveys, one of academics with a 40% weighting and one of employers with 10%. The academic survey was concerned only with research.
The methodology of the THE survey is relatively simple. The respondents are drawn from the database of researchers with publications in Scopus indexed journals, in other words those who get to be listed as corresponding author. THE claims that this makes them experienced senior researchers although in many parts of the world being a member or leader of a research team often has more to do with politics than merit.
In contrast, the QS methodology has changed quite a lot over the last few years. It began with scouring the mailing lists of World Scientific, a Singapore based academic publisher with links to Imperial College London, then adding various other channels including lists supplied by institutions and sign up facilities for potential respondents. The result is a survey that appears more inclusive than THE's with more respondents from outside the elite but one whose validity may be rather suspect.
The THE ranking found that there were six super-brand universities that stood out from everyone else: Harvard, MIT, Stanford, Cambridge, Oxford, and Berkeley. There was a big gap between Berkeley and number seven, Princeton, and then a long, smooth slope down the rest of the table.
After that, the ranking is dominated by English speaking universities, with the USA contributing 42, the UK 10, Canada 3 and Australia 3. East Asia and the Chinese diaspora (Hong Kong, Taiwan and Singapore) are fairly well represented, while South and Central Asia, the Middle East and Africa are absent.
For any survey a great deal depends on how the forms are distributed. Last year, the THE survey had a lot more responses from the social sciences, including economics and business studies, and fewer from the arts and the humanities, and that contributed to some Asian universities rising and some British ones falling.
Such falls are typically attributed in the education establishment media to anxiety about the looming horrors of Brexit, the vicious snatching of research funds and the rising tide of hostility to international students.
This year British universities did a bit better in the THE reputation ranking, with five going up, three staying put and three going down. No doubt we will soon hear about the invigorating effects of Brexit and the benefits of austerity. Perhaps it might also have something to do with the proportion of survey responses from the arts and humanities going up from 9% to 12.5%, something that would surely benefit UK universities.
The QS reputation indicator has the same universities in the top six but not in quite the same order: Cambridge, fourth in THE, is second in the QS indicator. After that it starts looking very different. Number seven is the University of Tokyo, which THE puts in 11th place for academic reputation. Other Asian universities do much better in the QS indicator. The National University of Singapore is 11th (27th in THE), Nanyang Technological University Singapore is 50th (THE 81-90 band), Peking University is 14th (THE 17th) and Chulalongkorn University Thailand is 99th (not in the THE top 100).
It is noticeable that Latin American universities such as the University of Sao Paulo, the University of Buenos Aires and the Pontifical Catholic University of Chile get a higher placing in the QS indicator than they do in the THE ranking as do some Southern European universities such as Barcelona, Sapienza and Bologna.
The THE reputation ranking gives us a snapshot of the current views of the world's academic elite and probably underestimates the rising universities of Greater China and Korea. QS cast their nets further and have probably caught a few of tomorrow's world class institutions although I suspect that the Latin American high fliers, apart from Sao Paulo, are very overrated.
Sunday, June 18, 2017
Thursday, June 15, 2017
The Abuse and Use of Rankings
International university rankings have become a substantial industry since the first appearance of the Shanghai rankings (Academic Ranking of World Universities or ARWU) back in 2003. The various rankings are now watched closely by governments and media, and for some students they play a significant role in choosing universities. They have become a factor in national higher education policies and are an important element in the race to enter and dominate the lucrative transnational higher education market. In Malaysia a local newspaper, Utusan Malaysia, recently had a full page on the latest QS world rankings, including a half page of congratulations from the Malaysian Qualifications Agency for nine universities who are part of a state-backed export drive.
Reaction to international rankings often goes to one of two extremes, either outright rejection or uncritical praise, sometimes descending into grovelling flattery that would make Uriah Heep ashamed (the revered QS rankings, Phil Baty a thought leader). The problem with the first, which is certainly very understandable, is that it is unrealistic. If every international ranking suddenly stopped publication we would just have, as we did before, an informal ranking system based largely on reputation, stereotypes and prejudice.
On the other hand, many academics and bureaucrats find rankings very useful. It is striking that university administrators, the media and national governments have been so tolerant of some of the absurdities that Times Higher Education (THE) has announced in recent years. Recently, THE's Asian rankings had Veltech University as the third best university in India and the best in Asia for research impact, the result of exactly one researcher assiduously citing himself. This passed almost unnoticed in the Indian press and seems to have aroused no great interest among Indian academics apart from a couple of blog posts. Equally, when Universiti Tunku Abdul Rahman (UTAR), a private Malaysian university, was declared to be the second best university in the country and the best for research impact, on the strength of a single researcher's participation in a high profile global medical project, there was no apparent response from anyone.
International rankings have also become a weapon in the drive by universities to maintain or increase their access to public funds. British and Irish universities often complain that their fall in the rankings is all the fault of the government for not providing enough money. Almost any result in the better known rankings can be used to prop up the narrative of western universities starved of funds and international researchers and students.
Neither of these two views is really valid. Rankings can tell us a great deal about the way that higher education and research are going. The early Shanghai rankings indicated that China was a long way behind the West and that research in continental Europe was inferior to that in the USA. A recent analysis by Nature Index shows that American research is declining and that the decline is concentrated in diverse Democrat-voting states such as California, Massachusetts, Illinois and New York.
But if university rankings are useful they are not equally so, and neither are the various indicators from which they are constructed.
Ranking indicators that rely on self-submitted information should be mistrusted. Even if everybody concerned is fanatically honest, there are many ways in which data can be manipulated, massaged, refined, defined and redefined, analysed and distorted as it makes its way from branch campuses, affiliated colleges and research institutes through central administration to the number-munching programs of the rankers.
Then of course there are the questionable validation processes within the ranking organisations. There was a much publicised case concerning Trinity College Dublin where for two years in a row the rankers missed an error of orders of magnitude in the data submitted for three income indicators.
Any metric that measures inputs rather than outputs should be approached with caution, including THE's measures of income that amount to a total weighting of 10.75%. THE and QS both have indicators that count staff resources. It is interesting to have this sort of information but there is no guarantee that having loads of money or staff will lead to quality, whether of research, teaching or anything else.
Reputation survey data is also problematic. It is obviously subjective, although that is not necessarily a bad thing, and everything depends on the distribution of responses between countries, disciplines, subjects and levels of seniority. Take a look at the latest QS rankings and the percentages of respondents from various countries. Canada has 3.5% of survey respondents and China has 1.7%. Australia has 4% and Russia 4.2%. Kazakhstan has 2.1% and India 2.3%.
There ought to be a sensible middle road between rejecting rankings altogether and passively accepting the errors, anomalies and biases of the popular rankers.
Universities and governments should abide by a self-denying ordinance and reject ranking results that challenge common sense or contradict accepted national rankings. I remember a few years ago someone at Duke University saying that they were puzzled why the THES-QS rankings put the school in first place for faculty-student ratio when this contradicted data in the US News rankings. Few, if any, major universities or higher education ministers seem to have done anything like this lately.
It would also be a good idea if universities and governments stopped looking at rankings holistically and started setting targets according to specific indicators. A high-flying research university could refer to the Leiden Ranking, the Nature Index or the Nature and Science and Publications indicators in ARWU. New universities could target a place in the Excellence indicator of the Webometrics rankings, which lists 5,777 institutions as having some sort of research presence.
As for the teaching mission, the most directly relevant indicators are the QS employer survey in the world rankings, the QS Graduate Employability Rankings, and the Global University Employability Ranking published by THE.
Governments and universities would be advised not to get too excited about a strong performance in the rankings. What the rankings have given, the rankings can take away.
Monday, May 29, 2017
Ten Universities with a Surprisingly Large Research Impact
Every so often newspapers produce lists of universities that excel in or are noteworthy for something. Here is a list of ten universities that, according to Times Higher Education (THE), have achieved remarkable success in the world of global research. In a time of austerity when the wells of patronage are running dry, they should be an example to us all: they have achieved a massive global research impact, measured by field-normalised citations, despite limited funding, minimal reputations and few or very few publications. The source is the THE World and Asian rankings citations indicator.
1. First on the list is Alexandria University in Egypt, 4th in the world with a near perfect score for research impact in 2010-11.
2. In the same year Hong Kong Baptist University was tenth for research impact, ahead of the University of Chicago and the University of Hong Kong.
3. In 2011-12 Royal Holloway, University of London, was in 12th place, ahead of any other British or European institution.
4. The National Research Nuclear University MEPhI, in Moscow, a specialist institution, was top of the table for citations in 2012-13.
5. In 2013-14 and 2014-15 Tokyo Metropolitan University had a perfect score of 100 for citations, a distinction shared only with MIT.
6. In 2014-15 Federico Santa Maria Technical University was sixth in the world for research impact and first in Latin America with a near perfect score of 99.7.
7. In the same year Bogazici University in Turkey reached the top twenty for research impact.
8. St George's, University of London, was the top institution in the world for research impact in 2016-17.
9. In that year Anglia Ruskin University, a former art school, was tenth for this metric, equal to Oxford and well ahead of the other university in Cambridge.
10. Last year's THE Asian rankings saw Vel Tech University in Chennai achieve the highest impact of any Asian university.
Sunday, May 28, 2017
The View from Leiden
Ranking experts are constantly warning about the grim fate that awaits the universities of the West if they are not provided with all the money that they want and given complete freedom to hire staff and recruit students from anywhere that they want. If this does not happen they will be swamped by those famously international Asian universities dripping with funds from indulgent patrons.
The threat, if we are to believe the prominent rankers of Times Higher Education (THE), QS and Shanghai Ranking Consultancy, is always looming but somehow never quite arrives. The best Asian performer in the THE world rankings is the National University of Singapore (NUS) in 24th place followed by Peking University in 29th. The QS World University Rankings have NUS 12th, Nanyang Technological University 13th and Tsinghua University 24th. The Academic Ranking of World Universities published in Shanghai puts the University of Tokyo in 20th place and Peking University in 71st.
These rankings are in one way or another significantly biased towards Western European and North American institutions and against Asia. THE has three separate indicators that measure income, adding up to a combined weighting of 10.75%. Both QS and THE have reputation surveys. ARWU gives a 30% weighting to Nobel and Fields award winners, some of them from several decades ago.
Let's take a look at a set of rankings that is technically excellent, namely the Leiden Ranking. The producers do not provide an overall score. Instead it is possible to create a variety of rankings: total publications, publications by subject groups, publications in the top 50%, 10% and 1% of journals. Users can also select fractional or absolute counting and change the minimum threshold for the number of publications.
Here is the top ten, using the default settings: publications 2012-15, fractional counting, minimum threshold of 100 papers. The corresponding ranks for publications in 2006-09 are in brackets.
1. Harvard (1)
2. Toronto (2)
3. Zhejiang (14)
4. Michigan (3)
5. Shanghai Jiao Tong (37)
6. Johns Hopkins (5)
7. Sao Paulo (8)
8. Stanford (9)
9. Seoul National University (23)
10. Tokyo (4).
Tsinghua University is 11th, up from 32nd in 2006-09 and Peking University is 15th, up from 54th. What is interesting about this is not just that East Asian universities are moving into the highest level of research universities but how rapidly they are doing so.
No doubt there are many who will say that this is a matter of quantity and that what really counts is not the number of papers but their reception by other researchers. There is something to this. If we look at publications in the top 1% of journals (by frequency of citation) the top ten includes six US universities headed by Harvard, three British and one Canadian.
Tsinghua is 28th, Zhejiang is 50th, Peking 62nd, Shanghai Jiao Tong 80th and Seoul National University 85th. Right now it looks like publication in the most reputed journals is dominated by English-speaking universities. But in the last few years Chinese and Korean universities have advanced rapidly: Peking from 119th to 62nd, Zhejiang from 118th to 50th, Shanghai Jiao Tong from 112th to 80th, Tsinghua from 101st to 28th and Seoul National University from 107th to 85th.
It seems that in a few years East Asia will dominate the elite journals and will take the lead for quality as well as quantity.
Moving on to subject group rankings, Tsinghua University is in first place for mathematics and computer sciences. The top ten consists of nine Chinese and one Singaporean university. The best US performer is MIT in 16th place, the best British Imperial College London in 48th.
When we look at the top 1% of journals, Tsinghua is still on top, although MIT moves up to 4th place and Stanford is 5th.
The Asian tsunami has already arrived. East Asian universities, mainly Chinese and Chinese diaspora, are dominant or becoming dominant in the STEM subjects, leaving the humanities and social sciences to the US.
There will of course be debate about what happened. Maybe money had something to do with it. But it also seems that western universities are becoming much less selective about student admissions and faculty appointments. If you admit students who write #BlackLivesMatter 100 times on their application forms or impose ideological tests for faculty appointment and promotion you may have succeeded in imposing political uniformity, but you will have serious problems trying to compete with the Gaokao-hardened students and researchers of Chinese universities.
Monday, May 22, 2017
Arab University Rankings: Another Snapshot from Times Higher Education
Times Higher Education (THE) has produced a "snapshot" ranking of Arab universities extracted from its World University Rankings. There has been no change in the indicators or their weighting. Only 28 universities are included which raises questions about how suitable THE's methodology is for regions like the Middle East and North Africa.
This is an improvement over a remarkable MENA snapshot that THE did in 2015, which put Texas A&M University at Qatar in first place by virtue of one half-time faculty member who was listed as a contributor to a multi-author, multi-cited CERN paper.
The top five universities this time are King Abdulaziz University (KAU), Saudi Arabia; King Fahd University of Petroleum and Minerals, Saudi Arabia; King Saud University, Saudi Arabia; Khalifa University of Science, Technology and Research, UAE; and Qatar University.
The top three places are held by Saudi institutions. So how did they do it? According to an article by a THE editor for the World Economic Forum it was all due to money and internationalisation.
Up to a point that is correct. The sad story of Trinity College Dublin's botched data submission shows roughly how much a given reported increase can affect a university's overall rank: roughly 5 million Euro in reported total income (with proportionate increases for research income and income from industry) is enough to move a university in the middle reaches of the table up one place in the overall rankings.
But does that explain KAU in top place? It did get a high score, 92.1, for international orientation but five other Arab universities did better. For teaching it was third, for industry income third, and for research seventh. What actually made the difference was Citations. KAU had a score of 93.3, far ahead of the next contender, Jordan University of Science and Technology with 50.2.
KAU's research impact is, according to THE, second in Asia only to the shooting star of India, Vel Tech University, whose single self-citing prodigy supposedly had a greater impact than the whole of any other Asian university. KAU's citations score was the result of the massive recruitment of adjunct faculty, 40 at the last count, who put KAU as a second affiliation. How much time they put in at KAU is uncertain but the Shanghai rankings calculated that highly cited researchers spent an average of 16% of their time at the university of their second affiliation.
It is bad enough that THE put so much emphasis on income and internationalisation in their methodology, promoting the diversion of resources from things like primary and pre-school education, adult literacy and alternatives to oil exports. To encourage universities to rise in the rankings by hiring adjunct faculty whose contribution is uncertain is very irresponsible. It would be a good thing if this snapshot was ignored.
Wednesday, May 10, 2017
Ranking article: The building of weak expertise: the work of global university rankers
Miguel Antonio Lim, The building of weak expertise: the work of global university rankers
University rankers are the subject of much criticism, and yet they remain influential in the field of higher education. Drawing from a two-year field study of university ranking organizations, interviews with key correspondents in the sector, and an analysis of related documents, I introduce the concept of weak expertise. This kind of expertise is the result of a constantly negotiated balance between the relevance, reliability, and robustness of rankers’ data and their relationships with their key readers and audiences. Building this expertise entails collecting robust data, presenting it in ways that are relevant to audiences, and engaging with critics. I show how one ranking organization, the Times Higher Education (THE), sought to maintain its legitimacy in the face of opposition from important stakeholders and how it sought to introduce a new “Innovation and Impact” ranking. The paper analyzes the strategies, methods, and particular practices that university rankers undertake to legitimate their knowledge—and is the first work to do so using insights gathered alongside the operations of one of the ranking agencies as well as from the rankings’ conference circuit. Rather than assuming that all of these trust-building mechanisms have solidified the hold of the THE over its audience, they can be seen as signs of a constant struggle for influence over a skeptical audience.
Higher Education 13 April 2017
https://link.springer.com/article/10.1007%2Fs10734-017-0147-8
DOI: 10.1007/s10734-017-0147-8
What good are international faculty?
In the previous post I did a bit of fiddling around with correlations and found that UK universities' scores for the international students indicator in the QS world rankings did not correlate very much with beneficial outcomes for students such as employability and course completion. They did, however, correlate quite well with spending per student.
That would suggest that British universities want lots of international students because it is good for their finances.
What about international faculty?
Comparing the scores for various outcomes with the QS international faculty score shows that in most cases correlation is low and statistically insignificant. This includes course satisfaction and satisfaction with teaching (Guardian University League Tables), and completion and student satisfaction (THE TEF simulation).
There is, however, one metric that is positively, albeit modestly, and significantly associated with international faculty and that is the Research Excellence Framework (REF) score (from the Complete University Guide): .284 (sig. 2-tailed .043, N 51).
So it seems that international students are valued for the money they bring with them and international faculty for boosting research quality.
Caveat: this applies to highly ranked universities in the UK. How far it is true of other places or even less prestigious British institutions remains to be seen.
Monday, May 01, 2017
What good are international students?
There has been a big fuss in the UK about the status of international students. Many in the higher education industry are upset that the government insists on including these students in the overall total of immigrants, which might lead at some point to a reduction in their numbers. Leading twitterati have erupted in anger. Phil Baty of THE has called the government's decision "bizarre, bonkers & deeply depressing" and even invoked the name of the arch demon Enoch Powell in support.
So are international students a benefit to British universities? I have just done a quick correlation of the scores for the international students indicator in the QS World University Rankings to see whether there is any link between positive outcomes for students and the number of international students.
This is of course only suggestive. QS provides scores for only 51 universities included in their world top 500 and the situation might be different for other countries. Another caveat is that international students might provide net economic benefits for surrounding communities although that is far from settled.
Here are the correlations with the QS international student score (significance 2 tailed and N in brackets).
Value added .182 (.206; 50). From the Guardian Rankings 2016. Compares entry qualifications with degrees awarded.
Career .102 (.480; 50). Graduate-level employment or postgraduate study six months after graduation. Also from the Guardian rankings. The correlation with the graduate destinations indicator, based on the same data, in the Times Higher Education TEF simulation is even lower, .018, and turns negative after benchmarking, -.172.
Students completing degrees .128 (.376; 50). From the TEF simulation. Again, the correlation turns negative after benchmarking.
QS employer reputation survey .234 (.140; 41). From the 2016 world rankings.
So the number of international students has a slight and statistically insignificant relationship with the quality of teaching and learning as measured by value added, graduate employability, course completion and reputation with employers. Why then are universities so desperate to get as many as possible?
This, I think, is the answer. The correlation between the QS international students indicator and spending per student, as measured by the Guardian ranking, is .414 (.003; 50), which is very significant considering the noise generated in comparisons of this sort. Of course, correlation does not equal causation, but it seems a reasonable hypothesis that it is the money brought by international students that makes them so attractive to British universities.
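For anyone who wants to check this sort of calculation, here is a minimal sketch of how such a correlation and its two-tailed significance can be computed; the variable names and the numbers are illustrative placeholders of my own, not the actual QS or Guardian scores.
# Minimal sketch: Pearson correlation between two ranking indicators,
# with a two-tailed p-value. The data below are invented placeholders.
from scipy.stats import pearsonr

intl_students = [98.5, 76.2, 88.9, 54.3, 67.1, 91.0, 45.8, 72.4]   # QS international students scores
spend_per_student = [9.1, 6.2, 7.8, 4.9, 5.5, 8.7, 4.1, 6.8]       # Guardian spend-per-student scores

r, p = pearsonr(intl_students, spend_per_student)
print(f"r = {r:.3f}, p (2-tailed) = {p:.3f}, N = {len(intl_students)}")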
Sunday, April 23, 2017
UTAR and the Times Higher Education Asian University Rankings
Recently, Universiti Tunku Abdul Rahman (UTAR), a private Malaysian university, welcomed what appeared to be an outstanding performance in the Times Higher Education (THE) Asian Universities Rankings, followed by a good score in the magazine's Young University Rankings. This has been interpreted as a remarkable achievement not just for UTAR but also for Malaysian higher education in general.
In the Asian rankings, UTAR is ranked in the top 120 and second in Malaysia, behind Universiti Malaya (UM) and ahead of the major research universities, Universiti Sains Malaysia, Universiti Kebangsaan Malaysia and Universiti Putra Malaysia.
This is in sharp contrast to other rankings. A research-based ranking published by Middle East Technical University puts UTAR 12th in Malaysia and 589th in Asia. The Webometrics ranking, which is mainly web based with one research indicator, has it 17th in Malaysia and 651st in Asia.
The QS rankings, known to be kind to South East Asian universities, put UTAR in the 251-300 band for Asia and 14th= in Malaysia, behind places like Taylor's University and Multimedia University and in the same band as Universiti Malaysia Perlis and Universiti Malaysia Terengganu. UTAR does not appear in the Shanghai rankings or the Russian Round University Rankings.
Clearly, THE is the odd man out among rankings in its assessment of UTAR. I suspect that if challenged a spokesperson for THE might say that this is because they measure things other than research. That is very debatable. Bahram Bekhradnia of the Higher Education Policy Institute has argued in a widely-cited report that these rankings are of little value because they are almost entirely research-orientated.
In fact, UTAR did not perform so well in the THE Asian rankings because of teaching, internationalisation or links with industry. It did not even do well in research. It did well because of an "outstanding" score for research impact, and it got that score because of the combination of a single obviously talented researcher with a technically defective methodology.
Just take a look at UTAR's scores for the various components in the THE Asian rankings. For Research UTAR got a very low score of 9.6, the lowest of the nine Malaysian universities featured in these rankings (100 represents the top score in all the indicators).
For Teaching it has a score of 15.9, also the lowest of the ranked Malaysian universities.
For International Orientation, it got a score of 33.2. This was not quite the worst in Malaysia. Universiti Teknologi MARA (UiTM), which does not admit non-bumiputra Malaysians, let alone international students, did worse.
For Industry Income UTAR's score was 32.9, again surpassed by every Malaysian university except UiTM.
So how on earth did UTAR manage to get into the top 120 in Asia and second in Malaysia?
The answer is that it got an "excellent" score of 56.7 for Research Impact, measured by field-normalised citations, higher than every other Malaysian university, including UM, in these rankings.
That score is also higher than the scores of several major international research universities such as National Taiwan University, the Indian Institute of Technology Bombay, Kyoto University and Tel Aviv University. That alone should make the research impact score very suspicious. Also, compare the score with the low score for research, which combines three metrics, research reputation, research income and publications. Somehow UTAR has managed to have a huge impact on the research world even though it receives little money for research, does not have much of a reputation for research, and does not publish very much.
The THE research impact (citations) indicator is very problematical in several ways. It regularly produces utterly absurd results such as Alexandria University in Egypt in fourth place for research impact in the world in 2010, St George's, University of London (a medical school), in first place last year, or Anglia Ruskin University, a former art school, equal to Oxford and well ahead of Cambridge University.
In addition, to flog a horse that should have decomposed by now, Veltech University in Chennai, India, has, according to THE, the biggest research impact in Asia and perhaps, if it qualified for the World Rankings, in the world. This was done by massive self-citation by exactly one researcher and a little bit of help from a few friends.
Second in Asia for research impact, THE would have us believe, is King Abdulaziz University of Jeddah, which has been on a recruiting spree of adjunct faculty whose duties might include visiting the university but certainly do require putting its name as a secondary affiliation in research papers.
To rely on the THE rankings as a measure of excellence is unwise. There were methodological changes in 2011, 2015 and 2016, which have contributed to universities moving up or down many places even if there has been no objective change. Middle East Technical University in Ankara, for example, fell from 85th place in 2014-15 to the 501-600 band in 2015-16 and then to the 601-800 band in 2016-17. Furthermore, adding new universities means that the average scores from which the final scores are calculated are likely to fluctuate.
In addition, THE has been known to recalibrate the weight given to its indicators in its regional rankings, and this has sometimes worked to the advantage of whoever is the host of THE's latest exciting and prestigious summit. In 2016, THE's Asian rankings featured an increased weight for research income from industry and a reduced one for teaching and research reputation. This was to the disadvantage of Japan and to the benefit of Hong Kong, where the Asian summit was held.
So, is UTAR really more influential among international researchers than Kyoto University or the National Taiwan University?
What actually happened to UTAR is that it has an outstanding medical researcher who is involved in a massive international medical project with hundreds of collaborators from hundreds of institutions that produces papers that have been cited hundreds of times and will in the next few years be cited thousands of times. One of these papers had, by my count, 720 contributors from 470 universities and research centres and has so far received 1,036 citations, 695 in 2016 alone.
There is absolutely nothing wrong with such projects but it is ridiculous to treat every one of those 720 contributors as though they were the sole author of the paper with credit for all the citations, which is what THE does. This could have been avoided simply by using fractional counting and dividing the number of citations by the number of authors or number of affiliating institutions. This is an option available in the Leiden Ranking, which is the most technically expert of the various rankings. THE already does this for publications with over 1,000 contributors but that is obviously not enough.
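To make the point concrete, here is a minimal sketch of what fractional counting looks like in practice; the paper data are invented and the calculation is a simplified illustration of the general idea, not Leiden's or THE's actual procedure.
# Sketch of fractional counting: each institution gets citations divided by the
# number of affiliated institutions, instead of full credit for every citation.
papers = [
    {"citations": 1036, "institutions": 470},  # one mega-collaboration paper
    {"citations": 12, "institutions": 2},      # an ordinary two-institution paper
]

def full_credit(papers):
    return sum(p["citations"] for p in papers)

def fractional_credit(papers):
    return sum(p["citations"] / p["institutions"] for p in papers)

print(full_credit(papers))        # 1048 citations under full counting
print(fractional_credit(papers))  # about 8.2 under fractional counting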
I would not go as far as Bahram Bekhradnia and other higher education experts and suggest that universities should ignore rankings altogether. But if THE are going to continue to peddle such a questionable product then Malaysian universities would be well advised to keep their distance. There are now several other rankings on the market that could be used for benchmarking and marketing.
It is not a good idea for UTAR to celebrate its achievement in the THE rankings. It is quite possible that the researcher concerned will one day go elsewhere or that THE will tweak its methodology again. If either happens the university will suffer a precipitous fall in the rankings along with a decline in its public esteem. UTAR and other Malaysian universities would be wise to treat the THE rankings with a great deal of caution and scepticism.
Thursday, April 13, 2017
University Challenge: It wasn't such a dumb ranking.
A few years ago I did a crude ranking of winners and runners-up, derived from Wikipedia, of the UK quiz show University Challenge, to show that British university rankings were becoming too complex and sophisticated. Overall it was not too dissimilar to national rankings and certainly more reasonable than the citations indicator of the THE world rankings. At the top was Oxford, followed by Cambridge and then Manchester. The first two were actually represented by constituent colleges. Manchester is probably underrepresented because it was expelled for several years after its team tried to sabotage a show by giving answers like Marx, Trotsky and Lenin to all or most of the questions, striking a blow against bourgeois intellectual hegemony or something.
Recently Paul Greatrix of Wonk HE did a list of the ten dumbest rankings ever. The University Challenge ranking was ninth because everybody knows who will win. He has a point. Cambridge and Oxford colleges are disproportionately likely to be in the finals.
Inevitably there is muttering about not enough women and ethnic minorities on the teams. The Guardian complains that only 22% of this year's contestants were women, and none of the finalists. The difference between the number of finalists and the number of competitors might, however, suggest that there is a bias in favour of women in the processes of team selection.
Anyway, here is a suggestion for anyone concerned that the University Challenge teams don't look like Britain or the world. Give each competing university or college the opportunity, if they wish, to submit two teams, one of which will be composed of women and/or aggrieved minorities, and see what happens.
James Thompson at the Unz Review has some interesting comments. It seems that general knowledge is closely associated with IQ and to a lesser extent with openness to experience. This is in fact a test of IQ aka intelligence aka general mental ability.
So it's not such a dumb ranking after all.
Wednesday, April 12, 2017
Exactly how much is five million Euro worth when converted into ranking places?
One very useful piece of information to emerge from the Trinity College Dublin (TCD) rankings fiasco is the likely effect on the rankings of injecting money into universities.
When TCD reported to Times Higher Education (THE) that it had almost no income at all, 355 Euro in total, of which 111 Euro was research income and 5 Euro from industry, it was ranked in the 201-250 band in the world university rankings. Let's be generous and say that it was 201st. But when the correct numbers were inserted, 355 million in total (of which 111 million is research income and 5 million industry income), it was in 131st= place.
So we can say crudely that increasing (or rather reporting) overall institutional income by 5 million Euro (keeping the proportions for research income and industry income constant) translates into one place in the overall world rankings.
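The back-of-the-envelope arithmetic behind that figure, for anyone who wants to check it, is simply the extra reported income divided by the number of places gained; the values below are the corrected and uncorrected TCD submissions discussed above, with the 201-250 band generously read as 201st.
# Rough calculation of the "euros per ranking place" implied by the TCD correction.
income_reported_in_error = 355        # euros, as originally submitted
income_corrected = 355_000_000        # euros, as it should have been
rank_before = 201                     # 201-250 band, read generously as 201st
rank_after = 131

extra_income = income_corrected - income_reported_in_error
places_gained = rank_before - rank_after     # 70 places
print(extra_income / places_gained)          # roughly 5.07 million euros per place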
Obviously this is not going to apply as we go up the rankings. I suspect that Caltech will need a lot more than an extra 5 million Euro or 5 million anything to oust Oxford from the top of the charts.
Anyway, there it is. Five million Euro and the national flagship advances one spot up the THE world rankings. It sounds a lot but when the minister for arts, sports and tourism spends 120 Euro for hat rental, and thousands for cars and hotels, there are perhaps worse things the Irish government could do with the taxpayers' money.
Tuesday, April 11, 2017
Should graduation rates be included in rankings?
There is a noticeable trend for university rankings to become more student- and teaching-centred. Part of this is a growing interest in using graduation rates as a ranking metric. Bob Morse of US News says "[t]his is why we factor in graduation rates. Getting into college means nothing if you can't graduate."
The big problem though is that if universities can influence or control standards for graduation then the value of this metric is greatly diminished. A high graduation rate might mean effective teaching and meritocratic admissions: it might also mean nothing more than a relaxation of standards.
But we do know that dropping out or not finishing university is the road to poverty and obscurity. Think of poor Robert Zimmerman, University of Minnesota dropout, singing for a pittance in coffee bars or Kingsley Amis toiling at University College Swansea for 13 years, able to afford only ten cigarettes a day, after failing his Oxford B Litt exam. Plus all those other failures like Mick Jagger (LSE) and Bill Gates (Harvard). So it could be that graduation rates as a ranking indicator are here to stay.
Friday, April 07, 2017
Job Application
A few years ago an elderly school teacher told me about a pupil who when asked to write an application for a dream job chose Archbishop of Canterbury "because I believe in God and know lots of Bible stories." These days he'd probably be over-qualified but never mind.
So I think it's time to start sending out applications to Ranking Task Forces and the like. I know those zeros at the end of a number are important, that I should click submit AFTER filling in the data field, and that Stellenbosch is in Africa.
Update: Corrected a spelling error in the title without a complaint from anyone.
Thursday, April 06, 2017
Trinity College Shoots Itself in the Other Foot
The story so far. Trinity College Dublin (TCD) has been flourishing over the last decade according to the Shanghai and Round University Rankings (RUR) world rankings which have a stable methodology. The university leadership has, however, been complaining about its decline in the Times Higher Education (THE) and QS rankings, which is attributed to the philistine refusal of the government to give TCD the money that it wants.
It turns out that the decline in the THE rankings was due to a laughable error. TCD had submitted incorrect data to THE, 355 Euro for total income, 111 for research income and 5 for income from industry instead of 355 million, 111 million and 5 million. Supposedly, this was the result of an "innocent mistake."
Today, the Round University Rankings released their 2017 league table. These rankings are derived from the Global Institutional Profiles Project (GIPP), run by Thomson Reuters and now by Clarivate Analytics and used until 2014 by THE. TCD has fallen from 102nd place to 647th, well below Maynooth and the Dublin Institute of Technology. The decline was catastrophic for the indicators based on institutional data and very slight for those derived from surveys and bibliometric information.
What happened? It was not the tight fists of the government. TCD apparently just submitted the data form to GIPP without providing data.
No doubt another innocent mistake. It will be interesting to see what the group of experts in charge of rankings at TCD has to say about this.
By the way, University College Dublin continues to do well in these rankings, falling a little bit from 195th to 218th.
Doing Something About Citations and Affiliations
University rankings have proliferated over the last decade. The International Rankings Expert Group's (IREG) inventory of national rankings counted 60, and there are now 40 international rankings, including global, regional, subject, business school and system rankings.
In addition, there have been a variety of spin-offs and extracts from the global rankings, especially those published by Times Higher Education, including Asian, Latin American, African, MENA and Young University rankings, as well as a ranking of the most international universities. The value of these varies, but that of the Asian rankings must now be considered especially suspect.
THE have just released the latest edition of their Asian rankings using the world rankings indicators with a recalibration of the weightings. They have reduced the weighting given to the teaching and research reputation surveys and increased that for research income, research productivity and income from industry. Unsurprisingly, Japanese universities, with good reputations but affected by budget cuts, have performed less well than in the world rankings.
These rankings have, as usual, produced some results that are rather counter intuitive and illustrate the need for THE, other rankers and the academic publishing industry to introduce some reforms in the presentation and counting of publications and citations.
As usual, the oddities in the THE Asian rankings have a lot to do with the research impact indicator, supposedly measured by citations. This, it needs to be explained, does not simply count the number of citations but compares them with the world average for over three hundred fields, across five years of publications and six years of citations. Added to all that is a "regional modification" applied to half of the indicator, by which the score for each university is divided by the square root of the score for the country in which the university is located. This effectively gives a boost to everybody except universities in the top-scoring country, a boost that can be quite significant for countries with a low citation impact.
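To make the mechanics concrete, here is a minimal sketch of a regional modification of the kind just described. It is my own illustration, not THE's code: the function name and the example figures are invented, and the only assumptions carried over from the description above are that the underlying score is a field-normalised citation impact (world average = 1.0) and that half of the indicator is divided by the square root of the country's score.

```python
from math import sqrt

def regional_modification(university_score, country_score):
    """Illustrative regional modification: half of the indicator is left
    untouched, the other half is divided by the square root of the score
    for the university's country. Both inputs are field-normalised
    citation impacts, with the world average equal to 1.0."""
    boosted = university_score / sqrt(country_score)
    return 0.5 * university_score + 0.5 * boosted

# Invented figures for illustration only.
# A university at the world average in a low-impact country (0.25) is lifted
# well above its unmodified score; the same university in the top-scoring
# country (1.0) gets no lift at all.
print(regional_modification(1.0, 0.25))  # 1.5 -> a 50% boost
print(regional_modification(1.0, 1.0))   # 1.0 -> no boost
```

The lower the country score, the bigger the lift, which is why the modification matters most for universities in countries with a low citation impact.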
What this means is that a university with a minimal number of papers can rack up a large and disproportionate score if it can collect large numbers of citations for a relatively small number of papers. This appears to be what has contributed to the extraordinary success of the institution variously known as Vel Tech University, Veltech University, Veltech Dr. RR & Dr. SR University and Vel Tech Rangarajan Dr Sagunthala R & D Institute of Science and Technology.
The university has scored a few local achievements, most recently ranking 58th among engineering institutions in the latest Indian NIRF rankings, but internationally, as Ben Sowter has indicated on Quora, it is way down the ladder or even unable to get onto the bottom rung.
So how did it get to be the third best university and the best private university in India, according to the THE Asian rankings? How could it have the highest research impact of any university in Chennai, Tamil Nadu, India and Asia, and perhaps the highest or second highest in the world?
Ben Sowter of the QS Intelligence Unit has provided the answer. It is basically due to industrial-scale self-citation.
"Their score of 100 for citations places them as the topmost university in Asia for citations, more than 6 points clear of their nearest rival. This is an indicator weighted at 30%. Conversely, and very differently from other institutions in the top 10 for citations, with a score of just 8.4 for research, they come 285/298 listed institutions. So an obvious question emerges, how can one of the weakest universities in the list for research, be the best institution in the list for citations?
The simple answer? It can’t. This is an invalid result, which should have been picked up when the compilers undertook their quality assurance checks.
It’s technically not a mistake though, it has occurred as a result of the Times Higher Education methodology not excluding self-citations, and the institution appears to have, for either this or other purposes, undertaken a clear campaign to radically promote self-citations from 2015 onwards.
In other words and in my opinion, the university has deliberately and artificially manipulated their citation records, to cheat this or some other evaluation system that draws on them.
The Times Higher Education methodology page explains: The data include the 23,000 academic journals indexed by Elsevier’s Scopus database and all indexed publications between 2011 and 2015. Citations to these publications made in the six years from 2011 to 2016 are also collected.
So let’s take a look at the Scopus records for Vel Tech for those periods. There are 973 records in Scopus on the primary Vel Tech record for the period 2011–2015 (which may explain why Vel Tech have not featured in their world ranking which has a threshold of 1,000). Productivity has risen sharply through that period from 68 records in 2011 to 433 records in 2015 - for which due credit should be afforded.
The issue begins to present itself when we look at the citation picture."

He continues:
"That’s right. Of the 13,864 citations recorded for the main Vel Tech affiliation in the measured period 12,548 (90.5%) are self-citations!!
A self-citation is not, as some readers might imagine, one researcher at an institution citing another at their own institution, but that researcher citing their own previous research, and the only way a group of researchers will behave that way collectively on this kind of scale so suddenly is to have pursued a deliberate strategy to do so for some unclear and potentially nefarious purpose.
It’s not a big step further to identify some of the authors who are most clearly at the heart of this strategy by looking at the frequency of their occurrence amongst the most cited papers for Vel Tech. Whilst this involves a number of researchers, at the heart of it seems to be Dr. Sundarapandian Vaidyanathan, Dean of the R&D Center.
Let’s take as an example, a single paper he published in 2015 entitled “A 3-D novel conservative chaotic system and its generalized projective synchronization via adaptive control”. Scopus lists 144 references, 19 of which appear to be his own prior publications. The paper has been cited 114 times, 112 times by himself in other work."
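The arithmetic behind that 90.5% figure is simple division (12,548 out of 13,864). For readers who want to check an institution for themselves, here is a minimal sketch of how a self-citation rate might be computed from bibliographic records. The record structure is invented for illustration and is not the Scopus API; the document's own point stands that major databases already offer ways to exclude self-citations.

```python
def self_citation_rate(papers):
    """Estimate a self-citation rate from a list of papers.

    Each paper is a dict with:
      'authors'  - a set of author identifiers
      'cited_by' - a list of citing papers, each with its own 'authors' set
    A citation counts as a self-citation when the citing and cited papers
    share at least one author. Illustrative only; real databases apply
    their own, more refined definitions."""
    total = self_cites = 0
    for paper in papers:
        for citing in paper["cited_by"]:
            total += 1
            if paper["authors"] & citing["authors"]:
                self_cites += 1
    return self_cites / total if total else 0.0

# A toy example: one of the two citations shares an author with the cited paper.
example = [
    {"authors": {"A"}, "cited_by": [{"authors": {"A"}}, {"authors": {"B"}}]},
]
print(self_citation_rate(example))  # 0.5

# The figures quoted above for Vel Tech: 12,548 self-citations out of 13,864.
print(12548 / 13864)  # about 0.905, i.e. 90.5%
```

Definitions of self-citation vary (author-level, as in this sketch, versus institution-level), which is why the sketch states its own.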
In addition, the non-self citations are from a very small number of people, including his co-authors. Basically his audience is himself and a small circle of friends.
Another point is that Dr Vaidyanathan has published in a limited number of journals and conference proceedings, the most important of which are the International Journal of Pharmtech Research and the International Journal of Chemtech Research, both of which have Vaidyanathan as an associate editor. My understanding of Scopus procedures for inclusion and retention in the database is that the number of citations is very important. I was once associated with a journal that was highly praised by the Scopus reviewers for the quality of its contents but rejected because it had few citations. I wonder if Scopus's criteria include watching out for self-citations.
The Editor in Chief of the International Journal of Chemtech Research is listed as Bhavik J Bhatt who received his Ph D from the University of Iowa in 2013 and does not appear to have ever held a full time university post.
The Editor in Chief of the International Journal of Pharmtech Research is Moklesur R Sarker, associate professor at Lincoln University College Malaysia, which in 2015 was reported to be in trouble for admitting bogus students.
I will be scrupulously fair and quote Dr Vaidyanathan.
"I joined Veltech University in 2009 as a Professor and shortly, I joined the Research and Development Centre at Veltech University. My recent research areas are chaos and control theory. I like to stress that research is a continuous process, and research done in one topic becomes a useful input to next topic and the next work cannot be carried on without referring to previous work. My recent research is an in-depth study and discovery of new chaotic and hyperchaotic systems, and my core research is done on chaos, control and applications of these areas. As per my Scopus record, I have published a total of 348 research documents. As per Scopus records, my work in chaos is ranked as No. 2, and ranked next to eminent Professor G. Chen. Also, as per Scopus records, my work in hyperchaos is ranked as No. 1, and I have contributed to around 50 new hyperchaotic systems. In Scopus records, I am also included in the list of peers who have contributed in control areas such as ‘Adaptive Control’, ‘Backstepping Control’, ‘Sliding Mode Control’ and ‘Memristors’. Thus, the Scopus record of my prolific research work gives ample evidence of my subject expertise in chaos and control. In this scenario, it is not correct for others to state that self-citation has been done for past few years with an intention of misleading others. I like to stress very categorically that the self-citations are not an intention of me or my University.
I started research in chaos theory and control during the years 2010-2013. My visit to Tunisia as a General Chair and Plenary Speaker in CEIT-2013 Control Conference was a turning point in my research career. I met many researchers in control systems engineering and I actively started my research collaborations with foreign faculty around the world. From 2013-2016, I have developed many new results in chaos theory such as new chaotic systems, new hyperchaotic systems, their applications in various fields, and I have also published several papers in control techniques such as adaptive control, backstepping control, sliding mode control etc. Recently, I am also actively involved in new areas such as fractional-order chaotic systems, memristors, memristive devices, etc."
...
"Regarding citations, I cite the recent developments like the discovery of new chaotic and hyperchaotic systems, recent applications of these systems in various fields like physics, chemistry, biology, population ecology, neurology, neural networks, mechanics, robotics, chaos masking, encryption, and also various control techniques such as active control, adaptive control, backstepping control, fuzzy logic control, sliding mode control, passive control, etc,, and these recent developments include my works also."
His claim that self-citation was not his intention is odd. Was he citing in his sleep, or was he possessed by an evil spirit when he wrote his papers or signed off on them? The claim about citing recent developments that include his own work misses the point. Certainly somebody like Chomsky would cite himself when reviewing developments in formal linguistics, but he would also be cited by other people. Aside from himself and his co-authors, Dr Vaidyanathan is cited by almost nobody.
The problems with the citations indicator in the THE Asian rankings do not end there. Here are a few cases of universities with very low scores for research and unbelievably high scores for research impact:
King Abdulaziz University is ranked second in Asia for research impact. This is an old story and it is achieved by the massive recruitment of adjunct faculty culled from the lists of highly cited researchers.
Toyota Technological Institute is supposedly best in Japan for research impact, which I suspect would be news to most Japanese academics, but 19th for research.
Atilim University in Ankara is supposedly the best in Turkey for research impact but also has a very low score for research.
The high citations score for Quaid-i-Azam University in Pakistan results from participation in the multi-author physics papers derived from the CERN projects. In addition, there is one hyperproductive researcher in applied mathematics.
Tokyo Metropolitan University gets a high score for citations because of a few much-cited papers in physics and molecular genetics.
Bilkent University is a contributor to frequently cited multi-author papers in genetics.
According to THE, Universiti Tunku Abdul Rahman (UTAR) is the second best university in Malaysia and the best for research impact, something that will come as a surprise to anyone with the slightest knowledge of Malaysian higher education. This is because of participation in the Global Burden of Disease study, whose papers propelled Anglia Ruskin University to the apex of British research. Other universities with disproportionate scores for research impact include Soochow University China, Northeast Normal University China, Jordan University of Science and Technology, Panjab University India, COMSATS Institute of Information Technology Pakistan and Yokohama City University Japan.
There are some things that the ranking and academic publishing industries need to do about the collection, presentation and distribution of publications and citations data.
1. All rankers should exclude self-citations from citation counts. This is very easy to do, little more than clicking a box, and has been done by QS since 2011. It would be even better if intra-university and intra-journal citations were excluded as well.
2. There will almost certainly be a growing problem with the recruitment of adjunct staff who will be asked to do no more than list an institution as a secondary affiliation when publishing papers. It would be sensible if academic publishers simply insisted that there be only one affiliation per author. If they do not, it should be possible for rankers to count only the first-named author.
3. The more fields there are, the greater the chance that rankings can be skewed by strategically or accidentally placed citations. The number of fields used for normalisation should be kept to a reasonable number.
4. A visit to the Leiden Ranking website and a few minutes tinkering with their settings and parameters will show that citations can be used to measure several different things. Rankers should use more than one indicator to measure citations.
5. It defies common sense for any ranking to give a greater weight to citations than to publications. Rankers need to review the weighting given to their citation indicators. In particular, THE needs to think about their regional modification, which has the effect, noted above, of increasing the citations score for nearly everybody and so pushing the actual weighting of the indicator above 30 per cent (a rough sketch of this effect follows the list).
6. Academic publishers and databases like Scopus and Web of Science need to audit journals on a regular basis.
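On point 5, here is a rough, hedged sketch of why the regional modification pushes the effective weight of the citations indicator above its nominal level. The figures are invented; the only assumptions taken from the text are the 30 per cent nominal weight and the square-root country adjustment applied to half of the score.

```python
from math import sqrt

NOMINAL_WEIGHT = 0.30  # nominal weight of the citations indicator

def modified_citation_score(raw_score, country_score):
    # Half the score is left alone, half is divided by the square root of the
    # country score. Here raw_score is on a 0-100 scale and country_score on a
    # 0-1 scale relative to the top-scoring country; both values are invented.
    return 0.5 * raw_score + 0.5 * raw_score / sqrt(country_score)

raw, country = 40.0, 0.5
modified = modified_citation_score(raw, country)
print(round(modified, 1))                         # 48.3: about a 21% uplift on the raw score
print(round(NOMINAL_WEIGHT * modified / raw, 2))  # 0.36: effective weight above the nominal 0.30
```

Because nearly every university outside the top-scoring country gets some lift, the indicator ends up contributing more to the overall score than its nominal 30 per cent weight suggests.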
Tuesday, April 04, 2017
The Trinity Affair Gets Worse
Trinity College Dublin (TCD) has been doing extremely well over the last few years, especially in research. It has risen in the Shanghai ARWU rankings from the 201-300 to the 151-200 band and from 174th to 102nd in the RUR rankings.
You would have thought that would be enough for any aspiring university and that it would be flying banners all over the place. But TCD has been too busy lamenting its fall in the Times Higher Education (THE) and QS world rankings, which it attributed to the reluctance of the government to give it as much money as it wanted. Inevitably, a high-powered Rankings Steering Group headed by the Provost was formed to turn TCD around.
In September last year the Irish Times reported that the reason or part of the reason for the fall in the THE world rankings was that incorrect data had been supplied. The newspaper said that:
"The error is understood to have been spotted when the college – which ranked in 160th place last year – fell even further in this year’s rankings.
The data error – which sources insist was an innocent mistake – is likely to have adversely affected its ranking position both this year and last."

I am wondering why "sources" were so keen to insist that it was an innocent mistake. Has someone been hinting that it might have been deliberate?
It now seems that the mistake was not just a misplaced decimal point. It was a decimal point moved six places to the left, so that TCD reported a total income of 355 euros, a research income of 111 euros and income from industry of 5 euros, instead of 355 million, 111 million and 5 million respectively. I wonder what will happen to applications to the business school.
What is even more disturbing, although perhaps not entirely surprising, is that THE's game-changing auditors did not notice.