Times Higher Education (THE) has produced a "snapshot" ranking of Arab universities extracted from its World University Rankings. There has been no change in the indicators or their weighting. Only 28 universities are included, which raises questions about how suitable THE's methodology is for regions like the Middle East and North Africa.
This is an improvement over a remarkable MENA snapshot that THE produced in 2015, which put Texas A&M University at Qatar in first place by virtue of one half-time faculty member who was listed as a contributor to a multi-author, multi-cited CERN paper.
The top five universities this time are King Abdulaziz University (KAU), Saudi Arabia; King Fahd University of Petroleum and Minerals, Saudi Arabia; King Saud University, Saudi Arabia; Khalifa University of Science, Technology and Research, UAE; and Qatar University.
The top three places are held by Saudi institutions. So how did they do it? According to an article by a THE editor for the World Economic Forum, it was all due to money and internationalisation.
Up to a point that is correct. The sad story of Trinity College Dublin's botched data submission gives a rough idea of how much a reported increase can affect a university's position: roughly 5 million Euro of additional reported total income (with proportionate increases for research income and income from industry) can move a middling university up one place in the overall rankings.
But does that explain KAU in top place? It did get a high score, 92.1, for international orientation but five other Arab universities did better. For teaching it was third, for industry income third, and for research seventh. What actually made the difference was Citations. KAU had a score of 93.3, far ahead of the next contender, Jordan University of Science and Technology with 50.2.
KAU's research impact is, according to THE, second in Asia only to the shooting star of India, Vel Tech University, whose single self-citing prodigy supposedly had a greater impact than the whole of any other Asian university. KAU's citations score was the result of the massive recruitment of adjunct faculty (40 at the last count) who put KAU as a second affiliation. How much time they put in at KAU is uncertain, but the Shanghai rankings calculated that highly cited researchers spent an average of 16% of their time at the university of their second affiliation.
It is bad enough that THE put so much emphasis on income and internationalisation in their methodology, promoting the diversion of resources from things like primary and pre-school education, adult literacy and alternatives to oil exports. To encourage universities to rise in the rankings by hiring adjunct faculty whose contribution is uncertain is very irresponsible. It would be a good thing if this snapshot were ignored.
Monday, May 22, 2017
Wednesday, May 10, 2017
Ranking article: The building of weak expertise: the work of global university rankers
Miguel Antonio Lim, The building of weak expertise: the work of global university rankers
University rankers are the subject of much criticism, and yet they remain influential in the field of higher education. Drawing from a two-year field study of university ranking organizations, interviews with key correspondents in the sector, and an analysis of related documents, I introduce the concept of weak expertise. This kind of expertise is the result of a constantly negotiated balance between the relevance, reliability, and robustness of rankers’ data and their relationships with their key readers and audiences. Building this expertise entails collecting robust data, presenting it in ways that are relevant to audiences, and engaging with critics. I show how one ranking organization, the Times Higher Education (THE), sought to maintain its legitimacy in the face of opposition from important stakeholders and how it sought to introduce a new “Innovation and Impact” ranking. The paper analyzes the strategies, methods, and particular practices that university rankers undertake to legitimate their knowledge—and is the first work to do so using insights gathered alongside the operations of one of the ranking agencies as well as from the rankings’ conference circuit. Rather than assuming that all of these trust-building mechanisms have solidified the hold of the THE over its audience, they can be seen as signs of a constant struggle for influence over a skeptical audience.
Higher Education 13 April 2017
https://link.springer.com/article/10.1007%2Fs10734-017-0147-8
DOI: 10.1007/s10734-017-0147-8
What good are international faculty?
In the previous post I did a bit of fiddling around with correlations and found that UK universities' scores for the international student indicator in the QS world rankings did not correlate very much with beneficial outcomes for students, such as employability and course completion. They did, however, correlate quite well with spending per student.
That would suggest that British universities want lots of international students because it is good for their finances.
What about international faculty?
Comparing the scores for various outcomes with the QS international faculty score shows that in most cases correlation is low and statistically insignificant. This includes course satisfaction and satisfaction with teaching (Guardian University League Tables), and completion and student satisfaction (THE TEF simulation).
There is, however, one metric that is positively, albeit modestly, and significantly associated with international faculty, and that is the Research Excellence Framework (REF) score (from the Complete University Guide): .284 (sig. 2-tailed .043; N = 51).
So it seems that international students are valued for the money they bring with them and international faculty for boosting research quality.
Caveat: this applies to highly ranked universities in the UK. How far it is true of other places or even less prestigious British institutions remains to be seen.
Monday, May 01, 2017
What good are international students?
There has been a big fuss in the UK about the status of international students. Many in the higher education industry are upset that the government insists on including these students in the overall total of immigrants, which might lead at some point to a reduction in their numbers. Leading twitterati have erupted in anger. Phil Baty of THE has called the government's decision "bizarre, bonkers & deeply depressing" and even invoked the name of the arch demon Enoch Powell in support.
So are international students a benefit to British universities? I have just done a quick correlation of the scores for the international students indicator in the QS World University Rankings to see whether there is any link between positive outcomes for students and the number of international students.
This is of course only suggestive. QS provides scores for only 51 universities included in their world top 500 and the situation might be different for other countries. Another caveat is that international students might provide net economic benefits for surrounding communities although that is far from settled.
Here are the correlations with the QS international student score (significance 2 tailed and N in brackets).
Value added .182 (.206; 50). From the Guardian rankings 2016; compares entry qualifications with degrees awarded.
Career .102 (.480; 50). Graduate-level employment or postgraduate study six months after graduation, also from the Guardian rankings. The correlation with the graduate destinations indicator in the Times Higher Education TEF simulation, based on the same data, is even lower, .018, and turns negative after benchmarking, -.172.
Students completing degrees .128 (.376; 50). From the TEF simulation. Again, the correlation turns negative after benchmarking.
QS employer reputation survey .234 (.140; 41). From the 2016 world rankings.
So the number of international students has a slight and statistically insignificant relationship with the quality of teaching and learning as measured by value added, graduate employability, course completion and reputation with employers. Why then are universities so desperate to get as many as possible?
This, I think, is the answer. The correlation between the QS international students indicator and spending per student, as measured by the Guardian ranking, is .414 (.003; 50), which is very significant considering the noise generated in comparisons of this sort. Of course, correlation does not equal causation, but it seems a reasonable hypothesis that it is the money brought by international students that makes them so attractive to British universities.
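For anyone who wants to replicate this sort of exercise, the calculation is just a Pearson product-moment correlation over paired indicator scores. A minimal sketch in Python, with invented scores standing in for the actual QS and Guardian data:

```python
# A minimal sketch of the correlation exercise described above. The data
# here are invented for illustration; the post's real inputs are QS
# international-student scores and Guardian spending-per-student figures.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical indicator scores for eight universities.
intl_students  = [78.2, 65.1, 90.4, 55.0, 70.3, 82.9, 60.7, 74.5]
spend_per_head = [8.1, 6.0, 9.3, 5.2, 6.8, 8.7, 5.9, 7.4]

print(round(pearson_r(intl_students, spend_per_head), 3))
```

With the real indicator tables loaded as paired lists, the same function reproduces figures like the .414 reported above; the two-tailed significance level would additionally require a t-test on the coefficient.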
Sunday, April 23, 2017
UTAR and the Times Higher Education Asian University Rankings
Recently, Universiti Tunku Abdul Rahman (UTAR), a private Malaysian university, welcomed what appeared to be an outstanding performance in the Times Higher Education (THE) Asian Universities Rankings, followed by a good score in the magazine's Young University Rankings. This has been interpreted as a remarkable achievement not just for UTAR but also for Malaysian higher education in general.

In the Asian rankings, UTAR is ranked in the top 120 and second in Malaysia, behind Universiti Malaya (UM) and ahead of the major research universities: Universiti Sains Malaysia, Universiti Kebangsaan Malaysia and Universiti Putra Malaysia.
This is in sharp contrast to other rankings. A research-based ranking published by Middle East Technical University puts UTAR 12th in Malaysia and 589th in Asia. The Webometrics ranking, which is mainly web-based with one research indicator, has it 17th in Malaysia and 651st in Asia.

The QS rankings, known to be kind to South East Asian universities, put UTAR in the 251-300 band for Asia and 14th= in Malaysia, behind places like Taylor's University and Multimedia University and in the same band as Universiti Malaysia Perlis and Universiti Malaysia Terengganu. UTAR does not appear in the Shanghai rankings or the Russian Round University Rankings.
Clearly, THE is the odd man out among rankings in its assessment of UTAR. I suspect that if challenged a spokesperson for THE might say that this is because they measure things other than research. That is very debatable. Bahram Bekhradnia of the Higher Education Policy Institute has argued in a widely cited report that these rankings are of little value because they are almost entirely research-orientated.

In fact, UTAR did not perform so well in the THE Asian rankings because of teaching, internationalisation or links with industry. It did not even do well in research. It did well because of an "outstanding" score for research impact, and it got that score because of the combination of a single obviously talented researcher with a technically defective methodology.
Just take a look at UTAR's scores for the various components of the THE Asian rankings. For Research, UTAR got a very low score of 9.6, the lowest of the nine Malaysian universities featured in these rankings (100 represents the top score on all the indicators). For Teaching it has a score of 15.9, also the lowest of the ranked Malaysian universities.

For International Orientation it got a score of 33.2. This was not quite the worst in Malaysia: Universiti Teknologi MARA (UiTM), which does not admit non-bumiputra Malaysians, let alone international students, did worse. For Industry Income UTAR's score was 32.9, again surpassed by every Malaysian university except UiTM.

So how on earth did UTAR manage to get into the top 120 in Asia and second in Malaysia?
The answer is that it got an "excellent" score of 56.7 for Research Impact, measured by field-normalised citations, higher than every other Malaysian university in these rankings, including UM.
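Field normalisation, in outline, divides each paper's citation count by the world average for papers of the same field and year and then averages the results. A toy sketch of that idea, with invented baselines rather than anything from THE's actual data:

```python
# Rough sketch of field-normalised citation impact: each paper's citations
# are divided by the world average for its field and year, and the results
# are averaged. The field/year baselines below are invented for illustration.
WORLD_AVG = {("oncology", 2015): 12.0, ("mathematics", 2015): 3.0}

def normalised_impact(papers):
    """papers: list of (field, year, citations). Returns the mean normalised score."""
    scores = [cites / WORLD_AVG[(field, year)] for field, year, cites in papers]
    return sum(scores) / len(scores)

papers = [("oncology", 2015, 24), ("mathematics", 2015, 3)]
print(normalised_impact(papers))  # (24/12 + 3/3) / 2 = 1.5
```

A score of 1.0 means world-average impact for the portfolio, which is why a handful of heavily cited papers can lift a small university's score so dramatically.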
That score is also higher than those of several major international research universities such as National Taiwan University, the Indian Institute of Technology Bombay, Kyoto University and Tel Aviv University. That alone should make the research impact score very suspicious. Also, compare it with the low score for Research, which combines three metrics: research reputation, research income and publications. Somehow UTAR has managed to have a huge impact on the research world even though it receives little money for research, does not have much of a reputation for research, and does not publish very much.
The THE research impact (citations) indicator is very problematical in several ways. It regularly produces utterly absurd results, such as Alexandria University in Egypt in fourth place for research impact in the world in 2010, St George's, University of London (a medical school), in first place last year, and Anglia Ruskin University, a former art school, equal to Oxford and well ahead of Cambridge University.

In addition, to flog a horse that should have decomposed by now, Veltech University in Chennai, India, according to THE, has the biggest research impact in Asia and perhaps, if it qualified for the World Rankings, in the world. This was achieved by massive self-citation by exactly one researcher, with a little bit of help from a few friends.
Second in Asia for research impact, THE would have us believe, is King Abdulaziz University of Jeddah, which has been on a recruiting spree of adjunct faculty whose duties might include visiting the university but certainly do require putting its name as a secondary affiliation on research papers.

To rely on the THE rankings as a measure of excellence is unwise. There were methodological changes in 2011, 2015 and 2016, which have contributed to universities moving up or down many places even if there has been no objective change. Middle East Technical University in Ankara, for example, fell from 85th place in 2014-15 to the 501-600 band in 2015-16 and then to the 601-800 band in 2016-17. Furthermore, adding new universities means that the average scores from which the final scores are calculated are likely to fluctuate.
In addition, THE has been known to recalibrate the weight given to its indicators in its regional rankings, and this has sometimes worked to the advantage of whoever is the host of THE's latest exciting and prestigious summit. In 2016, THE's Asian rankings featured an increased weight for research income from industry and a reduced one for teaching and research reputation. This was to the disadvantage of Japan and to the benefit of Hong Kong, where the Asian summit was held.

So, is UTAR really more influential among international researchers than Kyoto University or the National Taiwan University?
What actually happened to UTAR is that it has an outstanding medical researcher who is involved in a massive international medical project with hundreds of collaborators from hundreds of institutions, one that produces papers that have been cited hundreds of times and will in the next few years be cited thousands of times. One of these papers had, by my count, 720 contributors from 470 universities and research centres and has so far received 1,036 citations, 695 in 2016 alone.

There is absolutely nothing wrong with such projects, but it is ridiculous to treat every one of those 720 contributors as though they were the sole author of the paper, with credit for all the citations, which is what THE does. This could have been avoided simply by using fractional counting and dividing the number of citations by the number of authors or the number of affiliated institutions. This is an option available in the Leiden Ranking, which is the most technically expert of the various rankings. THE already does this for publications with over 1,000 contributors, but that is obviously not enough.
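The difference between full counting, which credits each institution with all of a paper's citations, and the fractional counting remedy suggested above can be shown in a few lines (invented numbers, not Leiden's exact algorithm):

```python
# Toy illustration of full vs fractional counting of citations for one
# institution's portfolio. Each paper is (citations, number_of_institutions).
def full_count(papers):
    """Credit the institution with every citation of every paper."""
    return sum(cites for cites, n_inst in papers)

def fractional_count(papers):
    """Divide each paper's citations by its number of contributing institutions."""
    return sum(cites / n_inst for cites, n_inst in papers)

# One mega-collaboration paper (1,036 citations, 470 institutions)
# alongside two ordinary papers.
papers = [(1036, 470), (10, 2), (4, 1)]
print(full_count(papers))        # 1050
print(fractional_count(papers))  # about 11.2
```

Under full counting the single mega-paper dominates the institution's total; under fractional counting its contribution shrinks to roughly two citations' worth, which is why the choice of counting method matters so much for small universities.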
I would not go as far as Bahram Bekhradnia and other higher education experts and suggest that universities should ignore rankings altogether. But if THE is going to continue to peddle such a questionable product then Malaysian universities would be well advised to keep their distance. There are now several other rankings on the market that could be used for benchmarking and marketing.

It is not a good idea for UTAR to celebrate its achievement in the THE rankings. It is quite possible that the researcher concerned will one day go elsewhere or that THE will tweak its methodology again. If either happens the university will suffer a precipitous fall in the rankings along with a decline in its public esteem. UTAR and other Malaysian universities would be wise to treat the THE rankings with a great deal of caution and scepticism.
Thursday, April 13, 2017
University Challenge: It wasn't such a dumb ranking.
A few years ago I did a crude ranking of winners and runners-up, derived from Wikipedia, of the UK quiz show University Challenge, to show that British university rankings were becoming too complex and sophisticated. Overall it was not too dissimilar to national rankings and certainly more reasonable than the citations indicator of the THE world rankings. At the top was Oxford, followed by Cambridge and then Manchester. The first two were actually represented by constituent colleges. Manchester is probably underrepresented because it was expelled for several years after its team tried to sabotage a show by giving answers like Marx, Trotsky and Lenin to all or most of the questions, striking a blow against bourgeois intellectual hegemony or something.
Recently Paul Greatrix of Wonk HE did a list of the ten dumbest rankings ever. The University Challenge ranking was ninth because everybody knows who will win. He has a point. Cambridge and Oxford colleges are disproportionately likely to be in the finals.
Inevitably there is muttering about not enough women and ethnic minorities on the teams. The Guardian complains that only 22% of this year's contestants were women, and none of the finalists. The difference between the number of finalists and the number of competitors might, however, suggest that there is a bias in favour of women in the processes of team selection.
Anyway, here is a suggestion for anyone concerned that the university challenge teams don't look like Britain or the world. Give each competing university or college the opportunity, if they wish, to submit two teams, one of which will be composed of women and/or aggrieved minorities and see what happens.
James Thompson at the Unz Review has some interesting comments. It seems that general knowledge is closely associated with IQ and to a lesser extent with openness to experience. This is in fact a test of IQ aka intelligence aka general mental ability.
So it's not such a dumb ranking after all.
Wednesday, April 12, 2017
Exactly how much is five million Euro worth, converted into ranking places?
One very useful piece of information to emerge from the Trinity College Dublin (TCD) rankings fiasco is the likely effect on the rankings of injecting money into universities.
When TCD reported to Times Higher Education (THE) that it had almost no income at all, 355 Euro in total, of which 111 Euro was research income and 5 Euro from industry, it was ranked in the 201-250 band in the world university rankings. Let's be generous and say that it was 201st. But when the correct numbers were inserted, 355 million in total (of which 111 million is research income and 5 million industry income), it was in 131st= place.
So we can say crudely that increasing (or rather reporting) overall institutional income by 5 million Euro (keeping the proportions for research income and industry income constant) translates into one place in the overall world rankings.
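The arithmetic behind that crude estimate can be checked directly from the two data points, assuming as above that TCD would have been 201st:

```python
# Reproducing the rough "5 million Euro per place" estimate from the two
# reported TCD positions. Income figures are in millions of Euro.
income_wrong, rank_wrong = 0.000355, 201   # 355 Euro reported, ranked ~201st
income_right, rank_right = 355.0, 131      # 355 million Euro, ranked 131st=

euro_per_place = (income_right - income_wrong) / (rank_wrong - rank_right)
print(round(euro_per_place, 1))  # about 5.1 million Euro per place
```

This assumes, implausibly, that the income-to-rank relationship is linear across the whole table, which is exactly the caveat about Caltech and Oxford below.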
Obviously this is not going to apply as we go up the rankings. I suspect that Caltech will need a lot more than an extra 5 million Euro or 5 million anything to oust Oxford from the top of the charts.
Anyway, there it is. Five million Euro and the national flagship advances one spot up the THE world rankings. It sounds a lot but when the minister for arts, sports and tourism spends 120 Euro for hat rental, and thousands for cars and hotels, there are perhaps worse things the Irish government could do with the taxpayers' money.
Tuesday, April 11, 2017
Should graduation rates be included in rankings?
There is a noticeable trend for university rankings to become more student- and teaching-centred. Part of this is a growing interest in using graduation rates as a ranking metric. Bob Morse of US News says "[t]his is why we factor in graduation rates. Getting into college means nothing if you can't graduate."
The big problem though is that if universities can influence or control standards for graduation then the value of this metric is greatly diminished. A high graduation rate might mean effective teaching and meritocratic admissions: it might also mean nothing more than a relaxation of standards.
But we do know that dropping out or not finishing university is the road to poverty and obscurity. Think of poor Robert Zimmerman, University of Minnesota dropout, singing for a pittance in coffee bars or Kingsley Amis toiling at University College Swansea for 13 years, able to afford only ten cigarettes a day, after failing his Oxford B Litt exam. Plus all those other failures like Mick Jagger (LSE) and Bill Gates (Harvard). So it could be that graduation rates as a ranking indicator are here to stay.