
Saturday, September 07, 2019

Finer and finer rankings prove anything you want

If you take a single metric from a single ranking and do a bit of slicing by country, region, subject, field, or age, there is a good chance that you can prove almost anything, for example that the University of the Philippines is a world beater for medical research. Here is another example from the Financial Times.

An article by John O'Hagan, Emeritus Professor at Trinity College Dublin, claims that German universities are doing well for research impact in the QS economics world rankings. Supposedly, "no German university appears in the top 50 economics departments in the world using the overall QS rankings. However, when just research impact is used, the picture changes dramatically, with three German universities, Bonn, Mannheim and Munich, in the top 50, all above Cambridge and Oxford on this ranking."

This is a response to Frederick Studemann's claim that German universities are about to move up the rankings. O'Hagan is saying that this is already happening.

I am not sure what this is about. I had a look at the most recent QS economics rankings and found that in fact Mannheim is in the top fifty overall for that subject. The QS subject rankings do not have a research impact indicator. They have academic reputation, citations per paper, and h-index, which might be considered proxies for research impact, but on none of these are all three universities in the top fifty. Two of the three universities are in the top fifty for academic research reputation, one for citations per paper and two for h-index.

So it seems that the article isn't referring to the QS economics subject ranking. Maybe it is the overall ranking that Professor O'Hagan is thinking of? There are no German universities in the overall top fifty there, but there are also none in the citations per faculty indicator.

I will assume that the article is based on an actual ranking somewhere, maybe an earlier edition of the QS subject rankings or the THE world rankings or from one of the many spin-offs. 

But it seems a stretch to talk about German universities moving up the rankings just because they did well in one metric in one of the 40-plus international rankings in one year.


Tuesday, August 13, 2019

University of the Philippines beats Oxford, Cambridge, Yale, Harvard, Tsinghua, Peking etc etc

Rankings can do some good sometimes. They can also do a lot of harm and that harm is multiplied when they are sliced more and more thinly to produce rankings by age, by size, by mission, by region, by indicator, by subject. When this happens minor defects in the overall rankings are amplified.

That would not be so bad if universities, political leaders and the media were to treat the tables and the graphs with a healthy scepticism. Unfortunately, they treat the rankings, especially THE's, with obsequious deference as long as they are provided with occasional bits of publicity fodder.

Recently, the Philippine media have proclaimed that the University of the Philippines (UP) has beaten Harvard, Oxford and Stanford for health research citations. It was seventh for citations in the THE Clinical, Pre-clinical and Health subject ranking, behind Tokyo Metropolitan University, Auckland University of Technology, Metropolitan Autonomous University Mexico, Jordan University of Science and Technology, University of Canberra and Anglia Ruskin University.

The Inquirer is very helpful and provides an explanation from the Philippine Council for Health Research and Development that citation scores “indicate the number of times a research has been cited in other research outputs” and that the score "serves as an indicator of the impact or influence of a research project which other researchers use as reference from which they can build on succeeding breakthroughs or innovations.” 

Fair enough, but how can UP, which has a miserable score of 13.4 for research in the same subject ranking, have such a massive research influence? How can it have an extremely low output of papers, a poor reputation for research, and very little funding and still be a world beater for research impact?

It is in fact nothing to do with UP, nothing to do with everyone working as a team, decisive leadership or recruiting international talent.

It is the result of a bizarre and ludicrous methodology. First, THE does not use fractional counting for papers with fewer than a thousand authors. UP, along with many other universities, has taken part in the Global Burden of Disease project funded by the Bill and Melinda Gates Foundation. This has produced a succession of papers, many of them in the Lancet, with hundreds of contributing institutions and researchers, whose names are all listed as authors, and hundreds or thousands of citations. As long as the number of authors does not reach 1,000, each author is counted as though he or she were the recipient of all the citations. So UP gets the credit for a massive number of citations, which is divided by a relatively small number of papers.

Why not just use fractional counting, dividing the citations among the contributors or the institutions, as Leiden Ranking does? Probably because it might add a little to costs, or perhaps because THE doesn't like to admit it made a mistake.

Then we have the country bonus or regional modification, applied to half the indicator, which increases the score for universities in countries with low impact.
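To see how these two rules interact, here is a minimal sketch in Python with invented numbers; the paper counts, citation counts and the form of the country adjustment are illustrative stand-ins, not THE's actual data or formula.

```python
# Toy illustration: whole counting vs fractional counting of one mega-paper,
# plus a country bonus applied to half of the citations indicator.
# All numbers are invented for illustration.

def citations_per_paper(own_citations, mega_citations, papers, mega_authors, fractional):
    """Average citations per paper, crediting the mega-paper in full or fractionally."""
    credited = mega_citations / mega_authors if fractional else mega_citations
    return (own_citations + credited) / papers

papers = 200           # a small total output
own_citations = 400    # citations to the university's ordinary papers
mega_citations = 3000  # citations to one Global Burden of Disease style paper
mega_authors = 700     # under 1,000 authors, so each is credited in full

whole = citations_per_paper(own_citations, mega_citations, papers, mega_authors, fractional=False)
frac = citations_per_paper(own_citations, mega_citations, papers, mega_authors, fractional=True)
print(f"whole counting:      {whole:.1f} citations per paper")   # 17.0
print(f"fractional counting: {frac:.1f} citations per paper")    # 2.0

# Stylised country bonus: half the score is left alone, half is boosted
# for a country whose average impact is below the world average.
country_boost = 2.0
print(f"with country bonus:  {0.5 * whole + 0.5 * whole * country_boost:.1f}")  # 25.5
```

On these invented figures a single whole-counted mega-paper multiplies the citations-per-paper score more than eightfold, and the country bonus lifts it further, which is the pattern the UP case illustrates.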

The result of all this is that UP, surrounded by low scoring universities, not producing very much research but with a role in a citation rich mega project, gets a score for this indicator that puts it ahead of the Ivy League, the Group of Eight and the leading universities of East Asia.

If nobody took this seriously, then no great harm would be done. Unfortunately it seems that large numbers of academics, bureaucrats and journalists do take the THE rankings very seriously or pretend to do so in public. 

And so committee addicts get bonuses and promotions, talented researchers spend their days in unending ranking-inspired transformational seminars, funds go to the mediocre and the sub-mediocre, students and stakeholders base their careers on misleading data, and the problems of higher education are covered up or ignored.



Tuesday, June 19, 2018

Are the US and the UK really making a comeback?

The latest QS World University Rankings and the THE World Reputation Rankings have just been published. The latter will feed into the forthcoming world rankings, where the two reputation indicators, research and postgraduate teaching, will account for 33 per cent of the total weighting.

The THE reputation rankings include only 100 universities. QS is now ranking close to 1,000 universities and provides scores for 500 of them including academic reputation and employer reputation.

The publication of these rankings has led to claims that British and American universities are performing well again after  a period of stress and difficulty. In recent years we have heard a great deal about the rise of Asia and the decline of the West. Now it seems that THE and QS are telling us that things are beginning to change.

The rise of Asia has perhaps been overblown, but if Asia is narrowly defined as Northeast Asia and Greater China then there is definitely something going on. Take a look at the record of Zhejiang University in the Leiden Ranking publications indicator. In 2006-09 Harvard produced a total of 27,422 papers and Zhejiang University 11,173. In the period 2013-16 the numbers were 33,045 for Harvard and 20,876 for Zhejiang. In seven years Zhejiang has gone from about 41% of Harvard's score to 63%. It is not impossible that Zhejiang will reach parity within two decades.
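As a quick back-of-the-envelope check, the sketch below recomputes those shares from the Leiden totals quoted above and naively extrapolates the two growth rates; the assumption that both rates simply continue is mine, not Leiden's.

```python
# Back-of-the-envelope look at the Harvard/Zhejiang publication gap,
# using the Leiden Ranking totals quoted above (2006-09 and 2013-16 windows).
harvard = {2009: 27422, 2016: 33045}
zhejiang = {2009: 11173, 2016: 20876}

years = 2016 - 2009
h_growth = (harvard[2016] / harvard[2009]) ** (1 / years)    # roughly 2.7% a year
z_growth = (zhejiang[2016] / zhejiang[2009]) ** (1 / years)  # roughly 9.3% a year

print(f"Zhejiang share of Harvard: {zhejiang[2009] / harvard[2009]:.0%} -> {zhejiang[2016] / harvard[2016]:.0%}")

# If (a big if) both growth rates were simply to continue:
h, z, year = harvard[2016], zhejiang[2016], 2016
while z < h:
    h, z, year = h * h_growth, z * z_growth, year + 1
print(f"parity on this naive projection: around {year}")
```

On that crude projection parity arrives well within the two decades suggested above, though projecting raw publication counts forward like this says nothing about quality.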

We are talking about quantity here. Reaching parity for research of the highest quality and the greatest impact will take longer but here too it seems likely that within a generation universities like Peking, Zhejiang, Fudan, KAIST and the National University of Singapore will catch up with and perhaps surpass the Ivy League, the Russell Group and the Group of Eight.

The scientific advance of China and its neighbours is confirmed by data from a variety of sources, including the deployment of supercomputers, the use of robots, and, just recently, the Chinese Academy of Sciences holding its place at the top of the Nature Index.

There are caveats. Plagiarism is a serious problem and the efficiency of Chinese research culture is undermined by cronyism and political conformity. But these are problems that are endemic, and perhaps worse, in Western universities.

So it might seem surprising that the two recent world rankings show that American and British universities are rising again. 

But perhaps it should not be too surprising. QS and THE emphasise reputation surveys, which have a weighting of 50% in the QS world rankings and 33% in THE's. There are signs that British and American universities and others in the Anglosphere are learning the reputation management game while universities in Asia are not so interested.

Take a look at the top fifty universities in the QS academic reputation indicator, which is supposed to be about the best universities for research. The countries represented are:
US 20
UK 7
Australia 5
Canada 3
Japan 2
Singapore 2
China 2
Germany 2.

There is one each for Switzerland, Hong Kong, South Korea, Mexico, Taiwan, France and Brazil.

The top fifty universities in the QS citations per faculty indicator, a measure of research excellence, are located in:
USA 20
China 4
Switzerland 4
Netherlands 3
India  2
Korea 2
Israel 2
Hong Kong 2
Australia 2.

There is one each from Saudi Arabia, Italy, Germany, UK, Sweden, Taiwan, Singapore and Belgium.

Measuring citations is a notoriously tricky business and probably some of the high flyers in the reputation charts are genuine local heroes little known to the rest of the world. There is also now a lot of professional advice available about reputation management for those with cash to spare. Even so it is striking that British, Australian, and Canadian universities do relatively well on reputation in the QS rankings while China, Switzerland, the Netherlands, India and Israel do relatively well for citations.

For leading British universities the mismatch is very substantial. According to the 2018-19 QS world rankings, Cambridge is 2nd for academic reputation but 71st for citations, Manchester is 33rd and 221st, King's College London 47th and 159th, Edinburgh 24th and 181st. It is not surprising that British universities should perform well in rankings where academic reputation alone carries a 40% weighting.

The THE reputation rankings have produced some good results for several US universities.
UCLA has risen from 13th to 9th
Cornell from 23rd to 18th
University of Washington from 34th to 28th
University of Illinois Urbana-Champaign from 36th to 32nd
Carnegie Mellon from 37th to 30th
Georgia Institute of Technology from 48th to 44th.
Some of this is probably the result of a change in the distribution of survey responses. I have already pointed out that the fate of Oxford in the THE survey rankings is tied to the percentages of responses from the arts and humanities. THE have reported that their survey this year had an increased number of responses from computer science and engineering and a reduced number from the social sciences and the humanities. Sure enough, Oxford has slipped slightly while LSE has fallen five places. 

The shift to computer science and engineering in the THE survey might explain the improved reputation of Georgia Tech and Carnegie Mellon. There is, I suspect, something else going on, and that is the growing obsession of some American universities with reputation management, public relations and rankings, including the hiring of professional consultants.

In contrast, Asian universities have not done so well in the THE reputation rankings.

University of Tokyo has fallen from 11th to 13th place
University of Kyoto from 25th to 27th
Osaka University from 51st to 81st
Tsinghua University is unchanged in 14th
Peking University is unchanged in 17th
Zhejiang University has fallen from the 51-60 band to 71-80
University of Hong Kong has fallen from 39th to 40th.

All but one of these US universities have fallen in the latest Nature Index: UCLA by 3.1%, University of Washington 1.7%, University of Illinois Urbana-Champaign 12%, Carnegie Mellon 4.8%, Georgia Tech 0.9%.

All but one of the Asian universities have risen in the Nature Index: Tokyo by 9.2%, Kyoto 15.1%, Tsinghua 9.5%, Peking 0.9%, Zhejiang 9.8%, Hong Kong 25.3%.

It looks as though Western and Asian universities are diverging. The former are focussed on branding, reputation, relaxing admission criteria, and searching for diversity. They are increasingly engaged with, or even obsessed with, the rankings.

Asian universities, especially in Greater China and Korea, are less concerned with rankings and public relations and more with academic excellence and research output and impact. 

As the university systems diverge it seems that two different sets of rankings are emerging to cater for the academic aspirations of different countries.












Wednesday, May 30, 2018

Why are US universities doing so well in the THE reputation rankings?

For the last couple of years the higher education media has tried to present any blip in the fortunes of UK universities as one of the malign effects of Brexit, whose toxic rays are unlimited by space, time or logic. Similarly, if anything unpleasant happens to US institutions, it is often linked to the evil spell of the great orange devil, who is scaring away international students, preventing the recruitment of the scientific elites of the world, or even being insufficiently credulous of the latest settled science.

So what is the explanation for the remarkable renaissance of US higher education apparently revealed by the THE reputation survey published today?

Is Trump working his magic to make American colleges great again?

UCLA is up four places, Carnegie Mellon seven, Cornell six, University of Washington six, Pennsylvania three. In contrast, several European and Asian institutions have fallen, University College London and the University of Kyoto by two places, Munich by seven, and Moscow State University by three.

In the previous post I noted that this year's survey had seen an increased response from engineering and computer science and a reduced one from the social sciences and the arts and humanities. As expected, LSE has tumbled five places and Oxford has fallen one place. Surprisingly, Caltech has fallen as well.

Some schools that are strong in engineering, such as Nanyang Technological University and Georgia Institute of Technology, have done well but I do not know if that is a full explanation for the success of US universities.

I suspect that US administrators have learned that influencing reputation is easier than maintaining scientific and intellectual standards and that a gap is emerging between perceptions and actual achievements.

It will be interesting to see if these results are confirmed by the reputation indicators included in the QS rankings, Best Global Universities, and the Round University Rankings.


Saturday, May 26, 2018

The THE reputation rankings

THE have just published details of their reputation rankings which will be published on May 30th, just ahead, no doubt coincidentally, of the QS World University Rankings.

The number of responses has gone down a bit, from 10,566 last year to 10,162, possibly reflecting growing survey fatigue among academics.

In surveys of this kind the distribution of responses is crucial. The more responses from engineers, the better for universities in Asia. The more from scholars in the humanities, the better for Western Europe. I have noted in a previous blog that the fortunes of Oxford in this ranking are tied to the percentage of responses from the arts and humanities.

This year there have been modest or small reductions in the percentage of responses from the clinical and health sciences, the life sciences, the social sciences, education and psychology, and large ones for business and economics and the arts and humanities.

The number of responses in engineering and computer science has increased considerably.

It is likely that this year places like Caltech and Nanyang Technological University will do better while Oxford and LSE will suffer. It will be interesting to see if THE claim that this is all the fault of Brexit, an anti-feminist reaction to Oxford's appointment of a female vice-chancellor or government Scrooges turning off the funding tap.

         

Distribution of responses by subject (%):

Subject                    2017    2018
Physical science           14.6    15.6
Clinical and health        14.5    13.2
Life sciences              13.3    12.8
Business and economics     13.1     9.0
Engineering                12.7    18.1
Arts and humanities        12.5     7.5
Social sciences             8.9     7.6
Computer science            4.2    10.4
Education                   2.6     2.5
Psychology                  2.6     2.3
Law                         0.9     1.0



Distribution of responses by region (%):

Region              2017    2018
North America         22      22
Asia Pacific          33      32
Western Europe        25      26
Eastern Europe        11      11
Latin America          5       5
Middle East            3       3
Africa                 2       2


Friday, May 11, 2018

Ranking Insights from Russia

The ranking industry is expanding and new rankings appear all the time. Most global rankings measure research publications and citations. Others try to add to the mix indicators that might have something to do with teaching and learning. There is now a ranking that tries to capture various third missions.

The Round University Rankings published in Russia are in the tradition of holistic rankings. They give a 40% weighting to research, 40% to teaching, 10% to international diversity and 10% to financial sustainability. Each group contains five equally weighted indicators. The data is derived from Clarivate Analytics, who also contribute to the US News Best Global Universities Rankings.
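As a rough illustration of that structure, here is a minimal sketch of how a 40/40/10/10 weighting over four groups of five equally weighted indicators produces an overall score; the indicator values are invented and RUR's own scaling and aggregation details may differ.

```python
# Sketch of an RUR-style composite: four groups weighted 40/40/10/10,
# each group the mean of five equally weighted indicator scores (0-100).
# The indicator scores below are invented for illustration.
group_weights = {
    "teaching": 0.40,
    "research": 0.40,
    "international diversity": 0.10,
    "financial sustainability": 0.10,
}

indicator_scores = {
    "teaching": [72, 65, 80, 58, 91],
    "research": [88, 70, 95, 60, 85],
    "international diversity": [40, 55, 62, 30, 45],
    "financial sustainability": [66, 71, 50, 48, 59],
}

overall = sum(group_weights[group] * sum(scores) / len(scores)
              for group, scores in indicator_scores.items())
print(f"overall score: {overall:.1f}")  # 71.6 on these invented numbers
```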

These rankings are similar to the THE rankings in that they attempt to assess quality rather than quantity but they have 20 indicators instead of 13 and assign sensible weightings. Unfortunately, they receive only a fraction of the attention given to the THE rankings.

They are, however, very valuable since they dig deeper into the data than other global rankings. They also show that there is a downside to measures of quality and that data submitted directly by institutions should be treated with caution and perhaps scepticism.

Here are the top universities for each of the RUR indicators.

Teaching
Academic staff per students: VIB (Flemish Institute of Biotechnology), Belgium
Academic staff per bachelor degrees awarded: University of Valladolid, Spain
Doctoral degrees per academic staff: Kurdistan University of Medical Science, Iran
Doctoral degrees per bachelor degrees awarded:  Jawaharlal Nehru University, India
World teaching reputation: Harvard University, USA.

Research
Citations per academic and research staff: Harvard
Doctoral degrees per admitted PhD: Al Farabi Kazakh National University
Normalised citation impact: Rockefeller University, USA
Share of international co-authored papers: Free University of Berlin
World research reputation: Harvard.

International diversity
Share of international academic staff: American University of Sharjah, UAE
Share of international students: American University of Sharjah
Share of international co-authored papers: Innopolis University, Russia
Reputation outside region: Voronezh State Technical University, Russia
International Level: EPF Lausanne, Switzerland.

Financial sustainability:
Institutional income per academic staff: Universidade Federal Do Ceara, Brazil
Institutional income per student: Rockefeller University
Papers per research income: Novosibirsk State University of Economics and Management, Russia
Research income per academic and research staff: Istanbul Technical University, Turkey
Research income per institutional income: A C Camargo Cancer Center, Brazil.

There are some surprising results here. The most obvious is Voronezh State Technical University which is first for reputation outside its region (Asia, Europe and so on), even though its overall scores for reputation and for international diversity are very low. The other top universities for this metric are just what you would expect, Harvard, MIT, Stanford, Oxford and so on. I wonder whether there is some sort of bug in the survey procedure, perhaps something like the university's supporters being assigned to Asia and therefore out of region. The university is also in second place in the world for papers per research income despite very low scores for the other research indicators.

There are other oddities such as Novosibirsk State University of Economics and Management placed first for papers per research income and Universidade Federal Do Ceara for institutional income per academic staff. These may result from anomalies in the procedures for reporting and analysing data, possibly including problems in collecting data on income and staff.

It also seems that medical schools and specialist or predominantly postgraduate institutions such as Rockefeller University, the Kurdistan University of Medical Science, Jawaharlal Nehru University and VIB have a big advantage with these indicators since they tend to have favourable faculty-student ratios, sometimes boosted by large numbers of clinical and research-only staff, and a large proportion of doctoral students.

Jawaharlal Nehru University is a mainly postgraduate university so a high placing for academic staff per bachelor degrees awarded is not unexpected although I am surprised that it is ahead of Yale and Princeton. I must admit that the third place here for the University of Baghdad needs some explanation.

The indicator doctoral degrees per admitted PhD might identify universities that do a good job of selection and training and get large numbers of doctoral candidates through the system. Or perhaps it identifies universities where doctoral programmes are so lacking in rigour that nearly everybody can get their degree once admitted. The top ten of this indicator includes De Montfort University, Shakarim University, Kingston University, and the University of Westminster, none of which are famous for research excellence across the range of disciplines.

Measures of international diversity have become a staple of global rankings since they are fairly easy to collect. The problem is that international orientation may have something to do with quality, but it may also simply be a necessary attribute of being in a small country next to larger countries with the same or similar language and culture. The top ten for the international students indicator includes the Central European University and the American University of Sharjah. For international faculty it includes the University of Macau and Qatar University.

To conclude, these indicators suggest that self submitted institutional data should be used sparingly and that data from third party sources may be preferable. Also, while ranking by quality instead of quantity is sometimes advisable it also means that anomalies and outliers are more likely to appear.







Friday, April 13, 2018

At last. A Ranking With Cambridge at the Bottom


Cambridge usually does well in national and global rankings. The most recent ARWU from Shanghai puts it in third place and, although it does less well in other rankings, it always seems to be in the top twenty. It has suffered at the hands of the citations indicator in the THE world rankings, which seems to think that Anglia Ruskin University, formerly the Cambridgeshire College of Arts and Technology, has a greater global research impact, but nobody takes that seriously.

So it is a surprise to find an article in the Guardian about a ranking from the Higher Education Policy Institute (HEPI) in the UK that actually puts Cambridge at the bottom and the University of Hull at the top. Near the bottom are others in the Russell Group: Oxford, Bristol and LSE.

At the top we find Edge Hill, Cardiff Metropolitan and, of course, Anglia Ruskin University.

The ranking was part of a report written for HEPI by Iain Martin, vice-chancellor of Anglia Ruskin University, that supposedly rates universities for fair access, that is, having a student intake that mirrors society as a whole. It compares the rate at which school leavers in each local authority area go on to higher education with the percentage admitted by specific universities. Universities rank highly if they draw students from areas where relatively few school leavers go to university. The rationale is the claim that learning outcomes are improved when people of diverse backgrounds study together. A toy version of this kind of comparison is sketched below.
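Here is a minimal sketch of such a comparison; the local authority participation rates and intake shares are invented, and HEPI's actual measure may be constructed differently.

```python
# Toy fair-access comparison: a university scores higher the more of its
# intake it draws from areas where few school leavers enter higher education.
# All figures are invented for illustration.
participation_rate = {"area_A": 0.20, "area_B": 0.35, "area_C": 0.55}  # share of school leavers entering HE

def fair_access_score(intake_share):
    """Weight each area's share of the intake by how far that area lags in HE participation."""
    return sum(share * (1 - participation_rate[area]) for area, share in intake_share.items())

broad_intake = {"area_A": 0.6, "area_B": 0.3, "area_C": 0.1}  # draws mainly on low-participation areas
selective = {"area_A": 0.1, "area_B": 0.2, "area_C": 0.7}     # draws mainly on high-participation areas

print(f"broad-intake university: {fair_access_score(broad_intake):.2f}")
print(f"selective university:    {fair_access_score(selective):.2f}")
```

On these invented figures the broad-intake institution scores higher, which is the pattern that puts Hull at the top and Cambridge at the bottom of the HEPI table.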

It is noticeable that there are several Scottish universities clustered at the bottom even though Scotland has a free tuition policy (not for the English, of course) that was supposed to guarantee fair access.

This ranking looks like an inversion of the ranking of UK universities according to average entry tariff, i.e. 'A' level grades, and a similar inversion of most global rankings based on research or reputation.

Cambridge and other Russell Group universities have been under increasing pressure to relax entry standards and indiscriminately recruit more low-income students and those from historically underrepresented groups. It seems that they are slowly giving way to the pressure and that, as academic standards erode, they will be gradually eclipsed by the rising universities of East Asia.





Tuesday, February 20, 2018

Is Erdogan Destroying Turkish Universities?


An article by Andrew Wilks in The National claims that the position of Turkish universities in the Times Higher Education (THE) world rankings, especially that of Middle East Technical University (METU), has been declining as a result of the crackdown by President Erdogan following the unsuccessful coup of July 2016.

He claims that Turkish universities are now sliding down the international rankings and that this is because of the decline of academic freedom, the dismissal or emigration of many academics, and a decline in their academic reputation.


'Turkish universities were once seen as a benchmark of the country’s progress, steadily climbing international rankings to compete with the world’s elite.
But since the introduction of emergency powers following a failed coup against President Recep Tayyip Erdogan in July 2016, the government’s grip on academic freedom has tightened.
A slide in the nation's academic reputation is now indisputable. Three years ago, six Turkish institutions [actually five] were in the Times Higher Education’s global top 300. Ankara's Middle East Technical University was ranked 85th. Now, with Oxford and Cambridge leading the standings, no Turkish university sits in the top 300.
Experts say at least part of the reason is that since the coup attempt more than 5,800 academics have been dismissed from their jobs. Mr Erdogan has also increased his leeway in selecting university rectors.
Gulcin Ozkan, formerly of Middle East Technical University but now teaching economics at York University in Britain, said the wave of dismissals and arrests has "forced some of the best brains out of the country".'
I have no great regard for Erdogan but in this case he is entirely innocent.

There has been a massive decline in METU's position in the THE rankings since 2014 but that is entirely the fault of THE's methodology. 

In the world rankings of 2014-15, published in 2014, METU was 85th in the world, with a whopping score of 92.0 for citations, which carries an official weighting of 30%. That score was the result of METU's participation in the Large Hadron Collider (LHC) project, which produces papers with hundreds or thousands of authors and hundreds or thousands of citations. In 2014 THE counted every single contributor as receiving all of the citations. Added to this was a regional modification that boosted the scores of universities located in countries with a low citation impact.

In 2015, THE revamped its methodology by not counting the citations to these mega-papers and by applying the regional modification to only half of the research impact score.
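A stylised sketch of how those two changes interact is given below; the numbers and the exact form of the country adjustment are invented stand-ins, not THE's published calculation.

```python
# Stylised sketch of the 2015 changes: mega-paper citations are no longer
# counted, and the country adjustment is applied to only half the indicator.
# All numbers, and the form of the adjustment, are invented for illustration.

def citation_score(base_impact, mega_bonus, country_factor, count_mega, half_adjustment):
    impact = base_impact + (mega_bonus if count_mega else 0.0)
    if half_adjustment:
        return 0.5 * impact + 0.5 * impact * country_factor
    return impact * country_factor

base_impact = 20.0    # score from a university's ordinary papers (invented)
mega_bonus = 40.0     # extra score from whole-counted LHC-style mega-papers (invented)
country_factor = 1.5  # boost for a country with low average citation impact (invented)

old_style = citation_score(base_impact, mega_bonus, country_factor, count_mega=True, half_adjustment=False)
new_style = citation_score(base_impact, mega_bonus, country_factor, count_mega=False, half_adjustment=True)
print(f"2014-style score: {old_style:.1f}")  # 90.0
print(f"2015-style score: {new_style:.1f}")  # 25.0
```

The point is not the particular numbers but the shape of the change: remove the mega-paper windfall and halve the country bonus and a score in the nineties can collapse to the twenties, which is roughly what happened to METU.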

As a result, in the 2015-16 rankings METU crashed to the 501-600 band, with a score for citations of only 28.8. Other Turkish universities had also been involved in the LHC project and benefited from the citations bonus and they too plummeted. There was now only one Turkish university in the THE top 300.

The exalted position of METU in the THE 2014-15 rankings was the result of THE's odd methodology, and its spectacular tumble was the result of changing that methodology. In other popular rankings METU seems to be slipping a bit, but it never goes as high as it did in THE in 2014 or as low as in 2015.

In the QS world rankings METU was in the 401-410 band in 2014-15 and had fallen to the 471-480 band by 2017-18.

The Russian Round University Rankings had it at 375 in 2014 and 407 in the most recent edition. The US News Best Global Universities placed it 314th last year.

Erdogan had nothing to do with it.















Saturday, December 16, 2017

Measuring graduate employability; two rankings

Global university rankings are now well into their second decade. Since 2003, when the first Shanghai rankings appeared, there has been a steady growth of global and regional rankings. At the moment most global rankings are of two kinds: those that focus entirely or almost entirely on research, and those, such as the Russian Round University Rankings, Times Higher Education (THE) and Quacquarelli Symonds (QS), that claim to also measure teaching, learning or graduate quality in some way, although even those are biased towards research when you scratch the surface a little.

The ranking industry has become adept at measuring research productivity and quality in various ways. But the assessment of undergraduate teaching and learning is another matter.

Several ranking organisations use faculty-student ratio as a proxy for quality of teaching, which in turn is assumed to have some connection with something that happens to students during their programmes. THE also count institutional income, research income and income from industry, again assuming that there is a significant association with academic excellence. Indicators like these are usually based on data supplied by the institutions themselves. For examples of problems here see an article by Alex Usher and a reply by Phil Baty.

An attempt to get at student quality is provided by the CWUR rankings, now based in the UAE, which count alumni who win international awards or who are CEOs of major companies. But obviously this is relevant only for a very small number of universities. A new pilot ranking from Moscow also counts international awards.

The only attempt by the well-known rankers to measure student quality in a way that is relevant to most institutions is the survey of employers in the QS world and regional rankings. There are some obvious difficulties here. QS gets respondents from a variety of channels and this may allow some universities to influence the survey. In recent years some Latin American universities have done much better on this indicator than on any other.

THE now publish a global employability ranking which is conducted by two European firms, Trendence and Emerging. This is based on two surveys of recruiters in Argentina, Australia, Austria, Brazil, Canada, China, Germany, France, India, Israel, Italy, Japan, Mexico, Netherlands, Singapore, Spain, South Africa, South Korea, Turkey, UAE, UK, and USA. There were two panels with a total of over 6,000 respondents.

A global survey that does not include Chile, Sweden, Egypt, Nigeria, Saudi Arabia, Russia, Pakistan, Indonesia, Bangladesh, Poland, Malaysia or Taiwan can hardly claim to be representative of international employers. This limited coverage may explain some oddities of the rankings, such as the high places of the American University of Dubai and the National Autonomous University of Mexico.

The first five places in these rankings are quite similar to the THE world rankings: Caltech, Harvard, Columbia, MIT, Cambridge. But there are some significant differences after that, and some substantial changes since last year. Here Columbia, 14th in the world rankings, is in third place, up from 12th last year. Boston University is 6th here but 70th in the world rankings. Tokyo Institute of Technology, in 19th place, is in the 251-300 band in the world rankings. CentraleSupelec is 41st but in the 401-500 group in the world rankings.

These rankings are useful only for a small minority of universities, stakeholders and students. Only 150 schools are ranked and only a small proportion of the world's employers consulted.

QS have also released their global employability rankings with 500 universities. These combine the employer reputation survey used in their world rankings with other indicators: alumni outcomes, based on lists of high achievers; partnerships with employers, that is research collaboration noted in the Scopus database; employer-student connections, that is employers actively present on campus; and graduate employment rate. There seems to be a close association, at least at the top, between overall scores, employer reputation and alumni outcomes. Overall the top three are Stanford, UCLA and Harvard. For employer reputation they are Cambridge, Oxford and Harvard, and for alumni outcomes Harvard, Stanford and Oxford.

The other indicators are a different matter. For employer-student connections the top three are Huazhong University of Science and Technology, Arizona State University, and New York University; in fact seven out of the top ten on this measure are Chinese. For graduate employment rate they are Politecnico di Torino, Moscow State Institute of International Relations, and Sungkyunkwan University, and for partnerships with employers Stanford, Surrey and Politecnico di Milano. When the front runners in the indicators are so different one has to wonder about their validity.

There are some very substantial differences in the ranks given to various universities in these rankings. Caltech is first in the Emerging-Trendence rankings and 73rd in QS. Hong Kong University of Science and Technology is 12th in Emerging-Trendence but not ranked at all by QS. The University of Sydney is 4th in QS and 48th in Emerging-Trendence. The American University of Dubai is in QS's 301-500 band but 138th for Emerging-Trendence.

The rankings published by THE could be of some value to those students contemplating careers with the leading companies in the richest countries.

The QS rankings may be more helpful for those students or stakeholders looking at universities outside the very top of the global elite. Even so QS have ranked only a fraction of the world's universities.

It still seems that the way forward in the assessment of graduate outcomes and employability is through standardised testing along the lines of AHELO or the Collegiate Learning Assessment.




Wednesday, July 19, 2017

Comments on an Article by Brian Leiter

Global university rankings are now nearly a decade and a half old. The Shanghai rankings (Academic Ranking of World Universities or ARWU) began in 2003, followed a year later by Webometrics and the THES-QS rankings which, after an unpleasant divorce, became the Times Higher Education (THE) and the Quacquarelli Symonds (QS) world rankings. Since then the number of rankings with a variety of audiences and methodologies has expanded.

We now have several research-based rankings, University Ranking by Academic Performance (URAP) from Turkey, the National Taiwan University Rankings, Best Global Universities from US News, and Leiden Ranking, as well as rankings that include some attempt to assess and compare something other than research, the Round University Rankings from Russia and U-Multirank from the European Union. And, of course, we also have subject rankings, regional rankings, even age group rankings.

It is interesting that some of these rankings have developed beyond the original founders of global rankings. Leiden Ranking is now the gold standard for the analysis of publications and citations. The Russian rankings use the same Web of Science database that THE did until 2014, and they include 12 of the 13 indicators used by THE plus another eight in a more sensible and transparent arrangement. However, both of these receive only a fraction of the attention given to the THE rankings.

The research rankings from Turkey and Taiwan are similar to the Shanghai rankings but without the elderly or long departed Fields and Nobel award winners and with a more coherent methodology. U-Multirank is almost alone in trying to get at things that might be of interest to prospective undergraduate students.

It is regrettable that an article by Professor Brian Leiter of the University of Chicago in the Chronicle of Higher Education, 'Academic Ethics: To Rank or Not to Rank', ignores such developments and mentions only the original “Big Three”, Shanghai, QS and THE. This is perhaps forgivable since the establishment media, including THE and the Chronicle, and leading state and academic bureaucrats have until recently paid very little attention to innovative developments in university ranking. Leiter attacks the QS rankings and proposes that they should be boycotted while trying to improve the THE rankings.

It is a little odd that Leiter should be so caustic, not entirely without justification, about QS while apparently being unaware of similar or greater problems with THE.

He begins by saying that QS stands for “quirky silliness”. I would not disagree with that although in recent years QS has been getting less silly. I have been as sarcastic as anyone about the failings of QS: see here and here for an amusing commentary.

But the suggestion that QS is uniquely bad in contrast to THE is way off the target. There are many issues with the QS methodology, especially with its employer and academic surveys, and it has often announced placings that seem very questionable, such as Nanyang Technological University (NTU) ahead of Princeton and Yale or the University of Buenos Aires in the world top 100, largely as a result of a suspiciously good performance in the survey indicators. The oddities of the QS rankings are, however, no worse than some of the absurdities that THE has served up in their world and regional rankings. We have had places like Cadi Ayyad University in Marrakesh, Morocco, Middle East Technical University in Turkey, Federico Santa Maria Technical University in Chile, Alexandria University and Veltech University in India rise to ludicrously high places, sometimes just for a year or two, as the result of a few papers or even a single highly cited author.

I am not entirely persuaded that NTU deserves its top 12 placing in the QS rankings. You can see here QS's unconvincing reply to a question that I provided. QS claims that NTU's excellence is shown by its success in attracting foreign faculty, students and collaborators, but when you are in a country where people show their passports to drive to the dentist, being international is no great accomplishment. Even so, it is evidently world class as far as engineering and computer science are concerned and it is not impossible that it could reach an undisputed overall top ten or twenty ranking in the next decade.

While the THE top ten or twenty or even fifty looks quite reasonable, apart from Oxford in first place, there are many anomalies as soon as we start breaking the rankings apart by country or indicator, and THE has pushed some very weird data in recent years. Look at these places supposed to be regional or international centers of across-the-board research excellence as measured by citations: St George's University of London, Brandeis University, the Free University of Bozen-Bolzano, King Abdulaziz University, the University of Iceland, Veltech University. If QS is silly, what are we to call a ranking where Anglia Ruskin University is supposed to have a greater research impact than Chicago, Cambridge or Tsinghua?

Leiter starts his article by pointing out that the QS academic survey is largely driven by the geographical distribution of its respondents and by the halo effect. This is very probably true, and to that I would add that a lot of the responses to academic surveys of this kind are likely driven by simple self-interest, academics voting for their alma mater or current employer. QS does not allow respondents to vote for the latter, but they can vote for the former and also for grant providers or collaborators.

He says that “QS does not, however, disclose the geographic distribution of its survey respondents, so the extent of the distorting effect cannot be determined". This is not true of the overall survey. QS does in fact give very detailed figures about the origin of its respondents and there is good evidence here of probable distorting effects. There are, for example, more responses from Taiwan than from Mainland China, and almost as many from Malaysia as from Russia. QS does not, however, go down to subject level when listing geographic distribution.

He then refers to the case of University College Cork (UCC) asking faculty to solicit friends in other institutions to vote for UCC. This is definitely a bad practice, but it was in violation of QS guidelines and QS have investigated. I do not know what came of the investigation but it is worth noting that the message would not have been an issue if it had referred to the THE survey.

On balance, I would agree that THE's survey methodology is less dubious than QS's and less likely to be influenced by energetic PR campaigns. It would certainly be a good idea if the weighting of the QS survey were reduced and if there were more rigorous screening and classification of potential respondents.

But I think we also have to bear in mind that QS does prohibit respondents from voting for their own universities and it does average results out over a five-year period (formerly three years).

It is interesting that while THE does not usually combine and average survey results it did so in the 2016-17 world rankings, combining the 2015 and 2016 survey results. This was, I suspect, because of a substantial drop in 2016 in the percentage of respondents from the arts and humanities that would, if unadjusted, have caused a serious problem for UK universities, especially those in the Russell Group.

Leiter then goes on to condemn QS for its dubious business practices. He reports that THE dropped QS because of its dubious practices. That is what THE says but it is widely rumoured within the rankings industry that THE was also interested in the financial advantages of a direct partnership with Thomson Reuters rather than getting data from QS.

He also refers to QS's hosting of a series of “World Class events” where world university leaders pay $950 for “seminar, dinners, coffee breaks” and “learn best practice for branding and marketing your institution through case studies and expert knowledge”, and to the QS Stars plan, where universities pay to be audited by QS in return for stars that they can use for promotion and advertising. I would add to his criticism that the Stars program has apparently undergone a typical “grade inflation”, with the number of five-star universities increasing all the time.

Also, QS offers specific consulting services and has a large number of clients from around the world, although there are many more from Australia and Indonesia than from Canada and the US. Of the three from the US, one is MIT, which has been number one in the QS world rankings since 2012, a position it probably achieved after a change in the way in which faculty were classified.

It would, however, be misleading to suggest that THE is any better in this respect. Since 2014 it has launched a serious and unapologetic “monetisation of data” program.

There are events such as the forthcoming world "academic summit" where, for 1,199 GBP (standard university) or 2,200 GBP (corporate), delegates can get “Exclusive insight into the 2017 Times Higher Education World University Rankings at the official launch and rankings masterclass”, plus a “prestigious gala dinner, drinks reception and other networking events”. THE also provides a variety of benchmarking and performance analysis services, branding, advertising and reputation management campaigns, and a range of silver and gold profiles, including adverts and sponsored supplements. THE's data clients include some illustrious names like the National University of Singapore and Trinity College Dublin plus some less well-known places such as Federico Santa Maria Technical University, Orebro University, King Abdulaziz University, National Research Nuclear University MEPhI Moscow, and Charles Darwin University.

Among THE’s activities are regional events that promise “partnership opportunities for global thought leaders” and where rankings like “the WUR are presented at these events with our award-winning data team on hand to explain them, allowing institutions better understanding of their findings”.

At some of these summits the rankings presented are trimmed and tweaked and somehow the hosts emerge in a favourable light. In February 2015, for example, THE held a Middle East and North Africa (MENA) summit that included a “snapshot ranking” that put Texas A and M University Qatar, a branch campus that offers nothing but engineering courses, in first place and Qatar University in fourth. The ranking consisted of precisely one indicator out of the 13 that make up THE’s world university rankings, field and year normalised citations. United Arab Emirates University (UAEU) was 11th and the American University of Sharjah in the UAE 14th.  

The next MENA summit was held in January 2016 in Al Ain in the UAE. There was no snapshot this time and the methodology for the MENA rankings included all 13 indicators from THE's world rankings. Host country universities were now in fifth (UAEU) and eighth place (American University of Sharjah). Texas A and M Qatar was not ranked and Qatar University fell to sixth place.

Something similar happened in Africa. In 2015, THE went to the University of Johannesburg for a summit that brought together “outstanding global thought leaders from industry, government, higher education and research” and which unveiled THE's Africa ranking, based on citations (with the innovation of fractional counting), that put the host university in ninth place and the University of Ghana in twelfth.

In 2016 the show moved on to the University of Ghana where another ranking was produced based on all the 13 world ranking indicators. This time the University of Johannesburg did not take part and the University of Ghana went from 12th place to 7th.

I may have missed something but so far I see no sign of THE Africa or MENA summits planned for 2017. If so, then African and MENA university leaders are to be congratulated for a very healthy scepticism.

To be fair, THE does not seem to have done any methodological tweaking for this year’s Asian, Asia Pacific and Latin American rankings.

Leiter concludes that American academics should boycott the QS survey but not THE's and that they should lobby THE to improve its survey practices. That, I suspect, is pretty much a nonstarter. QS has never had much of a presence in the US anyway, and THE is unlikely to change significantly as long as its commercial dominance goes unchallenged and as long as scholars and administrators fail to see through its PR wizardry. It would be better for everybody to start looking beyond the "Big Three" rankings.