
Friday, May 11, 2018

Ranking Insights from Russia

The ranking industry is expanding and new rankings appear all the time. Most global rankings measure research publications and citations. Others try to add to the mix indicators that might have something to do with teaching and learning. There is now a ranking that tries to capture various third missions.

The Round University Rankings, published in Russia, are in the tradition of holistic rankings. They give a 40% weighting to research, 40% to teaching, 10% to international diversity and 10% to financial sustainability. Each group contains five equally weighted indicators. The data are derived from Clarivate Analytics, which also contributes to the US News Best Global Universities Rankings.
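As a rough illustration, the published weights imply a composite score along the following lines. This is only a minimal sketch: the indicator scores are invented and RUR's actual normalisation is not reproduced here.

```python
# Sketch of how RUR-style group weights could combine into an overall score.
# Assumption: each of the four groups averages its five equally weighted
# indicators, which are taken here as already normalised to a 0-100 scale.

GROUP_WEIGHTS = {
    "teaching": 0.40,
    "research": 0.40,
    "international_diversity": 0.10,
    "financial_sustainability": 0.10,
}

def overall_score(indicator_scores: dict[str, list[float]]) -> float:
    """indicator_scores maps each group to its five indicator scores (0-100)."""
    total = 0.0
    for group, weight in GROUP_WEIGHTS.items():
        scores = indicator_scores[group]
        group_score = sum(scores) / len(scores)  # five equally weighted indicators
        total += weight * group_score
    return total

# A hypothetical university with made-up indicator scores:
example = {
    "teaching": [70, 65, 80, 75, 60],
    "research": [85, 90, 88, 70, 92],
    "international_diversity": [40, 55, 60, 50, 45],
    "financial_sustainability": [66, 71, 58, 62, 69],
}
print(round(overall_score(example), 1))  # weighted composite on the same 0-100 scale
```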

These rankings are similar to the THE rankings in that they attempt to assess quality rather than quantity but they have 20 indicators instead of 13 and assign sensible weightings. Unfortunately, they receive only a fraction of the attention given to the THE rankings.

They are, however, very valuable since they dig deeper into the data than other global rankings. They also show that there is a downside to measures of quality and that data submitted directly by institutions should be treated with caution and perhaps scepticism.

Here are the top universities for each of the RUR indicators.

Teaching
Academic staff per students: VIB (Flemish Institute of Biotechnology), Belgium
Academic staff per bachelor degrees awarded: University of Valladolid, Spain
Doctoral degrees per academic staff: Kurdistan University of Medical Science, Iran
Doctoral degrees per bachelor degrees awarded: Jawaharlal Nehru University, India
World teaching reputation: Harvard University, USA.

Research
Citations per academic and research staff: Harvard
Doctoral degrees per admitted PhD: Al Farabi Kazakh National University
Normalised citation impact: Rockefeller University, USA
Share of international co-authored papers: Free University of Berlin
World research reputation: Harvard.

International diversity
Share of international academic staff: American University of Sharjah, UAE
Share of international students: American University of Sharjah
Share of international co-authored papers: Innopolis University, Russia
Reputation outside region: Voronezh State Technical University, Russia
International Level: EPF Lausanne, Switzerland.

Financial sustainability:
Institutional income per academic staff: Universidade Federal Do Ceara, Brazil
Institutional income per student: Rockefeller University
Papers per research income: Novosibirsk State University of Economics and Management, Russia
Research income per academic and research staff: Istanbul Technical University, Turkey
Research income per institutional income: A C Camargo Cancer Center, Brazil.

There are some surprising results here. The most obvious is Voronezh State Technical University which is first for reputation outside its region (Asia, Europe and so on), even though its overall scores for reputation and for international diversity are very low. The other top universities for this metric are just what you would expect, Harvard, MIT, Stanford, Oxford and so on. I wonder whether there is some sort of bug in the survey procedure, perhaps something like the university's supporters being assigned to Asia and therefore out of region. The university is also in second place in the world for papers per research income despite very low scores for the other research indicators.

There are other oddities such as Novosibirsk State University of Economics and Management placed first for papers per research income and Universidade Federal Do Ceara for institutional income per academic staff. These may result from anomalies in the procedures for reporting and analysing data, possibly including problems in collecting data on income and staff.

It also seems that medical schools and specialist or predominantly postgraduate institutions such as Rockefeller University, the Kurdistan University of Medical Science, Jawaharlal Nehru University and VIB have a big advantage with these indicators since they tend to have favourable faculty student ratios, sometimes boosted by large numbers of clinical and research-only staff, and a large proportion of doctoral students.

Jawaharlal Nehru University is a mainly postgraduate university so a high placing for academic staff per bachelor degrees awarded is not unexpected although I am surprised that it is ahead of Yale and Princeton. I must admit that the third place here for the University of Baghdad needs some explanation.

The indicator doctoral degrees per admitted PhD might identify universities that do a good job of selection and training and get large numbers of doctoral candidates through the system. Or perhaps it identifies universities where doctoral programmes are so lacking in rigour that nearly everybody can get their degree once admitted. The top ten of this indicator includes De Montfort University, Shakarim University, Kingston University, and the University of Westminster, none of which are famous for research excellence across the range of disciplines.

Measures of international diversity have become a staple of global rankings since they are fairly easy to collect. The problem is that international orientation may have something to do with quality but it may also simply be a necessary attribute of being in a small country next to larger countries with the same or similar language and culture. The top ten for the international student indicator includes the Central European University and the American University of Sharjah. For international faculty it includes the University of Macau and Qatar University.

To conclude, these indicators suggest that self submitted institutional data should be used sparingly and that data from third party sources may be preferable. Also, while ranking by quality instead of quantity is sometimes advisable it also means that anomalies and outliers are more likely to appear.







Friday, April 13, 2018

At last. A Ranking With Cambridge at the Bottom


Cambridge usually does well in national and global rankings. The most recent ARWU from Shanghai puts it in third place and although it does less well in other rankings it always seems to be in the top twenty. It has suffered at the hands of the citations indicator in the THE world rankings, which seems to think that Anglia Ruskin University, formerly the Cambridgeshire College of Arts and Technology, has a greater global research impact, but nobody takes that seriously.

So it is a surprise to find an article in the Guardian about a ranking from the Higher Education Policy Institute (HEPI) in the UK that actually puts Cambridge at the bottom and the University of Hull at the top. Near the bottom are other members of the Russell Group: Oxford, Bristol and LSE.

At the top we find Edge Hill, Cardiff Metropolitan and, of course, Anglia Ruskin Universities.

The ranking was part of a report written for HEPI by Iain Martin, vice-chancellor of Anglia Ruskin University, that supposedly rates universities for fair access, that is having a student intake that mirrors society as a whole. It compares the percentage of participation in higher education of school leavers in local authority areas with the percentage admitted by specific universities. Universities have a high rank if they draw students from areas where relatively few school leavers go to university. The rationale is the claim that learning outcomes are improved when people of diverse backgrounds study together.

It is noticeable that there are several Scottish universities clustered at the bottom even though Scotland has a free tuition policy (not for the English of course) that was supposed to guarantee fair access.

This ranking looks like an inversion of the ranking of UK universities according to average entry tariff, ie 'A' level grades, and a similar inversion of most global rankings based on research or reputation.

Cambridge and other Russell Group universities have been under increasing pressure to relax entry standards and indiscriminately  recruit more low income students and those from historically unrepresented groups. It seems that they are slowly giving way to the pressure and that as academic standards erode they will be gradually eclipsed by the rising universities of East Asia.





Tuesday, February 20, 2018

Is Erdogan Destroying Turkish Universities?


An article by Andrew Wilks in The National claims that the position of Turkish universities in the Times Higher Education (THE) world rankings, especially that of Middle East Technical University (METU) has been declining as a result of the crackdown by president Erdogan following the unsuccessful coup of July 2016.

He claims that Turkish universities are now sliding down the international rankings and that this is because of the decline of academic freedom, the dismissal or emigration of many academics and a decline in their academic reputation.


'Turkish universities were once seen as a benchmark of the country’s progress, steadily climbing international rankings to compete with the world’s elite.
But since the introduction of emergency powers following a failed coup against President Recep Tayyip Erdogan in July 2016, the government’s grip on academic freedom has tightened.
A slide in the nation's academic reputation is now indisputable. Three years ago, six Turkish institutions [actually five] were in the Times Higher Education’s global top 300. Ankara's Middle East Technical University was ranked 85th. Now, with Oxford and Cambridge leading the standings, no Turkish university sits in the top 300.
Experts say at least part of the reason is that since the coup attempt more than 5,800 academics have been dismissed from their jobs. Mr Erdogan has also increased his leeway in selecting university rectors.
Gulcin Ozkan, formerly of Middle East Technical University but now teaching economics at York University in Britain, said the wave of dismissals and arrests has "forced some of the best brains out of the country".'
I have no great regard for Erdogan but in this case he is entirely innocent.

There has been a massive decline in METU's position in the THE rankings since 2014 but that is entirely the fault of THE's methodology. 

In the world rankings of 2014-15, published in 2014, METU was 85th in the world, with a whopping score of 92.0 for citations, which carries an official weighting of 30%. That score was the result of METU's participation in the Large Hadron Collider (LHC) project, which produces papers with hundreds or thousands of authors and hundreds or thousands of citations. In 2014 THE counted every single contributor as receiving all of the citations. Added to this was a regional modification that boosted the scores of universities located in countries with a low citations impact score.

In 2015, THE revamped its methodology by not counting the citations to these mega-papers and by applying the regional modification to only half of the research impact score.
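A toy example shows why this mattered so much. Under the old approach a single LHC-style paper handed every participating institution the paper's entire citation count; once such mega-papers are left out, the windfall disappears. The figures below are invented for illustration and are not METU's actual data.

```python
# Toy illustration of full counting versus excluding multi-author "mega-papers".
# All numbers are invented; they are not METU's real publication record.

papers = [
    # (citations, number of authors, is the university among the authors?)
    (4000, 2900, True),   # an LHC-style paper with thousands of authors
    (12, 3, True),
    (5, 1, True),
    (30, 4, True),
]

# Pre-2015 style: a participating university is credited with all citations
# to every paper it appears on, however many co-authors there are.
full_counting = sum(c for c, n_authors, ours in papers if ours)

# Post-2015 style (roughly): papers with very large author lists are simply
# excluded from the citation calculation.
AUTHOR_CAP = 1000  # assumed threshold for a "mega-paper"
excluding_mega_papers = sum(
    c for c, n_authors, ours in papers if ours and n_authors <= AUTHOR_CAP
)

print(full_counting)          # 4047 citations credited
print(excluding_mega_papers)  # 47 citations credited
```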

As a result, in the 2015-16 rankings METU crashed to the 501-600 band, with a score for citations of only 28.8. Other Turkish universities had also been involved in the LHC project and benefited from the citations bonus and they too plummeted. There was now only one Turkish university in the THE top 300.

The exalted position of METU in the THE 2014-15 rankings was the result of THE's odd methodology and its spectacular tumble was the result of changing that methodology. In other popular rankings METU seems to be slipping a bit but it never goes as high as in THE in 2014 or as low as in 2015.

In the QS world rankings for 2014-15 METU was in the 401-410 band; by 2017-18 it had fallen to the 471-480 band.

The Russian Round University Rankings had it at 375 in 2014 and 407 in 2017. The US News Best Global Universities placed it 314th last year.

Erdogan had nothing to do with it.


Saturday, December 16, 2017

Measuring graduate employability: two rankings

Global university rankings are now well into their second decade. Since 2003, when the first Shanghai rankings appeared, there has been a steady growth of global and regional rankings. At the moment most global rankings are of two kinds, those that focus entirely or almost entirely on research and those such as the Russian Round Rankings, Times Higher Education (THE) and Quacquarelli Symonds (QS) that claim to also measure teaching, learning or graduate quality in some way, although even those are biased towards research when you scratch the surface a little.

The ranking industry has become adept at measuring research productivity and quality in various ways. But the assessment of undergraduate teaching and learning is another matter.

Several ranking organisations use faculty student ratio as a proxy for quality of teaching, which in turn is assumed to have some connection with something that happens to students during their programmes. THE also count institutional income, research income and income from industry, again assuming that there is a significant association with academic excellence. Indicators like these are usually based on data supplied by the institutions themselves. For examples of the problems here see an article by Alex Usher and a reply by Phil Baty.

An attempt to get at student quality is provided by the CWUR rankings, now based in the UAE, which count alumni who win international awards or who are CEOs of major companies. But obviously this is relevant only for a very small number of universities. A new pilot ranking from Moscow also counts international awards.

The only attempt to measure student quality by the well-known rankers that is relevant to most institutions is the survey of employers in the QS world and regional rankings. There are some obvious difficulties here. QS gets respondents from a variety of channels and this may allow some universities to influence the survey. In recent years some Latin American universities have done much better on this indicator than on any other.

THE now publish a global employability ranking which is conducted by two European firms, Trendence and Emerging. This is based on two surveys of recruiters in Argentina, Australia, Austria, Brazil, Canada, China, Germany, France, India, Israel, Italy, Japan, Mexico, Netherlands, Singapore, Spain, South Africa, South Korea, Turkey, UAE, UK, and USA. There were two panels with a total of over 6,000 respondents.

A global survey that does not include Chile, Sweden, Egypt, Nigeria, Saudi Arabia, Russia, Pakistan, Indonesia, Bangladesh, Poland, Malaysia or Taiwan can hardly claim to be representative of international employers. This limited representation may explain some oddities of the rankings such as the high places of the American University of Dubai and the National Autonomous University of Mexico.

The first five places in these rankings are quite similar to the THE world rankings: Caltech, Harvard, Columbia, MIT, Cambridge. But there are some significant differences after that and some substantial changes since last year. Here Columbia, 14th in the world rankings, is in third place, up from 12th last year. Boston University is 6th here but 70th in the world rankings. Tokyo Institute of Technology in 19th place is in the 251-300 band in the world rankings. CentraleSupelec is 41st here but in the 401-500 group in the world rankings.

These rankings are useful only for a small minority of universities, stakeholders and students. Only 150 schools are ranked and only a small proportion of the world's employers consulted.

QS have also released their global employability rankings with 500 universities. These combine the employer reputation survey used in their world rankings with other indicators: alumni outcomes, based on lists of high achievers; partnerships with employers, that is research collaboration noted in the Scopus database; employer-student connections, that is employers actively present on campus; and graduate employment rate. There seems to be a close association, at least at the top, between overall scores, employer reputation and alumni outcomes. Overall the top three are Stanford, UCLA and Harvard. For employer reputation they are Cambridge, Oxford and Harvard, and for alumni outcomes Harvard, Stanford and Oxford.

The other indicators are a different matter. For employer-student connections the top three are Huazhong University of Science and Technology, Arizona State University, and New York University. In fact seven out of the top ten on this measure are Chinese. For graduate employment rate they are Politecnico di Torino, Moscow State Institute of International Relations, and Sungkyunkwan University, and for partnership with employers Stanford, Surrey and Politecnico di Milano. When the front runners in these indicators are so different one has to wonder about their validity.

There are some very substantial differences in the ranks given to various universities in these rankings. Caltech is first in the Emerging-Trendence rankings and 73rd in QS. Hong Kong University of Science and Technology is 12th in Emerging-Trendence but not ranked at all by QS. The University of Sydney is 4th in QS and 48th in Emerging-Trendence. The American University of Dubai is in QS's 301-500 band but 138th for Emerging-Trendence.

The rankings published by THE could be of some value to those students contemplating careers with the leading companies in the richest countries.

The QS rankings may be more helpful for those students or stakeholders looking at universities outside the very top of the global elite. Even so QS have ranked only a fraction of the world's universities.

It still seems that the way forward in the assessment of graduate outcomes and employability is through standardised testing along the lines of AHELO or the Collegiate Learning Assessment.




Wednesday, July 19, 2017

Comments on an Article by Brian Leiter

Global university rankings are now nearly a decade and a half old. The Shanghai rankings (Academic Ranking of World Universities or ARWU) began in 2003, followed a year later by Webometrics and the THES-QS rankings which, after an unpleasant divorce, became the Times Higher Education (THE) and the Quacquarelli Symonds (QS) world rankings. Since then the number of rankings with a variety of audiences and methodologies has expanded.

We now have several research-based rankings, University Ranking by Academic Performance (URAP) from Turkey, the National Taiwan University Rankings, Best Global Universities from US News, and the Leiden Ranking, as well as rankings that include some attempt to assess and compare something other than research, the Round University Rankings from Russia and U-Multirank from the European Union. And, of course, we also have subject rankings, regional rankings, even age group rankings.

It is interesting that some of these rankings have developed beyond the original founders of global rankings. Leiden Ranking is now the gold standard for the analysis of publications and citations. The Russian rankings use the same Web of Science database that THE did until 2014, and they include 12 of the 13 indicators used by THE plus another eight, in a more sensible and transparent arrangement. However, both of these receive only a fraction of the attention given to the THE rankings.

The research rankings from Turkey and Taiwan are similar to the Shanghai rankings but without the elderly or long departed Fields and Nobel award winners and with a more coherent methodology. U-Multirank is almost alone in trying to get at things that might be of interest to prospective undergraduate students.

It is regrettable that an article by Professor Brian Leiter of the University of Chicago in the Chronicle of Higher Education, 'Academic Ethics: To Rank or Not to Rank', ignores such developments and mentions only the original “Big Three”, Shanghai, QS and THE. This is perhaps forgivable since the establishment media, including THE and the Chronicle, and leading state and academic bureaucrats have until recently paid very little attention to innovative developments in university ranking. Leiter attacks the QS rankings and proposes that they should be boycotted while trying to improve the THE rankings.

It is a little odd that Leiter should be so caustic, not entirely without justification, about QS while apparently being unaware of similar or greater problems with THE.

He begins by saying that QS stands for “quirky silliness”. I would not disagree with that although in recent years QS has been getting less silly. I have been as sarcastic as anyone about the failings of QS: see here and here for an amusing commentary.

But the suggestion that QS is uniquely bad in contrast to THE is way off the target. There are many issues with the QS methodology, especially with its employer and academic surveys, and it has often announced placings that seem very questionable, such as Nanyang Technological University (NTU) ahead of Princeton and Yale or the University of Buenos Aires in the world top 100, largely as a result of a suspiciously good performance in the survey indicators. The oddities of the QS rankings are, however, no worse than some of the absurdities that THE has served up in its world and regional rankings. We have had places like Cadi Ayyad University in Marrakesh, Morocco, Middle East Technical University in Turkey, Federico Santa Maria Technical University in Chile, Alexandria University and Veltech University in India rise to ludicrously high places, sometimes just for a year or two, as the result of a few papers or even a single highly cited author.

I am not entirely persuaded that NTU deserves its top 12 placing in the QS rankings. You can see here QS's unconvincing reply to a question that I provided. QS claims that NTU's excellence is shown by its success in attracting foreign faculty, students and collaborators, but when you are in a country where people show their passports to drive to the dentist, being international is no great accomplishment. Even so, it is evidently world class as far as engineering and computer science are concerned and it is not impossible that it could reach an undisputed overall top ten or twenty ranking in the next decade.

While the THE top ten or twenty or even fifty looks quite reasonable, apart from Oxford in first place, there are many anomalies as soon as we start breaking the rankings apart by country or indicator, and THE has pushed some very weird data in recent years. Look at these places supposed to be regional or international centers of across-the-board research excellence as measured by citations: St George's University of London, Brandeis University, the Free University of Bozen-Bolzano, King Abdulaziz University, the University of Iceland, Veltech University. If QS is silly, what are we to call a ranking where Anglia Ruskin University is supposed to have a greater research impact than Chicago, Cambridge or Tsinghua?

Leiter starts his article by pointing out that the QS academic survey is largely driven by the geographical distribution of its respondents and by the halo effect. This is very probably true and to that I would add that a lot of the responses to academic surveys of this kind are likely driven by simple self interest, academics voting for their alma mater or current employer. QS does not allow respondents to vote for the latter but they can vote for the former and also vote for grant providers or collaborators.

He says that “QS does not, however, disclose the geographic distribution of its survey respondents, so the extent of the distorting effect cannot be determined". This is not true of the overall survey. QS does in fact give very detailed figures about the origin of its respondents and there is good evidence here of probable distorting effects. There are, for example, more responses from Taiwan than from Mainland China, and almost as many from Malaysia as from Russia. QS does not, however, go down to subject level when listing geographic distribution.

He then refers to the case of University College Cork (UCC) asking faculty to solicit friends in other institutions to vote for UCC. This is definitely a bad practice, but it was in violation of QS guidelines and QS have investigated. I do not know what came of the investigation but it is worth noting that the message would not have been an issue if it had referred to the THE survey.

On balance, I would agree that THE's survey methodology is less dubious than QS's and less likely to be influenced by energetic PR campaigns. It would certainly be a good idea if the weighting of the QS survey was reduced and if there was more rigorous screening and classification of potential respondents.

But I think we also have to bear in mind that QS does prohibit respondents from voting for their own universities and it does average results out over a five-year period (formerly three years).

It is interesting that while THE does not usually combine and average survey results, it did so in the 2016-17 world rankings, combining the 2015 and 2016 survey results. This was, I suspect, because of a substantial drop in 2016 in the percentage of respondents from the arts and humanities, which would, if unadjusted, have caused a serious problem for UK universities, especially those in the Russell Group.

Leiter then goes on to condemn QS for its dubious business practices. He reports that THE dropped QS because of its dubious practices. That is what THE says but it is widely rumoured within the rankings industry that THE was also interested in the financial advantages of a direct partnership with Thomson Reuters rather than getting data from QS.

He also refers to QS’s hosting a series of “World Class events” where world university leaders pay $950 for “seminar, dinners, coffee breaks” and “learn best practice for branding and marketing your institution through case studies and expert knowledge” and the QS stars plan where universities pay to be audited by QS in return for stars that they can use for promotion and advertising. I would add to his criticism that the Stars program has apparently undergone a typical “grade inflation” with the number of five-star universities increasing all the time.

Also, QS offers specific consulting services and it has a large number of clients from around the world although there are many more from Australia and Indonesia than from Canada and the US. Of the three from the US one is MIT which has been number one in the QS world rankings since 2012, a position it probably achieved after a change in the way in which faculty were classified.

It would, however, be misleading to suggest that THE is any better in this respect. Since 2014 it has launched a serious and unapologetic “monetisation of data” program.

There are events such as the forthcoming world "academic summit" where for 1,199 GBP (standard university) or 2,200 GBP (corporate), delegates can get "Exclusive insight into the 2017 Times Higher Education World University Rankings at the official launch and rankings masterclass", plus a "prestigious gala dinner, drinks reception and other networking events". THE also provides a variety of benchmarking and performance analysis services, branding, advertising and reputation management campaigns, and a range of silver and gold profiles, including adverts and sponsored supplements. THE's data clients include some illustrious names like the National University of Singapore and Trinity College Dublin plus some less well-known places such as Federico Santa Maria Technical University, Orebro University, King Abdulaziz University, National Research Nuclear University MEPhI Moscow, and Charles Darwin University.

Among THE’s activities are regional events that promise “partnership opportunities for global thought leaders” and where rankings like “the WUR are presented at these events with our award-winning data team on hand to explain them, allowing institutions better understanding of their findings”.

At some of these summits the rankings presented are trimmed and tweaked and somehow the hosts emerge in a favourable light. In February 2015, for example, THE held a Middle East and North Africa (MENA) summit that included a “snapshot ranking” that put Texas A and M University Qatar, a branch campus that offers nothing but engineering courses, in first place and Qatar University in fourth. The ranking consisted of precisely one indicator out of the 13 that make up THE’s world university rankings, field and year normalised citations. United Arab Emirates University (UAEU) was 11th and the American University of Sharjah in the UAE 14th.  

The next MENA summit was held in January 2016 in Al Ain in UAE. There was no snapshot this time and the methodology for the MENA rankings included 13 indicators in THE’s world rankings. Host country universities were now in fifth (UAEU) and eighth place (American University in Sharjah). Texas A and M Qatar was not ranked and Qatar University fell to sixth place.

Something similar happened to Africa. In 2015, THE went to the University of Johannesburg for a summit that brought together “outstanding global thought leaders from industry, government, higher education and research” and which unveiled THE’s Africa ranking based on citations (with the innovation of fractional counting) that put the host university in ninth place and the University of Ghana in twelfth.

In 2016 the show moved on to the University of Ghana where another ranking was produced based on all the 13 world ranking indicators. This time the University of Johannesburg did not take part and the University of Ghana went from 12th place to 7th.

I may have missed something but so far I see no sign of THE Africa or MENA summits planned for 2017. If so, then African and MENA university leaders are to be congratulated for a very healthy scepticism.

To be fair, THE does not seem to have done any methodological tweaking for this year’s Asian, Asia Pacific and Latin American rankings.

Leiter concludes that American academics should boycott the QS survey but not THE's and that they should lobby THE to improve its survey practices. That, I suspect, is pretty much a nonstarter. QS has never had much of a presence in the US anyway and THE is unlikely to change significantly as long as its commercial dominance goes unchallenged and as long as scholars and administrators fail to see through its PR wizardry. It would be better for everybody to start looking beyond the "Big Three" rankings.





Monday, July 03, 2017

Proving anything you want from rankings

It seems that university rankings can be used to prove almost anything that journalists want to prove.

Ever since the Brexit referendum experts and pundits of various kinds have been muttering about the dread disease that is undermining or about to undermine the research prowess of British universities. The malignity of Brexit is so great that it can send its evil rays back from the future.

Last year, as several British universities tumbled down the Quacquarelli Symonds (QS) world rankings, the Independent claimed that “[p]ost-Brexit uncertainty and long-term funding issues have seen storm clouds gather over UK higher education in this year’s QS World University Rankings”.

It is difficult to figure out how anxiety about a vote that took place on June 24th 2016 could affect a ranking based on institutional data for 2014 and bibliometric data from the previous five years.

It is just about possible that some academics or employers might have woken up on June 24th to see that their intellectual inferiors had joined the orcs to raze the ivory towers of Baggins University and Bree Poly and then rushed to send a late response to the QS opinion survey. But QS, to their credit, have taken steps to deal with that sort of thing by averaging out survey responses over a period of five years.
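For illustration only, a five-year average dilutes a single-year swing to a fraction of its size. The vote counts below are invented and QS's actual weighting and normalisation are not reproduced.

```python
# Toy illustration of how averaging over five years damps a one-year shock
# in survey responses. All numbers are hypothetical.

yearly_votes = [1000, 1020, 980, 1010, 700]  # invented votes per year,
                                             # with a sharp drop in the final year

latest_year_only = yearly_votes[-1]
five_year_average = sum(yearly_votes) / len(yearly_votes)

print(latest_year_only)    # 700   -> the shock at full strength
print(five_year_average)   # 942.0 -> the same shock diluted across five years
```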

European and American universities have been complaining for a long time that they do not get enough money from the state and that their performance in the global rankings is undermined because they do not get enough international students or researchers. That is a bit more plausible. After all, income does account for three separate indicators in the Times Higher Education (THE) world rankings so reduced income would obviously cause universities to fall a bit. The scandal over Trinity College Dublin’s botched rankings data submission showed precisely how much a given increase in reported total income (with research and industry income in a constant proportion) means for the THE world rankings. International metrics account for 10% of the QS rankings and 7.5% of the THE world rankings. Whether a decline in income or the number of international students has a direct effect or indeed any effect at all on research output or the quality of teaching is quite another matter.

The problem with claims like this is that the QS and THE rankings are very blunt instruments that should not be used to make year by year analyses or to influence government or university policy. There have been several changes in methodology, there are fluctuations in the distribution of survey responses by region and subject and the average scores for indicators may go up and down as the number of participants changes. All of these mean that it is very unwise to make extravagant assertions about university quality based on what happens in those rankings.

Before making any claim based on ranking changes it would be a good idea to wait a few years until the impact of any methodological change has passed through the system.

Another variation in this genre is the recent claim in the Daily Telegraph that “British universities are slipping down the world rankings, with experts blaming the decline on pressure to admit more disadvantaged students.”

Among the experts is Alan Smithers of the University of Buckingham who is reported as saying “universities are no longer free to take their own decisions and recruit the most talented students which would ensure top positions in league tables”.

There is certainly good evidence that British university courses are becoming much less rigorous. Every year reports come in about declining standards everywhere. The latest is the proposal at Oxford to allow students to sit take-home exams instead of timed exams.

But it is unlikely that this could show up in the QS or THE rankings. None of the global rankings has a metric that measures the attributes of graduates except perhaps the QS employers survey. It is probable that a decline in the cognitive skills of admitted undergraduate students would eventually trickle up to the qualities of research students and then to the output and quality of research but that is not something that could happen in a single year especially when there is so much noise generated by methodological changes.

The cold reality is that university rankings can tell us some things about universities and how they change over perhaps half a decade and some metrics are better than others but it is an exercise in futility to use overall rankings or indicators subject to methodological tweaking to argue about how political or economic changes are impacting western universities.

The latest improbable claim about rankings is that Oxford's achieving parity with Cambridge in the THE reputation rankings was the result of a positive image created by appointing its first female Vice-Chancellor.

Phil Baty, THE’s editor, is reported as saying that ‘Oxford University’s move to appoint its first female Vice Chancellor sent a “symbolic” wave around the world which created a positive image for the institution among academics.’

There is a bit of a problem here. Louise Richardson was appointed Vice-Chancellor in January 2016. The polling for the 2016 THE reputation rankings took place between January and March 2016. One would expect that if the appointment of Richardson had any effect on academic opinion at all, it would be in those months. It certainly seems more likely than an impact that was delayed for more than a year. If the appointment did affect the reputation rankings then it was apparently a negative one, for Oxford's score fell massively from 80.4 in 2015 to 67.6 in 2016 (compared to 100 for Harvard in both years).

So, did Oxford suffer in 2016 because spiteful curmudgeons were infuriated by an upstart intruding into the dreaming spires?

The collapse of Oxford in the 2016 reputation rankings and its slight recovery in 2017 almost certainly had nothing to do with the new Vice-Chancellor.

Take a look at the table below. Oxford’s reputation score tracks the percentage of THE survey responses from the arts and humanities. It goes up when there are more respondents from those subjects and goes down when there are fewer. This is the case for British universities in general and also for Cambridge except for this year.

The general trend since 2011 has been for the gap between Cambridge and Oxford to fall steadily and that trend happened before Oxford acquired a new Vice-Chancellor although it accelerated and finally erased the gap this year.

What is unusual about this year’s reputation ranking is not that Oxford recovered as the number of arts and humanities respondents increased but that Cambridge continued to fall.

I wonder if it has something to do with Cambridge’s “disastrous” performance in the THE research impact (citations) indicator in recent years.  In the 2014-15 world rankings Cambridge was 28th behind places like Federico Santa Maria Technical University and Bogazici University. In 2015-16 it was 27th behind St Petersburg Polytechnic University. But a greater humiliation came in the 2016-17 rankings. Cambridge fell to 31st in the world for research impact. Even worse it was well behind Anglia Ruskin University, a former art school. For research impact Cambridge University wasn’t the best university in Europe or England. It wasn’t even the best in Cambridge, at least if you trusted the sophisticated THE rankings.

Rankings are not entirely worthless and if they did not exist no doubt they would somehow be invented. But it is doing nobody any good to use them to promote the special interests of university bureaucrats and insecure senior academics.

Table: Scores in THE reputation rankings


Year   Oxford   Cambridge   Gap    % responses arts and humanities
2011   68.6     80.7        12.1   --
2012   71.2     80.7        9.5    7%
2013   73.0     81.3        8.3    10.5%
2014   67.8     74.3        6.5    9%
2015   80.4     84.3        3.9    16%
2016   67.6     72.2        4.6    9%
2017   69.1     69.1        0      12.5%
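The claim that Oxford's reputation score tracks the arts and humanities share of survey responses can be checked against the figures above. A minimal sketch, using only the years for which the response share is given and assuming a simple Pearson correlation is an adequate summary:

```python
# Correlation between the arts-and-humanities share of THE survey responses
# and Oxford's reputation score, taken from the table above (2011 is omitted
# because no response share is given for that year). Requires Python 3.10+.

from statistics import correlation

arts_share = [7.0, 10.5, 9.0, 16.0, 9.0, 12.5]       # % of responses, 2012-2017
oxford_score = [71.2, 73.0, 67.8, 80.4, 67.6, 69.1]  # THE reputation score

print(round(correlation(arts_share, oxford_score), 2))
```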