It seems that
university rankings can be used to prove almost anything that journalists want
to prove.
Ever since
the Brexit referendum, experts and pundits of various kinds have been muttering
about the dread disease that is undermining, or about to undermine, the research
prowess of British universities. The malignity of Brexit is so great that it
can send its evil rays back from the future.
Last year,
as several British universities tumbled down the Quacquarelli Symonds (QS) world
rankings, the Independent claimed that
“[p]ost-Brexit uncertainty and long-term funding issues have seen storm clouds
gather over UK higher education in this year’s QS World University Rankings”.
It is
difficult to figure out how anxiety about a vote that took place on June 23rd
2016 could affect a ranking based on institutional data for 2014 and
bibliometric data from the previous five years.
It is just
about possible that some academics or employers might have woken up on June 24th
to see that their intellectual inferiors had joined the orcs to raze the ivory
towers of Baggins University and Bree Poly and then rushed to send a late
response to the QS opinion survey. But QS, to their credit, have taken steps to
deal with that sort of thing by averaging out survey responses over a period of
five years.
European and
American universities have been complaining for a long time that they do not
get enough money from the state and that their performance in the global
rankings is undermined because they do not get enough international students or
researchers. That is a bit more plausible. After all, income accounts for
three separate indicators in the Times Higher Education (THE) world rankings,
so reduced income would obviously cause universities to fall a bit. The scandal
over Trinity College Dublin’s botched rankings data submission showed
precisely how much a given increase in reported total income (with research and
industry income in a constant proportion) means for the THE world rankings. International
metrics account for 10% of the QS rankings and 7.5% of the THE world rankings.
Whether a decline in income or in the number of international students has a
direct effect, or indeed any effect at all, on research output or the quality of
teaching is quite another matter.
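To put those weights in perspective, here is a minimal back-of-the-envelope sketch in Python. It treats the overall score as a simple weighted sum of indicator scores, which is roughly how THE and QS describe their composites, uses THE-style headline weights, and invents the indicator scores, so the numbers are purely illustrative: even a ten-point fall in the international indicator shifts the overall score by well under one point.

```python
# A minimal sketch: treat the overall score as a weighted sum of indicator
# scores (roughly how THE and QS describe their composites). The weights are
# THE-style headline weights; the indicator scores are invented for illustration.

def overall(scores, weights):
    """Weighted sum of 0-100 indicator scores."""
    return sum(weights[k] * scores[k] for k in weights)

weights = {"teaching": 0.30, "research": 0.30, "citations": 0.30,
           "industry_income": 0.025, "international": 0.075}

before = {"teaching": 75, "research": 80, "citations": 85,
          "industry_income": 60, "international": 90}
after = dict(before, international=80)  # a ten-point fall in international outlook

print(round(overall(before, weights) - overall(after, weights), 2))
# -> 0.75: a ten-point fall moves the overall score by only 0.075 * 10 points
```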
The problem
with claims like this is that the QS and THE rankings are very blunt instruments
that should not be used to make year-by-year analyses or to influence
government or university policy. There have been several changes in
methodology, there are fluctuations in the distribution of survey responses by
region and subject, and the average scores for indicators may go up and down as
the number of participants changes. All of this means that it is very unwise to
make extravagant assertions about university quality based on what happens in those
rankings.
Before
making any claim based on ranking changes, it would be a good idea to wait a few
years until the impact of any methodological change has passed through the
system.
Another variation
in this genre is the recent
claim in the Daily Telegraph that “British universities are slipping down
the world rankings, with experts blaming the decline on pressure to admit more
disadvantaged students.”
Among the
experts is Alan Smithers of the University of Buckingham who is reported as
saying “universities are no longer free to take their own decisions and recruit
the most talented students which would ensure top positions in league tables”.
There is certainly
good evidence that British university courses are becoming much less rigorous. Every
year reports come in about declining standards
everywhere. The latest is the proposal at Oxford to allow
students to sit take-home rather than timed exams.
But it is unlikely
that this could show up in the QS or THE rankings. None of the global rankings
has a metric that measures the attributes of graduates, except perhaps the QS
employer survey. It is probable that a decline in the cognitive skills of
admitted undergraduates would eventually trickle up to the quality of
research students and then to the output and quality of research, but that is
not something that could happen in a single year, especially when there is so
much noise generated by methodological changes.
The cold reality
is that university rankings can tell us some things about universities and how
they change over perhaps half a decade, and some metrics are better than others,
but it is an exercise in futility to use overall rankings, or indicators subject
to methodological tweaking, to argue about how political or economic changes are
affecting western universities.
The latest
improbable claim about rankings is that
Oxford’s achieving parity with Cambridge in the THE reputation rankings was the
result of a
positive image created by appointing its first female Vice-Chancellor.
Phil Baty, THE’s editor, is reported as saying that ‘Oxford
University’s move to appoint its first female Vice Chancellor sent a “symbolic”
wave around the world which created a positive image for the institution among
academics.’
There is a
bit of a problem here. Louise Richardson was appointed Vice-Chancellor in
January 2016. The polling for the 2016 THE reputation rankings took place
between January and March 2016. One would expect that if the appointment of
Richardson had any effect on academic opinion at all, it would show in those
months. That certainly seems more likely than an impact delayed for more
than a year. If the appointment did affect the reputation rankings, the effect was
apparently negative, for Oxford’s
score fell sharply from 80.4 in 2015 to 67.6 in 2016 (compared to 100 for
Harvard in both years).
So, did Oxford
suffer in 2016 because spiteful curmudgeons were infuriated by an upstart
intruding into the dreaming spires?
The
collapse of Oxford in the 2016 reputation rankings and its slight recovery in
2017 almost certainly had nothing to do with the new Vice-Chancellor.
Take a look
at the table below. Oxford’s reputation score tracks the percentage of THE
survey responses from the arts and humanities: it goes up when there are more
respondents from those subjects and goes down when there are fewer (a short
check of this relationship follows the table). This is the
case for British universities in general, and also for Cambridge, except for this
year.
The general
trend since 2011 has been for the gap between Cambridge and Oxford to narrow
steadily, and that trend began before Oxford acquired a new Vice-Chancellor,
although it accelerated and finally erased the gap this year.
What is
unusual about this year’s reputation ranking is not that Oxford recovered as
the number of arts and humanities respondents increased but that Cambridge
continued to fall.
I wonder if
it has something to do with Cambridge’s “disastrous” performance in the THE
research impact (citations) indicator in recent years. In the 2014-15 world rankings Cambridge was
28th behind places like Federico Santa Maria Technical University
and Bogazici University. In 2015-16 it was 27th behind St Petersburg
Polytechnic University. But a greater humiliation came in the 2016-17 rankings.
Cambridge fell to 31st in the world for research impact. Even worse, it
was well behind Anglia Ruskin University, a former art school. For research
impact Cambridge University wasn’t the best university in Europe or England. It
wasn’t even the best in Cambridge, at least if you trusted the sophisticated THE rankings.
Rankings
are not entirely worthless and if they did not exist no doubt they would
somehow be invented. But it is doing nobody any good to use them to promote the
special interests of university bureaucrats and insecure senior academics.
Table:
Scores in THE reputation rankings

Year | Oxford | Cambridge | Gap | % responses arts and humanities
2011 | 68.6 | 80.7 | 12.1 | --
2012 | 71.2 | 80.7 | 9.5 | 7%
2013 | 73.0 | 81.3 | 8.3 | 10.5%
2014 | 67.8 | 74.3 | 6.5 | 9%
2015 | 80.4 | 84.3 | 3.9 | 16%
2016 | 67.6 | 72.2 | 4.6 | 9%
2017 | 69.1 | 69.1 | 0 | 12.5%
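As a quick check on the claim that Oxford’s reputation score tracks the arts and humanities response share, the short Python sketch below recomputes the gap column and the correlation between Oxford’s score and the response share, using the figures transcribed from the table for 2012 to 2017, the years with a reported share. With only six data points the correlation is suggestive rather than conclusive.

```python
# Recompute the gap column and the correlation between Oxford's reputation
# score and the arts/humanities share of survey responses, using figures
# transcribed from the table above (2012-2017, the years with a reported share).
from statistics import correlation  # available from Python 3.10

years     = [2012, 2013, 2014, 2015, 2016, 2017]
oxford    = [71.2, 73.0, 67.8, 80.4, 67.6, 69.1]
cambridge = [80.7, 81.3, 74.3, 84.3, 72.2, 69.1]
arts_pct  = [7.0, 10.5, 9.0, 16.0, 9.0, 12.5]  # % of responses, arts & humanities

for y, o, c in zip(years, oxford, cambridge):
    print(y, "gap:", round(c - o, 1))

# Only six observations, so treat the result as suggestive, not conclusive.
print("Oxford score vs arts/humanities share:",
      round(correlation(oxford, arts_pct), 2))
```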