Tuesday, February 28, 2017

Times Higher Education (THE) has long suffered from the curse of its field-normalised citations indicator, which without fail produces interesting (in the Chinese-curse sense) results every year.
Part of THE's citation problem is the kilo-paper issue: papers, mainly in particle physics, with hundreds or thousands of authors and hundreds or thousands of citations. The best-known case is 'Combined Measurement of the Higgs Boson Mass in pp Collisions ... ' in Physical Review Letters, which has 5,154 contributors.

If every contributor to such a paper is given full credit for its citations, then his or her institution is awarded thousands of citations. Combined with other attributes of this indicator, this means that a succession of improbable places, such as Tokyo Metropolitan University and Middle East Technical University, has soared to the research impact peaks of the THE world rankings.
THE has already tried a couple of variations on counting citations for this sort of paper. In 2015 it introduced a cap, simply not counting any paper with more than a thousand authors. Then in 2016 it decided to give each contributor to such papers a minimum credit of 5% of the citations.

That meant that an institution with one contributor to a paper with 2,000 authors and 2,000 citations would be counted as being cited 2,000 times in the 2014 THE world rankings, not at all in 2015, and 100 times in 2016. The result was that many universities in Japan, Korea, France and Turkey suffered catastrophic falls in 2015 and then made a modest comeback in 2016.
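A minimal sketch of the three crediting rules, using the worked example above. This is my own reconstruction in Python, not THE's actual code, and the reading of the 2016 rule as a 5% floor on a fractional share is an assumption:

```python
def credit_full(citations: int, n_authors: int) -> float:
    # Pre-2015 rule: every contributor's institution receives
    # the paper's full citation count.
    return float(citations)

def credit_capped(citations: int, n_authors: int, cap: int = 1000) -> float:
    # 2015 rule: papers with more than `cap` authors are simply excluded.
    return float(citations) if n_authors <= cap else 0.0

def credit_floored(citations: int, n_authors: int, floor: float = 0.05) -> float:
    # 2016 rule (as I read it): a fractional share of 1/n,
    # but never less than 5% of the paper's citations.
    return citations * max(1.0 / n_authors, floor)

# The kilo-paper example from the text: 2,000 authors, 2,000 citations.
for rule in (credit_full, credit_capped, credit_floored):
    print(rule.__name__, rule(2000, 2000))
# credit_full 2000.0
# credit_capped 0.0
# credit_floored 100.0
```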
But there may be more to come. A paper by Louis de Mesnard in the European Journal of Operational Research proposes a new formula, (n+2)/3n, under which each of two authors gets two thirds of the credit, while each of 2,000 authors gets roughly a third, or about 667 of the 2,000 citations in the example above.
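A quick check of the formula's behaviour (my own sketch, using the numbers in the text):

```python
def mesnard_share(n_authors: int) -> float:
    # Per-author credit share under the (n+2)/(3n) formula.
    return (n_authors + 2) / (3 * n_authors)

print(mesnard_share(1))            # 1.0: a solo author keeps full credit
print(mesnard_share(2))            # 0.666...: two thirds each, as in the text
print(2000 * mesnard_share(2000))  # ~667.3 citations per author of the kilo-paper
```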
De Mesnard's paper has been given star billing in an article in THE, which suggests that the magazine is thinking about using his formula in the next world rankings.
If so, we can expect headlines about the extraordinary recovery of Asian universities in contrast to the woes of the UK and the USA suffering from the ravages of Brexit and Trump-induced depression.
Monday, February 27, 2017
Worth Reading 8
Henk F. Moed, Sapienza University of Rome
A critical comparative analysis of five world university rankings
ABSTRACT
To provide users insight into the value and limits of world university rankings, a comparative analysis is conducted of 5 ranking systems: ARWU, Leiden, THE, QS and U-Multirank. It links these systems with one another at the level of individual institutions, and analyses the overlap in institutional coverage, geographical coverage, how indicators are calculated from raw data, the skewness of indicator distributions, and statistical correlations between indicators. Four secondary analyses are presented investigating national academic systems and selected pairs of indicators. It is argued that current systems are still one-dimensional in the sense that they provide finalized, seemingly unrelated indicator values rather than offering a data set and tools to observe patterns in multi-faceted data. By systematically comparing different systems, more insight is provided into how their institutional coverage, rating methods, the selection of indicators and their normalizations influence the ranking positions of given institutions.
" Discussion and conclusions
The overlap analysis clearly illustrates that there
is no such set as ‘the’ top 100 universities in terms of excellence: it depends
on the ranking system one uses which universities constitute the top 100. Only
35 institutions appear in the top 100 lists of all 5 systems, and the number of
overlapping institutions per pair of systems ranges between 49 and 75. An
implication is that national governments executing a science policy aimed to
increase the number of academic institutions in the ‘top’ of the ranking of
world universities, should not only indicate the range of the top segment
(e.g., the top 100), but also specify which ranking(s) are used as a standard,
and argue why these were selected from the wider pool of candidate world
university rankings."
Scientometrics DOI 10.1007/s11192-016-2212-y
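The overlap figures are straightforward set intersections. A toy sketch of the computation (my own, with invented three-item lists standing in for the real top-100 sets):

```python
from itertools import combinations

# Hypothetical stand-ins for the five systems' top-100 lists.
top100 = {
    "ARWU":        {"MIT", "Harvard", "Caltech"},
    "Leiden":      {"MIT", "Harvard", "Oxford"},
    "THE":         {"MIT", "Oxford", "Caltech"},
    "QS":          {"MIT", "Harvard", "Oxford"},
    "U-Multirank": {"MIT", "Caltech", "Oxford"},
}

# Institutions common to all five lists (the paper finds 35 of these
# across the real top-100 lists).
print(set.intersection(*top100.values()))  # {'MIT'}

# Pairwise overlap counts (the paper reports a range of 49 to 75).
for a, b in combinations(top100, 2):
    print(a, b, len(top100[a] & top100[b]))
```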
Tuesday, February 21, 2017
Never mind the rankings, THE has a huge database
There has been a debate, or perhaps the beginnings of a debate, about international university rankings following the publication of Bahram Bekhradnia's report to the Higher Education Policy Institute, with comments in University World News by Ben Sowter, Phil Baty, Frank Ziegele and Frans van Vught, Philip Altbach, and Ellen Hazelkorn, and a guest post by Bekhradnia in this blog.
Bekhradnia argued that global university rankings were damaging and dangerous because they encourage an obsession with research, rely on unreliable or subjective data, and emphasise spurious precision. He suggested that governments, universities and academics should just ignore the rankings.
Times Higher Education (THE) has now published a piece by THE rankings editor Phil Baty that does not really deal with the criticism but basically says that it does not matter very much because the THE database is bigger and better than anyone else's. This, he claims, is "the true purpose and enduring legacy" of the THE world rankings.

Legacy? Does this mean that THE is getting ready to abandon rankings, or maybe just the world rankings, and go exclusively into the data-refining business?

Whatever Baty is hinting at, if that is what he is doing, it does seem a rather insipid defence of the rankings to say that all the criticism is missing the point because they are the precursor to a big and sophisticated database.
The article begins with a quotation from Lydia Snover, director of institutional research at MIT:

“There is no world department of education,” says Lydia Snover, director of institutional research at the Massachusetts Institute of Technology. But Times Higher Education, she believes, is helping to fill that gap: “They are doing a real service to universities by developing definitions and data that can be used for comparison and understanding.”
This sounds as though THE is doing something very impressive that nobody else has even thought of doing. But Snover's elaboration of this point in an email gives equal billing to QS and THE as definition developers and suggests the definitions and data that they provide will improve and expand in the future, implying that they are now less than perfect. She says:
"QS and THE both collect data
annually from a large number of international universities. For example,
understanding who is considered to be “faculty” in the EU, China, Australia,
etc. is quite helpful to us when we want to compare our universities
internationally. Since both QS and THE are relatively new in the rankings
business compared to US NEWS, their definitions are still evolving. As we
go forward, I am sure the amount of data they collect and the definitions of
that data will expand and improve."
Snover, by the way, is a member of the QS advisory board, as is THE's former rankings "masterclass" partner, Simon Pratt.
Baty offers a rather perfunctory defence of the THE rankings. He talks about rankings bringing great insights into the shifting fortunes of universities. If we are talking about year-to-year changes, then the fact that THE purports to chart shifting fortunes is a very big bug in its methodology. Unless there has been drastic restructuring, universities do not change much in a matter of months, and any ranking that claims to detect massive shifts over a year is simply advertising its deficiencies.
The assertion that the THE rankings are the most comprehensive and balanced is difficult to take seriously. If by comprehensive it is meant that the THE rankings have more indicators than QS or Webometrics, that is correct. But the number of indicators does not mean very much if they are bundled together and the scores hidden from the public, and if some of the indicators, the teaching survey and the research survey for example, correlate so closely that they are effectively measuring the same thing. In any case, the Russian Round University Rankings have 20 indicators compared with THE's 13 in the world rankings.
As for being balanced, we have already seen Bekhradnia's analysis showing that even the teaching and international outlook criteria in the THE rankings are really about research. In addition, THE gives almost a third of its weighting to citations. In practice that is often even more, because the effect of the regional modification, now applied to half the indicator, is to boost in varying degrees the scores of everybody except those in the best-performing country.
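A rough sketch of that arithmetic. This is my own reconstruction, under the assumption that the adjusted half of the indicator divides a university's score by the square root of its country's average score (scaled so the best-performing country sits at 1.0); it is not a description of THE's actual calculation:

```python
import math

def adjusted_citation_score(raw: float, country_avg: float) -> float:
    # Hypothetical reconstruction: half the citation score is left
    # unadjusted, half is divided by the square root of the country's
    # average, expressed as a fraction of the best-performing country's
    # average (so the top country gets no boost).
    return 0.5 * raw + 0.5 * raw / math.sqrt(country_avg)

# Invented numbers: the same raw score of 50 in the top country
# (average = 1.0) and in a country at 25% of the top average.
print(adjusted_citation_score(50.0, 1.0))   # 50.0 -> unchanged
print(adjusted_citation_score(50.0, 0.25))  # 75.0 -> boosted by half
```

The lower a country's average, the bigger the boost, which is why every country below the top performer gains to some degree.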
After offering a scaled-down celebration of the rankings, Baty then dismisses critics while announcing that THE "is quietly [seriously?] getting on with a hugely ambitious project to build an extraordinary and truly unique global resource."
Perhaps some elite universities, like MIT, will find the database and its associated definitions helpful, but whether there is anything extraordinary or unique about it remains to be seen.