Tuesday, February 28, 2017
Times Higher Education (THE) has long suffered from the curse of field-normalised citations, which without fail produce interesting (in the Chinese curse sense) results every year.
Part of THE's citation problem is the kilo-paper issue: papers, mainly in particle physics, with hundreds or thousands of authors and hundreds or thousands of citations. The best-known case is 'Combined Measurement of the Higgs Boson Mass in pp Collisions ... ' in Physical Review Letters, which has 5,154 contributors.
If every contributor to such a paper were given full credit for its citations, his or her institution would be awarded thousands of citations. Combined with other attributes of this indicator, this means that a succession of improbable places, such as Tokyo Metropolitan University and Middle East Technical University, have soared to the research impact peaks in the THE world rankings.
THE have already tried a couple of variations on counting citations for this sort of paper. In 2015 they introduced a cap, simply not counting any paper with more than a thousand authors. Then in 2016 they decided to give the contributors to such papers a minimum credit of 5% of their citations.
That meant that in the 2014 THE world rankings an institution with one contributor to a paper with 2,000 authors and 2,000 citations would be counted as being cited 2,000 times, in 2015 not at all, and in 2016 100 times (5% of 2,000). The result was that many universities in Japan, Korea, France and Turkey suffered catastrophic falls in 2015 and then made a modest comeback in 2016.
But there may be more to come. A paper by Louis de Mesnard in the European Journal of Operational Research proposes a new formula, (n+2)/3n, for each author's share of the credit: a paper with two authors gives each of them two thirds of the credit, while one with 2,000 authors gives each of them just over a third, so the hypothetical 2,000-citation paper above would translate into roughly 667 citations per contributor.
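To make the arithmetic concrete, here is a minimal sketch in Python using the hypothetical 2,000-author, 2,000-citation paper above. It compares what a single contributing institution would be credited with under the four approaches described: full counting, the 2015 cap, the 2016 five per cent floor, and de Mesnard's formula. The function names, and the exact mechanics of the 2016 floor, are my own assumptions for illustration, not THE's published method.

```python
# Illustrative sketch only: citation credit for one contributing institution
# for a single hypothetical kilo-paper (2,000 authors, 2,000 citations),
# under the counting schemes discussed in the post.

def full_counting(citations, n_authors):
    # Pre-2015 practice: every contributor's institution receives the full count.
    return citations

def capped_counting(citations, n_authors, cap=1000):
    # 2015 approach: papers with more than about 1,000 authors are excluded entirely.
    return citations if n_authors <= cap else 0

def five_percent_floor(citations, n_authors, floor=0.05):
    # 2016 approach (as described above): each contributor gets its fractional
    # share, subject to a minimum of 5% of the paper's citations.
    return max(citations / n_authors, citations * floor)

def de_mesnard_share(citations, n_authors):
    # De Mesnard's proposed per-author share: (n + 2) / (3n).
    return citations * (n_authors + 2) / (3 * n_authors)

citations, n_authors = 2000, 2000
for scheme in (full_counting, capped_counting, five_percent_floor, de_mesnard_share):
    print(f"{scheme.__name__}: {scheme(citations, n_authors):.0f}")
# full_counting: 2000, capped_counting: 0, five_percent_floor: 100, de_mesnard_share: 667
```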
De Mesnard's paper has been given star billing in an article in THE, which suggests that the magazine is thinking about using his formula in the next world rankings.
If so, we can expect headlines about the extraordinary recovery of Asian universities in contrast to the woes of the UK and the USA suffering from the ravages of Brexit and Trump-induced depression.
Monday, February 27, 2017
Worth Reading 8
Henk F. Moed, Sapienza University of Rome
A critical comparative analysis of five world university rankings
ABSTRACT
To provide users insight into the value and limits of world university rankings, a comparative analysis is conducted of 5 ranking systems: ARWU, Leiden, THE, QS and U-Multirank. It links these systems with one another at the level of individual institutions, and analyses the overlap in institutional coverage, geographical coverage, how indicators are calculated from raw data, the skewness of indicator distributions, and statistical correlations between indicators. Four secondary analyses are presented investigating national academic systems and selected pairs of indicators. It is argued that current systems are still one-dimensional in the sense that they provide finalized, seemingly unrelated indicator values rather than offering a data set and tools to observe patterns in multi-faceted data. By systematically comparing different systems, more insight is provided into how their institutional coverage, rating methods, the selection of indicators and their normalizations influence the ranking positions of given institutions.
" Discussion and conclusions
The overlap analysis clearly illustrates that there
is no such set as ‘the’ top 100 universities in terms of excellence: it depends
on the ranking system one uses which universities constitute the top 100. Only
35 institutions appear in the top 100 lists of all 5 systems, and the number of
overlapping institutions per pair of systems ranges between 49 and 75. An
implication is that national governments executing a science policy aimed to
increase the number of academic institutions in the ‘top’ of the ranking of
world universities, should not only indicate the range of the top segment
(e.g., the top 100), but also specify which ranking(s) are used as a standard,
and argue why these were selected from the wider pool of candidate world
university rankings."
Scientometrics DOI 10.1007/s11192-016-2212-y
Tuesday, February 21, 2017
Never mind the rankings, THE has a huge database
There has been a debate, or perhaps the beginnings of a debate, about international university rankings following the publication of Bahram Bekhradnia's report to the Higher Education Policy Institute, with comments in University World News by Ben Sowter, Phil Baty, Frank Ziegele and Frans van Vught, and Philip Altbach and Ellen Hazelkorn, and a guest post by Bekhradnia in this blog.
Bekhradnia argued that global university rankings are damaging and dangerous because they encourage an obsession with research, rely on unreliable or subjective data, and emphasise spurious precision. He suggested that governments, universities and academics should simply ignore the rankings.
Times Higher Education (THE) has now published a piece by THE rankings editor Phil Baty that does not really deal with the criticism but basically says that it does not matter very much because the THE database is bigger and better than anyone else's. This, he claims, is "the true purpose and enduring legacy" of the THE world rankings.
Legacy? Does this mean that THE is getting ready to abandon rankings, or maybe just the world rankings, and go exclusively into the data refining business?
Whatever Baty is hinting at, if that is what he is doing, it does seem a rather insipid defence of the rankings to say that all the criticism is missing the point because they are the precursor to a big and sophisticated database.
The article begins with a quotation from Lydia Snover, Director of Institutional Research at MIT:
“There is no world department of education,” says Lydia Snover, director of institutional research at the Massachusetts Institute of Technology. But Times Higher Education, she believes, is helping to fill that gap: “They are doing a real service to universities by developing definitions and data that can be used for comparison and understanding.”
This sounds as though THE is doing something very impressive that nobody else has even thought of doing. But Snover's elaboration of this point in an email gives equal billing to QS and THE as definition developers and suggests the definitions and data that they provide will improve and expand in the future, implying that they are now less than perfect. She says:
"QS and THE both collect data
annually from a large number of international universities. For example,
understanding who is considered to be “faculty” in the EU, China, Australia,
etc. is quite helpful to us when we want to compare our universities
internationally. Since both QS and THE are relatively new in the rankings
business compared to US NEWS, their definitions are still evolving. As we
go forward, I am sure the amount of data they collect and the definitions of
that data will expand and improve."
Snover, by the way, is a member of the QS advisory board, as is THE's former rankings "masterclass" partner, Simon Pratt.
Baty offers a rather perfunctory defence of the THE rankings. He talks about rankings bringing great insights into the shifting fortunes of universities. If we are talking about year-to-year changes, then the fact that THE purports to chart shifting fortunes is a very big bug in their methodology. Unless there has been drastic restructuring, universities do not change much in a matter of months, and any ranking that claims to be detecting massive shifts over a year is simply advertising its deficiencies.
The assertion that the THE rankings are the most comprehensive and balanced is difficult to take seriously. If by comprehensive it is meant that the THE rankings have more indicators than QS or Webometrics, that is correct. But the number of indicators does not mean very much if they are bundled together and the scores hidden from the public, and if some of the indicators, the teaching survey and the research survey for example, correlate so closely that they are effectively the same thing. In any case, the Russian Round University Rankings have 20 indicators compared with THE's 13 in the world rankings.
As for being balanced, we have already seen Bekhradnia's analysis showing that even the teaching and international outlook criteria in the THE rankings are really about research. In addition, THE gives almost a third of its weighting to citations. In practice that is often even more because the effect of the regional modification, now applied to half the indicator, is to boost in varying degrees the scores of everybody except those in the best-performing country.
After offering a scaled-down celebration of the rankings, Baty then dismisses critics while announcing that THE "is quietly [seriously?] getting on with a hugely ambitious project to build an extraordinary and truly unique global resource."
Perhaps some elite universities, like MIT, will find the database and its associated definitions helpful, but whether there is anything extraordinary or unique about it remains to be seen.
Saturday, February 18, 2017
Searching for the Gold Standard: The Times Higher Education World University Rankings, 2010-2014
Now available at the Asian Journal of University Education. The paper has, of course, already been outdated by subsequent developments in the world of university rankings.
ABSTRACT
This paper analyses the global university rankings introduced by Times Higher Education (THE) in partnership with Thomson Reuters in 2010 after the magazine ended its association with its former data provider Quacquarelli Symonds. The distinctive features of the new rankings included a new procedure for determining the choice and weighting of the various indicators, new criteria for inclusion in and exclusion from the rankings, a revised academic reputation survey, the introduction of an indicator that attempted to measure innovation, the addition of a third measure of internationalization, the use of several indicators related to teaching, the bundling of indicators into groups, and most significantly, the employment of a very distinctive measure of research impact with an unprecedentedly large weighting. The rankings met with little enthusiasm in 2010 but by 2014 were regarded with some favour by administrators and policy makers despite the reservations and criticisms of informed observers and the unusual scores produced by the citations indicator. In 2014, THE announced that the partnership would come to an end and that the magazine would collect its own data. There were some changes in 2015 but the basic structure established in 2010 and 2011 remained intact.
Saturday, February 11, 2017
What was the greatest ranking insight of 2016?
It is now difficult to imagine a world without university rankings. If they did not exist, we would have to make judgements and decisions based on the self-serving announcements of bureaucrats and politicians, reputations derived from the achievements of past decades, and popular and elite prejudices.
Rankings sometimes tell us things that are worth hearing. The first edition of the Shanghai rankings revealed emphatically that venerable European universities such as Bologna, the Sorbonne and Heidelberg were lagging behind their Anglo-Saxon competitors. More recently, the rise of research-based universities in South Korea and Hong Kong and the relative stagnation of Japan have been documented by global rankings. The Shanghai ARWU also show the steady decline in the relative research capacity of a variety of US institutions, including Wake Forest University, Dartmouth College, Wayne State University, the University of Oregon and Washington State University.
International university rankings have developed a lot in recent years and, with their large databases and sophisticated methodology, they can now provide us with an expanding wealth of "great insights into the strengths and shifting fortunes" of major universities.
So what was the greatest ranking insight of 2016? Here are the first three on my shortlist. I hope to add a few more over the next couple of weeks. If anybody has suggestions I would be happy to publish them.
One. Cambridge University isn't even the best research university in Cambridge.
You may have thought that Cambridge University was one of the best research universities in the UK or Europe, perhaps even the best. But when it comes to research impact, as measured by field- and year-normalised citations with a 50% regional modification, it isn't even the best in Cambridge. That honour, according to THE, goes to Anglia Ruskin University, a former art school. Even more remarkable is that this achievement was due to the work of a single researcher. I shall keep the name a secret in case his or her office becomes a stopping point for bus tours.
Two. The University of Buenos Aires and the Pontifical Catholic University of Chile rival the top European, American and Australian universities for graduate employability.
The top universities for graduate employability according to the Quacquarelli Symonds (QS) employer survey are pretty obvious: Harvard, Oxford, Cambridge, MIT, Stanford. But it seems that there are quite a few Latin American universities in the world top 100 for employability. The University of Buenos Aires is 25th and the Pontifical Catholic University of Chile 28th in last year's QS world rankings employer survey indicator. Melbourne is 23rd, ETH 26th, Princeton 32nd and New York University 36th.
Three. King Abdulaziz University is one of the world's leading universities for engineering.
The conventional wisdom seems settled: pick three or four from MIT, Harvard, Stanford, Berkeley, perhaps even a star rising in the East like Tsinghua or the National University of Singapore. But in the Shanghai field rankings for Engineering last year, fifth place went to King Abdulaziz University in Jeddah. For highly cited researchers in engineering it is second in the world, surpassed only by Stanford.
Monday, February 06, 2017
Is Trump keeping out the best and the brightest?
One of several strange things about the legal challenge to Trump's executive order on refugees and immigration is the claim, in an amicus brief by dozens of companies, many of them at the cutting edge of the high-tech economy, that the order makes it hard to "recruit, hire and retain some of the world's best employees." The proposed, now frozen, restrictions would, moreover, be a "barrier to innovation" and prevent companies from attracting "great talent." They point out that many Nobel prize winners are immigrants.
Note that these are "tech giants", not meat packers or farmers and that they are talking about the great and the best employees, not the good or adequate or possibly employable after a decade of ESL classes and community college.
So let us take a look at the seven countries included in the proposed restrictions. Are they likely to be the source of large numbers of future hi tech entrepreneurs, Nobel laureates and innovators?
The answer is almost certainly no. None of the Nobel prize winners (not counting Peace and Literature) so far have been born in Yemen, Iraq, Iran, Somalia, Sudan, Libya or Syria, although there has been an Iranian-born winner of the Fields Medal for mathematics.
The general level of the higher education systems in these countries does not inspire confidence that they are bursting with great talent. Of the seven, only Iran has any universities in the Shanghai rankings, the University of Tehran and Amirkabir University of Technology.
The Shanghai rankings are famously selective, so take a look at the rank of the top universities in the Webometrics rankings, which are the most inclusive, ranking more than 12,000 institutions this year.
The position of the top universities from the seven countries is as follows:
University of Babylon, Iraq 2,654
University of Benghazi, Libya 3,638
Kismayo University, Somalia 5,725
University of Khartoum, Sudan 1,972
Yemeni University of Science and Technology 3,681
Tehran University of Medical Science, Iran 478
Damascus Higher Institute of Applied Science and Technology, Syria 3,757.
It looks as though the only country remotely capable of producing innovators, entrepreneurs and scientists is Iran.
Finally, let's look at the scores of students from these countries in the GRE verbal and quantitative tests in 2011-12.
For verbal reasoning, Iran has a score of 141.3, Sudan 140.6, Syria 142.7, Yemen 141, Iraq 139.2, and Libya 137.1. The mean score is 150.8 with a standard deviation of 8.4.
For quantitative reasoning, Iran has a score of 157.5, equal to France, Sudan 148.5, Syria 152.7, Yemen 148.6, Iraq 146.4, Libya 145.5. The mean score is 151.4 with a standard deviation of 8.7.
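For a rough sense of how far these averages sit from the overall means, here is a small illustrative sketch that converts each figure quoted above into standard-deviation units (z-scores). The country figures, means and standard deviations are the ones cited above; the script and its variable names are mine, not part of the ETS data.

```python
# Converts the GRE averages quoted above into z-scores relative to the
# overall test means (verbal: mean 150.8, SD 8.4; quantitative: mean 151.4, SD 8.7).

verbal = {"Iran": 141.3, "Sudan": 140.6, "Syria": 142.7, "Yemen": 141.0,
          "Iraq": 139.2, "Libya": 137.1}
quant = {"Iran": 157.5, "Sudan": 148.5, "Syria": 152.7, "Yemen": 148.6,
         "Iraq": 146.4, "Libya": 145.5}

def z(score, mean, sd):
    # Number of standard deviations above (+) or below (-) the overall mean.
    return (score - mean) / sd

for country in verbal:
    zv = z(verbal[country], 150.8, 8.4)
    zq = z(quant[country], 151.4, 8.7)
    print(f"{country}: verbal {zv:+.1f} SD, quantitative {zq:+.1f} SD")
# e.g. Iran: verbal -1.1 SD, quantitative +0.7 SD
```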
It seems that of the seven countries only Iran is likely to produce any significant numbers of workers capable of contributing to a modern economy.
No doubt there are other reasons why Apple, Microsoft and Twitter should be concerned about Trump's executive order. Perhaps they are worried about Russia, China, Korea or Poland being added to the restricted list. Perhaps they are thinking about farmers whose crops will rot in the fields, ESL teachers with nothing to do or social workers and immigration lawyers rotting at their desks. But if they really do believe that Silicon Valley will suffer irreparable harm from the proposed restrictions then they are surely mistaken.
Sunday, February 05, 2017
Guest post by Bahram Bekhradnia
I have just received this reply from Bahram Bekhradnia, President of the Higher Education Policy Institute, in response to my review of his report on global university rankings.
My main two points which I think are not reflected in your blog – no doubt because I was not sufficiently clear – are
· First, the international rankings – with the exception of U-Multirank, which has other issues – almost exclusively reflect research activity and performance. Citations and publications of course are explicitly concerned with research, and as you say “International faculty are probably recruited more for their research reputation than for anything else. Income from industry (THE) is of course a measure of reported funding for applied research. The QS academic reputation survey is officially about research and THE's academic reputation survey of teaching is about postgraduate supervision.” And I add (see below) that faculty to student ratios reflect research activity and are not an indicator of a focus on education. There is not much argument that indicators of research dominate the rankings.
Yet although they ignore pretty well all other aspects of universities’ activities they claim nevertheless to identify the "best universities". They certainly do not provide information that is useful to undergraduate students, nor even actually to postgraduate students whose interest will be at discipline not institution level. If they were also honest enough to say simply that they identify research performance there would be rather less objection to the international rankings.
That is why it is so damaging for universities, their governing bodies – and even governments – to pay so much attention to improving their universities' performance in the international rankings. Resources – time and money – are limited, and attaching priority to improving research performance can only be right for a very small number of universities.
· Second, the data on which they are based are wholly inadequate. 50% of the QS and 30% of the Times Higher rankings are based on nothing more than surveys of “opinion”, including in the case of QS the opinions of dead respondents. But no less serious is that the data on which the rankings are based – other than the publications and prize-related data – are supplied by universities themselves and unaudited, or are ‘scraped’ from a variety of other sources including universities' websites, and cannot be compared one with the other. Those are the reasons for the Trinity College Dublin and Sultan Qaboos fiascos. One UAE university told me recently they had (mistakenly) submitted information about external income in UAE Dirhams instead of US Dollars – an inflation of 350% that no-one had noticed. Who knows what other errors there may be – the ranking bodies certainly don’t.
In reply to some of the detailed points that you make:
In order to compare institutions you need to be sure that the data relating to each are compiled on a comparable basis, using comparable definitions et cetera. That is why the ranking bodies, rightly, have produced their own data definitions to which they ask institutions to adhere when returning data. The problem of course is that there is no audit of the data that are returned by institutions to ensure that the definitions are adhered to or that the data are accurate. Incidentally, that is why also there is far less objection to national rankings, which can, if there are robust national data collection and audit arrangements, have fewer problems with regard to comparability of data.
But at least there is the attempt with institution-supplied data to ensure that they are on a common basis and comparable. That is not so with data ‘scraped’ from random sources, and that is why I say that data scraping is such a bad practice. It produces data which are not comparable, but which QS nevertheless uses to compare institutions.
You say that THE, at least, omit faculty on research-only contracts when compiling faculty to student ratios. But when I say that FSRs are a measure of research activity I am not referring to research-only faculty. What I am pointing out is that the more research a university does, the more academic faculty it is likely to recruit on teaching and research contracts. These will inflate the faculty to student ratios without necessarily increasing teaching capacity relative to a university that does less research, consequently has fewer faculty, but whose faculty devote more of their time to teaching. And of course QS even includes research-contract faculty in FSR calculations. FSRs are essentially a reflection of research activity.