Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Saturday, August 11, 2018
Will THE do something about the citations indicator?
International university rankings can be a bit boring sometimes. It is difficult to get excited about the Shanghai rankings, especially at the upper end: Chicago down two places, Peking up one. There was a bit of excitement in 2014 when there was a switch to a new list of highly cited researchers and some universities went up or down a few places, or even a few dozen, but that seems to be over now.
The Times Higher Education (THE) world rankings are always fun to read, especially the citations indicator, which since 2010 has proclaimed a succession of unlikely places as having an outsize influence on the world of research: Alexandria University, Hong Kong Baptist University, Bilkent University, Royal Holloway, University of London, National Research Nuclear University MEPhI in Moscow, Tokyo Metropolitan University, Federico Santa Maria Technical University in Chile, St George's, University of London, Anglia Ruskin University in Cambridge, Babol Noshirvani University of Technology in Iran.
I wonder if the good and the great of the academic world ever feel uncomfortable about going to those prestigious THE summits while places like the above are deemed the equals, or even the superiors, of Chicago or Melbourne or Tsinghua for research impact. Do they even look at the indicator scores?
These remarkable results are not the product of deliberate cheating but of THE's methodology. First, research documents are divided into 300-plus fields, five types of documents, and five years of publication, and then the world average number of citations (the mean) is calculated for each type of publication in each field and in each year. Altogether there are 8,000-plus "cells" with which the citation average of each university in the THE rankings is compared.
This means that if a university manages to get a few publications in a field where citations are typically low it could easily get a very high citations score.
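The arithmetic behind this effect can be sketched in a few lines. The numbers below are invented for illustration; the real calculation runs over hundreds of fields with Scopus data.

```python
# Illustrative sketch of field normalisation (invented numbers, not THE's data).
# Each paper is compared with the world average citation count for its
# "cell": a combination of field, document type, and publication year.
world_average = {
    ("particle physics", "article", 2016): 12.0,   # heavily cited field
    ("classics", "article", 2016): 0.5,            # lightly cited field
}

def normalised_impact(papers):
    """Mean ratio of each paper's citations to its cell's world average."""
    ratios = [cites / world_average[cell] for cell, cites in papers]
    return sum(ratios) / len(ratios)

# Three citations for a classics article count for far more than
# twelve citations for a particle physics article.
print(normalised_impact([(("classics", "article", 2016), 3)]))           # 6.0
print(normalised_impact([(("particle physics", "article", 2016), 12)]))  # 1.0
```

A single lightly cited field can therefore dominate a small university's citation score.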
Added to this is a "regional modification" whereby the final citation impact score is divided by the square root of the score of the country in which the university is located. This results in most universities receiving an increased score, which is very small for those in productive countries and very high for those in countries that generate few citations. The modification is now applied to half of the citations indicator score.
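A rough sketch of how the regional modification works, as described above. The exact formula and country scores are THE's internal business; the 50/50 blend below simply reflects the statement that the modification is now applied to half of the indicator score, and the input values are invented.

```python
import math

def regional_modification(university_score, country_score):
    """Sketch: blend the raw score with the raw score divided by the
    square root of the country's score. Both scores are expressed as
    fractions of the world average (1.0 = world average)."""
    boosted = university_score / math.sqrt(country_score)
    return 0.5 * university_score + 0.5 * boosted

# A world-average university in a low-citation country gets a big lift...
print(round(regional_modification(1.0, 0.25), 2))  # 1.5
# ...while the same university in a high-scoring country gets none.
print(round(regional_modification(1.0, 1.0), 2))   # 1.0
```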
Then we have the problem of those famous kilo-author, mega-cited papers. These are papers with dozens, scores, or hundreds of participating institutions and similar numbers of authors and citations. Until 2015 THE treated every author as though they were the sole author of a paper, including those with thousands of authors. Then in 2015 they stopped counting papers with over a thousand authors, and in 2016 they introduced a modified fractional counting of citations for papers with over a thousand authors: citations were distributed proportionally among the authors, with a minimum allotment of five per cent.
There are problems with all of these procedures. Treating every author as the sole author meant that a few places could get massive citation counts from taking part in one or two projects, such as the CERN experiments or the global burden of disease study. On the other hand, excluding mega-papers is also not helpful, since it omits some of the most significant current research.
The simplest solution would be fractional counting all round: just dividing the number of citations of each paper by the number of contributors or contributing institutions. This is the default option of Leiden Ranking and there seems no compelling reason why THE could not do the same.
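The difference between the two counting schemes is easy to see in a toy example (invented numbers):

```python
def full_credit(citations, n_institutions):
    """Pre-2015 THE approach: every contributing institution
    is credited with all of the paper's citations."""
    return citations

def fractional_credit(citations, n_institutions):
    """Leiden Ranking default: citations are shared equally
    among the contributing institutions."""
    return citations / n_institutions

# One CERN-style paper: 2,000 citations, 200 contributing institutions.
print(full_credit(2000, 200))        # 2000 -> each institution banks the lot
print(fractional_credit(2000, 200))  # 10.0 -> each institution gets its share
```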
There are some other issues that should be dealt with. One is the question of self-citation. This is probably not a widespread issue but it has caused problems on a couple of occasions.
Something else that THE might want to think about is the effect of the rise in the number of authors with multiple affiliations. So far only one university has recruited large numbers of adjunct staff whose main function seems to be listing the university as a secondary affiliation at the top of published papers, but there could be more in the future.
Of course, none of this would matter very much if the citations indicator were given a reasonable weighting of, say, five or ten per cent, but it has more weight than any other indicator -- the next largest is the research reputation survey, with 18%. A single mega-paper, or even a few strategically placed citations in a low-cited field, can have a huge impact on a university's overall score.
There are signs that THE is getting embarrassed at the bizarre effects of this indicator. Last year Phil Baty, THE's ranking editor, spoke about its quirky results.
Recently, Duncan Ross, data director at THE, has written about the possibility of a methodological change. He notes that currently the benchmark world score for each of the 8,000-plus cells is determined by the mean. He speculates about using the median instead. The problem with this is that a majority of papers are never cited, so the median for many of the cells would be zero. So he proposes, based on an analysis of the recent THE Latin American rankings, that the 75th percentile be used.
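The problem with the median, and the appeal of the 75th percentile, can be seen with a toy citation distribution (the counts below are invented, but the skewed shape is typical):

```python
import statistics

# Invented citation counts for one cell: most papers are never cited,
# a few are cited heavily -- the typical skewed distribution.
citations = [0, 0, 0, 0, 0, 0, 1, 2, 5, 40]

mean = statistics.mean(citations)              # dragged upwards by the outlier
median = statistics.median(citations)          # zero, because most papers are uncited
q75 = statistics.quantiles(citations, n=4)[2]  # 75th percentile benchmark

print(mean, median, q75)
```

With a zero median, almost any cited paper would look infinitely better than the benchmark, which is presumably why Ross reaches for a higher percentile instead.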
Ross suggests that this would make the THE rankings more stable, especially the Latin American rankings where the threshold number of articles is quite low.
It would also allow the inclusion of more universities that currently fall below the threshold. This, I suspect, is something that is likely to appeal to the THE management.
It is very good that THE appears willing to think about reforming the citations indicator. But a bit of tweaking will not be enough.
Sunday, September 11, 2016
Waiting for the THE world rankings
The world, having recovered from the shocks of the Shanghai, QS and RUR rankings, now waits for the THE world rankings, especially the research impact indicator measured by field normalised citations.
It might be helpful to show the top 5 universities for this criterion since 2010-11.
2010-11
1. Caltech
2. MIT
3. Princeton
4. Alexandria University
5. UC Santa Cruz
2011-12
1. Princeton
2. MIT
3. Caltech
4. UC Santa Barbara
5. Rice University
2012-13
1. Rice University
2. National Research Nuclear University MEPhI
3. MIT
4. UC Santa Cruz
5. Princeton
2013-14
1. MIT
2. Tokyo Metropolitan University
3. Rice University
4. UC Santa Cruz
5. Caltech
2014-15
1. MIT
2. UC Santa Cruz
3. Tokyo Metropolitan University
4. Rice University
5. Caltech
2015-16
1. St George's, University of London
2. Stanford University
3. UC Santa Cruz
4. Caltech
5. Harvard
Notice that no university has been in the top five for citations in every year.
Last year THE introduced some changes to this indicator, one of which was to exclude papers with more than 1000 authors from the citation count. This, along with a dilution of the regional modification that gave a bonus to universities in low scoring countries, had a devastating effect on some universities in France, Korea, Japan, Morocco, Chile and Turkey.
The citations indicator has always been an embarrassment to THE, throwing up a number of improbable front runners aka previously undiscovered pockets of excellence. Last year they introduced some reforms but not enough. It would be a good idea for THE to get rid of the regional modification altogether, to introduce full scale fractional counting, to reduce the weighting assigned to citations, to exclude self-citations and secondary affiliations and to include more than one measure of research impact and research quality.
Excluding the papers, mainly in particle physics, with 1,000 plus "authors" meant avoiding the bizarre situation where a contributor to a single paper with 2,000 authors and 2,000 citations would get the same credit as 1,000 authors writing a thousand papers each of which had been cited twice.
But this measure also meant that some of the most significant scientific activity of the century would not be counted in the rankings. The best solution would have been fractional counting, distributing the citations among all of the institutions or contributors, and in fact THE did this for their pilot African rankings at the University of Johannesburg.
Now, THE have announced a change for this year's rankings. According to their data chief, Duncan Ross:
" Last year we excluded a small number of papers with more than 1,000 authors. I won’t rehearse the arguments for their exclusion here, but we said at the time that we would try to identify a way to re-include them that would prevent the distorting effect that they had on the overall metric for a few universities.
This year they are included – although they will be treated differently from other papers. Every university with researchers who author a kilo-author paper will receive at least 5 per cent credit for the paper – rising proportionally to the number of authors that the university has.
This is the first time that we have used a proportional measure in our citations score, and we will be monitoring it with interest.
We’re also pleased that this year the calculation of the Times Higher Education World University Rankings has been subject to independent audit by professional services firm PricewaterhouseCoopers (PwC). "
This could have perverse consequences. If an institution has one contributor to a 1,000-author paper with 2,000 citations then that author will bring all 2,000 citations to the university. But if there are 1,001 authors then he or she would bring only 100, the five per cent minimum.
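The cliff at 1,000 authors can be sketched directly from the rule as Ross describes it (proportional credit with a five per cent floor, applied only to papers with more than 1,000 authors; the function below is my reading of that rule, not THE's actual code):

```python
def kilo_author_credit(citations, paper_authors, university_authors):
    """Sketch of the 2016 THE rule as described above: papers with more
    than 1,000 authors are credited proportionally to the university's
    share of authors, with a 5 per cent floor; smaller papers still give
    every contributing institution full credit."""
    if paper_authors > 1000:
        share = max(university_authors / paper_authors, 0.05)
        return citations * share
    return citations

# One author on a 1,000-author paper with 2,000 citations: full credit.
print(kilo_author_credit(2000, 1000, 1))  # 2000
# One more author on the paper, and credit collapses to the 5% floor.
print(kilo_author_credit(2000, 1001, 1))  # 100.0
```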
It is possible that we will see a cluster of papers with 998, 999, 1000 authors as institutions remove their researchers from the author lists or project leaders start capping the number of contributors.
This could be a way of finding out if research intensive universities really do care about the THE rankings.
Similarly, QS now excludes papers with more than ten contributing institutions. If researchers are concerned about the QS rankings they will ensure that the number of institutions does not go above ten. Let's see if we start getting large numbers of papers with ten institutions but none or few with 11, 12, 13 and so on.
I am wondering why THE would bother introducing this relatively small change. Wouldn't it make more sense to introduce a lot of small changes all at once and get the resulting volatility over and done with?
I wonder if this has something to do with the THE world academic summit being held at Berkeley on 26-28 September in cooperation with UC Berkeley. Last year Berkeley fell from 8th to 13th in the THE world rankings. Since it is a contributor to several multi-contributor papers it is possible that the partial re-inclusion of hyper-papers will help the university back into the top ten.
Tuesday, June 25, 2013
What about a Research Influence Ranking?
Keeping up with the current surge of global university rankings is becoming next to impossible. Still there are a few niches that have remained unoccupied. One might be a ranking of universities according to their ability to spread new knowledge around the world. So it might be a good idea to have a Research Influence Ranking based on the citations indicator in the Times Higher Education -- Thomson Reuters World University Rankings.
Thomson Reuters are the world's leading collectors and analysts of citations data, so such an index ought to provide an invaluable data source for governments, corporations and other stakeholders deciding where to place research funding. Data for 400 universities can be found on the THE iPhone/iPad app.
The top place in the world would be jointly held by Rice University in Texas and Moscow State Engineering Physics Institute, closely followed by MIT and the University of California Santa Cruz.
Then there are the first places in various regions and countries. (MEPhI would be first in Europe and Rice in the US and North America.)
Canada
University of Toronto
Latin America
University of the Andes, Colombia
United Kingdom (and Western Europe)
Royal Holloway London
Africa
University of Cape Town
Middle East
Koc University, Turkey
Asia (and Japan)
Tokyo Metropolitan University
ASEAN
King Mongkut's University of Technology, Thailand
Australia and the Pacific
University of Melbourne
On second thoughts, perhaps not such a good idea.
Wednesday, July 19, 2017
Comments on an Article by Brian Leiter
Global university rankings are now nearly a decade and a half old. The Shanghai rankings (Academic Ranking of World Universities or ARWU) began in 2003, followed a year later by Webometrics and the THES-QS rankings which, after an unpleasant divorce, became the Times Higher Education (THE) and the Quacquarelli Symonds (QS) world rankings. Since then the number of rankings with a variety of audiences and methodologies has expanded.

We now have several research-based rankings, University Ranking by Academic Performance (URAP) from Turkey, the National Taiwan University Rankings, Best Global Universities from US News, Leiden Ranking, as well as rankings that include some attempt to assess and compare something other than research, the Round University Rankings from Russia and U-Multirank from the European Union. And, of course, we also have subject rankings, regional rankings, even age group rankings.
It is interesting that some of these rankings have developed beyond the original founders of global rankings. Leiden Ranking is now the gold standard for the analysis of publications and citations. The Russian rankings use the same Web of Science database that THE did until 2014 and include 12 of the 13 indicators used by THE plus another eight, in a more sensible and transparent arrangement. However, both of these receive only a fraction of the attention given to the THE rankings.

The research rankings from Turkey and Taiwan are similar to the Shanghai rankings but without the elderly or long-departed Fields and Nobel award winners and with a more coherent methodology. U-Multirank is almost alone in trying to get at things that might be of interest to prospective undergraduate students.
It is regrettable that an article by Professor Brian Leiter of the University of Chicago in the Chronicle of Higher Education, 'Academic Ethics: To Rank or Not to Rank', ignores such developments and mentions only the original "Big Three", Shanghai, QS and THE. This is perhaps forgivable since the establishment media, including THE and the Chronicle, and leading state and academic bureaucrats have until recently paid very little attention to innovative developments in university ranking. Leiter attacks the QS rankings and proposes that they should be boycotted while trying to improve the THE rankings.
It is a little odd that Leiter should be so caustic, not entirely without justification, about QS while apparently being unaware of similar or greater problems with THE.

He begins by saying that QS stands for "quirky silliness". I would not disagree with that, although in recent years QS has been getting less silly. I have been as sarcastic as anyone about the failings of QS: see here and here for an amusing commentary.
But the suggestion that QS is uniquely bad in contrast to THE is way off the target. There are many issues with the QS methodology, especially with its employer and academic surveys, and it has often announced placings that seem very questionable, such as Nanyang Technological University (NTU) ahead of Princeton and Yale or the University of Buenos Aires in the world top 100, largely as a result of a suspiciously good performance in the survey indicators.

The oddities of the QS rankings are, however, no worse than some of the absurdities that THE has served up in their world and regional rankings. We have had places like Cadi Ayyad University in Marrakesh, Morocco, Middle East Technical University in Turkey, Federico Santa Maria Technical University in Chile, Alexandria University and Veltech University in India rise to ludicrously high places, sometimes just for a year or two, as the result of a few papers or even a single highly cited author.
I am not entirely persuaded that NTU deserves its top 12 placing in the QS rankings. You can see here QS's unconvincing reply to a question that I provided. QS claims that NTU's excellence is shown by its success in attracting foreign faculty, students and collaborators, but when you are in a country where people show their passports to drive to the dentist, being international is no great accomplishment. Even so, it is evidently world class as far as engineering and computer science are concerned and it is not impossible that it could reach an undisputed overall top ten or twenty ranking in the next decade.
While the THE top ten or twenty or even fifty looks quite reasonable, apart from Oxford in first place, there are many anomalies as soon as we start breaking the rankings apart by country or indicator, and THE has pushed some very weird data in recent years. Look at these places supposed to be regional or international centers of across-the-board research excellence as measured by citations: St George's, University of London, Brandeis University, the Free University of Bozen-Bolzano, King Abdulaziz University, the University of Iceland, Veltech University. If QS is silly, what are we to call a ranking where Anglia Ruskin University is supposed to have a greater research impact than Chicago, Cambridge or Tsinghua?
Leiter starts his article by pointing out that the QS academic survey is largely driven by the geographical distribution of its respondents and by the halo effect. This is very probably true, and to that I would add that a lot of the responses to academic surveys of this kind are likely driven by simple self-interest, academics voting for their alma mater or current employer. QS does not allow respondents to vote for the latter but they can vote for the former and also vote for grant providers or collaborators.
He says that "QS does not, however, disclose the geographic distribution of its survey respondents, so the extent of the distorting effect cannot be determined". This is not true of the overall survey. QS does in fact give very detailed figures about the origin of its respondents, and there is good evidence here of probable distorting effects. There are, for example, more responses from Taiwan than from Mainland China, and almost as many from Malaysia as from Russia. QS does not, however, go down to subject level when listing geographic distribution.
He then refers to the case of University College Cork (UCC) asking faculty to solicit friends in other institutions to vote for UCC. This is definitely a bad practice, but it was in violation of QS guidelines and QS have investigated. I do not know what came of the investigation, but it is worth noting that the message would not have been an issue if it had referred to the THE survey.
On balance, I would agree that THE's survey methodology is less dubious than QS's and less likely to be influenced by energetic PR campaigns. It would certainly be a good idea if the weighting of the QS survey were reduced and if there were more rigorous screening and classification of potential respondents.

But I think we also have to bear in mind that QS does prohibit respondents from voting for their own universities and it does average results out over a five-year period (formerly three years).
It is interesting that while THE does not usually combine and average survey results, it did so in the 2016-17 world rankings, combining the 2015 and 2016 survey results. This was, I suspect, probably because of a substantial drop in 2016 in the percentage of respondents from the arts and humanities that would, if unadjusted, have caused a serious problem for UK universities, especially those in the Russell Group.
Leiter then goes on to condemn QS for its dubious business practices. He reports that THE dropped QS because of its dubious practices. That is what THE says, but it is widely rumoured within the rankings industry that THE was also interested in the financial advantages of a direct partnership with Thomson Reuters rather than getting data from QS.
He also refers to QS's hosting a series of "World Class events" where world university leaders pay $950 for "seminar, dinners, coffee breaks" and "learn best practice for branding and marketing your institution through case studies and expert knowledge", and the QS Stars plan, where universities pay to be audited by QS in return for stars that they can use for promotion and advertising. I would add to his criticism that the Stars program has apparently undergone a typical "grade inflation", with the number of five-star universities increasing all the time.
Also, QS offers specific consulting services and it has a large number of clients from around the world, although there are many more from Australia and Indonesia than from Canada and the US. Of the three from the US, one is MIT, which has been number one in the QS world rankings since 2012, a position it probably achieved after a change in the way in which faculty were classified.
It would, however, be misleading to suggest that THE is any better in this respect. Since 2014 it has launched a serious and unapologetic "monetisation of data" program.

There are events such as the forthcoming world "academic summit" where, for 1,199 GBP (standard university) or 2,200 GBP (corporate), delegates can get "exclusive insight into the 2017 Times Higher Education World University Rankings at the official launch and rankings masterclass", plus a "prestigious gala dinner, drinks reception and other networking events". THE also provides a variety of benchmarking and performance analysis services, branding, advertising and reputation management campaigns and a range of silver and gold profiles, including adverts and sponsored supplements. THE's data clients include some illustrious names like the National University of Singapore and Trinity College Dublin plus some less well-known places such as Federico Santa Maria Technical University, Orebro University, King Abdulaziz University, National Research Nuclear University MEPhI in Moscow, and Charles Darwin University.
Among THE's activities are regional events that promise "partnership opportunities for global thought leaders" and where rankings like "the WUR are presented at these events with our award-winning data team on hand to explain them, allowing institutions better understanding of their findings".
At some of these summits the rankings presented are trimmed and tweaked and somehow the hosts emerge in a favourable light. In February 2015, for example, THE held a Middle East and North Africa (MENA) summit that included a "snapshot ranking" that put Texas A and M University Qatar, a branch campus that offers nothing but engineering courses, in first place and Qatar University in fourth. The ranking consisted of precisely one indicator out of the 13 that make up THE's world university rankings, field- and year-normalised citations. United Arab Emirates University (UAEU) was 11th and the American University of Sharjah in the UAE 14th.

The next MENA summit was held in January 2016 in Al Ain in the UAE. There was no snapshot this time, and the methodology for the MENA rankings included the 13 indicators in THE's world rankings. Host country universities were now in fifth (UAEU) and eighth place (American University of Sharjah). Texas A and M Qatar was not ranked and Qatar University fell to sixth place.
Something similar happened in Africa. In 2015, THE went to the University of Johannesburg for a summit that brought together "outstanding global thought leaders from industry, government, higher education and research" and which unveiled THE's Africa ranking based on citations (with the innovation of fractional counting) that put the host university in ninth place and the University of Ghana in twelfth.

In 2016 the show moved on to the University of Ghana, where another ranking was produced based on all 13 world ranking indicators. This time the University of Johannesburg did not take part and the University of Ghana went from 12th place to 7th.
I may have missed something, but so far I do not see any sign of THE Africa or MENA summits planned for 2017. If so, then African and MENA university leaders are to be congratulated for a very healthy scepticism.

To be fair, THE does not seem to have done any methodological tweaking for this year's Asian, Asia Pacific and Latin American rankings.
Leiter concludes that American academics should boycott the QS survey but not THE's and that they should lobby THE to improve its survey practices. That, I suspect, is pretty much a nonstarter. QS has never had much of a presence in the US anyway, and THE is unlikely to change significantly as long as its commercial dominance goes unchallenged and as long as scholars and administrators fail to see through its PR wizardry. It would be better for everybody to start looking beyond the "Big Three" rankings.
Monday, May 29, 2017
Ten Universities with a Surprisingly Large Research Impact
Every so often newspapers produce lists of universities that excel in or are noteworthy for something. Here is a list of ten universities that, according to Times Higher Education (THE), have achieved remarkable success in the world of global research. In a time of austerity when the wells of patronage are running dry, they should be an example to us all: they have achieved a massive global research impact, measured by field-normalised citations, despite limited funding, minimal reputations and few or very few publications. The source is the THE World and Asian rankings citations indicator.
1. First on the list is Alexandria University in Egypt, 4th in the world and a near perfect score for research impact in 2010-11.
2. In the same year Hong Kong Baptist University was tenth for research impact, ahead of the University of Chicago and the University of Hong Kong.
3. In 2011-12 Royal Holloway, University of London, was in 12th place, ahead of any other British or European institution.
4. The National Research Nuclear University MEPhI, in Moscow, a specialist institution, was top of the table for citations in 2012-13.
5. In 2013-14 and 2014-15 Tokyo Metropolitan University had a perfect score of 100 for citations, a distinction shared only with MIT.
6. In 2014-15 Federico Santa Maria Technical University was sixth in the world for research impact and first in Latin America with a near perfect score of 99.7.
7. In the same year Bogazici University in Turkey reached the top twenty for research impact.
8. St George's, University of London, was the top institution in the world for research impact in 2016-17.
9. In that year Anglia Ruskin University, a former art school, was tenth for this metric, equal to Oxford and well ahead of the other university in Cambridge.
10. Last year's THE Asian rankings saw Vel Tech University in Chennai achieve the highest impact of any Asian university.
Saturday, December 21, 2013
Twenty Ways to Rise in the Rankings Quickly and Fairly Painlessly
Times Higher Education has just republished an article by Amanda Goodall, 'Top 20 ways to improve your world university ranking'. Much of her advice is very sensible -- appointing university leaders with a strong research record, for example -- but in most cases the road from her suggestions to a perceptible improvement in the rankings is likely to be winding and very long. It is unlikely that any of her proposals would have much effect on the rankings in less than a decade or even two.
So here are 20 realistic proposals for a university wishing
to join the rankings game.
Before starting, any advice about how a university can rise in the rankings should be based on these principles.

· Rankings are proliferating and no doubt there will be more in the future. There is something for almost anybody if you look carefully enough.

· The indicators and methodology of the better known rankings are very different. Something that works with one may not work with another. It might even have a negative effect.

· There is often a price to pay for getting ahead in the rankings. Everybody should consider whether it is worth it. Also, while rising from 300th place to 250th is quite easy, going from 30th to 25th is another matter.

· Don't forget the number on the bottom. It might be easier to reduce the number of academic staff than to increase the number of citations or publications.

· Rankings are at best an approximation to what universities do. Nobody should get too excited about them.

The top 20 ways in which universities can quickly improve their positions in one or more of the international university rankings are:
1. Get rid of students

Over the years many universities acquire a collection of branch campuses, general studies programmes, night schools, pre-degree programmes and so on. Set them free to become independent universities or colleges. Almost always, these places have relatively more students and relatively fewer faculty than the main campus. The university will therefore do better in the Quacquarelli Symonds (QS) and Times Higher Education (THE) faculty student ratio indicators. Also, staff in the spun-off branches and schools generally produce less research than those at the main campus, so you will get a boost in the productivity per capita indicator in the Shanghai ARWU rankings.
2. Kick out the old and bring in the young
Get rid of ageing professors, especially if they are unproductive and expensive, and hire lots of indentured servants, that is, adjunct and temporary teachers and researchers. Again, this will improve the university’s performance on the THE
and QS faculty student ratio indicators. They will not count as senior faculty so
this will be helpful for ARWU.
3. Hire research assistants
Recruiting slave labour, that is, cheap or unpaid research assistants (unemployed or unemployable graduate interns?) will boost the score
for faculty student ratio in the QS rankings, since QS counts research-only
staff for their faculty student indicator. It will not, however, work for the THE
rankings. Remember that for QS more faculty are good for the faculty student ratio but bad for citations per faculty, so you have to analyse the potential trade-off carefully.
4. Think about an exit option
If an emerging university wants to be included in the
rankings it might be better to focus on just one of them. Panjab University is doing very well in the
THE rankings but does not appear in the QS rankings. But remember that if you
apply to be ranked by THE and you do not like your placing then it is always
possible to opt out by not submitting data next year. But QS has a Hotel
California policy: once in, you can check out but you can never leave. It does
not matter how much you complain about the unique qualities of your institution and how they are neglected by the rankers; QS will go on ranking you whether you like it or not.
5. Get a medical school
If you do not have a
medical school or a research and/or teaching hospital then get one from
somewhere. Merge with an existing one or start your own. If you have one, get
another one. Medical research produces a disproportionate number of papers and
citations which is good for the QS citations per faculty indicator and the ARWU
publications indicator. Remember this strategy may not help so much with THE, who use field normalisation. Those citations of medical research will help there only if they are above the world average for field and year.
Update August 2016: QS now have a moderate form of field normalisation so the advantage of a medical school is reduced but the Shanghai rankings are still biased towards medical research.
6. But if you are a
medical school, diversify
QS and THE supposedly do not include single subject
institutions in their general rankings, although from time to time one will, like the University of California at
San Francisco, Aston Business School or (National Research Nuclear University) Moscow Engineering Physics Institute (MEPhI),
slip through. If you are an independent
medical or single subject institution consider adding one or two more subjects
then QS and THE will count you although you will probably start sliding down the ARWU table.
Update August 2016: the QS BRICS rankings include some Russian institutions that look like they focus on one field, and National Research Nuclear University MEPhI is back in the THE world rankings.
7. Amalgamate
The Shanghai rankings count the total number of publications
in the SCI and SSCI, the total number of highly cited researchers and the total
number of papers without regard for the number of researchers. THE and QS count
the number of votes in their surveys without considering the number of alumni.
What about a new mega university formed by merging LSE, University College London and Imperial College? Or a très grande école from all those little grandes écoles around Paris?
Update August 2016: This is pretty much what the University of Paris-Saclay is doing.
8. Consider the
weighting of the rankings
THE gives a 30 % weighting to citations and 2.5% to income
from industry. QS gives 40 % to its academic survey and 5 % to international
faculty. So think about where you are going to spend your money.
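To see what those weightings mean in practice, here is a minimal sketch; only the percentages come from the text above, and the indicator gains are invented:

```python
# Hypothetical composite-score arithmetic. The weights are the THE
# figures quoted above; the indicator gains (0-100 scale) are made up.

the_weights = {"citations": 0.30, "industry_income": 0.025}

def composite_gain(score_gains, weights):
    # The overall score moves by each indicator gain multiplied by
    # that indicator's weight.
    return sum(score_gains[k] * w for k, w in weights.items())

# A ten-point gain on THE's citations indicator moves the overall
# score by about 3 points...
print(composite_gain({"citations": 10, "industry_income": 0}, the_weights))
# ...while the same gain on industry income moves it by only 0.25.
print(composite_gain({"citations": 0, "industry_income": 10}, the_weights))
```

Money spent where the weighting is heaviest buys far more ranking movement.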
9. The wisdom of crowds
Focus on research projects in those fields that have huge multi-“author” publications: particle physics, astronomy and medicine, for example.
Such publications often have very large numbers of citations. Even if
your researchers make a one in two thousandth contribution Thomson Reuters, THE’s
data collector, will give them the same credit as they would get if they were
the only authors. This will not work for the
Leiden Ranking which uses fractionalised counting of citations. Note that this strategy works best when combined with number
10.
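The difference between whole counting and Leiden's fractionalised counting can be sketched like this; the paper and all the numbers are hypothetical:

```python
# Hypothetical example: a 2,000-"author" particle physics paper with
# 500 citations, where the university contributed just one author.

def whole_count(citations, local_authors, total_authors):
    # Whole counting: full credit as long as any author is local.
    return citations if local_authors > 0 else 0

def fractional_count(citations, local_authors, total_authors):
    # Fractional counting: credit in proportion to the share of authors.
    return citations * local_authors / total_authors

print(whole_count(500, 1, 2000))       # the full 500 citations
print(fractional_count(500, 1, 2000))  # a quarter of one citation
```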
Update August 2016: THE methodological changes in 2015 mean that this does not work any more. Look at what happened to Middle East Technical University. But it is still worth looking out for projects with dozens or scores of contributors.
10. Do not produce too much
You need to produce 200 papers a year to be included in the
THE rankings. But producing more papers than this might be counterproductive. If
your researchers are producing five thousand papers a year then those five
hundred citations from a five hundred “author” report on the latest discovery
in particle physics will not have much impact. But if you are publishing three
hundred papers a year those citations will make a very big difference. This is
why Dr El Naschie’s frequently
cited papers in Chaos, Solitons and
Fractals were a big boost for Alexandria University but not for
Cambridge, Surrey, Cornell and Frankfurt universities with whom he also claimed
affiliation. However, Leiden will not rank universities until they reach
500 papers a year.
Update August 2016: See number 9.
11. Moneyball Strategy
In his book Moneyball, Michael Lewis recounted the ascent of the Oakland A’s baseball team through a strategy of buying undervalued players.
The idea was to find players who did things that led to their teams winning
even if they did not match the stereotype of a talented player.
This strategy was applied by George
Mason University in Virginia who created a top basketball team by
recruiting players who were overlooked by scouts because they were too small or
too fat and a top economics department by recruiting advocates of a market
economy at a time when such an idea was unfashionable.
Universities could recruit researchers who are prolific and
competent but are unpromotable or unemployable because they are in the wrong
group or fail to subscribe enthusiastically to current academic orthodoxies.
Maybe start with Mark
Regnerus and Jason
Richwine.
Update August 2016: See the story of Tim Groseclose's move from UCLA to George Mason
12. Expand doctoral
programmes
One indicator in the THE world rankings is the ratio of
doctoral to bachelor degree students.
Panjab University recently announced that they will introduce integrated master’s and doctoral programmes. This could be a smart move if it means students no longer go into master’s programmes but instead into something that can be counted as a doctoral degree programme.
13. The importance of names
Make sure that your researchers know which university they
are affiliated to and that they know its correct name. Make sure that branch
campuses, research institutes and other autonomous or quasi- autonomous groups
incorporate the university name in their publications. Keep an eye on Scopus
and ISI and make sure they know what you are called. Be especially careful if
you are an American state university.
14. Evaluate
staff according to criteria relevant to the rankings
If staff are to be appointed and promoted according to their
collegiality, the enthusiasm with which
they take part in ISO exercises, community
service, ability to make the faculty a pleasant place for everybody or commitment to diversity, then you will get collegial, enthusiastic, etc., faculty. But those are things that the rankers do not – for once with good reason – attempt to measure.
While you are about it, get rid of interviews for staff and students. Their predictive validity ranges from zero to low.
15. Collaborate
The more authors a paper has the more likely it is to be
cited, even if it is only self-citation.
Also, the more collaborators you have the greater the chances of a good
score in the reputation surveys. And do not forget that the percentage of collaborators who are international is also an indicator in the THE rankings.
16. Rebrand
It would be good to have names that are as distinctive and
memorable as possible. Consider a name change. Do you really think that the
average scientist filling out the QS or the THE reputation surveys is going to remember which of the sixteen (?) Indian Institutes of Technology is especially good in engineering?
Update August 2016: But not too memorable. I doubt that Lovely Professional University will get the sort of public interest it is hoping for.
17. Be proactive
Rankings are changing all the time so think about indicators
that might be introduced in the near future. It would seem quite easy, for
example, for rankers to collect data about patent applications.
Update August 2016: Make sure everyone keeps their Google Scholar Citations Profiles up to date.
18. Support your
local independence movement
It has been known for a long time that increasing the number
of international students and faculty is good for both the THE and QS rankings.
But there are drawbacks to just importing students. If it is difficult to move
students across borders why not create new borders?
If Scotland votes for independence in next year’s referendum
its scores for international students and international faculty in the QS and
THE rankings would go up since English and Welsh students and staff would be
counted as international.
Update August 2016: Scotland didn't but there may be another chance.
19. Accept that some
things will never work
Realise that there are some things that are quite pointless
from a rankings perspective. Or any other for that matter. Do not bother telling staff and students to
click away at the website to get into Webometrics. Believe it or not, there are precautions against that sort of thing. Do not have motivational weekends. Do not have quality initiatives unless they
get rid of the cats.
Update August 2016: That should read do not do anything "motivational". The only thing they motivate is the departure of people with other options.
20. Get Thee to an Island
Leiden Ranking has a little-known ranking that measures the
distance between collaborators. At the moment the first place goes to the
Australian National University. Move to Easter Island or the Falklands and you
will be top for something.
Thursday, April 14, 2016
Are there any more rankings left?
There seems to be an unending stream of new rankings. So far we have had from the big three or four: subject rankings, field rankings, European, Asian, African, Latin American, Middle East and North Africa rankings, BRICS and emerging economies rankings, reputation rankings, young universities, old universities, most international universities rankings, and research income from industry rankings.
From outside the charmed triangle or square we have had random rankings, length of name rankings, green rankings, Twitter and LinkedIn rankings and rich universities rankings, and of course in the USA a mixed bag of best universities for squirrels, gay-friendly universities, top party schools and so on. I am a keen follower of the latter: when the US Air Force Academy gets in the top ten I shall pack up and move to Antarctica.
So are there any more international university rankings in the pipeline?
A few suggestions. Commonwealth universities, OIC universities, cold universities, high universities (altitude that is), poor universities, fertile universities (measured by branch campuses).
One that would be fun to see would be a Research Impact ranking based on those universities that have achieved a top placing in the THE year- and field-normalised, regionally modified, standardised citations ranking.
Some notable inclusions would be St. George's University of London, Rice University, Tokyo Metropolitan University, Federico Santa Maria Technical University, Florida Institute of Technology, National Research Nuclear University MEPhI and Alexandria University.
Friday, November 13, 2015
Are global rankings losing their credibility? (from WONK HE)
Originally published in WONK HE 27/10/2015
Are global rankings losing their credibility?
Richard is an academic and expert on university rankings. He writes
in depth on rankings at his blog: University
Rankings Watch.
The international university ranking scene has become increasingly
complex, confusing and controversial. It also seems that the big name brands
are having problems balancing popularity with reliability and validity. All
this is apparent from the events of the last two months which have seen the
publication of several major rankings.
The first phase of the 2015 global ranking season ended with the
publication of the US News’s (USN) Best Global Universities. We have already seen the 2015 editions of the big
three brand names, the Academic Ranking
of World Universities (ARWU) produced by the Centre for
World-Class Universities at Shanghai Jiao Tong University, the Quacquarelli
Symonds (QS) World University
Rankings and the Times Higher Education (THE) World University
Rankings. Now a series of spin-offs has begun.
In addition, a Russian organisation, Round University Ranking (RUR), has
produced another set of league tables. Apart from a news item on
the website of the International Ranking Expert Group these rankings have
received almost no attention outside Russia, Eastern Europe and the CIS. This
is very unfortunate since they do almost everything that the other rankings do
and contain information that the others do not.
One sign of the growing complexity of the ranking scene is that USN, QS,
ARWU and THE are producing a variety of by-products including
rankings of new universities, subject rankings, best cities for students,
reputation rankings, regional rankings with no doubt more to come. They are
also assessing more universities than ever before. THE used to take pride in
ranking only a small elite group of world universities. Now they are talking
about being open and inclusive and have ranked 800 universities this year, as
did QS, while USN has expanded from 500 to 750 universities. Only the Shanghai rankers
have remained content with a mere 500 universities in their general rankings.
Academic Ranking of World Universities (ARWU)
All three of the brand name rankings have faced issues of credibility.
The Shanghai ARWU has had a problem with the massive recruitment of adjunct
faculty by King Abdulaziz University (KAU) in Jeddah. This was initially aimed
at the highly cited researchers indicator in the ARWU, which simply counts the
number of researchers affiliated to universities no matter whether their affiliation
has been for an academic lifetime or had begun the day before ARWU did the
counting. The Shanghai rankers deftly dealt with this issue by simply not
counting secondary affiliations in the new lists of highly cited researchers
supplied by Thomson Reuters in 2014.
That, however, did not resolve the problem entirely. Those researchers
have not stopped putting KAU as a secondary affiliation and even if they no
longer affected the highly cited researchers indicator they could still help a
lot with publications and papers in Nature and Science,
both of which are counted in the ARWU. These part-timers – and some may not
even be that – have already ensured that KAU, according to ARWU, is the top
university in the world for publications in mathematics.
The issue of secondary affiliation is one that is likely to become a
serious headache for rankers, academic publishers and databases in the next few
years. Already, undergraduate teaching in American universities is dominated by
a huge reserve army of adjuncts. It is not impossible that in the near future
some universities may find it very easy to offer minimal part-time contracts to
talented researchers in return for listing as an affiliation and then see a
dramatic improvement in ranking performance.
ARWU’s problem with the highly cited researchers coincided with Thomson
Reuters producing a new list and announcing that the old one would no longer be
updated. Last year, Shanghai combined the old and new lists and this produced
substantial changes for some universities. This year they continued with the
two lists and there was relatively little movement in this indicator or in the
overall rankings. But next year they will drop the old list altogether and just
use the new one and there will be further volatility. ARWU have, however, listed the
number of highly cited researchers in the old and new lists so
most universities should be aware of what is coming.
Quacquarelli Symonds (QS) World University Rankings
The Quacquarelli Symonds (QS) World University Rankings have been
regarded with disdain by many British and American academics although they do
garner some respect in Asia and Latin America. Much of the criticism has
been directed at the academic reputation survey which is complex, opaque and,
judging from QS’s regular anti-gaming measures, susceptible to influence from
universities. There have also been complaints about the staff student ratio
indicator being a poor proxy for teaching quality and the bias of the citations
per faculty indicator towards medicine and against engineering, the social
sciences and the arts and humanities.
QS have decided to reform their
citations indicator by treating the five large subject groups
as contributing equally to the indicator score. In addition, QS omitted papers,
most of them in physics, with a very large number of listed authors and
averaged responses to the surveys over a period of five years in an attempt to
make the rankings less volatile.
The result of all this was that some universities rose and others fell.
Imperial College London went from 2nd to 8th while the London School of Economics rose from 71st to 35th. In Italy, the Polytechnics of Milan
and Turin got a big boost while venerable universities suffered dramatic
relegation. Two Indian institutions moved into the two hundred, some Irish
universities such as Trinity College Dublin, University College Dublin and
University College Cork went down and some such as National University of
Ireland Galway and the University of Limerick went up.
There has always been a considerable amount of noise in these rankings
resulting in part from small fluctuations in the employer and academic surveys.
In the latest rankings these combined with methodological changes to produce
some interesting fluctuations. Overall the general pattern was that
universities that emphasise the social sciences, the humanities and engineering
have improved at the expense of those that are strong in physics and medicine.
Perhaps the most remarkable of this year’s changes was the rise of two
Singaporean universities, the National University of Singapore (NUS) and
Nanyang Technological University (NTU), to 12th and 13th place respectively, a change
that has met with some scepticism even in Singapore. They are now above Yale,
EPF Lausanne and King’s College London. While the changes to the citations
component were significant, another important reason for the rise of these two
universities was their continuing remarkable performance in the academic and
employer surveys. NUS is in the top ten in the world for academic reputation
and employer reputation with a perfect score of 100, presumably rounded up, in
each. NTU is 52nd for the academic survey and 39th for employer with scores in the nineties for both.
Introducing a moderate degree of field normalisation was probably a
smart move. QS were able to reduce the distortion resulting from the database’s
bias to medical research without risking the multiplication of strange results
that have plagued the THE citations indicator. They have not,
however, attempted to reform the reputation surveys which continue to have a
combined 50% weighting and until they do so these rankings are unlikely to
achieve full recognition from the international academic community.
Times Higher Education (THE)
World University Rankings
The latest THE world rankings were published on
September 30th and like QS, THE have done some tweaking of
their methodology. They had broken with Thomson Reuters at the end of
2014 and started using data from Scopus, while doing the analysis and
processing in-house. They were able to analyse many more papers and citations
and conduct a more representative survey of research and postgraduate
supervision. In addition they omitted papers with very large numbers of authors and reduced the impact of the “regional modification”.
Consequently there was a large dose of volatility. The results were so
different from those of 2014 that they seemed to reflect an entirely new
system. THE did, to their credit, do the decent thing and
state that direct comparisons should not be made to previous years. That,
however, did not stop scores of universities and countries around the world
from announcing their success. Those that had suffered have for the most part
kept quiet.
There were some remarkable changes. At the very top, Oxford and
Cambridge surged ahead of Harvard which fell to sixth place. University College
Dublin, in contrast to the QS rankings, rose as did Twente and Moscow State,
the Karolinska Institute and ETH Zurich.
On the other hand, many universities in France, Korea, Japan and Turkey
suffered dramatic falls. Some of those universities had been participants in
the CERN projects and so had benefitted in 2014 from the huge number of
citations derived from their papers. Some were small and produced few papers so
those citations were divided by a small number of papers. Some were located in
countries that performed poorly and so got help from a “regional modification”
(the citation impact score of the university is divided by the square root of
the average citation impact score of the whole country). Such places suffered
badly from this year’s changes.
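The parenthetical formula can be sketched numerically; the impact figures here are invented for illustration:

```python
import math

def regionally_modified(university_impact, country_avg_impact):
    # THE's regional modification as described above: divide the
    # university's citation impact score by the square root of the
    # country's average citation impact score.
    return university_impact / math.sqrt(country_avg_impact)

# A university with impact 1.2 in a country averaging 0.64 is boosted
# to about 1.5, while the same university in a country averaging 1.44
# would be cut to 1.0.
print(regionally_modified(1.2, 0.64))
print(regionally_modified(1.2, 1.44))
```

Shrinking or removing that adjustment therefore hits hardest the universities in low-scoring countries, which is exactly the pattern of falls described above.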
It is a relief that THE have finally done something
about the citations indicator and it would be excellent if they continued with
further reforms such as fractional counting, reducing the indicator’s overall
weighting, not counting self-citations and secondary affiliations and getting
rid of the regional modification altogether.
Unfortunately, if the current round of reforms represents an improvement, and on balance it probably does, then the very different results of 2014 and before call into question THE’s repeated claims to be trusted, robust
and sophisticated. If the University of Twente deserves to be in the top 150
this year then the 2014 rankings which had them outside the top 200 could not
possibly be valid. If the Korean Advanced Institute of Science and Technology
(KAIST) fell 66 places then either the 2015 rankings or those of 2014 were
inaccurate, or they both were. Unless there is some sort of major restructuring
such as an amalgamation of specialist schools or the shedding of inconvenient
junior colleges or branch campuses, large organisations like universities
simply do not and cannot change that much over the course of 12 months or less.
It would have been more honest, although probably not commercially
feasible, for THE to declare that they were starting with a
completely new set of rankings and to renounce the 2009-14 rankings in the way
that they had disowned the rankings produced in cooperation with QS between
2004 and 2008. THE seem to be trying to trade on the basis of
their trusted methodology while selling results suggesting that that
methodology is far from trustworthy. They are of course doing just what a
business has to do. But that is no reason why university administrators and
academic experts should be so tolerant of such a dubious product.
These rankings also contain quite a few small or specialised
institutions that would appear to be on the borderline of a reasonable
definition of an “independent university with a broad range of subjects”:
Scuola Normale Superiore di Pisa and Scuola Superiore Sant’Anna, both part of
the University of Pisa system, Charité-Universitätsmedizin Berlin, an affiliate
of two universities, St George’s, University of London, a medical school,
Copenhagen Business School, Rush University, the academic branch of a private
hospital in Chicago, the Royal College of Surgeons in Ireland, and the National
Research Nuclear University (MEPhI) in Moscow, specialising in physics. Even if THE have not been too loose about who is included, the high scores achieved by such narrowly focussed institutions call the validity of the rankings into question.
Round University Rankings
In general the THE rankings have received a broad and
respectful response from the international media and university
managers, and criticism has largely been confined to outsiders and
specialists. This is in marked contrast to the rankings released by a
Russian organisation early in September. These are based entirely on data
supplied by Thomson Reuters, THE’s data provider and analyst
until last year. They contain a total of 20 indicators, including 12 out of the
13 in the THE rankings. Unlike THE, RUR do not bundle
indicators together in groups so it is possible to tell exactly why
universities are performing well or badly.
The RUR rankings are not elegantly presented but the content is more
transparent than THE, more comprehensive than QS, and apparently
less volatile than either. It is a strong indictment of the international
higher education establishment that these rankings are ignored while THE’s are
followed so avidly.
Best Global Universities
The second edition of the US News’s Best Global
Universities was published at the beginning of October. The US
News is best known for the ranking of American colleges and
universities and it has been cautious about venturing into the global arena.
These rankings are fairly similar to the Shanghai ARWU, containing only
research indicators and making no pretence to measure teaching or graduate
quality. The methodology avoids some elementary mistakes. It does not give too
much weight to any one indicator, with none getting more than 12.5%, and
measures citations in three different ways. For eight indicators log
manipulation was done before the calculation of z-scores to eliminate outliers
and statistical anomalies.
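A minimal sketch of that log-then-z-score procedure (the exact USN transformation is not published, so details such as the +1 offset are assumptions):

```python
import math

def log_z_scores(values):
    # Log-transform first to damp extreme outliers...
    logged = [math.log(v + 1) for v in values]  # +1 avoids log(0); an assumption
    # ...then standardise to z-scores (mean 0, standard deviation 1).
    mean = sum(logged) / len(logged)
    sd = math.sqrt(sum((x - mean) ** 2 for x in logged) / len(logged))
    return [(x - mean) / sd for x in logged]

# Without the log step a single outlier (100000) would dwarf the rest;
# after it, the standardised scores are far more evenly spread.
print(log_z_scores([10, 100, 1000, 100000]))
```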
This year US News went a little way towards reducing
the rankers’ obsession with citations by including conferences and books in the
list of criteria.
Since they do not include any non-research indicators these rankings are
essentially competing with the Shanghai ARWU and it is possible that they may
eventually become the first choice for internationally mobile graduate
students.
But at the moment it seems that the traditional media and higher
education establishment have lost none of their fascination for the snakes and
ladders game of THE and QS.