Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Thursday, August 24, 2017
Comment by Christian Scholz
This comment is by Christian Schulz of the University of Hamburg. He points out that the University of Hamburg's rise in the Shanghai rankings was not the result of highly cited researchers moving from other institutions but of improved research within the university.
If this is something that applies to other German universities, then it could be that Germany has a policy of growing its own researchers rather than importing talent from around the world. It seems to have worked very well for football, so perhaps the obsession of British universities with importing international researchers is not such a good idea.
I just wanted to share with you that we did not acquire two researchers to get on the HCR list and thereby gain a higher rank in the Shanghai Ranking. Those two researchers are Prof. Büchel and Prof. Ravens-Sieberer. Prof. Büchel has been working at our university for over a decade now, and Prof. Ravens-Sieberer has been at our university since 2008.
Please also acknowledge that our place in the Shanghai Ranking was very stable from 2010 to 2015. We were very unhappy when they decided to use only the one-year list of HCR, because in 2015 none of our researchers made it onto the 2015 list, which caused the descent from 2015 to 2016.
Guest Post by Pablo Achard
This post is by Pablo Achard of the University of Geneva. It refers to the Shanghai subject rankings. However, the problem of outliers in subject and regional rankings is one that affects all the well-known rankings and will probably become more important over the next few years.
How a single article is worth 60 places
We can’t repeat it enough: an indicator is bad when a small variation in the input is overly amplified in the output. This is the case when indicators are based on very few events.
I recently came across this issue (again) with Shanghai’s subject ranking of universities. The universities of Geneva and Lausanne (Switzerland) share the same School of Pharmacy, and a huge share of the articles published in this discipline are signed under the name of both institutions. But in the “Pharmacy and pharmaceutical sciences” ranking, one is ranked between the 101st and 150th position while the other is 40th. Where does this difference come from?
Comparing the scores obtained under each category gives a clue:

               Geneva    Lausanne    Weight in the final score
PUB              46        44.3      1
CNCI             63.2      65.6      1
IC               83.6      79.5      0.2
TOP               0        40.8      1
AWARD             0         0        1
Weighted sum    125.9     166.6
So the main difference between the two institutions is the score in “TOP”. Actually, the difference in the weighted sums (40.7) is almost equal to the value of this score (40.8). If Geneva and Lausanne had the same TOP score, they would be 40th and 41st.
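As a minimal sketch (in Python, using the scores and weights from the table above, and assuming the final score is simply this weighted sum), the published numbers can be reproduced and the gap traced almost entirely to TOP:

```python
# Indicator scores and weights as shown in the table above.
weights  = {"PUB": 1.0,  "CNCI": 1.0,  "IC": 0.2,  "TOP": 1.0,  "AWARD": 1.0}
geneva   = {"PUB": 46.0, "CNCI": 63.2, "IC": 83.6, "TOP": 0.0,  "AWARD": 0.0}
lausanne = {"PUB": 44.3, "CNCI": 65.6, "IC": 79.5, "TOP": 40.8, "AWARD": 0.0}

def weighted_sum(scores):
    """Weighted sum of the indicator scores (assumed here to be the final score)."""
    return sum(weights[k] * scores[k] for k in weights)

print(round(weighted_sum(geneva), 1))     # 125.9
print(round(weighted_sum(lausanne), 1))   # 166.6
print(round(weighted_sum(lausanne) - weighted_sum(geneva), 1))  # 40.7

# Giving Lausanne the same TOP score as Geneva (0) yields 125.8,
# essentially tied with Geneva's 125.9.
print(round(weighted_sum({**lausanne, "TOP": 0.0}), 1))  # 125.8
```

With Lausanne’s TOP set to zero, its score drops to 125.8, essentially level with Geneva’s 125.9, consistent with the claim that the two would then sit in 40th and 41st place.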
Surprisingly, a look at other institutions’ scores for that TOP indicator shows only 5 different values: 0, 40.8, 57.7, 70.7 and 100. According to the methodology page of the ranking, “TOP is the number of papers published in Top Journals in an Academic Subject for an institution during the period of 2011-2015. Top Journals are identified through ShanghaiRanking’s Academic Excellence Survey […] The list of the top journals can be found here […] Only papers of ‘Article’ type are considered.”
Looking deeper, there is just one journal in this list for Pharmacy: NATURE REVIEWS DRUG DISCOVERY. As its name indicates, this recognized journal mainly publishes ‘reviews’. A search on Web of Knowledge shows that in the period 2011-2015 only 63 ‘articles’ were published in this journal. That means a small variation in the input is overly amplified.
I searched for several institutions and rapidly found this rule: Harvard published 4 articles during these five years and got a score of 100; MIT published 3 articles and got a score of 70.7; 10 institutions published 2 articles and got a 57.7; and finally about 50 institutions published 1 article and got a 40.8.
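To illustrate how coarse this mapping is, the observed values can simply be tabulated and the step between consecutive article counts computed; the formula behind them is not published, so this is only an empirical lookup, not ShanghaiRanking’s actual calculation:

```python
# Observed TOP scores by number of 'Article'-type papers in the single top
# journal for Pharmacy (2011-2015). The underlying formula is not published;
# these are just the values seen in the ranking.
observed_top_score = {0: 0.0, 1: 40.8, 2: 57.7, 3: 70.7, 4: 100.0}

# The steps are irregular, and the very first article alone is worth
# 40.8 points on an indicator with weight 1 in the final score.
for n in range(1, 5):
    gain = observed_top_score[n] - observed_top_score[n - 1]
    print(f"{n - 1} -> {n} article(s): +{gain:.1f} points")
```

The jump from zero articles to one is worth 40.8 points, the largest single step in the table, and it is exactly the step that separates Geneva from Lausanne.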
I still don’t get why this score is so nonlinear. But Lausanne published one single article in NATURE REVIEWS DRUG DISCOVERY and Geneva none (they published ‘reviews’ and ‘letters’ but no ‘articles’), and that small difference led to a gap of at least 60 places between the two institutions.
This is of course just one example of what happens too often: rankers want to publish sub-rankings and end up with indicators where outliers can’t be absorbed into large distributions. One article, one prize or one co-author in a large and productive collaboration all of a sudden makes a very large difference in final scores and ranks.