Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Monday, October 29, 2018
Is THE going to reform its methodology?
An article by Duncan Ross in Times Higher Education (THE) suggests that the World University Rankings are due for repair and maintenance. He notes that these rankings were originally aimed at a select group of research-orientated, world-class universities, but THE is now looking at a much larger group that is likely to be less internationally orientated, less research-based and more concerned with teaching.
He says that it is unlikely that there will be major changes in the methodology for the 2019-20 rankings next year but after that there may be significant adjustment.
There is a chance that the industry income indicator, income from industry and commerce divided by the number of faculty, will be changed. This is an indirect attempt to capture innovation and is unreliable since it is based entirely on data submitted by institutions. Alex Usher of Higher Education Strategy Associates has pointed out some problems with this indicator.
Ross seems most concerned, however, with the citations indicator which at present is normalised by field, of which there are over 300, type of publication and year of publication. Universities are rated not according to the number of citations they receive but by comparison with the world average of citations to documents of a specific type in a specific field in a specific year. There are potentially over 8,000 boxes into which any single citation could be dropped for comparison.
Apart from anything else, this has resulted in a serious reduction in transparency. Checking the scores for Highly Cited Researchers or Nobel laureates and Fields medallists in the Shanghai rankings can be done in a few minutes. Try comparing thousands of world averages with the citation scores of a university.
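To make the objection concrete, here is a minimal sketch of what field normalisation involves. It is an illustration of the general approach, not THE's actual procedure, and the field averages, fields and paper counts are invented.

```python
# Illustrative sketch of field-normalised citation impact (not THE's actual code).
# Each paper's citations are divided by the world average for documents of the same
# field, type and year; a university's score is the average of these ratios.

world_average = {
    # (field, document type, year) -> average citations per document (invented numbers)
    ("oncology", "article", 2016): 14.2,
    ("philosophy", "book chapter", 2016): 1.1,
}

papers = [
    {"field": "oncology", "doc_type": "article", "year": 2016, "citations": 28},
    {"field": "philosophy", "doc_type": "book chapter", "year": 2016, "citations": 2},
]

def normalised_impact(papers, averages):
    ratios = [p["citations"] / averages[(p["field"], p["doc_type"], p["year"])]
              for p in papers]
    return sum(ratios) / len(ratios)

print(round(normalised_impact(papers, world_average), 2))  # about 1.9 times the world average
```

Checking a published score means rebuilding something like that table of world averages for every field, document type and year, which is precisely the transparency problem.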
This methodology has produced a series of bizarre results, noted several times in this blog. I hope I will be forgiven for yet again listing some of the research impact superstars that THE has identified over the last few years: Alexandria University, Moscow Nuclear Research University MEPhI, Anglia Ruskin University, Brighton and Sussex Medical School, St George's University of London, Tokyo Metropolitan University, Federico Santa Maria Technical University, Florida Institute of Technology, Babol Noshirvani University of Technology, Oregon Health and Science University, Jordan University of Science and Technology, Vita-Salute San Raffaele University.
The problems of this indicator go further than just a collection of quirky anomalies. It now accords a big privilege to medical research as it once did to fundamental physics research. It offers a quick route to ranking glory by recruiting highly cited researchers in strategic fields and introduces a significant element of instability into the rankings.
So here are some suggestions for THE should it actually get round to revamping the citations indicator.
1. The number of universities around the world that do a modest amount of research of any kind is relatively small, maybe five or six thousand. The number that can reasonably claim to have a significant global impact is much smaller, perhaps two or three hundred. Normalised citations are perhaps a reasonable way of distinguishing among the latter, but pointless or counterproductive when assessing the former. The current THE methodology might be able to tell whether a definitive literary biography by a Yale scholar has the same impact in its field as cutting edge research in particle physics at MIT but it is of little use in assessing the relative research output of mid-level universities in South Asia or Latin America.
THE should therefore consider reducing the weighting of citations to the same as research output or lower.
2. A major cause of problems with the citations indicator is the failure to introduce complete fractional counting, that is, distributing credit for citations proportionately among authors or institutions. At the moment THE counts every author of a paper with fewer than a thousand authors as though each of them were the sole author of the paper. As a result, medical schools that produce papers with hundreds of authors now have a privileged position in the THE rankings, something that the use of normalisation was supposed to prevent.
THE has introduced a moderate form of fractional counting for papers with over a thousand authors but evidently this is not enough.
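The difference between the two counting methods is easy to illustrate. The sketch below uses invented numbers and institution names; it is not meant to reproduce THE's weighting scheme.

```python
# Full counting vs fractional counting of a single highly collaborative paper.
# Under full counting every listed institution gets all the citations; under
# fractional counting the credit is shared out among the contributors.

citations = 300
institutions = ["Medical School A"] + [f"Partner {i}" for i in range(1, 100)]  # 100 institutions

full_credit = {inst: citations for inst in institutions}                             # 300 each
fractional_credit = {inst: citations / len(institutions) for inst in institutions}   # 3.0 each

print(full_credit["Medical School A"], fractional_credit["Medical School A"])  # 300 vs 3.0
```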
It seems that some rankers do not like fractional counting because it might discourage collaboration. I would not dispute that collaboration might be a good thing, although it is often favoured by institutions that cannot do very well by themselves, but this is not a sufficient reason to allow distortions like those noted above to flourish.
3. THE have a country bonus or regional modification which divides a university's citation impact score by the square root of the score of the country in which the university is located. This was supposed to compensate for the lack of funding and networks that afflicts some countries, which apparently does not affect their reputation scores or publication output. The effect of this bonus is to give some universities a boost derived not from their excellence but from the mediocrity or worse of their compatriots. THE reduced the coverage of this bonus to fifty percent of the indicator in 2015. It might well be time to get rid of it altogether.
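As I understand THE's published description, the adjustment works roughly as follows; the scores below are invented and the 50 per cent blend reflects the change made in 2015.

```python
# Sketch of the 'regional modification' or country bonus (illustrative values only).
import math

university_score = 0.60   # hypothetical normalised citation impact (world average = 1.0)
country_score = 0.36      # hypothetical citation impact score of the university's country

adjusted = university_score / math.sqrt(country_score)   # 0.60 / 0.60 = 1.00
blended = 0.5 * university_score + 0.5 * adjusted         # 0.80 once the bonus covers only half
print(adjusted, blended)
```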
4. Although QS stopped counting self-citations in 2011, THE continue to do so. They have said that overall they make little difference. Perhaps, but as the rankings expand to include more and more universities it will become more likely that a self-citer or mutual-citer will propel an undistinguished school up the charts. There could be more cases like Alexandria University or Veltech University.
5. THE needs to think about what they are using citations to measure. Are they trying to assess research quality, in which case they should use citations per paper? Or are they trying to estimate overall research impact, in which case the appropriate metric would be total citations?
6. Normalisation by field and document type might be helpful for making fine distinctions among elite research universities, but lower down it creates or contributes to serious problems when a single document or an unusually productive author can cause massive distortions. Three hundred plus fields may be too many, and THE should think about reducing the number.
7. There has been a proliferation in recent years in the number of secondary affiliations. No doubt most of those holding them are making a genuine contribution to the life of both or all of the universities with which they are affiliated. There is, however, a possibility of serious abuse if the practice continues. It would be greatly to THE's credit if they could find some way of omitting or reducing the weighting of secondary affiliations.
8. THE are talking about different models of excellence. Perhaps they could look at the Asiaweek rankings which had a separate table for technological universities or Maclean's with its separate rankings for doctoral/medical universities and primarily undergraduate schools. Different weightings could be given to citations for each of these categories.
Thursday, October 18, 2018
How many indicators do university rankings need?
The number of indicators used in international university rankings varies a lot. At one extreme we have the Russian Round University Rankings (RUR), which have 20 indicators. At the other, Nature Index and Reuters Top 100 Innovative Universities have just one.
In general, the more information provided by rankings the more helpful they are. If, however, the indicators produce very similar results then their value will be limited. The research and postgraduate teaching surveys in the THE world rankings and the RUR correlate so highly that they are in effect measuring the same thing.
There is probably an optimum number of indicators for a ranking, perhaps higher for general than for research-only rankings, above which no further information is provided.
A paper by Guleda Dogan of Hacettepe University, Ankara, looks at the indicators in three university rankings, the Shanghai Academic Ranking of World Universities (ARWU), the National Taiwan University Rankings (NTU) and University Ranking by Academic Performance (URAP), and finds that there is a very high degree of internal similarity:
"Results of the analyses show that the intra-indicators used in ARWU, NTU and URAP are highly similar and that they can be grouped according to their similarities. The authors also examined the effect of similar indicators on 2015 overall ranking lists for these three rankings. NTU and URAP are affected least from the omitted similar indicators, which means it is possible for these two rankings to create very similar overall ranking lists to the existing overall ranking using fewer indicators."
Wednesday, October 10, 2018
The link between rankings and standardised testing
The big hole in current international university rankings is the absence of anything that effectively measures the quality of graduates. Some rankings use staff student ratio or income as a proxy for the provision of resources, on the assumption that the more money that is spent or the more teachers deployed then the better the quality of teaching. QS has an employer survey that asks about the universities from where employers like to recruit but that has many problems.
There is a lot of evidence that university graduates are valued to a large extent because they are seen as intelligent, conscientious and, depending on place and field, open-minded or conformist. A metric that correlates with these attributes would be helpful in assessing and comparing universities.
A recent article in The Conversation by Jonathan Wai suggests that the US News America's Best Colleges rankings are highly regarded partly because they measure the academic ability of admitted students, which correlates very highly with that of graduates.
The popularity of these rankings is based on their capturing the average ability of students as measured by the SAT or ACT. Wai reports on a paper that he wrote with Matt Brown and Christopher Chabris in the Journal of Intelligence, which finds a very large correlation between the average SAT or ACT scores of students and overall scores in America's Best Colleges: .982 for national universities and .890 for liberal arts colleges.
The correlation with the THE/WSJ US college rankings is lower but still very substantial, .787, as is that with the THE World University Rankings, .659.
It seems that employers and professional schools expect universities to certify the intelligence of their graduates. The value of standardised tests such as the ACT, SAT, GRE, LSAT and GMAT, which correlate highly with one another, is that they are a fairly robust proxy for general intelligence or general mental ability. Rankings could be valuable if they provided a clue to the ability of graduates.
It is, however, a shame that the authors should support their argument by referring only to one global ranking, the THE world rankings. There are now quite a few international rankings that are as good as or better than the THE tables.
I have therefore calculated the correlations between the average SAT/ACT scores of 30 colleges and universities in the USA and their scores in various global rankings.
The source for student scores is a supplement to the article by Wai et al, from which I have taken the top 30 ranked by SAT/ACT. I have used those rankings listed in the IREG inventory that provide numerical scores and not just ranks. The GreenMetric was not used since only one US school out of these thirty, Washington University in St Louis, took part in that ranking. I used two indicators from Leiden Ranking, which does not give a composite score: total publications and the percentage of papers in the top 1% of journals.
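For readers who want to repeat the exercise, the calculation itself is straightforward. The figures below are placeholders rather than the actual SAT/ACT or ranking scores, which come from the Wai et al supplement and the rankings' own tables.

```python
# Pearson correlation between average SAT/ACT scores and a ranking's scores,
# computed over the institutions that appear in both lists. Placeholder data only.
from scipy.stats import pearsonr

sat_act = {"Caltech": 1545, "MIT": 1528, "Harvard": 1512, "Princeton": 1505, "Yale": 1500}
ranking_score = {"MIT": 97.1, "Harvard": 96.8, "Princeton": 94.2, "Caltech": 96.9}

common = [u for u in sat_act if u in ranking_score]   # schools covered by both lists
r, p = pearsonr([sat_act[u] for u in common], [ranking_score[u] for u in common])
print(f"r = {r:.2f}, p = {p:.3f}, N = {len(common)}")
```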
It is interesting that there are many liberal arts colleges in the US that are not included in the international rankings. Prospective undergraduates looking for a college in the USA would do well to look beyond the global rankings. Harvey Mudd College, for example, is highly selective and its graduates much sought after but it does not appear in any of the rankings below.
The results are interesting. The indicator that correlates most significantly with student ability is Leiden Ranking's percentage of papers in the top 1% of journals. Next is CWUR, which does explicitly claim to measure graduate quality. The US News world rankings and the Shanghai rankings, which only include research indicators, also do well.
We are looking at just 30 US institutions here. There might be different results if we looked at other countries or a broader range of US schools.
So, it seems that if you want to look at the ability of students or graduates, an international ranking based on research is as good as or better than one that tries to measure teaching excellence with the blunt instruments currently available.
Rank | Ranking | Address | Correlation | Significance | N
1 | Leiden Ranking: papers in top 10% of journals | Netherlands | .65 | .001* | 22
2 | Center for World University Ranking | UAE | .59 | .003* | 22
3 | US News Best Global Universities | USA | .58 | .004* | 22
4 | Shanghai ARWU | China | .57 | .004* | 24
5 | Round University Rankings | Russia | .55 | .008* | 22
6 | THE World University Rankings | UK | .51 | .014* | 22
7 | QS World University Rankings | UK | .49 | .025* | 21
8 | University Ranking by Academic Performance | Turkey | .48 | .015* | 25
9 | Nature Index Fractional Count | USA | .45 | .039* | 21
10 | National Taiwan University | Taiwan | .32 | .147 | 22
11 | Leiden: total publications | Netherlands | .21 | .342 | 22

*significant at 0.05 level
Wednesday, October 03, 2018
Ulster University: no need to back down
Ulster University seems to have retracted the claim on its website that it is in the top 3% of universities in the world. Elsewhere it has apparently claimed to be in the top 2%.
According to the Belfast Telegraph
"Ulster University stated that it 'is in the top 3% of universities in the world' when it came 501-600th in the 1,103 Times Higher Education World University Rankings in 2018, putting them in the mid-tier of those rankings," a Which? spokesperson said.
"Further, in a website document "Ulster's 50 Facts Worth Knowing" Ulster University included the claim 'Top 2% of universities in the world (QS World Rankings)'."
The spokesman continued: "Which? University is particularly concerned about these comparative claims, given it is less than one year since the Advertising Standards Authority (ASA) issued its comprehensive advice in November 2017 to universities about such claims and its rulings which upheld complaints about comparative claims made in university advertising.
"Where universities are making comparative claims about their relative performance, these should be verifiable and provided with sufficient qualifying information to ensure that prospective students are not misled."
It is unfortunate that Which? should use the THE World University Rankings as the only arbiter of excellence. There are now at least 17 global rankings plus a variety of specialist and niche international rankings. Some of these are just as comprehensive, valid or informative as the THE rankings, if not more so. The THE world rankings are also known (to some people anyway) for their whimsical assessment of the research impact of world universities. This year THE has included Babol Noshirvani University of Technology, the University of Reykjavik, Brighton and Sussex Medical School, Anglia Ruskin University, the University of Desarrollo and the University of Canberra as leaders of the global research scene.
Which? has a point, but the university is presumably thinking of the 28,077 universities listed in the Webometrics rankings. On that basis, a place in the top 600 of the THE rankings would put it in roughly the top 2.1% of the world's universities.
There is a fallacy here. Many universities that could do better than Ulster do not bother to submit data to THE. If they did they might well forge ahead.
But I see nothing wrong with the university noting that it is ranked 689th out of the 28,077 institutions covered by the current Webometrics ranking, which includes web data and a measure of research output, and is therefore in the top 2.45% of that ranking.
Ulster is also 777th for the Webometrics research excellence indicator, the number of papers in the top 10% most cited, which would put it in the top 2.77% of universities in the world.
A claim to be in the top 3% of universities would be legitimate provided the ranking and indicator are stated.
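The arithmetic behind these percentages is easy to check against the Webometrics total:

```python
# Percentile positions implied by the ranks quoted above, out of 28,077 institutions.
total = 28077
print(round(689 / total * 100, 2))   # overall Webometrics rank: 2.45%
print(round(777 / total * 100, 2))   # research excellence indicator: 2.77%
print(round(600 / total * 100, 2))   # a THE top-600 place: 2.14%
```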
Monday, October 01, 2018
Here we go again: the THE citations indicator
The latest THE world rankings have just been announced. For most of the indicators there are few surprises. There are more universities from Japan in the rankings. Oxford is first, followed by Cambridge. The USA contributes the largest number of top universities. China rises steadily. India, as usual, is a disappointment.
But, as in previous years, the most interesting thing is the citations indicator, which is supposed to measure research influence. Once again this has produced some very interesting results.
Here are some of the universities in the top 100.
Babol Noshirvani University of Technology: the most influential university in the world for research
Brighton and Sussex Medical School: most influential in Europe
Brandeis University: most influential in the USA
Reykjavik University
St George's, University of London: fallen a bit, probably because of Brexit
King Abdulaziz University: top university for research influence in the Middle East and Asia
Anglia Ruskin University
Jordan University of Science and Technology
Vita-Salute San Raffaele University
Ulsan National Institute for Science and Technology: top in Asia ex Middle East
University of Canberra: best in Australia
University of Desarrollo: best in Latin America
McMaster University: best in Canada
Universite de Versailles Saint-Quentin-en-Yvelines: best in France
Teikyo University: best in Japan
There are signs that THE are considering reforming this indicator. If that does happen, the rankings will be more valid but much less entertaining.
Sunday, September 30, 2018
Rankings and Higher Education Policy
Two examples of how the need to perform well in the rankings is shaping national research and higher education policy.
From the Irish Examiner (https://www.irishexaminer.com/breakingnews/business/cern-membership-vital-for-irish-universities-872312.html)
"Ireland must apply for membership of the world-renowned European Organisation for Nuclear Research (Cern) in order to combat the effect of Brexit and boost university rankings.
That is according to Cork senator Colm Burke as the campaign to join Cern gains momentum, after Ireland recently became a member of the European Space Observatory."
From Times Higher Education
"France’s programme of university mergers is paying off, improving the research performance and international visibility of its top providers, according to the Times Higher Education World University Rankings 2019.
Paris Sciences et Lettres – PSL Research University Paris, a 2010 merger of numerous institutions, climbed 31 places to 41st this year, becoming the first French university to feature in the top 50 best universities since 2011. PSL made its debut in the global table last year.
Its teaching and research scores improved, driven by increased global visibility and votes in the academic teaching and research reputation surveys.
Meanwhile, Sorbonne University, which was founded in January this year following the merger of Pierre and Marie Curie University and Paris-Sorbonne University, has joined the list at 73rd place – making it the highest-ranked newcomer in the table."
Thursday, September 20, 2018
Philosophy Department Will Ignore GRE Scores
The philosophy department at the University of Pennsylvania has taken a step away from fairness and objectivity in university admissions. It will no longer look at the GRE scores of applicants to its graduate programme.
The department is good but not great. It is ranked 27th in the Leiter Report rankings and in the 101-150 band in the QS world subject rankings.
So how will students be selected without GRE scores? It seems it will be by letters of recommendation, undergraduate GPA, writing samples and admission statements.
Letters of recommendation have very little validity. The value of undergraduate grades has eroded in recent years and very likely will continue to do so. Admission essays and diversity statements say little about academic ability and a lot about political conformism.
The reasons for the move are not convincing. Paying for the GRE is supposed to be a burden on low income students. But the cost is much less than Penn's exorbitant tuition fees. It is also claimed that the GRE and other standardised tests do not predict performance in graduate school. In fact they are a reasonably good predictor of academic success although they should not be used by themselves.
Then there is the claim that the GRE "sometimes" underpredicts the performance of minorities and women. No doubt it sometimes does but then presumably sometimes it does not. Unless there is evidence that the underprediction is significant and that it is greater than that of other indicators this claim is meaningless.
What will be the result of this? The department will be able to admit students who "do not test well" but who can get good grades, something that is becoming less difficult at US colleges, or persuade letter writers at reputable schools that they will do well.
It is likely that more departments across the US will follow Penn's lead. American graduate programmes will slowly become less rigorous and less able to compete with the rising universities of Asia.
Sunday, September 09, 2018
Ranking Global Rankings: Information
Another indicator for ranking global rankings might be the amount of information that they contain. Here are 17 global rankings in the IREG Inventory ranked according to the number of indicators or groups of indicators for which scores or ranks are given. The median and the mode are both six.
The number for U-Multirank is perhaps misleading since data is not provided for all universities.
Rank | Ranking | Address of publisher | Number of indicators
1 | U-Multirank | Germany | 112
2 | Round University Rankings | Russia | 20
3 | CWTS Leiden Ranking | Netherlands | 19
4 | US News Best Global Universities | USA | 13
5 | National Taiwan University Ranking | Taiwan | 8
6 | CWUR University Rankings | UAE | 7
7= | | UK | 6
7= | Shanghai Ranking ARWU | China | 6
7= | UI GreenMetric Ranking | Indonesia | 6
7= | URAP University Ranking by Academic Performance | Turkey | 6
11 | | UK | 5
12 | Ranking Web of Universities (Webometrics) | Spain | 4
13 | SCImago Institutions Rankings | Spain | 3
14 | | UK | 2
15= | | France | 1
15= | Reuters Top 100 Innovative Universities | USA | 1
15= | uniRank | Australia | 1
Monday, September 03, 2018
Ranking Global Rankings: Inclusion
The number of international university rankings continues to grow and it is becoming harder to keep track of them. Earlier this year IREG published an inventory of international rankings that included 17 global rankings. Here are those rankings in order of the number of institutions that they rank in the most recent edition.
Webometrics is the clear winner, followed by uniRank and SCImago. There are, of course, other indicators to think about and some of these will be covered later.
Number of institutions ranked

Rank | Ranking | Address of publisher | Number ranked
1 | Ranking Web of Universities (Webometrics) | Spain | 28,077
2 | uniRank | Australia | 13,146
3 | SCImago Institutions Rankings | Spain | 5,637
4 | URAP University Ranking by Academic Performance | Turkey | 2,500
5 | U-Multirank | Germany | 1,500
6 | US News Best Global Universities | USA | 1,250
7 | THE World University Rankings | UK | 1,000+
8= | Shanghai Ranking ARWU | China | 1,000
8= | CWUR University Rankings | UAE | 1,000
10 | QS World University Rankings | UK | 916
11 | CWTS Leiden Ranking | Netherlands | 903
12 | National Taiwan University Ranking | Taiwan | 800
13 | Round University Rankings | Russia | 783
14 | UI GreenMetric Ranking | Indonesia | 619
15 | | UK | 500
16 | | France | 150
17 | Reuters Top 100 Innovative Universities | USA | 100