Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Saturday, September 19, 2015
Who's Interested in the Shanghai Rankings?
First results from a Google search for responses to the latest edition of the Shanghai world rankings.
Monday, August 31, 2015
Update on changes in ranking methodology
Times Higher Education (THE) have been preparing the ground for methodological changes in their world rankings. A recent article by Phil Baty announced that the new world rankings scheduled for September 30 will not count the citations to 649 papers, mainly in particle physics, with more than 1000 authors.
This is perhaps the best that is technically and/or commercially feasible at this moment but it is far from satisfactory. Some of these publications are dealing with the most basic questions about the nature of physical reality and it is a serious distortion not to include them in the ranking methodology. There have been complaints about this. Pavel Krokovny's comment was noted in a previous post while Mete Yeyisoglu argues that:
"Fractional counting is the ultimate solution. I wish you could have worked it out to use fractional counting for the 2015-16 rankings.
The current interim approach you came up with is objectionable.
Why 1,000 authors? How was the limit set? What about 999 authored-articles?
Although the institution I work for will probably benefit from this interim approach, I think you should have kept the same old methodology until you come up with an ultimate solution.
This year's interim fluctuation will adversely affect the image of university rankings."
Baty provides a reasonable answer to the question why the cut-off point is 1,000 authors.
But there is a fundamental issue developing here that goes beyond ranking procedure. The concept of authorship of a philosophy paper written entirely by a single person or a sociological study from a small research team is very different from that of the huge multi-national capital and labour intensive publications in which the number of collaborating institutions exceeds the number of paragraphs and there are more authors than sentences.
Fractional counting does seem to be the only fair and sensible way forward and it is now apparently on THE's agenda although they have still not committed themselves.
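As a sketch of what fractional counting would mean in practice (my own illustration, not THE's or any ranker's actual formula), an institution's share of a paper's citations would simply be proportional to its share of the author list:

```python
# Illustrative sketch of fractional counting; not any ranker's actual formula.
def fractional_credit(citations, total_authors, institution_authors=1):
    """Share of a paper's citations credited to an institution,
    proportional to its share of the author list."""
    return citations * institution_authors / total_authors

# One author among 2,932 would get about 0.034% of the credit,
# rather than the 100% that whole counting awards to everyone.
print(f"{fractional_credit(1.0, 2932):.3%}")  # 0.034%
```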
The objection could be raised that while the current THE system gives a huge reward to even the least significant contributing institution, fractional counting would give major research universities insufficient credit for their role in important research projects.
A long-term solution might be to draw a distinction between the contributors to and the authors of the mega papers. For most publications there would be no need to draw such a distinction, but for those with some sort of input from dozens, hundreds or thousands of people it might be feasible to allot half the credit to all those who had anything to do with the project and the other half to those who meet the standard criteria of authorship. There would no doubt be a lot of politicking about who gets the credit but that would be nothing new.
Duncan Ross, the new Data and Analytics Director at THE, seems to be thinking along these lines.
"In the longer term there are one technical and one structural approach that would be viable. The technical approach is to use a fractional counting approach (2932 authors? Well you each get 0.034% of the credit). The structural approach is more of a long term solution: to persuade the academic community to adopt metadata that adequately explains the relationship of individuals to the paper that they are ‘authoring’. Unfortunately I’m not holding my breath on that one."

The counting of citations to mega papers is not the only problem with the THE citations indicator. Another is the practice of giving a boost to universities in underperforming countries. Another item by Phil Baty quotes this justification from Thomson Reuters, THE's former data partner.
“The concept of the regional modification is to overcome the differences between publication and citation behaviour between different countries and regions. For example some regions will have English as their primary language and all the publications will be in English, this will give them an advantage over a region that publishes some of its papers in other languages (because non-English publications will have a limited audience of readers and therefore a limited ability to be cited). There are also factors to consider such as the size of the research network in that region, the ability of its researchers and academics to network at conferences and the local research, evaluation and funding policies that may influence publishing practice.”
THE now appear to agree that this is indefensible in the long run and hope that a more inclusive academic survey and the shift to Scopus, with broader coverage than the Web of Science, will lead to this adjustment being phased out.
It is a bit odd that TR and THE should have introduced income, in three separate indicators, and international outlook, in another three, as markers of excellence, but then included a regional modification to compensate for limited funding and international contacts.
THE are to be congratulated for having put fractional counting and phasing out the regional modification on their agenda. Let's hope it doesn't take too long.
While we are on the topic, there are some more things about the citations indicator to think about. First, to repeat a couple of points mentioned in the earlier post.
- Reducing the number of fields or doing away with normalisation by year of citation. The more boxes into which any given citation can be dropped, the greater the chance of statistical anomalies when a cluster of citations meets a low world average of citations for that particular field, year of publication and year of citation (300 fields in Scopus?)
- Reducing the weighting for this indicator. Perhaps citations per paper normalized by field is a useful instrument for comparing the quality of research of MIT, Caltech, Harvard and the like but it might be of little value when comparing the research performance of Panjab University and IIT Bombay or Istanbul University and Bogazici.
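To make the first of those points concrete, here is a toy illustration (invented field averages, invented papers) of how one cluster of citations in a lowly cited box can dominate a field- and year-normalised average:

```python
# Toy illustration of field- and year-normalisation; the world averages
# and papers below are invented, not real Scopus data.
world_avg = {("philosophy", 2014): 1.0, ("particle physics", 2014): 12.0}

def normalised_impact(papers):
    """papers: list of (field, year, citations). Score each paper against
    the world average for its (field, year) box, then take the mean."""
    scores = [c / world_avg[(field, year)] for field, year, c in papers]
    return sum(scores) / len(scores)

# Five citations in a box averaging one citation outweigh a paper that
# merely matches the particle-physics average:
print(normalised_impact([("philosophy", 2014, 5.0),
                         ("particle physics", 2014, 12.0)]))  # 3.0
```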
Some other things THE could think about.
- Adding a measure of overall research impact, perhaps simply by counting citations. At the very least, stop calling field- and year-normalised, regionally modified citations per paper a measure of research impact. Call it research quality or something like that.
- Doing something about secondary affiliations. So far this seems to have been a problem mainly for the Highly Cited Researchers indicator in the Shanghai ARWU but it may not be very long before more universities realise that a few million dollars for adjunct faculty could have a disproportionate impact on publication and citation counts.
- Also, perhaps THE should consider excluding self-citations (or even citations within the same institution although that would obviously be technically difficult). Self-citation caused a problem in 2010 when Dr El Naschie's diligent citation of himself and a few friends lifted Alexandria University to fourth place in the world for research impact. Something similar might happen again now that THE are using a larger and less selective database.
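A crude sketch of what excluding self-citations could look like (the author names and the simple overlap rule are my own assumptions, not any database's actual procedure):

```python
# Crude sketch of self-citation exclusion; the overlap rule is an assumption.
def non_self_citations(cited_authors, citing_author_lists):
    """Keep only citing papers that share no author with the cited paper."""
    cited = set(cited_authors)
    return [a for a in citing_author_lists if not cited & set(a)]

# A paper citing its own author is dropped; independent citations survive.
citing = [["El Naschie"], ["A. Colleague"], ["Someone Else"]]
print(len(non_self_citations(["El Naschie"], citing)))  # 2
```

Catching citations from "a few friends" at the same institution would need affiliation data on top of this, which is why the harder variant would be technically difficult.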
Tuesday, August 25, 2015
Not fair to call papers freaky
A comment by Pavel Krokovny of Heidelberg University about THE's proposal to exclude papers with 1,000+ authors from their citations indicator in the World University Rankings:
"It is true that all 3k+ authors do not draft the paper together, on the contrary, only a small part of them are involved in this very final step of a giant research work leading to a sound result. It is as well true that making the research performed public and disseminating the knowledge obtained is a crucial step of the whole project.
But what you probably missed is that this key stage would not be possible at all without a unique setup which was built and operated by profoundly more physicists and engineers than those who processed raw data and wrote a paper. Without that "hidden part of the iceberg" there would be no results at all. And it would be completely wrong to assume that the authors who did the data analysis and wrote the paper should be given the highest credit in the paper. It is very specific for the experimental HEP field that has gone far beyond the situation that was common still in the first half of 20th century when one scientist or a small group of them might produce some interesting results. The "insignificant" right tail in your distribution of papers on number of coauthors contains the hot part of the modern physics with high impact results topped by the discovery of Higgs-boson. And in your next rankings you are going to dishonour those universities that contributed to this discovery."
and
"the point is that frequent fluctuations of the ranking methodology might damage the credibility of the THE. Certainly, I do not imply here large and well-esteemed universities like Harvard or MIT. I believe their high rankings positions not to be affected by nearly any reasonable changes in the methodology. However, the highest attention to the rankings is attracted from numerous ordinary institutions across the world and their potential applicants and employees. In my opinion, these are the most concerned customers of the THE product. As I already pointed out above, it's very questionable whether participation in large HEP experiments (or genome studies) should be considered "unfair" for those institutions."
Sunday, August 23, 2015
Changes in Ranking Methodology
This year and next the international university rankings appear to be set for more volatility with unusually large upward and downward movement, partly as a result of changes to the methodology for counting citations in the QS and THE rankings.
ARWU
The global ranking season kicked off last week with the publication of the latest edition of the Academic Ranking of World Universities from the ShanghaiRanking Consultancy (SRC), which I hope to discuss in detail in a little while. These rankings are rather dull and boring, which is exactly what they should be. Harvard is, as always, number one for all but one of the indicators. Oxford has slipped from joint ninth to tenth place. Warwick has leaped into the top 100 by virtue of a Fields medal. At the foot of the table there are new contenders from France, Korea and Iran.
Since they began in 2003 the Shanghai rankings have been characterised by a generally stable methodology. In 2012, however, they had to deal with the recruitment of a large and unprecedented number of adjunct faculty by King Abdulaziz University. Previously SRC had simply divided the credit for the Highly Cited Researchers indicator equally between all institutions listed as affiliations. In 2012 and 2013 they wrote to all highly cited researchers with joint affiliations and thus determined the division of credit between primary and secondary affiliations. Then, in 2014 and this year they combined the old Thomson Reuters list, first issued in 2001, and the new one, issued in 2014, and excluded all secondary affiliations in the new list.
The result was that in 2014 the rankings showed an unusual degree of volatility although this year things are a lot more stable. My understanding is that Shanghai will move to counting only the new list next year, again without secondary affiliations, so there should be a lot of interesting changes then. It looks as though Stanford, Princeton, University of Wisconsin-Madison, and Kyoto University will suffer because of the change, while University of California Santa Cruz, Rice University, University of Exeter and University of Wollongong will benefit.
While SRC has efficiently dealt with the issue of secondary affiliation with regard to its Highly Cited indicator, the issue has now resurfaced in the unusually high scores achieved by King Abdulaziz University for publications, largely because of its adjunct faculty. Expect more discussion over the next year or so. It would seem sensible for SRC to consider a five or ten year period rather than a single year for their Publications indicator, and academic publishers, the media and rankers in general may need to give some thought to the proliferation of secondary affiliations.
QS
On July 27 Quacquarelli Symonds (QS) announced that for 18 months they had been thinking about normalising the counting of citations across five broad subject areas. They observed that a typical institution would receive about half of its citations from the life sciences and medicine, over a quarter from the natural sciences but just 1% from the arts and humanities.
In their forthcoming rankings QS will assign a 20% weighting for citations to each of the five subject areas, something that, according to Ben Sowter, Research Director at QS, they have already been doing for the academic opinion survey.
It would seem then that there are likely to be some big rises and big falls this September. I would guess that places strong in humanities, social sciences and engineering like LSE, New York University and Nanyang Technological University may go up and some of the large US state universities and Russian institutions may go down. That's a guess because it is difficult to tell what happens with the academic and employer surveys.
QS have also made an attempt to deal with the issue of hugely cited papers with hundreds, even thousands of "authors" -- contributors would be a better term -- mainly in physics, medicine and genetics. Their approach is to exclude all papers with more than 10 contributing institutions, that is 0.34% of all publications in the database.
This is rather disappointing. Papers with huge numbers of authors and citations obviously do have distorting effects but they have often dealt with fundamental and important issues. To exclude them altogether is to ignore a very significant body of research.
The obvious solution to the problem of multi-contributor papers is fractional counting, dividing the number of citations by the number of contributors or contributing institutions. QS claim that to do so would discourage collaboration, which does not sound very plausible.
In addition, QS will likely extend the life of survey responses from three to five years. That could make the rankings more stable by smoothing out annual fluctuations in survey responses and reduce the volatility caused by the proposed changes in the counting of citations.
The shift to a moderate version of field normalisation is helpful as it will reduce the undue privilege given to medical research, without falling into the huge problems that result from using too many categories. It is unfortunate, however, that QS have not taken the plunge into fractional counting. One suspects that technical problems and financial considerations might be as significant as the altruistic desire not to discourage collaboration.
After a resorting in September the QS rankings are likely to become a bit more stable and credible but their most serious problem, the structure, validity and excessive weighting of the academic survey, has still not been addressed.
THE
Meanwhile, Times Higher Education (THE) has also been grappling with the issue of authorship inflation. Phil Baty has announced that this year 649 papers with over 1,000 authors will be excluded from their calculation of citations because "we consider them to be so freakish that they have the potential to distort the global scientific landscape".
But it is not the papers that do the distorting; it is the methodology. THE and their former data partner Thomson Reuters, like QS, have avoided fractional counting (except for a small experimental African ranking), and so every one of those hundreds or thousands of authors gets full credit for the hundreds or thousands of citations. This has given places like Tokyo Metropolitan University, Scuola Normale Superiore Pisa, Universite Cadi Ayyad in Morocco and Bogazici University in Turkey remarkably high scores for Citations: Research Impact, much higher than their scores for the bundled research indicators.
THE have decided to simply exclude 649 papers, or 0.006% of the total, from their calculations for the world rankings. This is a much smaller exclusion than QS's. Again, it is a rather crude measure. Many of the "freaks" are major contributions to advanced research and deserve to be acknowledged by the rankings in some way.
THE did use fractional counting in their recent experimental ranking of African universities and Baty indicates that they are considering doing so in the future.
It would be a big step forward for THE if they introduce fractional counting of citations. But they should not stop there. There are other bugs in the citations indicator that ought to be fixed.
First, it does not at present measure what it is supposed to measure. It does not measure a university's overall research impact. At best, it is a measure of the average quality of research papers no matter how few (above a certain threshold) they are.
Second, the "regional modification", which divides the university citation impact score by the square root of the score of the country where the university is located, is another source of distortion. It gives a bonus to universities simply for being located in underperforming countries. THE or TR have justified the modification by suggesting that some universities deserve compensation because they lack funding or networking opportunities. Perhaps they do, but this can still lead to serious anomalies.
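The arithmetic of that modification is simple to sketch (the scores below are invented for illustration, not real THE data):

```python
import math

# Sketch of the regional modification as described: divide a university's
# citation score by the square root of its country's score.
# The numbers here are invented, not real THE data.
def regionally_modified(university_score, country_score):
    return university_score / math.sqrt(country_score)

# The same raw score is worth twice as much in a country scoring 0.25
# as in a country scoring 1.0:
print(regionally_modified(0.5, 0.25))  # 1.0
print(regionally_modified(0.5, 1.0))   # 0.5
```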
Thirdly, THE need to consider whether they should assign citations to so many fields since this increases the distortions that can arise when there is a highly cited paper in a normally lowly cited field.
Fourthly, should they assign a thirty per cent weighting to an indicator that may be useful for distinguishing between the likes of MIT and Caltech but may be of little relevance for the universities that are now signing up for the world rankings?
Monday, August 03, 2015
The CWUR Rankings 2015
The Center for World University Rankings, based in Jeddah, Saudi Arabia, has produced the latest edition of its global ranking of 1,000 universities. The Center is headed by Nadim Mahassen, an Assistant Professor at King Abdulaziz University.
The rankings include five indicators that measure various aspects of publication and research: publications in "reputable journals", research papers in "highly influential" journals, citations, h-index and patents.
These indicators are given a combined weighting of 25%.
Another 25% goes to Quality of Education, which is measured by the number of alumni receiving major international awards relative to size (current number of students according to national agencies). This is obviously a crude measure which fails to distinguish among the great mass of universities that have never won an award.
Similarly, another quarter is assigned to Quality of Faculty measured by the number of faculty receiving such awards and another quarter to Alumni Employment measured by the number of CEOs of top corporations. Again, these indicators are of little or no relevance to all but a few hundred institutions.
The top ten:

1. Harvard
2. Stanford
3. MIT
4. Cambridge
5. Oxford
6. Columbia
7. Berkeley
8. Chicago
9. Princeton
10. Cornell
The only change from last year is that Cornell has replaced Yale in tenth place.
Countries with Universities in the Top Hundred in 2015 and 2014
Country | Top 100 in 2015 | Top 100 in 2014 |
---|---|---|
US | 55 | 53 |
UK | 7 | 7 |
Japan | 7 | 8 |
Switzerland | 4 | 4 |
France | 4 | 4 |
Canada | 3 | 3 |
Israel | 3 | 3 |
South Korea | 2 | 1 |
Germany | 2 | 4 |
Australia | 2 | 2 |
China | 2 | 2 |
Netherlands | 2 | 1 |
Russia | 1 | 1 |
Taiwan | 1 | 1 |
Belgium | 1 | 1 |
Norway | 1 | 0 |
Sweden | 1 | 2 |
Singapore | 1 | 1 |
Denmark | 1 | 1 |
Italy | 0 | 1 |
Top Ranked in Region or Country
USA: Harvard
Canada: Toronto
Asia: Tokyo
South Asia: IIT Delhi
Southeast Asia: National University of Singapore
Europe: Cambridge
Central and Eastern Europe: Lomonosov Moscow State University
Arab World: King Saud University
Middle East: Hebrew University of Jerusalem
Latin America: Sao Paulo
Africa: University of the Witwatersrand
Caribbean: University of Puerto Rico at Mayagüez
Noise Index
In the top 20, the CWUR rankings are more stable than THE and QS but less stable than the Shanghai rankings.
Average position change of universities in the top 20 in 2014: 0.5
Comparison
CWUR 2013-14: 0.9
Shanghai Rankings (ARWU)
2011-12: 0.15
2012-13: 0.25
THE WUR 2012-13: 1.2
QS WUR 2012-13: 1.7
With regard to the top 100, the CWUR rankings are more stable this year, with a volatility similar to the QS and THE rankings although significantly less so than ARWU.
Average position change of universities in the top 100 in 2014: 4.15
Comparison
CWUR 2013-14: 10.59
Shanghai Rankings (ARWU)
2011-12: 2.01
2012-13: 1.66
THE WUR 2012-13: 5.36
QS WUR 2012-13: 3.97
Monday, May 11, 2015
The Geography of Excellence: the Importance of Weighting
So finally, the 2015 QS subject rankings were published. It seems that the first attempt was postponed when the original methodology produced implausible fluctuations, probably resulting from the volatility that is inevitable when there are a small number of data points -- citations and survey responses -- outside the top 50 for certain subjects.
QS have done some tweaking, some of it aimed at smoothing out the fluctuations in the responses to their academic and employer surveys.
These rankings look a bit different from the World University Rankings. Cambridge has the most top ten placings (31), followed by Oxford and Stanford (29 each), Harvard (28), Berkeley (26) and MIT (16).
But in the world rankings MIT is in first place, Cambridge second, Imperial College London third, Harvard fourth and Oxford and University College London joint fifth.
The subject rankings use two indicators from the world rankings, the academic survey and the employer survey, but not internationalisation, student-faculty ratio or citations per faculty. They add two indicators, citations per paper and h-index.
The result is that the London colleges do less well in the subject rankings since they do not benefit from their large numbers of international students and faculty. Caltech, Princeton and Yale also do relatively badly, probably because the new rankings do not take account of their favourable student-faculty ratios.
The lesson of this is that if weighting is not everything, it is definitely very important.
Below is a list of universities ordered by the number of top five placings. There are signs of the Asian advance -- Peking, Hong Kong and the National University of Singapore -- but it is an East Asian advance.
Europe is there too but it is Cold Europe -- Switzerland, Netherlands and Sweden -- not the Mediterranean.
Rank | University | Country | Number of Top Five Places |
---|---|---|---|
1 | Harvard | USA | 26 |
2 | Cambridge | UK | 20 |
3 | Oxford | UK | 18 |
4 | Stanford | USA | 17 |
5= | MIT | USA | 16 |
5= | UC Berkeley | USA | 16 |
7 | London School of Economics | UK | 7 |
8= | University College London | UK | 3 |
8= | ETH Zurich | Switzerland | 3 |
10= | New York University | USA | 2 |
10= | Yale | USA | 2 |
10= | Delft University of Technology | Netherlands | 2 |
10= | National University of Singapore | Singapore | 2 |
10= | UC Los Angeles | USA | 2 |
10= | UC Davis | USA | 2 |
10= | Cornell | USA | 2 |
10= | Wisconsin - Madison | USA | 2 |
10= | Michigan | USA | 2 |
10= | Imperial College London | UK | 2 |
20= | Wageningen | Netherlands | 1 |
20= | University of Southern California | USA | 1 |
20= | Pratt Institute, New York | USA | 1 |
20= | Rhode Island School of Design | USA | 1 |
20= | Parsons: the New School for Design | USA | 1 |
20= | Royal College of Art, London | UK | 1 |
20= | Melbourne | Australia | 1 |
20= | Texas-Austin | USA | 1 |
20= | Sciences Po | France | 1 |
20= | Princeton | USA | 1 |
20= | Yale | USA | 1 |
20= | Chicago | USA | 1 |
20= | Manchester | UK | 1 |
20= | University of Pennsylvania | USA | 1 |
20= | Durham | UK | 1 |
20= | INSEAD | France | 1 |
20= | London Business School | UK | 1 |
20= | Northwestern | USA | 1 |
20= | Utrecht | Netherlands | 1 |
20= | Guelph | Canada | 1 |
20= | Royal Veterinary College London | UK | 1 |
20= | UC San Francisco | USA | 1 |
20= | Johns Hopkins | USA | 1 |
20= | KU Leuven | Belgium | 1 |
20= | Gothenburg | Sweden | 1 |
20= | Hong Kong | Hong Kong | 1 |
20= | Karolinska Institute | Sweden | 1 |
20= | Sussex | UK | 1 |
20= | Carnegie Mellon University | USA | 1 |
20= | Rutgers | USA | 1 |
20= | Pittsburgh | USA | 1 |
20= | Peking | China | 1 |
20= | Purdue | USA | 1 |
20= | Georgia Institute of Technology | USA | 1 |
20= | Edinburgh | UK | 1 |
Tuesday, October 07, 2014
The Times Higher Education World University Rankings
Publisher
Times Higher Education
Scope
Global. Data provided for 400 universities. Over 800 ranked.
Top Ten
Place | University |
---|---|
1 | California Institute of Technology (Caltech) |
2 | Harvard University |
3 | Oxford University |
4 | Stanford University |
5 | Cambridge University |
6 | Massachusetts Institute of Technology (MIT) |
7 | Princeton University |
8 | University of California Berkeley |
9= | Imperial College London |
9= | Yale University |
Countries with Universities in the Top Hundred
Country | Number of Universities |
---|---|
USA | 45 |
UK | 11 |
Germany | 6 |
Netherlands | 6 |
Australia | 5 |
Canada | 4 |
Switzerland | 3 |
Sweden | 3 |
South Korea | 3 |
Japan | 2 |
Singapore | 2 |
Hong Kong | 2 |
China | 2 |
France | 2 |
Belgium | 2 |
Italy | 1 |
Turkey | 1 |
Top Ranked in Region
Region | University |
---|---|
North America | California Institute of Technology (Caltech) |
Africa | University of Cape Town |
Europe | Oxford University |
Latin America | Universidade de Sao Paulo |
Asia | University of Tokyo |
Central and Eastern Europe | Lomonosov Moscow State University |
Arab World | University of Marrakech Cadi Ayyad |
Middle East | Middle East Technical University |
Oceania | University of Melbourne |
Noise Index
In the top 20, this year's THE world rankings are less volatile than the previous edition and this year's QS rankings. They are still slightly less stable than the Shanghai rankings.
Ranking | Average Place Change of Universities in the top 20 |
---|---|
THE World rankings 2013-14 | 0.70 |
THE World Rankings 2012-2013 | 1.20 |
QS World Rankings 2013-2014 | 1.45 |
ARWU 2013-2014 | 0.65 |
Webometrics 2013-2014 | 4.25 |
Center for World University Ranking (Jeddah) 2013-2014 | 0.90 |
Looking at the top 100 universities, the THE rankings are more stable than last year. The average university in the top 100 in 2013 rose or fell 4.34 places. The QS rankings are now more stable than the THE or Shanghai rankings.
Ranking | Average Place Change of Universities in the top 100 |
---|---|
THE World Rankings 2013-2014 | 4.34 |
THE World Rankings 2012-2013 | 5.36 |
QS World Rankings 2013-14 | 3.94 |
ARWU 2013-2014 | 4.92 |
Webometrics 2013-2014 | 12.08 |
Center for World University Ranking (Jeddah) 2013-2014 | 10.59 |
Note: universities falling out of the top 100 are treated as though they fell to 101st position.
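The noise index in these tables comes from a simple calculation: the average absolute change in position for the universities in the previous year's top N. A minimal sketch follows; this is an illustrative reconstruction with invented rankings, not the script used for the published figures.

```python
def noise_index(prev, curr, top_n=100):
    """Average absolute place change for last year's top-N universities.

    A university that drops out of the top N is treated as though it
    fell to position N + 1, as in the note above.
    """
    cohort = [u for u, rank in prev.items() if rank <= top_n]
    changes = [abs(prev[u] - curr.get(u, top_n + 1)) for u in cohort]
    return sum(changes) / len(changes)

# Toy example with a top 3: C drops out of the table and counts as 4th.
prev = {"A": 1, "B": 2, "C": 3}
curr = {"A": 2, "B": 1, "D": 3}
print(noise_index(prev, curr, top_n=3))  # (1 + 1 + 1) / 3 = 1.0
```

A lower value means a more stable ranking; the Webometrics and CWUR figures above show how large the index can get when a methodology changes between editions.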
Thursday, October 02, 2014
Which universities have the greatest research influence?
Times Higher Education (THE) claims that its Citations: Research Influence indicator, prepared by Thomson Reuters (TR), is the flagship of its World University Rankings. It is strange, then, that the magazine has never published a research influence ranking, although that ought to be just as interesting as its Young Universities Ranking, Reputation Rankings or gender index.
So let's have a look at the top 25 universities in the world this year ranked for research influence, measured by field- and year- normalised citations, by Thomson Reuters.
Santa Cruz and Tokyo Metropolitan have the same impact as MIT. Federico Santa Maria Technical University is ahead of Princeton. Florida Institute of Technology beats Harvard. Bogazici University and Scuola Normale Superiore do better than Oxford and Cambridge.
Are they serious?
Apparently. There will be an explanation in the next post. Meanwhile go and check if you don't believe me. And let me know if there's any dancing in the streets of Valparaiso, Pisa, Golden or Istanbul.
Rank and Score for Citations: Research Influence 2014-15 THE World Rankings
Rank | University | Score |
---|---|---|
1= | University of California Santa Cruz | 100 |
1= | MIT | 100 |
1= | Tokyo Metropolitan University | 100 |
4 | Rice University | 99.9 |
5= | Caltech | 99.7 |
5= | Federico Santa Maria Technical University, Chile | 99.7 |
7 | Princeton University | 99.6 |
8= | Florida Institute of Technology | 99.2 |
8= | University of California Santa Barbara | 99.2 |
10= | Stanford University | 99.1 |
10= | University of California Berkeley | 99.1 |
12= | Harvard University | 98.9 |
12= | Royal Holloway University of London | 98.9 |
14 | University of Colorado Boulder | 97.4 |
15 | University of Chicago | 97.3 |
16= | Washington University in St Louis | 97.1 |
16= | Colorado School of Mines | 97.1 |
18 | Northwestern University | 96.9 |
19 | Bogazici University, Turkey | 96.8 |
20 | Duke University | 96.6 |
21= | Scuola Normale Superiore Pisa, Italy | 96.4 |
21= | University of California San Diego | 96.4 |
23 | Boston College | 95.9 |
24 | Oxford University | 95.5 |
25= | Brandeis University | 95.3 |
25= | UCLA | 95.3 |
Thursday, September 25, 2014
How the Universities of Huddersfield, East London, Plymouth, Salford, Central Lancashire et cetera helped Cambridge overtake Harvard in the QS rankings
It is a cause of pride for the great and the good of British higher education that the country's universities do brilliantly in certain global rankings. Sometimes though, there is puzzlement about how UK universities can do so well even though the performance of the national economy and the level of adult cognitive skills are so mediocre.
In the latest QS World University Rankings Cambridge and Imperial College London pulled off a spectacular feat when they moved ahead of Harvard into joint second place behind MIT, an achievement at first glance as remarkable as Leicester City beating Manchester United. Is this a tribute to the outstanding quality of teaching, inspired leadership or cutting edge research, or perhaps something else?
Neither Cambridge nor Imperial does very well in the research based rankings. Cambridge is 18th and Imperial 26th among higher education institutions in the latest Scimago rankings for output and 32nd and 33rd for normalised impact (citations per paper adjusted for field). Harvard is 1st and 4th for these indicators. In the CWTS Leiden Ranking, Cambridge is 22nd and Imperial 32nd for the mean normalised citation score, sometimes regarded as the flagship of these rankings, while Harvard is 6th.
It is true that Cambridge does much better on the Shanghai Academic Ranking of World Universities with fifth place overall, but that is in large measure due to an excellent score, 96.6, for alumni winning Nobel and Fields awards, some dating back several decades. For Highly Cited Researchers and publications in Nature and Science its performance is not nearly so good.
Looking at the THE World University Rankings, which make some attempt to measure factors other than research, Cambridge and Imperial come in 7th and 10th overall, which is much better than they do in the Leiden and Scimago rankings. However, it is very likely that the postgraduate teaching and research surveys made a significant contribution to this performance. Cambridge is 4th in the THE reputation rankings based on last year's data and Imperial is 13th.
Reputation is also a key to the success of Cambridge and Imperial in the QS world rankings. Take a look at the scores and positions of Harvard, Cambridge and Imperial in the rankings just released.
Harvard gets 100 points (2nd place) for the academic survey, employer survey (3rd), and citations per faculty (3rd). It has 99.7 for faculty student ratio (29th), 98.1 for international faculty (53rd), and 83.8 for international students (117th). Harvard's big weakness is its relatively small percentage of international students.
Cambridge is in first place for the academic survey and 2nd in the employer survey, in both cases with a score of 100 and one place ahead of Harvard. The first secret of Cambridge's success is that it does much better on reputational measures than for bibliometric or other objective data. It was 18th for faculty student ratio, 73rd for international faculty, 50th for international students and 40th for citations per faculty.
So, Cambridge is ahead for faculty student ratio and international students and Harvard is ahead for international faculty and citations per faculty. Both get 100 for the two surveys.
Similarly, Imperial has 99.9 points for the academic survey (14th), 100 for the employer survey (7th), 99.8 for faculty student ratio (26th), 100 for international faculty (41st), 99.7 (20th) for international students and 96.2 (49th) for citations per faculty. It is behind Harvard for citations per faculty but just enough ahead for international students to squeeze past into joint second place.
The second secret is that QS's standardisation procedure combined with an expanding database means that the scores of the leading universities in the rankings are getting more and more squashed together at the top. QS turns its raw data into Z scores so that universities are measured according to their distance in standard deviations from the mean for all ranked universities. If the number of sub-elite universities in the rankings increases then the overall means for the indicators will fall and the scores of universities at the top end will rise as their distance in standard deviations from the mean increases.
Universities with scores of 98 and 99 will now start getting scores of 100. Universities with recorded scores of 100 will go on getting 100, although they might go up a few invisible decimal points.
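The squashing effect can be illustrated with a toy calculation. The scores below are invented, and QS's actual rescaling of Z scores onto a 0-100 range is not reproduced; the point is simply that adding sub-elite universities raises the Z score of a university whose raw score has not changed.

```python
import statistics

def z_score(value, values):
    """Standard deviations above the mean of all ranked universities."""
    return (value - statistics.mean(values)) / statistics.pstdev(values)

# Hypothetical raw indicator scores for a small elite group...
elite = [90, 85, 80, 75]
# ...and the same group after eight sub-elite universities join the ranking.
expanded = elite + [30, 25, 20, 20, 15, 15, 10, 10]

# The top university's raw score is unchanged, but its Z score rises
# because its distance from the (now lower) mean grows proportionally
# more than the standard deviation does.
print(round(z_score(90, elite), 2))     # 1.34
print(round(z_score(90, expanded), 2))  # 1.63
```

Once Z scores at the top are capped or rounded to 100, this inflation makes the leading universities indistinguishable on the indicator, which is exactly what happened to citations per faculty.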
In 2008, QS ranked 617 universities. In that year, nine universities had a score of 100 for the academic survey, four for the employer survey, nine for faculty student ratio, six for international faculty, six for international students and seven for citations per faculty.
By 2014 QS was ranking over 830 universities (I assume that those at the end of the rankings marked "NA" are there because they got votes in the surveys but are not ranked because they fail to meet the criteria for inclusion). For each indicator the number of universities getting a score of 100 increased. In 2014 there were 13 universities with a score of 100 for the academic survey, 14 for the employer survey, 16 for faculty student ratio, 41 for international faculty, 15 for international students and 10 for citations per faculty.
In 2008 Harvard got the same score as Cambridge for the academic and employer surveys. It was 0.3 (0.06 weighted) behind for faculty student ratio, 0.6 (0.03 weighted) behind for international faculty, and 14.1 (0.705 weighted) behind for international students. It was, however, 11.5 points (2.3 weighted) ahead for citations per faculty. Harvard was therefore first and Cambridge third.
By 2014 Cambridge had fallen slightly behind Harvard for international faculty. It was slightly ahead for faculty student ratio. Scores for the survey remained the same, 100 for both places. Harvard reduced the gap for international students slightly.
What made the difference in 2014, and put Cambridge ahead of Harvard, was that in 2008 Harvard, in fifth place for citations with a score of 100, was 11.5 points (2.3 weighted) ahead of Cambridge. In 2014 Cambridge had improved a bit on this indicator -- it was 40th instead of 49th -- but now got 97.9 points, reducing the difference with Harvard to 2.1 points (0.42 weighted). That was just enough to let Cambridge overtake Harvard.
Cambridge's rise between 2008 and 2014 was thus largely due to the increasing number of ranked universities which led to lower means for each indicator which led to higher Z scores at the top of each indicator and so reduced the effect of Cambridge's comparatively lower citations per faculty score.
The same thing happened to Imperial. It did a bit better for citations, rising from 58th to 49th place, and this brought a rise in points from 83.10 to 96.20, again allowing it to creep past Harvard.
Cambridge and Harvard should be grateful to those universities filling up the 701+ category at the bottom of the QS rankings. They are the invisible trampoline that propelled "Impbridge" into second place, just behind MIT.
QS should think carefully about adding more universities to their rankings. Another couple of hundred and there will be a dozen universities at the top getting 100 for everything.
Thursday, September 18, 2014
QS World University Rankings 2014
Publisher
QS (Quacquarelli Symonds)
Scope
Global. 701+ universities.
Top Ten
Place | University |
---|---|
1 | MIT |
2= | Cambridge |
2= | Imperial College London |
4 | Harvard |
5 | Oxford |
6 | University College London |
7 | Stanford |
8 | California Institute of Technology (Caltech) |
9 | Princeton |
10 | Yale |
Countries with Universities in the Top Hundred
Country | Number of Universities |
---|---|
USA | 28 |
UK | 19 |
Australia | 8 |
Netherlands | 7 |
Canada | 5 |
Switzerland | 4 |
Japan | 4 |
Germany | 3 |
China | 3 |
Korea | 3 |
Hong Kong | 3 |
Denmark | 2 |
Singapore | 2 |
France | 2 |
Sweden | 2 |
Ireland | 1 |
Taiwan | 1 |
Finland | 1 |
Belgium | 1 |
New Zealand | 1 |
Top Ranked in Region
Region | University |
---|---|
North America | MIT |
Africa | University of Cape Town |
Europe | Cambridge / Imperial College London |
Latin America | Universidade de Sao Paulo |
Asia | National University of Singapore |
Central and Eastern Europe | Lomonosov Moscow State University |
Arab World | King Fahd University of Petroleum and Minerals |
Middle East | Hebrew University of Jerusalem |
Noise Index
In the top 20, this year's QS world rankings are less volatile than the previous edition but more so than the THE rankings or Shanghai ARWU. The top 20 universities in 2013 rose or fell an average of 1.45 places. The most remarkable change was the rise of Imperial College and Cambridge to second place behind MIT and ahead of Harvard.
Ranking | Average Place Change of Universities in the top 20 |
---|---|
QS World Rankings 2013-2014 | 1.45 |
QS World Rankings 2012-2013 | 1.70 |
ARWU 2013-2014 | 0.65 |
Webometrics 2013-2014 | 4.25 |
Center for World University Ranking (Jeddah) 2013-2014 | 0.90 |
THE World Rankings 2012-2013 | 1.20 |
Looking at the top 100 universities, the QS rankings are little different from last year. The average university in the top 100 moved up or down 3.94 places compared to 3.97 between 2012 and 2013. These rankings are more reliable than this year's ARWU, which was affected by the new lists of highly cited researchers, and last year's THE rankings.
Ranking | Average Place Change of Universities in the top 100 |
---|---|
QS World Rankings 2013-14 | 3.94 |
QS World Rankings 2012-2013 | 3.97 |
ARWU 2013-2014 | 4.92 |
Webometrics 2013-2014 | 12.08 |
Center for World University Ranking (Jeddah) 2013-2014 | 10.59 |
THE World Rankings 2012-2013 | 5.36 |
Methodology (from topuniversities)
1. Academic reputation (40%)
Academic reputation is measured using a global survey, in which academics are asked to identify the institutions where they believe the best work is currently taking place within their field of expertise.
For the 2014/15 edition, the rankings draw on almost 63,700 responses from academics worldwide, collated over three years. Only participants’ most recent responses are used, and they cannot vote for their own institution. Regional weightings are applied to counter any discrepancies in response rates.
The advantage of this indicator is that it gives a more equal weighting to different discipline areas than research citation counts. Whereas citation rates are far higher in subjects like biomedical sciences than they are in English literature, for example, the academic reputation survey weights responses from academics in different fields equally.
It also gives students a sense of the consensus of opinion among those who are by definition experts. Academics may not be well positioned to comment on teaching standards at other institutions, but it is well within their remit to have a view on where the most significant research is currently taking place within their field.
2. Employer reputation (10%)
The employer reputation indicator is also based on a global survey, taking in almost 28,800 responses for the 2014/15 edition. The survey asks employers to identify the universities they perceive as producing the best graduates. This indicator is unique among international university rankings.
The purpose of the employer survey is to give students a better sense of how universities are viewed in the job market. A higher weighting is given to votes for universities that come from outside of their own country, so it’s especially useful in helping prospective students to identify universities with a reputation that extends beyond their national borders.
3. Student-to-faculty ratio (20%)
This is a simple measure of the number of academic staff employed relative to the number of students enrolled. In the absence of an international standard by which to measure teaching quality, it provides an insight into the universities that are best equipped to provide small class sizes and a good level of individual supervision.
4. Citations per faculty (20%)
This indicator aims to assess universities’ research output. A ‘citation’ means a piece of research being cited (referred to) within another piece of research. Generally, the more often a piece of research is cited by others, the more influential it is. So the more highly cited research papers a university publishes, the stronger its research output is considered.
QS collects this information using Scopus, the world’s largest database of research abstracts and citations. The latest five complete years of data are used, and the total citation count is assessed in relation to the number of academic faculty members at the university, so that larger institutions don’t have an unfair advantage.
5 & 6. International faculty ratio (5%) and international student ratio (5%)
The last two indicators aim to assess how successful a university has been in attracting students and faculty members from other nations. This is based on the proportion of international students and faculty members in relation to overall numbers. Each of these contributes 5% to the overall ranking results.
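Putting the weightings above together, the overall QS score is a weighted sum of the six indicator scores. A minimal sketch with hypothetical indicator scores (the names and numbers below are illustrative, not QS data):

```python
# Indicator weights from the methodology above (40/10/20/20/5/5).
WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "student_faculty_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def overall_score(indicator_scores):
    """Weighted sum of the six indicator scores (each on a 0-100 scale)."""
    return sum(indicator_scores[name] * w for name, w in WEIGHTS.items())

# Hypothetical indicator scores for one university:
scores = {
    "academic_reputation": 99.9,
    "employer_reputation": 100.0,
    "student_faculty_ratio": 99.8,
    "citations_per_faculty": 96.2,
    "international_faculty": 100.0,
    "international_students": 99.7,
}
print(overall_score(scores))  # ≈ 99.145
```

Note how the half-weighted reputation surveys dominate: a university near 100 on both surveys can offset a noticeably weaker citations score, which is the pattern discussed in the Cambridge and Imperial post above.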
Wednesday, September 10, 2014
America's Best Colleges
The US News & World Report's America's Best Colleges has just been published. There are no surprises at the top. Here are the top ten.
1. Princeton
2. Harvard
3. Yale
4= Columbia
4= Stanford
4= Chicago
7. MIT
8= Duke
8= University of Pennsylvania
10. Caltech
Analysis at the Washington Post indicates little movement at the top. Outside the elite there are some significant changes.
Liberal arts colleges
St. John's College, Annapolis, from 123rd to 56th.
Bennington College from 122nd to 89th.
National universities
Northeastern University from 69th to 42nd.
Texas Christian University from 99th to 46th.
Sunday, August 17, 2014
The Shanghai Rankings (Academic Ranking of World Universities) 2014 Part 1
Publisher
Center for World-Class Universities, Shanghai Jiao Tong University
Scope
Global. 500 institutions.
Methodology
See ARWU site.
In contrast to the other indicators, the Highly Cited Researchers indicator has undergone substantial changes in recent years, partly as a result of changes by data provider Thomson Reuters. Originally, ARWU used the old list of highly cited researchers prepared by Thomson Reuters (TR), which was first published in 2001 and updated in 2004. Since then no names have been added, although changes of affiliation submitted by researchers have been recorded.
Until 2011, when a researcher listed more than one institution as his or her affiliation, credit for the highly cited indicator was divided equally among them. Following the recruitment of a large number of part-time researchers by King Abdulaziz University (KAU), ARWU introduced a new policy of asking researchers how their time was divided. When there was no response, each secondary affiliation was counted as 16%, the average time reported by those who did respond to the survey.
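The two credit rules described above can be sketched as follows. The function names are my own, and the 0.84 primary share under the later rule is an assumption (the text only specifies 16% for secondary affiliations):

```python
# Illustrative sketch of the two ARWU affiliation-credit rules described
# in the paragraph above (names and the 0.84 primary share are my own).
def credit_pre_2012(num_affiliations):
    """Old rule: credit divided equally among all listed affiliations."""
    return 1.0 / num_affiliations

def credit_post_survey(share_reported, is_secondary):
    """Later rule: use the researcher's reported time split if available;
    a non-respondent's secondary affiliation defaults to 0.16, the
    average share reported by survey respondents."""
    if share_reported is not None:
        return share_reported
    return 0.16 if is_secondary else 0.84  # primary share is an assumption

print(credit_pre_2012(2))              # 0.5
print(credit_post_survey(None, True))  # 0.16
```

Under the old rule a researcher listing two institutions gave each half a credit; under the new rule a silent secondary affiliation was worth less than a fifth of one.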
In 2013 TR announced that they were introducing a new list based on field-normalised citations over the period 2002-2012. However, problems with the preparation of the new list meant that it could not be used in the 2013 rankings. Instead, the Shanghai rankings repeated the 2012 scores.
During 2013, KAU recruited over 100 highly cited researchers who nominated the university as a secondary affiliation. That caused some comment by researchers and analysts. A paper by Lutz Bornmann and Johann Bauer concluded that to "counteract attempts at manipulation, ARWU should only consider primary institutions of highly cited researchers."
It seems that Shanghai has acted on this advice: "It is worth noting that, upon the suggestion of many institutions and researchers including some Highly Cited Researchers, only the primary affiliations of new Highly Cited Researchers are considered in the calculation of an institution’s HiCi score for the new list."
As a result, KAU has risen into the lower reaches of the 150-200 band on the basis of publications, some papers in Nature and Science and a modest number of primary affiliations among highly cited researchers. That is a respectable achievement but one that would have been much greater if the secondary affiliations had been included.
Perhaps Shanghai should also take note of the suggestion in a paper by Lawrence Cram and Domingo Docampo that "[s]ignificant acrimony accompanies some published comparisons between ARWU and other rankings (Redden, 2013) driven in part by commercial positioning. Given its status as an academic ranking, it may be prudent for ARWU to consider replacing its HiCi indicator with a measure that is not sourced from a commercial provider if such a product can be found that satisfies the criteria (objective, open, independent) used by ARWU."
Top Ten
| Place | University |
|---|---|
| 1 | Harvard |
| 2 | Stanford |
| 3 | MIT |
| 4 | University of California Berkeley |
| 5 | Cambridge |
| 6 | Princeton |
| 7 | California Institute of Technology (Caltech) |
| 8 | Columbia |
| 9= | Chicago |
| 9= | Oxford |
Countries With Universities in the Top 100
| Country | Number of Universities |
|---|---|
| United States | 52 |
| United Kingdom | 8 |
| Switzerland | 5 |
| Germany | 4 |
| France | 4 |
| Netherlands | 4 |
| Australia | 4 |
| Canada | 4 |
| Japan | 3 |
| Sweden | 3 |
| Belgium | 2 |
| Israel | 2 |
| Denmark | 2 |
| Norway | 1 |
| Finland | 1 |
| Russia | 1 |