Monday, January 14, 2013
The Last Print Issue of Newsweek
At the end of the last print issue of Newsweek (31/12/12) is a special advertising feature about the Best Colleges and Universities in Asia, Korea, Vietnam, Hong Kong, Japan and the USA.
The feature is quite revealing about how the various global rankings are regarded in Asia. There is nothing about the Shanghai rankings, the Taiwan rankings, Scimago, Webometrics, URAP or the Leiden Ranking.
There are five references to the QS rankings, one of which calls them "revered" (seriously!) and another of which refers to the "SQ" rankings.
There are two to Times Higher Education, two to America's Best Colleges, one to Community College Weekly and one to the Korean Joongang Daily university rankings.
Sunday, January 13, 2013
A Bit More on the THE Citations Indicator
I have already posted on the citations (research influence) indicator in the Times Higher Education World University Rankings and how it can allow a few papers to have a disproportionate impact. But there are other features of this indicator that affect its stability and can produce large changes even if there is no change in methodology.
This indicator has a weighting of 30 percent. The next most heavily weighted indicator is the research reputation survey which carries a weighting of 18 percent and is combined with number of publications (6 percent) and research income (6 percent) to produce a weighting of 30 percent for research: volume, income and reputation.
It might be argued that the citations indicator accounts for only 30 percent of the total weighting, so that anomalously high citations scores for obscure or mediocre institutions would be balanced or diluted by the other indicators, which carry a combined weighting of 70 percent.
The problem with this is that the scores for the citations indicator are often substantially higher than the scores for the other indicators, especially in the 300-400 region of the rankings, so that the impact of this indicator is correspondingly increased. Full data can be found on the 2012-13 iPhone app.
For example, the University of Ferrara, in 378th place, has a score of 58.5 for citations and a total score of 31.3, so that nearly 60% of its total score comes from the citations indicator. King Mongkut's University of Technology, Thonburi, in 389th place, has a score of 68.4 for citations but a total score of 30.3, so that two thirds of its total score comes from citations. Southern Methodist University, in 375th place, gets 67.3 for citations, which after weighting comes close to providing two thirds of its overall score of 31.6. For these universities a proportional change in the final processed score for citations would have a greater impact than a similar change in any of the other indicators.
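As a rough check on these figures, the share of an overall score supplied by the citations indicator can be estimated by multiplying the citations score by its 30 percent weighting and dividing by the overall score. The sketch below (Python) does this for the three universities just mentioned; it assumes, as a simplification, that the overall score is simply the weighted sum of the published indicator scores, which is not exactly how THE aggregates its data.

```python
# Back-of-envelope estimate of how much of an overall THE score the citations
# indicator supplies, assuming (as a simplification) that the overall score is
# the weighted sum of the published indicator scores.
CITATIONS_WEIGHT = 0.30

def citations_share(citations_score, overall_score):
    """Fraction of the overall score contributed by the weighted citations score."""
    return CITATIONS_WEIGHT * citations_score / overall_score

# Scores quoted above, from the 2012-13 iPhone app data.
examples = [
    ("University of Ferrara", 58.5, 31.3),
    ("King Mongkut's University of Technology, Thonburi", 68.4, 30.3),
    ("Southern Methodist University", 67.3, 31.6),
]

for name, citations, overall in examples:
    print(f"{name}: {citations_share(citations, overall):.0%}")
# Ferrara comes out at roughly 56%, King Mongkut's at about 68% and SMU at about 64%.
```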
Looking at the bottom 25 universities in the top 400, in eight cases the citations indicator provides half or more of the total score, and in 22 cases it provides a third or more. Thus the indicator could have more impact on total scores than its weighting of 30 percent would suggest.
It is also noticeable that the mean score for citations of the THE top 400 universities is much higher than that for research, about 65 compared to about 41. This disparity is especially large as we reach the 200s and the 300s.
So we find that Southern Methodist University has a score of 67.3 for citations but 9.0 for research. The University of Ferrara has a score of 58.5 for citations and 13.0 for research. King Mongkut's University of Technology, Thonburi, has a score of 68.4 for citations and 10.2 for research.
One reason why the scores for the citations indicator are so high is the "regional modification" introduced by Thomson Reuters in 2011. To simplify, this means that the number of citations to a university in a given year and a given field is divided by the square root of the average number of citations in that field and year for all universities in the country.
So if a university in country A receives 100 citations in a certain year of publication and in a certain field, and the average impact for that year and field for all universities in the country is 100, then the university will get a score of 10 (100 divided by the square root of 100, which is 10). If a university in country B receives 10 citations in the same year and field, but the average impact for all universities in its country is 1, then its citations score for that field and year would also be 10 (10 divided by the square root of 1, which is 1).
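In code, the simplified version of the modification described above looks like this. This is a minimal sketch of the blog's simplified description, not Thomson Reuters' actual implementation, and the function name is mine.

```python
import math

def regionally_modified_score(citations, national_average):
    """Citations in a given field and year divided by the square root of the
    national average for that field and year (simplified regional modification)."""
    return citations / math.sqrt(national_average)

# The two hypothetical universities above end up with identical scores.
print(regionally_modified_score(100, 100))  # country A: 100 / sqrt(100) = 10.0
print(regionally_modified_score(10, 1))     # country B: 10 / sqrt(1) = 10.0
```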
This drastically reduces the gap for citations between countries that produce many citations and those that produce few. Thomson Reuters justify this by saying that in some countries it is easier to get grants, to travel and to join the networks that lead to citations and international collaborations than in others. The problem is that it can produce some rather dubious results.
Let us consider the University of Vigo and the University of Surrey. Vigo has an overall score of 30.7 and is in 383rd place. Surrey is just behind with a score of 30.5 and is in 388th place.
But, with the exception of citations, Surrey is well ahead of Vigo on everything else: teaching (31.1 to 19.4), international outlook (80.4 to 26.9), industry income (45.4 to 36.6) and research (25.0 to 9.5).
Surrey, however, does comparatively badly for citations, with a score of only 21.6. It did contribute to the massively cited 2008 review of particle physics, but the university has too many publications for that single review to have much effect. Vigo, in contrast, has a score of 63.7 for citations, which may be because of a much cited paper containing a new algorithm for genetic analysis but also, presumably, because it received, along with other Spanish and Portuguese-speaking universities, a boost from the regional modification.
There are several problems with this modification. First, it can contribute another element of instability. If we observe that a university's score for citations has declined, it could be because its citations have decreased overall or in key fields, or because a much cited paper has slipped out of the five-year period of assessment. It could also be that the number of publications has increased without a corresponding increase in citations.
Applying the regional modification also means that a university's score is affected by fluctuations in the impact of the country's universities as a whole. If there were an increase in citations or a reduction in publications nationally, the national average impact would rise, and this would reduce the citations score of a particular university, since its score would be divided by the square root of a larger number.
This could lead to the odd situation where stringent austerity measures lead to the emigration of talented researchers and eventually a fall in citations, yet some universities in the country might improve simply because they are being compared to a smaller national average.
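To see how sensitive the modified score is to the national average alone, the sketch below holds a hypothetical university's own citation count fixed while the national average falls. The numbers are made up for illustration and the formula is the same simplified one used above.

```python
import math

# A hypothetical university whose own citation count never changes.
OWN_CITATIONS = 50

for national_average in (25, 16, 9):  # national average impact declining over time
    score = OWN_CITATIONS / math.sqrt(national_average)
    print(f"national average {national_average}: modified score {score:.1f}")
# 25 -> 10.0, 16 -> 12.5, 9 -> 16.7: the university appears to improve purely
# because the rest of the country's citation impact has fallen.
```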
The second problem is that it can lead to misleading comparisons. It would be a mistake to conclude that Vigo is a better university than Surrey, or about the same, or even that its research influence is more significant. What has happened is that Vigo is further ahead of the Spanish average than Surrey is ahead of the British average.
Another problematic feature of the citations indicator is that its relationship with the research indicator is rather modest. Consider that 18 of the 30 points for research come from the reputation survey, whose respondents are drawn from researchers with publications in the ISI databases, while the citations indicator counts citations to precisely those papers. Another 6 percent goes to research income, which we would expect to have some relationship with the quality of research.
Yet the correlation between the scores for research and citations for the top 400 universities is a modest .409, which calls into question the validity of one or both of these indicators.
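For anyone who wants to reproduce this kind of figure, a Pearson correlation can be computed directly from the two score columns. The sketch below uses Python's standard library (3.10 or later); the five pairs are simply the research and citations scores quoted in this post, so it only illustrates the calculation rather than reproducing the .409 figure, which comes from the full top-400 data.

```python
from statistics import correlation  # Pearson's r, available from Python 3.10

# Research and citations scores for the five universities quoted in this post
# (Southern Methodist, Ferrara, King Mongkut's Thonburi, Surrey, Vigo).
research_scores = [9.0, 13.0, 10.2, 25.0, 9.5]
citations_scores = [67.3, 58.5, 68.4, 21.6, 63.7]

# With the full top-400 data this comes out at about .409; this tiny sample
# merely shows the calculation.
print(round(correlation(research_scores, citations_scores), 3))
```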
A further problem is that this indicator only counts the impact of papers that make it into an ISI database. A university where most of the faculty do not publish in ISI-indexed journals would do no worse than one that has many publications but few citations, or few citations in lowly cited fields.
To conclude, the current construction of the citations indicator has the potential to produce anomalous results and to introduce a significant degree of instability into the rankings.
Sunday, January 06, 2013
QS Stars Again
The New York Times has an article by Don Guttenplan on the QS Stars ratings which award universities one to five stars according to eight criteria, some of which are already assessed by the QS World University Rankings and some of which are not. The criteria are teaching quality, innovation and knowledge transfer, research quality, specialist subject, graduate employability, third mission, infrastructure and internationalisation.
The article has comments from rankings experts Ellen Hazelkorn, Simon Marginson and Andrew Oswald.
The QS Stars system does raise issues about commercial motivations and conflicts of interest. Nonetheless, it has to be admitted that it fills a gap in the current complex of international rankings. The Shanghai, QS and Times Higher Education rankings may be able to distinguish between Harvard and Cornell or Oxford and Manchester, but they rank only a fraction of the world's universities. There are national ranking and rating systems, but so far anyone wishing to compare middling universities in different countries has very little information available.
There is, however, the problem that making distinctions among little known and mediocre universities, a disproportionate number of which are located in Indonesia, means a loss of discrimination at or near the top. The National University of Ireland Galway, Ohio State University and Queensland University of Technology get the same five stars as Cambridge and King's College London.
QS Stars has the potential to offer a broader assessment of university quality, but it would be better for everybody if it were kept completely separate from the QS World University Rankings.