Interesting News
U.S. News are getting ready to start ranking American online colleges.
Friday, July 01, 2011
The THE Survey
Times Higher Education and its partner Thomson Reuters have announced the completion of their survey of academic opinion. There were 17,554 responses from 137 countries, nearly a third more than last year. That means nearly 31,000 responses over the last two years but THE, in contrast to their rivals, QS, will only count responses to this year's survey.
QS have still not closed their survey, so it looks as though they might well push the number of responses over 17,500 and claim victory. THE, no doubt, will point out that all of their respondents are new ones, while QS are counting respondents from 2010 and 2009 as well.
THE have indicated the number of responses but not the number of survey forms that were sent out, so the response rate for the survey is still unknown. That rate matters more for judging the validity of the survey than the raw number of responses.
Sunday, June 19, 2011
QS Latin American Rankings
QS have published the results of their preliminary study for a Latin American university ranking. This would be the second in their series of regional rankings, after the Asian rankings, now in their third year.
The methodology suggested by the rankings is as follows:
Latin American Academic Reputation 30%
Papers per Faculty 10%
Citations per Paper 10%
Student Faculty Ratio 10%
Staff with Ph D 10%
Latin American Employer Reputation 20%
International Faculty 2.5%
International Students 2.5%
Inbound Exchange Students 2.5%
Outbound Exchange Students 2.5%
QS's surveys have been criticised on several grounds, including low response rates. However, the employer survey is valuable as an external assessment of universities, while the academic survey might be considered a complement to citations-based indicators which in both the THE and QS rankings have thrown up some odd results.
There are two indicators that are directly research based. Given the apparent ease with which citations can be manipulated, a broader variety of indicators could have been used here, including citations per paper, h-index, total publications and citations, proportion of funded research and publications in high-impact journals. QS have missed an opportunity here.
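As a rough illustration of one of those alternatives, here is a minimal sketch, in Python and with invented citation counts, of how an h-index can be computed from a list of per-paper citations. It is not QS's procedure, just the standard definition.

def h_index(citations):
    # The h-index is the largest h such that at least h papers
    # have at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical institution with eight papers and these citation counts
print(h_index([25, 18, 9, 6, 5, 3, 1, 0]))  # prints 5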
Student-faculty ratio is allocated 10% instead of 20% as in the international ranking. This is an admittedly crude proxy for teaching quality. QS are apparently experimenting with a student satisfaction survey which might produce more valid results.
Ten per cent goes to the proportion of staff with Ph Ds. This may well encourage the further and pointless over-production of substandard doctorates.
Five per cent goes to international students and international faculty. I am not sure that this will mean very much, especially in the smaller Central American republics. Counting exchange students is definitely not a good idea: it is something that can be easily manipulated. In the Asian rankings there were some large and puzzling increases in the numbers of exchange students between 2009 and 2010.
Thursday, June 16, 2011
The QS Arts and Humanities Rankings
See here for the complete rankings.
Here are the top five in each indicator of the QS subject rankings: academic survey, employer survey and citations per paper.
There is nothing surprising about the leaders in the two surveys. But the citations indicator is another matter. Perhaps QS has followed Times Higher in uncovering "clear pockets of excellence". Would any specialists out there like to comment on Newcastle University (the English one, not the Australian) and Durham being joint first for history -- something to do with proximity to Hadrian's Wall? What about Brown for Philosophy, Stellenbosch for Geography and Area Studies and Padua for Linguistics?
English Language and Literature
Academic survey
1. Harvard
2. Oxford
3. Cambridge
4. UC Berkeley
5. Yale
Employer Survey
1. Oxford
2. Cambridge
3. Harvard
4. MIT
5. UC Los Angeles
No ranking for citations
Modern Languages
Academic Survey
1. Harvard
2. UC Berkeley
3. Oxford
4. Cambridge
5. Cornell
Employer Survey
1. Harvard
2. Oxford
3. Cambridge
4. MIT
5. Stanford
No rankings for citations
History
Academic Survey
1. Harvard
2. Cambridge
3. Oxford
4. Yale
5. UC Berkeley
Employer Survey
1. Oxford
2. Harvard
3. Cambridge
4. University of Pennsylvania
5. Yale
Citations per Paper
1= Newcastle (UK)
1= Durham
3. Liverpool
4. George Washington
5. University of Washington
Philosophy
Academic Survey
1. Oxford
2. Harvard
3. Cambridge
4. UC Berkeley
5. Princeton
Employer Survey
1. Cambridge
2. Harvard
3. Oxford
4. MIT
5. UC Berkeley
Citations per Paper
1. Brown
2. Melbourne
3. MIT
4= Rutgers
4= Zurich
Geography and Area Studies
Academic survey
1. UC Berkeley
2. Cambridge
3. Oxford
4. Harvard
5. Tokyo
Employer Survey
1. Harvard
2. Cambridge
3. Oxford
4. MIT
5. UC Berkeley
Citations per Paper
1. Stellenbosch
2. Lancaster
3. Durham
4. Queen Mary London
5. University of Kansas
Linguistics
Academic Survey
1. Cambridge
2. Oxford
3. Harvard
4. UC Berkeley
5. Stanford
Employer Survey
1. Harvard
2. Oxford
3. MIT
4. UC Berkeley
5. Melbourne
Citations per Paper
1. Padua
2. Boston University
3. York University (UK)
4. Princeton
5. Harvard
Tuesday, May 31, 2011
Asia: Japan Falling, Korea and China Rising
See my article on the QS Asian Rankings 2011 in University World News
Wednesday, May 18, 2011
The QS Life Sciences Ranking Continued
Looking at the scores for the three indicators, academic survey, employer survey and citations per paper, we find the situation is similar to that of the engineering rankings released last month. There is a reasonably high correlation between the scores for the two surveys:
Medicine .720
Biological Sciences .747
Psychology .570
The correlations between the score for citations per paper and the academic survey are low but still significant:
Medicine .290
Biological Sciences .177
Psychology .217
The correlations between the citations indicator and the employer survey are low or very low, and insignificant (a sketch of how such correlations can be computed follows the figures):
Medicine .129
Biological Sciences .015
Psychology -.027
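These are, presumably, ordinary product-moment (Pearson) correlations across the universities that have scores on both indicators. A minimal sketch of the calculation, using made-up indicator scores rather than the actual QS data:

from math import sqrt

def pearson(xs, ys):
    # Product-moment correlation between two equal-length lists of scores
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical academic and employer survey scores for five universities
academic = [100, 92, 85, 70, 60]
employer = [98, 90, 80, 75, 55]
print(round(pearson(academic, employer), 3))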
Looking at the top five universities for each indicator, there are no surprises as far as the surveys are concerned, but some of the universities in the top five for citations do cause some eyebrow raising. Arizona State University? The University of Cincinnati? Tokyo Metropolitan University? Perhaps these are hitherto unnoticed pockets of excellence of the Alexandrian kind?
Top Five in Medicine
Academic Survey
1. Harvard
2. Cambridge
3. Oxford
4. Stanford
5. Yale
Employer Survey
1. Harvard
2. Cambridge
3. Oxford
4. MIT
5. Stanford
Citations per Paper
1. MIT
2. Rockefeller University
3. Caltech
4. The University of Texas M. D. Anderson Cancer Center
5. Harvard
Top Five in Biological Sciences
Academic Survey
1. Cambridge
2. Harvard
3. UC Berkeley
4. Oxford
5. MIT
Employer Survey
1. Harvard
2. Cambridge
3. MIT
4. Oxford
5. Stanford
Citations per Paper
1. Arizona State University
2. Tokyo Metropolitan University
3. MIT
4. Rockefeller University
5. Harvard
Top Five in Psychology
Academic Survey
1. Harvard
2. Stanford
3. UC Berkeley
4. Cambridge
5. Oxford
Employer Survey
1. Cambridge
2. Harvard
3. Oxford
4. Stanford
5. UC Berkeley
Citations per Paper
1. UC Irvine
2. Emory
3. University of Cincinnati
4. Princeton
5. Dartmouth College
Friday, May 06, 2011
Inappropriate Analogy Watch
Times Higher Education of April 21st has a rather disconcerting cover, a close-up picture of a bonobo ape. Inside there is a long article by a graduate student at the University of British Columbia that argues that humans may have been too hasty in assuming that their current aggressive behavior is rooted in their ancestry. He suggests that humanity is more closely related to the bonobos than to the common chimpanzees. The former are peaceful, promiscuous, egalitarian, dominated by females and without hang-ups about homosexuality. They sound rather like a mix between a hippie commune and a humanities faculty at an American state university, or at least like those places would imagine themselves to be. Common chimpanzees, on the other hand, are notorious for behaving like a gang of skinheads on a Saturday night.
This is a variant of a common theme in popularized social science writing. For a long time, western feminists and leftists have looked to contemporary or historical pre-modern societies for validation, only to find disappointment. Margaret Mead's free-loving Samoans turned out to be rather different, while the search for mother-earth-worshipping matriarchies has been equally futile. Now, it seems, they are forced to go back several million years. Perhaps the bonobo really are what primatologists say they are. But it would be unsurprising if they turn out to be as politically incorrect, competitive and unpleasant as the chimpanzees.
In any case, it is pseudo-science to suggest that humanity can take any other species as a model or inspiration. There are dozens of extinct species and subspecies between us and the bonobos, some of which may have been even more gentle and promiscuous than the bonobo or even more violent and competitive than the chimpanzee.
The point of the article is found in an editorial by Ann Mroz in the same issue:
In higher education, we appear to be moving from an approach based on cooperation to one based on competition, from the bonobo compact to the chimp reforms, if you like. The Browne Review launches us into a quasi-market world, which in itself has far-reaching implications. Unfortunately, it comes on top of a range of pre-existing and co-existing factors: the concentration of research funding; tighter immigration rules; cuts in teacher training and NHS cash; and internationalisation.
Some post-1992 institutions facing immediate financial constraints are moving swiftly to deal with their problems. London Metropolitan University, for example, is cutting about 400 of its 557 degree courses, and the University of East London is planning to axe its School of Humanities and Social Sciences.
Staff at the former institution describe the move as "an attempted reversal of widening participation...of everything that London Met...came into existence to promote". Staff at the latter describe its social sciences and humanities as high-performing areas. "Are UEL's non-traditional students going to be denied an academic education on the basis of managers' assumption that all such students are good for - and will be willing to pay for - is training?" they ask.
She therefore concludes:
UK universities have survived for 800 years through successful evolution in a relatively stable habitat, a context they share with the cooperative bonobo. The competitive chimpanzee, however, has had to adapt to more hostile conditions. In shaping the next stage of its evolution, the academy has the choice of emulating either the aggressive ape or the better angels of our nature.
There is a problem with this. The bonobo are close to extinction. There are only 10,000 of them left, compared with 300,000 common chimpanzees, and the only reason those 10,000 have survived is that they are separated from the chimpanzees by the Congo river.
If Ann Mroz thinks British universities have evolved through cooperation over 800 years she should start by reading the novels of C. P. Snow. No doubt they have become thoroughly cooperative over the last few years as diversity workshops, collaborative projects, performance appraisals, quality audits and professional development seminars have eradicated most signs of individuality in their faculty.
But there is no Congo river separating British universities from all those nerds and buffs in Korea, China and Singapore who work 80 hours a week and refuse to cooperate and are quite uninterested in diversity, safe and comfortable environments and collegiality.
And just what is so bad about training?
Wednesday, May 04, 2011
New QS Rankings
QS have just released their Life Sciences rankings based on their employer and academic surveys and citations per paper.
Here are the top five for medicine, biology and psychology.
Medicine
1. Harvard
2. Cambridge
3. MIT
4. Oxford
5. Stanford
Biological Sciences
1. Harvard
2. MIT
3. Cambridge
4. Oxford
5. Stanford
Psychology
1. Harvard
2. Cambridge
3. Stanford
4. Oxford
5. UC Berkeley
Monday, May 02, 2011
Rising and Falling in Asia-Pacific
One problem with most international rankings is that they tend to measure historical quality and are not much use for predicting what will happen in the near future. The Shanghai rankings' alumni and awards criteria allow Oxbridge and some German universities to live off intellectual capital generated decades ago. The surveys of the QS rankings inevitably favour big, old, wealthy universities with years of alumni and endowments behind them. It will take a long time for any rapidly developing school to score well on the eleven year criteria in the HEEACT rankings.
I have compiled a list of the percentage change in the number of publications in ISI databases of universities in the Asia Pacific region between 2009 and 2010. The ranking includes all the universities listed in the 2009 Shanghai ARWU from the Asia Pacific region.
King Saud University is at the top of the table, almost doubling its output of papers between 2009 and 2010. Six out of the top 10 are from Greater China. Some major Japanese universities seem to be shrinking and Israel and Australia do not seem to be doing very well.
Some caveats. This is basically a measure of the quantity, not the quality, of research. Also, the results may reflect organisational changes such as the acquisition or loss of a medical school. The data were collected over several weeks, during which items could have been added to the databases, so the scores were rounded to whole numbers.
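To be clear about the arithmetic, the figures in the list below are simple year-on-year percentage changes, rounded to whole numbers. A minimal sketch of the calculation in Python, with invented publication counts rather than the actual ISI figures:

def percent_change(pubs_2009, pubs_2010):
    # Year-on-year percentage change in publication counts, rounded to a whole number
    return round(100 * (pubs_2010 - pubs_2009) / pubs_2009)

# Hypothetical university that went from 1,200 to 2,340 indexed papers
print(percent_change(1200, 2340))  # prints 95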
Percentage change in publications in the ISI Databases, 2009-2010
1. King Saud University, Saudi Arabia, 95
2. Shandong University, China, 16
3. National Yang Ming University, Taiwan, 15
4. Tianjin University, China, 13
5. Nanyang Technological University, Singapore, 13
6. Sun Yat Sen University, China, 11
7. Fudan University, China, 10
8. Victoria University of Wellington, New Zealand, 9
9. Niigata University, Japan, 9
10. Gunma University, Japan, 9
11. Nihon University, Japan, 9
12. University of Tasmania, Australia, 9
13. Tokyo Medical and Dental University, Japan, 8
14. Jilin University, China, 8
15. Chang Gung University, Taiwan, 7
16. Chinese University of Hong Kong, 7
17. Massey University, New Zealand, 7
18. University of Auckland, New Zealand, 6
19. Deakin University, Australia, 6
20. La Trobe University, Australia, 6
21. Seoul National University, Korea, 6
22. Nanjing University, China, 6
23. Nankai University, China, 6
24. University of Canterbury, New Zealand, 6
25. Swinburne University of Technology, Australia, 6
26. Weizmann Institute of Science, Israel, 6
27. Kumamoto University, Japan, 6
28. Osaka Prefecture University, Japan, 5
29. Yonsei University, Korea, 4
30. Peking University, China, 4
31. University of Haifa, Israel, 4
32. Lanzhou University, China, 3
33. James Cook University, Australia, 3
34. Hong Kong Polytechnic University, 3
35. University of Hong Kong, 3
36. University of Queensland, Australia, 3
37. China Agricultural University, 3
38. Curtin University of Technology, Australia, 3
39. University of Otago, New Zealand, 3
40. Kyungpook National University, Korea, 3
41. Korea Advanced Institute of Science and Technology, 2
42. Sichuan University, China, 3
43. Korea University, 2
44. University of Adelaide, Australia, 2
45. Dalian University of Technology, China, 2
46. Macquarie University, Australia, 2
47. Sungkyunkwan University, Korea, 2
48. Pusan National University, Korea, 1
49. Flinders University, Australia, 1
50. Shanghai Jiao Tong University, China, 1
51. University of New South Wales, Australia, 1
52. National Taiwan University, 1
53. Osaka City University, Japan, 1
54. Monash University, Australia, 0
55. Gifu University, Japan, 0
56. Tsinghua University, China, -1
57. Hiroshima University, Japan, -1
58. Zhejiang University, China, -1
59. Pohang University of Science and Technology, Korea, -1
60. Bar Ilan University, Israel, -1
61. National Chiao Tung University, Taiwan, -1
62. Kobe University, Japan, -1
63. University of Tehran, Iran, -1
64. University of Western Australia, -2
65. University of Western Sydney, Australia, -2
66. National Tsing Hua University, Taiwan, -2
67. National University of Singapore, -2
68. Harbin Institute of Technology, China, -2
69. Kanazawa University, Japan, -3
70. City University of Hong Kong, -3
71. University of Tokushima, Japan, -4
72. University of Newcastle, Australia, -4
73. University of Melbourne, Australia. -4
74. Ben Gurion University of the Negev, Israel, -4
75. Indian Institute of Science, -4
76. Osaka University, Japan, -4
77. Kyushu University, Japan, -4
78. University of Wollongong, Australia, -4
79. Hokkaido University, Japan, -5
80. Huazhong University of Science and Technology, China, -5
81. University of Tsukuba, Japan, -5
82. University of Tokyo, Japan, -5
83. Yamaguchi University, Japan, -5
84. Hanyang University, Korea, -6
85. Hong Kong University of Science and Technology, -6
86. Technion - Israel Institute of Technology, -6
87. Hebrew University of Jerusalem, Israel, -6
88. Nagasaki University, Japan, -6
89. Kyoto University, Japan, -6
90. Chiba University, Japan, -6
91. Australian National University, -6
92. Kagoshima University, Japan, -7
93. Tel Aviv University, Israel, -7
94. Nagoya University, Japan, -7
95. National Cheng Kung University, Taiwan, -8
97. Keio University, Japan, -8
98. Okayama University, Japan, -8
99. National Central University, Taiwan, -9
100. Nara Institute of Science and Technology, Japan, -9
101. Ehime University, Japan, -10
102. Tohoku University, Japan, -10
103. Tokyo Institute of Technology, Japan, -10
104. Indian Institute of Technology Kharagpur, -13
Sunday, May 01, 2011
Queen's Will be Ranked This Year
A bit of good news for Times Higher Education. Queen's University, Canada, has decided to take part in this year's THE World University Rankings.
'Rankings methodologies had come under scrutiny in recent years. Some universities including Queen’s were concerned that inconsistent criteria and data used for comparing institutions did not accurately reflect their objectives, and some have participated in rankings selectively or not at all.
Last year, Queen’s decided not to submit information to the Times Higher Education ranking because of concerns about its methodology. As a result, Queen’s was not included in the Top 200 list. The Times [sic] has since changed its methodology.
“Queen’s is still concerned because the rankings focus mainly on research volume and intensity, and although Queen’s is one of Canada’s top research universities, our quality undergraduate student experience and out-of-classroom experience are not fully captured,” says Chris Conway, Director, Institutional Research and Planning. “This just means we need to work hard to tell the other side of our story – that we’re a balanced academy, excelling in both research and the student experience.” '
The implication that Queen's has decided to take part this year because of a change in methodology is difficult to accept. THE have talked about revising their citations indicator but nothing definite has emerged. The real reason might be this:
'Although both global and domestic rankings struggle with standardizing data collection and interpretation, they provide one of the few tools available to prospective undergraduate students and their families for evaluating universities.
“With so many options, rankings help to reassure parents and students about their decision to attend a given university,” says Andrea MacIntyre, the university’s international admission manager.
Queen’s position in rankings is one of the top three concerns among prospective undergraduate students, particularly in China and India, where the national education systems focus heavily on class standings from the early stages of education.'
Saturday, April 30, 2011
300: The Iranian Version
I have always thought that university ranking organisations are unimaginative in their choice of names. Surely they could do better than ARWU, WUR and HEEACT? I admit though that SCIMAGO sounds a bit better. What about renaming one of the tables the Comparative Ranking of Academic Performance?
For a moment, when reading the text below, I thought QS had come up with a slightly more interesting name for their rankings, SUE, but it is, it seems, just the QS WUR somewhat mutated by translation in and out of Farsi.
"QS World University Ranking asks head of ISC (Islamic World Science Citation Center) for cooperation in electing high ranked universities of the world.
IBNA: According to the public relations of ISC, each year the QS World University Ranking releases the list of top world universities and for the year 2011, Dr Jafar Mehrad is asked for cooperation in electing high-ranked universities.
Meanwhile, ten out of total state and non-state Iranian universities and 30 universities from Asia and the Middle East are to be sorted in this election according to the criteria of this ranking system.
For the election of top world universities, 300 experts are annually invited and this year Dr Jafar Mehrad is one of the jurors.
The results of QS World University ranking will be released in autumn 2011. "
I wonder where the bit about 300 jurors came from.
Friday, April 22, 2011
Worth Reading
Retraction Watch is a blog that deals with the retraction of scientific papers because of plagiarism, duplication, fabrication of data and so on.
Second Edition
The National Research Council in the US has released a new version of its 2010 doctoral program rankings. It seems that there were a large number of errors the first time around. According to an article by David Glenn in the Chronicle of Higher Education:
The National Research Council released on Thursday a revised edition of its 2010 rankings of American doctoral programs that corrects four types of errors discovered in the original report, which was issued last September. But the new rankings do not deal with certain other concerns that scholars have raised about the project.
In the revised edition, almost all programs' positions on the council's "ranges of rankings" have changed at least slightly, but in most cases the changes are not substantial. In a few academic fields, however, the numbers have changed significantly for at least 20 percent of the programs. Those include geography, linguistics, and operations research.
A spreadsheet of the new rankings is available for download at the council's Web site. The council has also released a separate, much smaller spreadsheet that summarizes the changes in programs' "R" and "S" rankings. (R rankings reflect how similar a program is to the programs in its field with the strongest reputations. S rankings more directly reflect a program's performance on variables that scholars in the field say are most important, such as faculty research productivity or student diversity.)
The new edition makes four kinds of corrections. The original report in many cases undercounted faculty members' honors and awards, the proportion of new graduates who find academic jobs, and the proportion of first-year students who are given full financial support. In nonhumanities fields, the report also used faulty data for faculty members' 2002 publications, which in turn caused errors in calculations of citation counts.
Tuesday, April 19, 2011
Reviewing the THE Rankings
An article by Phil Baty in Times Higher Education looks at the various components of last year's THE World University Rankings and gives some hints about changes to come this year. There are some good points but also some problems. Quoted passages from the article are followed by my comments.
We look at research in a number of different ways, examining reputation, income and volume (through publication in leading academic journals indexed by Thomson Reuters). But we give the highest weighting to an indicator of “research influence”, measured by the number of times published research is cited by academics across the globe.
We looked at more than 25 million citations over a five-year period from more than five million articles.
Yes, but when you normalise by field and by year, then you get very low benchmark figures and a few hundred citations to a few dozen articles can acquire disproportionate influence.
All the data were normalised to reflect variations in citation volume between different subject areas, so universities with strong research in fields with lower global citation rates were not penalised.
The lower the global citation rates the more effect a strategically timed and placed citation can have and the greater the possibility of gaming the system.
We also sought to acknowledge excellence in research from institutions in developing nations, where there are less-established research networks and lower innate citation rates, by normalising the data to reflect variations in citation volume between regions. We are proud to have done this, but accept that more discussion is needed to refine this modification.
In principle this sounds like a good idea but it could just mean that Singapore, Israel, South Africa and the south of Brazil might be rewarded for being located in under-achieving regions of which they are not really a part.
The “research influence” indicator has proved controversial, as it has shaken up the established order, giving high scores to smaller institutions with clear pockets of research excellence and boosting those in the developing world, often at the expense of larger, more established research-intensive universities.
Here is a list of universities that benefited disproportionately from high scores for the "research influence" indicator. Are they really smaller? Are they really in the developing world? And as for those clear pockets of excellence, that would certainly be the case for Bilkent (you can find out who he is in five minutes), but for Alexandria...?
Boston College
University of California Santa Cruz
Royal Holloway, University of London
Pompeu Fabra
Bilkent
Kent State University
Hong Kong Baptist University
Alexandria
Barcelona
Victoria University of Wellington
Tokyo Metropolitan University
University of Warsaw
Something else about this indicator that nobody seems to have noticed is that even if the methodology remains completely unchanged, it is capable of producing dramatic changes from year to year. Suppose that an article in a little cited field like applied math was cited ten times in its first year of publication. That could easily be 100 times the benchmark figure. But in the second year that might be only ten times the benchmark. So if the clear pocket of research excellence stops doing research and becomes a newspaper columnist or something like that, the research influence score will go tumbling down.
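To make that arithmetic concrete, here is a minimal sketch of the kind of field-and-year normalisation being described; the benchmark values are invented for illustration and are not Thomson Reuters' actual figures.

def normalised_impact(citations, benchmark):
    # Ratio of a paper's citations to the expected (benchmark) citation count
    # for papers of the same field and publication year
    return citations / benchmark

# Hypothetical paper in a little-cited field:
# year 1: 10 citations against a benchmark of 0.1 -> 100 times the world average
# year 2: the same 10 citations against a benchmark of 1.0 -> only 10 times
print(normalised_impact(10, 0.1))  # 100.0
print(normalised_impact(10, 1.0))  # 10.0

A tiny benchmark in the denominator is what lets a handful of citations produce a spectacular score, and the score falls as the benchmark grows with the age of the paper.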
We judge knowledge transfer with just one indicator – research income earned from industry – but plan to enhance this category with other indicators.
This is a good idea since it represents, however indirectly, an external assessment of universities.
Internationalisation is recognised through data on the proportion of international staff and students attracted to each institution.
Enough has been said about the abuses involved in recruiting international students. Elsewhere THE have said that they are adding more measures of internationalisation.
The flagship – and most dramatic – innovation is the set of five indicators used to give proper credit to the role of teaching in universities, with a collective weighting of 30 per cent.
But I should make one thing very clear: the indicators do not measure teaching “quality”. There is no recognised, globally comparative data on teaching outputs at present. What the THE rankings do is look at the teaching “environment” to give a sense of the kind of learning milieu in which students are likely to find themselves.
The key indicator for this category draws on the results of a reputational survey on teaching. Thomson Reuters carried out its Academic Reputation Survey – a worldwide, invitation-only poll of 13,388 experienced scholars, statistically representative of global subject mix and geography – in early 2010.
It examined the perceived prestige of institutions in both research and teaching. Respondents were asked only to pass judgement within their narrow area of expertise, and we asked them “action-based” questions (such as: “Where would you send your best graduates for the most stimulating postgraduate learning environment?”) to elicit more meaningful responses.
In some ways, the survey is an improvement on the THE-QS "peer review" but the number of responses was lower than the target and we still do not know how many survey forms were sent out. Without knowing the response rate we cannot determine the validity of the survey.
The rankings also measure staff-to-student ratios. This is admittedly a relatively crude proxy for teaching quality, hinting at the level of personal attention students may receive from faculty, so it receives a relatively low weighting of just 4.5 per cent.
Wait a minute. This means the measure is the number of faculty or staff per student. But the THE web site says "undergraduates admitted per academic", which is the complete opposite. An explanation is needed.
We also look at the ratio of PhD to bachelor’s degrees awarded, to give a sense of how knowledge-intensive the environment is, as well as the number of doctorates awarded, scaled for size, to indicate how committed institutions are to nurturing the next generation of academics and providing strong supervision.
Counting the proportion of postgraduate students is not a bad idea. If nothing else, it is a crude measure of the maturity of the students. However, counting doctoral students may well have serious backwash effects as students who would be quite happy in professional or masters programs are coerced or cajoled into Ph D courses that they may never finish and which will lead to a life of ill-paid drudgery if they do.
The last of our teaching indicators is a simple measure of institutional income scaled against academic staff numbers. This figure, adjusted for purchasing-price parity so that all nations compete on a level playing field, gives a broad sense of the general infrastructure and facilities available.
Yes, this is important and it's time someone started counting it.
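For what it is worth, here is a minimal sketch of the kind of calculation described above, with an invented purchasing-power-parity conversion factor; it is not THE's actual procedure, just the obvious arithmetic.

def income_per_staff(income_local, ppp_factor, academic_staff):
    # Institutional income converted to a common currency at purchasing-power parity,
    # then scaled against the number of academic staff
    income_ppp = income_local / ppp_factor
    return income_ppp / academic_staff

# Hypothetical university: 900 million local currency units of income,
# a PPP conversion factor of 1.5, and 2,000 academic staff
print(round(income_per_staff(900e6, 1.5, 2000)))  # 300000 per staff member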
Monday, April 18, 2011
Art Imitates Life: Update
It is time to bring the unbearable suspense to an end. The first text is the April Fool's Joke. Go here. The second is an article in Notices of the AMS by Douglas Arnold and Kristine Fowler.
Thursday, April 14, 2011
Art Imitates Life
Which of these texts is an April Fool joke? How can you tell? Links will be posted in a few days.
TEXT 1
The Federal Intelligence Service discovered a Ponzi scheme of academic citations lead by an unemployed particle physicist. A house search conducted in Berlin last week revealed material documenting the planning and administration of a profitable business of trading citations for travel reimbursement.
According to the Federal Intelligence Service, the hint came from researchers at Michigan University, Ann Arbor, who were analyzing the structure of citation networks in the academic community. In late 2010, their analysis pointed towards an exponentially growing cluster originating from a previously unconnected researcher based in Germany's capital. A member of the Ann Arbor group, who wants to remain unnamed, inquired about the biography of the young genius, named Al Bert, sparking such amount of activity. The researcher was easily able to find Dr. Bert scheduled for an unusual amount of seminars in locations all over the world, sometimes more than 4 per week. However, upon contacting the respective institutions, nobody could remember the seminars, which according to Prof. Dr. Dr. Hubert at The Advanced Institute is "Not at all unusual." The network researcher from Ann Arbor suspected Dr. Bert to be a fictitious person and notified the university whose email address Dr. Bert was still using.
It turned out Dr. Bert is not a fictitious person. Dr. Bert's graduated in 2006, but his contract at the university run out in 2008. After this, colleagues lost sight of Dr. Bert. He applied for unemployment benefits in October 2008. As the Federal Intelligence Service reported this Wednesday, he later founded an agency called 'High Impact' (the website has since been taken down) that offered to boost a paper's citation count. A user registered with an almost finished, but not yet published, paper and agreed to pay EUR 10 to Dr. Bert's agency for each citation his paper received above the author's average citation count at the time of registration. The user also agreed to cite 5 papers the agency would name. A registered user would earn EUR 10 for each recruitment of a new paper, possibly their own.
This rapidly created a growing network of researchers citing each others papers, and encouraged the authors to produce new papers, certain they would become well cited. Within only a few months, the network had spread from physics to other research fields. With each citation, Dr. Bert made an income. The algorithm he used to assign citations also ensured his own works became top cites. Yet, with many researchers suddenly having papers with several hundred citations above their previously average citation count, their fee went into some thousand dollars. On several instances Dr. Bert would suggest they invite him for a seminar at their institution and locate it in a non-existent room. He would then receive reimbursement for a fraudulent self-printed boarding pass, illegible due to an alleged malfunctioning printer.
Names of researchers subscribed to Dr. Bert's agency were not accessible at the time of writing.
TEXT 2
The Case of IJNSNS
The field of applied mathematics provides an illuminating case in which we can study such impact-factor distortion. For the last several years, the International Journal of Nonlinear Sciences and Numerical Simulation (IJNSNS) has dominated the impact-factor charts in the “Mathematics, Applied” category. It took first place in each year 2006, 2007, 2008, and 2009, generally by a wide margin, and came in second in 2005. However, as we shall see, a more careful look indicates that IJNSNS is nowhere near the top of its field. Thus we set out to understand the origin of its large impact factor.
In 2008, the year we shall consider in most detail, IJNSNS had an impact factor of 8.91, easily the highest among the 175 journals in the applied math category in ISI’s Journal Citation Reports (JCR). As controls, we will also look at the two journals in the category with the second and third highest impact factors, Communications on Pure and Applied Mathematics (CPAM) and SIAM Review (SIREV), with 2008 impact factors of 3.69 and 2.80, respectively. CPAM is closely associated with the Courant Institute of Mathematical Sciences, and SIREV is the flagship journal of the Society for Industrial and Applied Mathematics (SIAM). Both journals have a reputation for excellence.
Evaluation based on expert judgment is the best alternative to citation-based measures for journals. Though not without potential problems of its own, a careful rating by experts is likely to provide a much more accurate and holistic guide to journal quality than impact factor or similar metrics. In mathematics, as in many fields, researchers are widely in agreement about which are the best journals in their specialties. The Australian Research Council recently released such an evaluation, listing quality ratings for over 20,000 peer reviewed journals across disciplines. The list was developed through an extensive review process involving learned academies (such as the Australian Academy of Science), disciplinary bodies (such as the Australian Mathematical Society), and many researchers and expert reviewers.11 This rating is being used in 2010 for the Excellence in Research Australia assessment initiative and is referred to as the ERA 2010 Journal List. The assigned quality rating, which is intended to represent “the overall quality of the journal,” is one of four values:
• A*: one of the best in its field or subfield
• A: very high quality
• B: solid, though not outstanding, reputation
• C: does not meet the criteria of the higher tiers.
The ERA list included all but five of the 175 journals assigned a 2008 impact factor by JCR in the category “Mathematics, Applied”. Figure 1 shows the impact factors for journals in each of the four rating tiers. We see that, as a proxy for expert opinion, the impact factor does rather poorly. There are many examples of journals with a higher impact factor than other journals that are one, two, and even three rating tiers higher. The red line is drawn so that 20% of the A* journals are below it; it is notable that 51% of the A journals have an impact factor above that level, as do 23% of the B journals and even 17% of those in the C category. The most extreme outlier is IJNSNS, which, despite its relatively astronomical impact factor, is not in the first or second but, rather, third tier.
The ERA rating assigned its highest score, A*, to 25 journals. Most of the journals with the highest impact factors are here, including CPAM and SIREV, but, of the top 10 journals by impact factor, two were assigned an A, and only IJNSNS was assigned a B. There were 53 A-rated journals and 69 B-rated journals altogether. If IJNSNS were assumed to be the best of the B journals, there would be 78 journals with higher ERA ratings, whereas if it were the worst, its ranking would fall to 147. In short, the ERA ratings suggest that IJNSNS is not only not the top applied math journal but also that its rank should be somewhere in the range 75–150. This remarkable mismatch between reputation and impact factor needs an explanation.
Makings of a High Impact Factor
A first step to understanding IJNSNS’s high impact factor is to look at how many authors contributed substantially to the counted citations and who they were. The top-citing author to IJNSNS in 2008 was the journal’s editor-in-chief, Ji-Huan He, who cited the journal (within the two-year window) 243 times. The second top citer, D. D. Ganji, with 114 cites, is also a member of the editorial board, as is the third, regional editor Mohamed El Naschie, with 58 cites. Together these three account for 29% of the citations counted toward the impact factor.
For comparison, the top three citers to SIREV contributed only 7, 4, and 4 citations, respectively, accounting for less than 12% of the counted citations, and none of these authors is involved in editing the journal. For CPAM the top three citers (9, 8, and 8) contributed about 7% of the citations and, again, were not on the editorial board.
Another significant phenomenon is the extent to which citations to IJNSNS are concentrated within the two-year window used in the impact factor calculation. Our analysis of 2008 citations to articles published since 2000 shows that 16% of the citations to CPAM fell within that two-year window and only 8% of those to SIREV did; in contrast, 71.5% of the 2008 citations to IJNSNS fell within the two-year window. In Table 1, we show the 2008 impact factors for the three journals, as well as a modified impact factor, which gives the average number of citations in 2008 to articles the journals published not in 2006 and 2007 but in the preceding six years. Since the cited half-life (the time it takes to generate half of all the eventual citations to an article) for applied mathematics is nearly 10 years,12 this measure is at least as reasonable as the impact factor. It is also independent, unlike JCR’s 5-Year Impact Factor, as its time period does not overlap with that targeted by the Journal.
Table 1. 2008 impact factors computed with the usual two-preceding-years window (2006–7), and with a window going back eight years but neglecting the two immediately preceding (2000–5).
Journal: 2008 impact factor (2006–7 window) / modified 2008 “impact factor” (2000–5 window)
IJNSNS: 8.91 / 1.27
CPAM: 3.69 / 3.46
SIREV: 2.80 / 10.4
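To make the comparison of citation windows concrete, here is a small Python sketch of a window-based "impact factor": citations received in 2008 to articles published within a chosen window of years, divided by the number of articles published in that window. The per-year counts are invented placeholders, not data for any real journal:

# Hedged sketch of the window-based calculation discussed in the text.
# The citation and article counts below are invented examples.
def windowed_impact_factor(citations_in_2008, articles_published, window_years):
    """Citations received in 2008 to articles from window_years,
    divided by the number of articles published in those years."""
    cites = sum(citations_in_2008[year] for year in window_years)
    articles = sum(articles_published[year] for year in window_years)
    return cites / articles

citations_in_2008 = {2000: 30, 2001: 35, 2002: 40, 2003: 45, 2004: 50,
                     2005: 55, 2006: 600, 2007: 700}            # invented
articles_published = {year: 150 for year in range(2000, 2008)}  # invented

standard = windowed_impact_factor(citations_in_2008, articles_published, [2006, 2007])
modified = windowed_impact_factor(citations_in_2008, articles_published, range(2000, 2006))
print(f"Standard 2008 impact factor (2006-07 window): {standard:.2f}")
print(f"Modified 2008 'impact factor' (2000-05 window): {modified:.2f}")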
Note that the impact factor of IJNSNS drops precipitously, by a factor of seven, when we consider a different citation window. By contrast, the impact factor of CPAM stays about the same, and that of SIREV increases markedly. One may simply note that, in distinction to the controls, the citations made to IJNSNS in 2008 greatly favor articles published in precisely the two years that are used to calculate the impact factor. Further striking insights arise when we examine the high-citing journals rather than high-citing authors. The counting of journal self-citations in the impact factor is frequently criticized, and indeed it does come into play in this case. In 2008 IJNSNS supplied 102, or 7%, of its own impact factor citations.
The corresponding numbers are 1 citation (0.8%) for SIREV and 8 citations (2.4%) for CPAM. The disparity in other recent years is similarly large or larger. However, it was Journal of Physics: Conference Series that provided the greatest number of IJNSNS citations. A single issue of that journal provided 294 citations to IJNSNS in the impact factor window, accounting for more than 20% of its impact factor. What was this issue? It was the proceedings of a conference organized by IJNSNS editor-in-chief He at his home university. He was responsible for the peer review of the issue. The second top-citing journal for IJNSNS was Topological Methods in Nonlinear Analysis, which contributed 206 citations (14%), again with all citations coming from a single issue. This was a special issue with Ji-Huan He as the guest editor; his co-editor, Lan Xu, is also on the IJNSNS editorial board. J.-H. He himself contributed a brief article to the special issue, consisting of three pages of text and thirty references. Of these, twenty were citations to IJNSNS within the impact-factor window. The remaining ten consisted of eight citations to He and two to Xu.
Continuing down the list of IJNSNS high-citing journals, another similar circumstance comes to light: 50 citations from a single issue of the Journal of Polymer Engineering (which, like IJNSNS, is published by Freund), guest edited by the same pair, Ji-Huan He and Lan Xu. However, third place is held by the journal Chaos, Solitons and Fractals, with 154 citations spread over numerous issues. These are again citations that may be viewed as subject to editorial influence or control. In 2008 Ji-Huan He served on the editorial board of CS&F, and its editor-in-chief was Mohamed El Naschie, who was also a coeditor of IJNSNS. In a highly publicized case, the entire editorial board of CS&F was recently replaced, but El Naschie remained coeditor of IJNSNS.
Many other citations to IJNSNS came from papers published in journals for which He served as editor, such as Zeitschrift für Naturforschung A, which provided forty citations; there are too many others to list here, since He serves in an editorial capacity on more than twenty journals (and has just been named editor-in-chief of four more journals from the newly formed Asian Academic Publishers). Yet another source of citations came from papers authored by IJNSNS editors other than He, which accounted for many more. All told, the aggregation of such editor-connected citations, which are time-consuming to detect, account for more than 70% of all the citations contributing to the IJNSNS impact factor.
Tuesday, April 12, 2011
QS Engineering Rankings
QS have started to publish detailed subject rankings based on citations per paper over five years and their surveys of academics and employers. The first of these is engineering. There are five subfields: Computer Science and Information Systems, Chemical Engineering, Civil and Structural Engineering, Electrical and Electronic Engineering and Mechanical, Aeronautical and Manufacturing.
For Civil and Structural Engineering the weighting is 50% for the academic survey, 30% for the employers' survey and 20% for citations per paper. For the others it is 40%, 30% and 30%.
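A brief sketch of how such a weighted composite would be computed, assuming indicator scores already scaled to 0-100; only the weights come from QS, and the example scores are invented:

# Sketch of a QS-style weighted composite score.
# Example indicator scores are invented; the weights are those quoted above.
def composite(scores, weights):
    """Weighted sum of indicator scores; weights are assumed to sum to 1.0."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[name] * weight for name, weight in weights.items())

civil_weights = {"academic_survey": 0.50, "employer_survey": 0.30, "citations_per_paper": 0.20}
other_weights = {"academic_survey": 0.40, "employer_survey": 0.30, "citations_per_paper": 0.30}
example_scores = {"academic_survey": 92.0, "employer_survey": 85.0, "citations_per_paper": 60.0}

print(f"Civil and Structural composite: {composite(example_scores, civil_weights):.1f}")
print(f"Other subfields composite: {composite(example_scores, other_weights):.1f}")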
MIT, not surprisingly, is top in each of the five engineering fields that are ranked. In general, the upper levels of these rankings seem reasonable. However, a look at the details, especially in the bottom half, 100-200 places, raises some questions.
One basic problem is that as QS make finer distinctions, they have to rely on smaller sets of data. There were 285 respondents to the academic survey for chemical engineering and 394 for civil and structural engineering. For the employer survey there were 836 for computer science. Each respondent to the academic survey was allowed to nominate up to 40 universities but usually the number was much lower than this. Around the 151-200 level the number of responses would surely have been very low. Similarly, the number of papers counted in each field varied considerably from 43,222 in civil and structural engineering to 514,95 in electrical and electronic engineering. We should therefore be rather sceptical about these rankings.
Something that is noticeable is that there is a reasonably high correlation between the scores for the academic survey and the employer survey. For electrical engineering it is .682, chemical engineering .695, civil engineering .695, computer science .722.
But there is no correlation at all between the citations per paper indicator and the surveys. For electrical engineering it is .064 between citations and academic survey and -.004 between citations and the employer survey. It is the same for the other subfields. None of the correlations are statistically significant.
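For readers who want to check such figures, a correlation and its significance can be computed in a few lines of Python; the score lists below are placeholders rather than the actual per-university QS scores:

# Sketch: Pearson correlation and p-value between two indicator score lists.
# The lists are invented placeholders, not QS data.
from scipy.stats import pearsonr

academic_scores = [92.1, 88.4, 85.0, 79.3, 75.8, 70.2, 66.5, 61.9]   # invented
citation_scores = [55.0, 71.2, 48.9, 80.1, 52.3, 67.4, 59.8, 45.0]   # invented

r, p_value = pearsonr(academic_scores, citation_scores)
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")
# By convention a correlation is called statistically significant when p < 0.05;
# with r near zero, p will generally be well above that threshold.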
Looking at the top universities for the three indicators, we see the same familiar places in each of the subfields according to the surveys: MIT, Stanford, Cambridge, Berkeley, Oxford, Harvard, Imperial College London, Melbourne, Caltech.
But looking at the top scorers for citations per paper, we find a much more varied and unfamiliar array of institutions: New York University, Wageningen, Dartmouth College, Notre Dame, Aalborg, Athens, Lund, Uppsala, Drexel, Tufts, IIT Roorkee, University of Washington, Rice, University of Massachusetts.
The agreement of employers and academics about the quality of engineering programs, even though they refer to different aspects, research and graduate employability, suggests that the surveys are moderately accurate, at least for the top hundred or so.
However, the lack of any correlation at all between the citations indicator and the surveys needs to be raised. It could be that citations have identified up and coming superstars. Perhaps the number of papers is so low in the various subfields that the indicator does not mean very much. Perhaps citations have been so manipulated in recent years -- see the case of Alexandria University -- that they are no longer a robust indicator of quality.
Monday, April 04, 2011
First They Came For the Fire Buffs, Then They Came for the Science Nerds....
An interesting aspect of the 2009 court case brought by firemen in New Haven, Connecticut, was the not very subtle disdain shown by members of the American academic elite towards the pretensions of those who thought that fire fighting required a degree of knowledge and intelligence. The aggrieved firemen had been denied promotion because the test resulted in white firemen doing better than their African American and Hispanic colleagues. See here for an insightful account.
An article by Nicole Allan and Emily Bazelon, graduate of Yale Law School, granddaughter of a judge of the US Court of Appeals, former law clerk for a judge of the US Court of Appeals and Senior Research Scholar in Law and Truman Capote Fellow for Creative Writing and Law at Yale Law School, reported without noticeable irony complaints about the unfairness of the test that passed too many white firefighters: it favored "fire buffs" (enthusiasts according to the dictionary) who read firefighting manuals in their spare time or who came from families with lots of firefighters.
One wonders whether Bazelon ever wondered whether she had derived an unfair advantage from her family background or felt guilty because she had read books about law when she did not have to.
The article concluded:
If New Haven could start over, maybe it could also admit outright that it has more deserving firefighters than it has rewards. The city could come up with a measure for who is qualified for the promotions, rather than who is somehow best. And then it could choose from that pool by lottery. That might not exactly be fair, either. But it would recognize that sometimes there may be no such thing.
That has in fact been done in Chicago.
Meanwhile, admission into North American graduate and professional schools has followed the cities of the US by becoming increasingly less selective as intelligence and knowledge are downgraded and admission is increasingly dependent on vague and unverifiable personality traits.
Educational Testing Service, producers of the Graduate Record Exam, have now introduced a Personal Potential Index that will supplement (and perhaps eventually replace?) the use of the GRE and undergraduate grades for admissions to graduate school.
Basically, this new tool involves applicants nominating five evaluators who will provide assessments of "Knowledge and Creativity; Communication Skills; Teamwork; Resilience; Planning and Organization; and Ethics and Integrity". I can see Newton, Darwin, Einstein and James Watson all tripping up on at least one of these.
The latest development is that the Medical College Admissions Test will do away with its writing test because it does not add much information beyond undergraduate grades. It will be replaced with a section on "behavioural and social sciences principles."
It seems that the point of this is to increase the number of minorities in medical schools although it is not clear why it is assumed that they will do better in answering questions about the social sciences and critical thinking than in writing an essay and verbal analysis.
More changes may be coming soon. Already there are pilot projects in which schools are "doing brief interviews of applicants involving various ethical and social scenarios to learn more about would-be students".
It seems that these developments are a response to criticism of the MCAT from organisations like The National Center for Fair and Open Testing:
Robert Schaeffer, public education director of the center, said that the MCAT has been viewed as encouraging "memorization and regurgitation" and is "better at identifying science nerds than candidates who would become capable physicians well-equipped to serve their patients." The changes being proposed appear to be "responding directly" to these critiques, he said.
According to Wikipedia, "Nerd is a term that refers to a social perception of a person who avidly pursues intellectual activities, technical or scientific endeavors, esoteric knowledge, or other obscure interests, rather than engaging in more social or conventional activities."
Will the time come when the likes of Emily Bazelon are denied promotion or appointment because of their inappropriate buffiness or avid pursuit of intellectual activity? Not to worry. They could probably get jobs as firefighters somewhere.
Here is a prediction. As American universities increasingly select students and faculty because they are communicative, culturally sensitive, resilient and so on while cleansing themselves of all those buffs and nerds, China, Korea and a few other countries will catch up and then overtake them first in scientific output and then in quality.
Sunday, April 03, 2011
The View from Hong Kong
University World News has an article by Kevin Downing of the City University of Hong Kong. It begins:
Are Asian institutions finally coming out of the shadow cast by their Western counterparts? At the 2010 World Universities Forum in Davos, a theme was China's increasing public investment in higher education at a time when reductions in public funding are being seen in Europe and North America. China is not alone in Asia in increasing public investment in higher education, with similar structured and significant investment evident in Singapore, South Korea and Taiwan.
While in many ways this investment is not at all surprising and merely reflects the continued rise of Asia as a centre of global economic power, it nonetheless raises some interesting questions in relation to the potential benefits of rankings for Asian institutions.
Interest in rankings in Asian higher education is undoubtedly high and the introduction of the QS Asian University Rankings in 2009 served to reinforce this. The publication of ranking lists is now greeted with a mixture of trepidation and relief by many university presidents and is often followed by intense questioning from media that are interested to know what lies behind a particular rise or fall on the global or regional stage.
Friday, April 01, 2011
Best Grad Schools
The US News Graduate School Rankings were published on March 15th. Here are the top universities in various subject areas.
Business: Stanford
Education: Vanderbilt
Engineering: MIT
Law: Yale
Medical: Harvard
Biology: Stanford
Chemistry: Caltech, MIT, UC Berkeley
Computer Science: Carnegie Mellon, MIT, Stanford, UC Berkeley
Earth Sciences: Caltech, MIT
Mathematics: MIT
Physics: Caltech, Harvard, MIT, Stanford
Statistics: Stanford
Library and Information Studies: Illinois at Urbana-Champaign
Criminology: Maryland -- College Park
Economics: Harvard, MIT, Princeton, Chicago
English: UC Berkeley
History: Princeton
Political Science: Harvard, Princeton, Stanford
Psychology: Stanford, UC Berkeley
Sociology: UC Berkeley
Public Affairs: Syracuse
Fine Arts: Rhode Island School of Design
Sunday, March 27, 2011
Say It Loud
Phil Baty has an article in THE, based on a speech in Hong Kong entitled 'Say it loud: I'm a ranker and I'm proud'. Very interesting but personally I prefer the James Brown version.
Saturday, March 26, 2011
Growth of Academic Publications: Southwest Asia, 2009-2010
One of several surprises in last year's THE rankings was the absence of any Israeli university from the Top 200. QS had three and, as noted earlier on this blog, over a fifth of Israeli universities were in the Shanghai 500, a higher proportion than any other country. It seems that in the case of at least two universities, Tel Aviv and the Hebrew University of Jerusalem, there was a failure of communication that meant that data was not submitted to Thomson Reuters, who collect and analyse data for THE.
The high quality of Israeli universities might seem rather surprising since Israeli secondary school students perform poorly on international tests of scholastic attainment and the national average IQ is mediocre. It could be that part of the reason for the strong Israeli academic performance is the Psychometric Entrance Test for university admission that measures quantitative and verbal reasoning and also includes an English test. The contrast with the trend in the US, Europe and elsewhere towards holistic assessment, credit for leadership, community involvement, overcoming adversity and being from the right post code area is striking.
Even so, Israeli scientific supremacy in the Middle East is looking precarious. Already the annual production of academic papers in Israel has been exceeded by Iran and Turkey.
Meanwhile, the total number of papers produced in Israel is shrinking while that of Iran and Turkey continues to grow at a respectable rate. The fastest growth in Southwest Asia comes from Saudi Arabia and the smaller Gulf states of Qatar and Bahrain.
Countries ranked by percentage increase in publications in the ISI Science, Social Science and Arts and Humanities indexes and Conference Proceedings between 2009 and 2010. (total 2010 publications in brackets)
1. Saudi Arabia 35% (3924)
2. Qatar 31% (453)
3. Syria 14% (333)
4. Bahrain 13% (184)
5. Palestine 9% (24)
6. UAE 6% (303)
7. Turkey 5% (26835)
8. Lebanon 4% (2058)
9. Iran 4% (21047)
10. Oman 4% (494)
11. Jordan 1% (1637)
12. Iraq -3% (333)
13. Israel -4% (17719)
14. Yemen -8% (125)
15. Kuwait -13% (759)
(data collected 23/3/11)
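The ranking above is simple arithmetic; a sketch along the following lines would reproduce it once the 2009 counts are added (the 2009 figures shown here are illustrative placeholders, not the values actually collected):

# Sketch: rank countries by percentage growth in publications from 2009 to 2010.
# 2010 totals are taken from the list above; the 2009 counts are invented placeholders.
counts = {
    "Saudi Arabia": {"y2009": 2900, "y2010": 3924},
    "Qatar": {"y2009": 346, "y2010": 453},
    "Turkey": {"y2009": 25600, "y2010": 26835},
    "Israel": {"y2009": 18450, "y2010": 17719},
}

def pct_growth(c):
    """Percentage change in publication count from 2009 to 2010."""
    return 100.0 * (c["y2010"] - c["y2009"]) / c["y2009"]

for country, c in sorted(counts.items(), key=lambda item: pct_growth(item[1]), reverse=True):
    print(f"{country:15s} {pct_growth(c):+4.0f}%  ({c['y2010']})")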
This of course may not say very much about the quality of research. A glance at the ISI list of highly cited researchers shows that Israel is ahead, for the moment at least, with 50 compared to 29 for Saudi Arabia and one each for Turkey and Iran.
Thursday, March 24, 2011
Growth in Academic Publications: Southeast Asia 2009-2010
Countries ranked by percentage increase in publications in the ISI Science, Social Science and Arts and Humanities indexes and Conference Proceedings between 2009 and 2010. (total 2010 publications in brackets)
1. Malaysia 31% (8603)
2. Laos 30% (96)
3. Indonesia 30% (1631)
4. Brunei 16% (88)
5. Papua New Guinea 5% (67)
6. Vietnam 5% (1247)
7. Singapore 4% (11900)
8. Thailand 2% (2248)
9. Timor Leste 0% (4)
10. Cambodia -5% (158)
11. Myanmar -12% (78)
Singapore is still the dominant research power in Southeast Asia but Malaysia and Indonesia, admittedly with much larger populations, are closing fast. Thailand is growing very slowly and Myanmar is shrinking.
(data collected 23/3/11)
Tuesday, March 22, 2011
Comparing Rankings 3: Omissions
The big problem with the Asiaweek rankings of 1999-2000 was that they relied on data submitted by universities. This meant that if enough were dissatisfied they could effectively sabotage the rankings by withholding information, which is in fact what happened.
The THES-QS rankings, and since 2010 the QS rankings, avoided this problem by ranking universities whether they liked it or not. Nonetheless, there were a few omissions in the early years: Lancaster, Essex, Royal Holloway University of London and the SUNY campuses at Binghamton, Buffalo and Albany.
In 2010 THE decided that they would not rank universities that did not submit data, a principled decision but one that has its dangers. Too many conscientious objectors (or maybe poor losers) and the rankings would begin to lose face validity.
When the THE rankings came out last year, there were some noticeable absentees, among them the Chinese University of Hong Kong, the University of Queensland, Tel Aviv University, the Hebrew University of Jerusalem, the University of Texas at Austin, the Catholic University of Louvain, Fudan University, Rochester, Calgary, the Indian Institutes of Technology and Science and Sciences Po Paris.
As Danny Byrne pointed out in University World News, Texas at Austin and Moscow State University were in the top 100 in the Reputation Rankings but not in the THE World University Rankings. Producing a reputation-only ranking without input from the institutions could be a smart move for THE.
Monday, March 21, 2011
QS comments on the THE Reputation Ranking
In University World News, Danny Byrne from QS comments on the new THE reputation ranking.
So why has THE decided to launch a world ranking based entirely on institutional reputation? Is it for the benefit of institutions like Moscow State University, which did not appear in THE's original top 200 but now appears 33rd in the world?
The data on which the new reputational ranking is based has been available for six months and comprised 34.5% of the world university rankings published by THE in September 2010.
But this is the first time the magazine has allowed anyone to view this data in isolation. Allowing users to access the data six months ago may have attracted less attention, but it would perhaps have been less confusing for prospective students.
The order of the universities in the reputational rankings differs from the THE's overall ranking. But no new insights have been offered and nothing has changed. This plays into the hands of those who are sceptical about university rankings.