Tuesday, April 12, 2011
QS Engineering Rankings
QS have started to publish detailed subject rankings based on citations per paper over five years and their surveys of academics and employers. The first of these is engineering. There are five subfields: Computer Science and Information Systems; Chemical Engineering; Civil and Structural Engineering; Electrical and Electronic Engineering; and Mechanical, Aeronautical and Manufacturing Engineering.
For Civil and Structural Engineering the weighting is 50% for the academic survey, 30% for the employers' survey and 20% for citations per paper. For the others it is 40%, 30% and 30%.
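As a minimal sketch of the arithmetic involved, using invented scores rather than QS's actual figures, the composite is just a weighted sum:

```python
# Hypothetical indicator scores (0-100) for one university -- not QS data.
scores = {"academic": 92.0, "employer": 85.0, "citations": 70.0}

# QS weightings: Civil and Structural uses 50/30/20; the other subfields 40/30/30.
weights_civil = {"academic": 0.50, "employer": 0.30, "citations": 0.20}
weights_other = {"academic": 0.40, "employer": 0.30, "citations": 0.30}

def composite(scores, weights):
    """Weighted sum of the three indicator scores."""
    return sum(scores[k] * weights[k] for k in weights)

print(f"{composite(scores, weights_civil):.1f}")  # 85.5
print(f"{composite(scores, weights_other):.1f}")  # 83.3
```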
MIT, not surprisingly, is top in each of the five engineering fields that are ranked. In general, the upper levels of these rankings seem reasonable. However, a look at the details, especially in the bottom half, the 100-200 range, raises some questions.
One basic problem is that as QS make finer distinctions, they have to rely on smaller sets of data. There were 285 respondents to the academic survey for chemical engineering and 394 for civil and structural engineering. For the employer survey there were 836 for computer science. Each respondent to the academic survey was allowed to nominate up to 40 universities but usually the number was much lower than this. Around the 151-200 level the number of responses would surely have been very low. Similarly, the number of papers counted in each field varied considerably from 43,222 in civil and structural engineering to 514,95 in electrical and electronic engineering. We should therefore be rather sceptical about these rankings.
One noticeable feature is a reasonably high correlation between the scores for the academic survey and the employer survey: .682 for electrical engineering, .695 for chemical engineering, .695 for civil engineering and .722 for computer science.
But there is no correlation at all between the citations per paper indicator and the surveys. For electrical engineering it is .064 between citations and the academic survey and -.004 between citations and the employer survey. The pattern is the same for the other subfields. None of the correlations is statistically significant.
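For anyone who wants to check this sort of claim, here is a minimal sketch of the calculation, using invented scores since the full QS tables are not reproduced here (scipy assumed available):

```python
from scipy.stats import pearsonr

# Invented indicator scores for eight universities -- not the QS data.
academic_survey = [98.0, 91.5, 88.0, 76.2, 70.1, 65.4, 60.0, 55.3]
citations       = [62.0, 80.5, 55.0, 90.1, 48.7, 77.3, 52.2, 69.9]

r, p = pearsonr(academic_survey, citations)
print(f"r = {r:.3f}, p = {p:.3f}")
# For this toy data r is close to zero and p is large: with p > .05 the
# correlation would conventionally be reported as not statistically significant.
```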
Looking at the top universities for the three indicators, we see the same familiar places in each of the subfields according to the surveys: MIT, Stanford, Cambridge, Berkeley, Oxford, Harvard, Imperial College London, Melbourne, Caltech.
But looking at the top scorers for citations per paper, we find a much more varied and unfamiliar array of institutions: New York University, Wageningen, Dartmouth College, Notre Dame, Aalborg, Athens, Lund, Uppsala, Drexel, Tufts, IIT Roorkee, University of Washington, Rice, University of Massachusetts.
The agreement of employers and academics about the quality of engineering programs, even though the two surveys refer to different things, research and graduate employability, suggests that the surveys are moderately accurate, at least for the top hundred or so.
However, the lack of any correlation at all between the citations indicator and the surveys needs to be explained. It could be that citations have identified up-and-coming superstars. Perhaps the number of papers in the various subfields is so low that the indicator does not mean very much. Perhaps citations have been so manipulated in recent years -- see the case of Alexandria University -- that they are no longer a robust indicator of quality.
Monday, April 04, 2011
First They Came For the Fire Buffs, Then They Came for the Science Nerds....
An interesting aspect of the 2009 court case brought by firemen in New Haven, Connecticut, was the not very subtle disdain shown by members of the American academic elite towards the pretensions of those who thought that fire fighting required a degree of knowledge and intelligence. The aggrieved firemen had been denied promotion because the test resulted in white firemen doing better than their African American and Hispanic colleagues. See here for an insightful account.
An article by Nicole Allan and Emily Bazelon, graduate of Yale Law School, granddaughter of a judge of the US Court of Appeals, former law clerk for a judge of the US Court of Appeals and Senior Research Scholar in Law and Truman Capote Fellow for Creative Writing and Law at Yale Law School, reported without noticeable irony complaints about the unfairness of the test that passed too many white firefighters: it favored "fire buffs" (enthusiasts according to the dictionary) who read firefighting manuals in their spare time or who came from families with lots of firefighters.
One wonders whether Bazelon has ever asked herself whether she derived an unfair advantage from her family background, or felt guilty because she had read books about law when she did not have to.
The article concluded:
If New Haven could start over, maybe it could also admit outright that it has more deserving firefighters than it has rewards. The city could come up with a measure for who is qualified for the promotions, rather than who is somehow best. And then it could choose from that pool by lottery. That might not exactly be fair, either. But it would recognize that sometimes there may be no such thing.
That has in fact been done in Chicago.
Meanwhile, admission to North American graduate and professional schools has followed the example of US cities, becoming less selective as intelligence and knowledge are downgraded and admission comes to depend on vague and unverifiable personality traits.
Educational Testing Service, producers of the Graduate Record Exam, have now introduced a Personal Potential Index that will supplement (and perhaps eventually replace?) the use of the GRE and undergraduate grades for admissions to graduate school.
Basically, this new tool involves applicants nominating five evaluators who will provide assessments of "Knowledge and Creativity; Communication Skills; Teamwork; Resilience; Planning and Organization; and Ethics and Integrity". I can see Newton, Darwin, Einstein and James Watson all tripping up on at least one of these.
The latest development is that the Medical College Admission Test will do away with its writing test because it does not add much information beyond undergraduate grades. It will be replaced with a section on "behavioural and social sciences principles."
It seems that the point of this is to increase the number of minorities in medical schools, although it is not clear why it is assumed that they will do better answering questions about the social sciences and critical thinking than at writing an essay or verbal analysis.
More changes may be coming soon. Already there are pilot projects in which schools are "doing brief interviews of applicants involving various ethical and social scenarios to learn more about would-be students".
It seems that these developments are a response to criticism of the MCAT from organisations like The National Center for Fair and Open Testing:
Robert Schaeffer, public education director of the center, said that the MCAT has been viewed as encouraging "memorization and regurgitation" and is "better at identifying science nerds than candidates who would become capable physicians well-equipped to serve their patients." The changes being proposed appear to be "responding directly" to these critiques, he said.
According to Wikipedia, "Nerd is a term that refers to a social perception of a person who avidly pursues intellectual activities, technical or scientific endeavors, esoteric knowledge, or other obscure interests, rather than engaging in more social or conventional activities."
Will the time come when the likes of Emily Bazelon are denied promotion or appointment because of their inappropriate buffiness or avid pursuit of intellectual activity? Not to worry. They could probably get jobs as firefighters somewhere.
Here is a prediction. As American universities increasingly select students and faculty because they are communicative, culturally sensitive, resilient and so on, while cleansing themselves of all those buffs and nerds, China, Korea and a few other countries will catch up and then overtake them, first in scientific output and then in quality.
Sunday, April 03, 2011
The View from Hong Kong
University World News has an article by Kevin Downing of the City University of Hong Kong. It begins:
Are Asian institutions finally coming out of the shadow cast by their Western counterparts? At the 2010 World Universities Forum in Davos, a theme was China's increasing public investment in higher education at a time when reductions in public funding are being seen in Europe and North America. China is not alone in Asia in increasing public investment in higher education, with similar structured and significant investment evident in Singapore, South Korea and Taiwan.
While in many ways this investment is not at all surprising and merely reflects the continued rise of Asia as a centre of global economic power, it nonetheless raises some interesting questions in relation to the potential benefits of rankings for Asian institutions.
Interest in rankings in Asian higher education is undoubtedly high and the introduction of the QS Asian University Rankings in 2009 served to reinforce this. The publication of ranking lists is now greeted with a mixture of trepidation and relief by many university presidents and is often followed by intense questioning from media that are interested to know what lies behind a particular rise or fall on the global or regional stage.
Friday, April 01, 2011
Best Grad Schools
The US News Graduate School Rankings were published on March 15th. Here are the top universities in various subject areas.
Business: Stanford
Education: Vanderbilt
Engineering: MIT
Law: Yale
Medical: Harvard
Biology: Stanford
Chemistry: Caltech, MIT, UC Berkeley
Computer Science: Carnegie Mellon, MIT, Stanford, UC Berkeley
Earth Sciences: Caltech, MIT
Mathematics: MIT
Physics: Caltech, Harvard, MIT, Stanford
Statistics: Stanford
Library and Information Studies: Illinois at Urbana-Champaign
Criminology: Maryland -- College Park
Economics: Harvard, MIT, Princeton, Chicago
English: UC Berkeley
History: Princeton
Political Science: Harvard, Princeton, Stanford
Psychology: Stanford, UC Berkeley
Sociology: UC Berkeley
Public Affairs: Syracuse
Fine Arts: Rhode Island School of Design
Sunday, March 27, 2011
Say It Loud
Phil Baty has an article in THE, based on a speech in Hong Kong entitled 'Say it loud: I'm a ranker and I'm proud'. Very interesting but personally I prefer the James Brown version.
Saturday, March 26, 2011
Growth of Academic Publications: Southwest Asia, 2009-2010
One of several surprises in last year's THE rankings was the absence of any Israeli university from the Top 200. QS had three and, as noted earlier on this blog, over a fifth of Israeli universities were in the Shanghai 500, a higher proportion than any other country. It seems that in the case of at least two universities, Tel Aviv and the Hebrew University of Jerusalem, there was a failure of communication that meant that data was not submitted to Thomson Reuters, who collect and analyse data for THE.
The high quality of Israeli universities might seem rather surprising since Israeli secondary school students perform poorly on international tests of scholastic attainment and the national average IQ is mediocre. It could be that part of the reason for the strong Israeli academic performance is the Psychometric Entrance Test for university admission that measures quantitative and verbal reasoning and also includes an English test. The contrast with the trend in the US, Europe and elsewhere towards holistic assessment, credit for leadership, community involvement, overcoming adversity and being from the right post code area is striking.
Even so, Israeli scientific supremacy in the Middle East is looking precarious. Already the annual production of academic papers in Israel has been exceeded by Iran and Turkey.
Meanwhile, the total number of papers produced in Israel is shrinking while that of Iran and Turkey continues to grow at a respectable rate. The fastest growth in Southwest Asia comes from Saudi Arabia and the smaller Gulf states of Qatar and Bahrain.
Countries ranked by percentage increase in publications in the ISI Science, Social Science and Arts and Humanities indexes and Conference Proceedings between 2009 and 2010. (total 2010 publications in brackets)
1. Saudi Arabia 35% (3924)
2. Qatar 31% (453)
3. Syria 14% (333)
4. Bahrain 13% (184)
5. Palestine 9% (24)
6. UAE 6% (303)
7. Turkey 5% (26835)
8. Lebanon 4% (2058)
9. Iran 4% (21047)
10. Oman 4% (494)
11. Jordan 1% (1637)
12. Iraq -3% (333)
13. Israel -4% (17719)
14. Yemen -8% (125)
15. Kuwait -13% (759)
(data collected 23/3/11)
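The growth figures are simple year-on-year percentage changes. A minimal sketch of the calculation, with the 2009 counts back-calculated from the reported 2010 totals and growth rates purely for illustration:

```python
# 2009 counts below are inferred from the reported growth rates, for illustration.
pubs = {
    "Saudi Arabia": {2009: 2907, 2010: 3924},
    "Turkey":       {2009: 25557, 2010: 26835},
}

for country, n in sorted(pubs.items(),
                         key=lambda kv: (kv[1][2010] - kv[1][2009]) / kv[1][2009],
                         reverse=True):
    growth = 100 * (n[2010] - n[2009]) / n[2009]
    print(f"{country}: {growth:.0f}% ({n[2010]})")
# Saudi Arabia: 35% (3924)
# Turkey: 5% (26835)
```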
This of course may not say very much about the quality of research. A glance at the ISI list of highly cited researchers shows that Israel is ahead, for the moment at least, with 50 compared to 29 for Saudi Arabia and one each for Turkey and Iran.
Thursday, March 24, 2011
Growth in Academic Publications: Southeast Asia 2009-2010
Countries ranked by percentage increase in publications in the ISI Science, Social Science and Arts and Humanities indexes and Conference Proceedings between 2009 and 2010. (total 2010 publications in brackets)
1. Malaysia 31% (8603)
2. Laos 30% (96)
3. Indonesia 30% (1631)
4. Brunei 16% (88)
5. Papua New Guinea 5% (67)
6. Vietnam 5% (1247)
7. Singapore 4% (11900)
8. Thailand 2% (2248)
9. Timor Leste 0% (4)
10. Cambodia -5% (158)
11. Myanmar -12% (78)
Singapore is still the dominant research power in Southeast Asia but Malaysia and Indonesia, admittedly with much larger populations, are closing fast. Thailand is growing very slowly and Myanmar is shrinking.
(data collected 23/3/11)
Tuesday, March 22, 2011
Comparing Rankings 3: Omissions
The big problem with the Asiaweek rankings of 1999-2000 was that they relied on data submitted by universities. This meant that if enough were dissatisfied they could effectively sabotage the rankings by withholding information, which is in fact what happened.
The THES-QS rankings, and since 2010 the QS rankings, avoided this problem by ranking universities whether they liked it or not. Nonetheless, there were a few omissions in the early years: Lancaster, Essex, Royal Holloway University of London and the SUNY campuses at Binghamton, Buffalo and Albany.
In 2010 THE decided that they would not rank universities that did not submit data, a principled decision but one that has its dangers. Too many conscientious objectors (or maybe poor losers) and the rankings would begin to lose face validity.
When the THE rankings came out last year, there were some noticeable absentees, among them the Chinese University of Hong Kong, the University of Queensland, Tel Aviv University, the Hebrew University of Jerusalem, the University of Texas at Austin, the Catholic University of Louvain, Fudan University, Rochester, Calgary, the Indian Institutes of Technology and Science and Sciences Po Paris.
As Danny Byrne pointed out in University World News, Texas at Austin and Moscow State University were in the top 100 in the Reputation Rankings but not in the THE World University Rankings. Producing a reputation-only ranking without input from the institutions could be a smart move for THE.
Monday, March 21, 2011
QS comments on the THE Reputation Ranking
In University World News, Danny Byrne from QS comments on the new THE reputation ranking.
So why has THE decided to launch a world ranking based entirely on institutional reputation? Is it for the benefit of institutions like Moscow State University, which did not appear in THE's original top 200 but now appears 33rd in the world?
The data on which the new reputational ranking is based has been available for six months and comprised 34.5% of the world university rankings published by THE in September 2010.
But this is the first time the magazine has allowed anyone to view this data in isolation. Allowing users to access the data six months ago may have attracted less attention, but it would perhaps have been less confusing for prospective students.
The order of the universities in the reputational rankings differs from the THE's overall ranking. But no new insights have been offered and nothing has changed. This plays into the hands of those who are sceptical about university rankings.
Wednesday, March 16, 2011
Worth Reading
Ellen Hazelkorn, 'Questions Abound as the College-Rankings Race Goes Global' in Chronicle of Higher Education
"It is amazing that more than two decades after U.S. News & World Report first published its special issue on "America's Best Colleges," and almost a decade since Shanghai Jiao Tong University first published the Academic Ranking of World Universities, rankings continue to dominate the attention of university leaders. Indeed, the range of people watching them now includes politicians, students, parents, businesses, and donors. Simply put, rankings have caught the imagination of the public and have insinuated their way into public discourse and almost every level of government. There are even iPhone applications to help individuals and colleges calculate their ranks.
More than 50 country-specific rankings and 10 global rankings are available today, including the European Union's new U-Multirank, due this year. What started as small-scale, nationally focused guides for students and parents has become a global business that heavily influences higher education and has repercussions well beyond academe."
Tuesday, March 15, 2011
Bright Ideas Department
This is from today's Guardian:
The coalition is considering a Soviet-style central intervention policy to effectively fine individual universities if they impose unreasonable tuition fees next year.
Vince Cable, the business secretary whose department is responsible for universities, and David Willetts, the universities minister, are looking at allowing colleges that charge a modest fee to expand and constraining those that are charging too much.
The government, through the Higher Education Funding Council, sets the grant and numbers for each university and has the power to fine a university as much as £3,000 per student if it over-recruits in a single year.
Ministers are looking at cutting funding from universities that unreasonably charge the maximum £9,000 fee from 2012-13. They admit it is likely most universities will charge well over £8,000 a year.
One minister said: "A form of dramatic centralisation is under active consideration - a form of Gosplan if you like," a reference to the Russian state planning committee set up in the 1920s.
Next bright idea? A Gulag for recalcitrant vice-chancellors? Re-education camps for those who don't take their teaching philosophy statements seriously enough?
Saturday, March 12, 2011
Going Global Hong Kong 2011
Speeches about rankings by Martin Davidson, British Council, Phil Baty, THE, John Molony, QS, and others can be seen here.
Thursday, March 10, 2011
A Bit More on the THE Reputation Rankings
There is a brief article in the Guardian with a lot of comments.
Incidentally, I don't see Alexandria, Hong Kong Baptist and Bilkent Universities in the top 100 for reputation despite the outstanding work that gave them high scores for research impact in the 2010 THE WUR. Perhaps I'm not looking hard enough.
The THE Reputation Rankings
Times Higher Education have constructed a reputation ranking from the data collected for last year's World University Rankings. There is a weighting of two thirds for research and one third for postgraduate teaching. The top five are:
1. Harvard
2. MIT
3. Cambridge
4. UC Berkeley
5. Stanford
Scores are given only for the top fifty universities. Then another fifty are sorted in bands of ten without scores. Evidently, the number of responses favouring universities outside the top 100 was so small that it was not worth listing.
This means that the THE reputational survey reveals significant differences between Harvard and MIT or between Cambridge and Oxford but it would be of no help to those wondering whether to study or work at the University of Cape Town or the University of Kwazulu-Natal or Trinity College Dublin or University College Dublin.
The scores for research reputation (top fifty for total reputation scores only) show a moderate correlation with the THE citations indicator (.422) and, perhaps surprisingly, a higher correlation with the citations per faculty score on the QS World University Rankings of 2010 (.538).
Looking at the QS academic survey, which asked only about research, we can see that there was an insignificant correlation of .213 between the QS survey scores and the citations per faculty scores in the QS rankings (THE reputation ranking top 50 only). However, there was a higher correlation of .422 between the QS survey and the THE citations indicator, the same as that between the THE research reputation scores and the THE citations indicator.
Comparing the two research surveys with a third party, the citations indicator in the Scimago 2010 rankings, the THE research reputation survey did better with a correlation of .438 compared to an insignificant .188 for the QS academic survey.
This seems to suggest that the THE reputational survey does a better job of differentiating between the world's elite universities. But once we leave the top 100 it is perhaps less helpful, and there may still be a role for the QS rankings.
Wednesday, March 09, 2011
The Second Wave
It seems that another wave of rankings is coming. The new edition of America's best graduate schools will be out soon, QS will be releasing detailed subject rankings and, according to Bloomberg Businessweek, THE's ranking by reputation is imminent. Early indications are that the academic anglosphere dominates when reputation alone is considered.
Tuesday, March 08, 2011
Comment on the Paris Ranking
Ben Wildavsky in the Chronicle of Higher Education says:
The Mines ParisTech ranking is an explicitly chauvinistic exercise, born of French unhappiness with the dismal showing of its universities in influential surveys such as the Academic Ranking of World Universities created at Shanghai Jiao Tong University in 2003. When designing the Mines ParisTech ranking, with a view to influencing the architects of the Shanghai methodology, the college says in the FAQ section of its survey results, “we believed it was useful to highlight the good results of French institutions at a time when the Shanghai ranking was widely and is still widely discussed, and not always to the advantage of our own schools and universities.” What’s more, it goes on, “these results constitute a genuine communication tool at an international level, both for the recruitment of foreign students as well as among foreign companies which are not always very familiar with our education system.” Given the genesis of the ranking, it doesn’t seem too surprising that three French institutions made it into this year’s top 10 — École Polytechnique and École Nationale d’Administration joined HEC Paris — while Mines ParisTech itself placed 21st in the world.
Sunday, March 06, 2011
The Paris rankings
The fifth edition of the Professional Ranking of World Universities from Mines ParisTech has just been published. This is based on a single indicator: the number of alumni among top corporate CEOs. Here are the top ten:
1. Harvard
2. Tokyo
3. Keio
4. HEC, France
5. Kyoto
5. Oxford
7. Ecole Polytechnique
8. Waseda
9. ENA
10. Seoul National University
Saturday, February 26, 2011
Surveys and Citations
I have just finished calculating the correlation between the scores for the academic survey and citations per faculty in the 2010 QS World University Rankings.
Since the survey asked about research and since citations are supposed to be a robust indicator of research excellence we would expect a high correlation between the two.
It is in fact .391, which is on the low side. There could be valid reasons why it is so low. Citations, by definition, must follow publication which follows research which in turn is preceded by proposals and a variety of bureaucratic procedures. A flurry of citations might be indicative of the quality of research begun a decade ago. The responses to the survey might, on the other hand, be based on the first signs of research excellence long before the citations start rolling in.
Still, the correlation does not seem high enough. At first glance one would suspect that the survey is faulty but it could be that citations do not mean very much any more as a measure of excellence.
It would be very interesting to calculate the correlation between the score for research reputation on the Times Higher Education WUR and its citation indicator.
We would expect the THE survey to be more valid, since the basic qualification for inclusion in the THE survey is being the corresponding author of an article included in the ISI indexes, whereas for QS it is signing up for a journal published by World Scientific. But it can no longer be assumed that authorship of any article means very much. Does it always require more initiative and interest to get on the list of co-authors than to sign up for an online subscription?
It should also be noted that there is an overlap between the two surveys as both are supplemented with arts and humanities respondents from the Mardev mailing lists.
I have calculated the correlation between the citations indicator (normalised average citations per paper) in the THE 2010 rankings and the research indicator -- volume (4.5% of the total score), income (6%) and reputation (19.5%).
This is .562, quite a bit better than the QS correlation. However, the research indicator combines a survey with other data.
It would be very interesting if THE and/or Thomson Reuters released the scores of the individual components of the research indicator.
Wednesday, February 23, 2011
Reputation, reputation, reputation!
As the world (or some of it) waits for the ranking survey forms to appear in its mailboxes, both THE and QS are promoting their surveys.
According to Phil Baty of THE:
"But in our consultation with the sector, there was strong support for the continued use of reputation information in the world rankings. Some 79 per cent of respondents to a survey by our rankings data provider Thomson Reuters rated reputation as a “must have” or “nice to have” measure. We operate in a global market where reputation clearly matters."
He then indicates several ways in which the THE survey is an improvement over the THE-QS, now QS, survey.
"We received a record 13,388 usable responses in just three months, making the survey the biggest of its kind in the world.
We promised a transparent approach. The methodology and survey instrument were published in full and this week, the thousands of academics who took part in the survey were sent a detailed report on the respondent profile. It makes reassuring reading:
• Responses were received from 131 countries"
It would, however, be interesting to see the number of respondents from each country. Some people wonder whether THE's sampling technique meant that Singapore got the lion's share of responses in Southeast Asia.
Also, will THE publish the scores for the reputation surveys? At the moment they are bundled in with the other teaching and research indicators. What is the correlation between the score for research reputation and the citations indicator? Is there any sign that Alexandria, Bilkent or Hong Kong Baptist University have reputations that match their scores for research impact?
Meanwhile QS also has an item on its survey. They find that there is a similar demand for data on reputations.
"An impressive 79% of respondents, voted reputation for research as one of their top three criteria, with 60% choosing international profile of faculty, essentially another indicator of international reputation for research. This is in stark contrast to the 26% and 30% that prioiritised citations as a key measure.
Furthermore, when breaking these results out by broad faculty area, we can see consistent support across disciplines for the reputation measure but a marked dip in support for citations as a measure amongst respondents in the Arts & Humanities area – which tends to be the area least recognized by traditional measures of research output."
Comment on Internationalisation
International Focus, the newsletter of the UK HE International Unit, has an article by Jane Knight on myths of internationalisation. The second myth is:
"Myth two rests on a belief that the more international a university is – in terms of students, faculty, curriculum, research, agreements, network memberships – the better its reputation is.
This is tied to the false notion that a strong international reputation is a proxy for quality. Cases of questionable admission and exit standards for universities highly dependent on the revenue and ‘brand equity’ of international students are concrete evidence that internationalisation does not always translate into improved quality or high standards.
"This myth is further complicated by the quest for higher rankings on a global or regional league table such as the Times Higher Education or Academic Ranking of World Universities (ARWU). It is highly questionable whether the league tables accurately measure the internationality of a university and more importantly whether the international dimension is always a robust indicator of quality."
Also, it is much easier to be international in Switzerland or Singapore than in Central China or the Midwest of the US.
Tuesday, February 22, 2011
Penn State Law School
Malcolm Gladwell has an article in the current New Yorker about the US News and World Report college rankings. There is quite a lot there that I would like to discuss in another post. For the moment, I will just comment on an anecdote about the appearance of a non-existent law school in a ranking.
Gladwell describes how Thomas Brennan, who edits a well known ranking of law schools, once sent out a questionnaire to other lawyers asking them to rank law schools and found that Penn State was, as Brennan is quoted as recalling, ranked around fifth. This was strange since there was no law school at Penn State until quite recently (1997 or 2000 in different sources).
This immediately struck me as odd since I remember a similar story about the Princeton Law School, which does not exist and which was also supposed to have made its appearance in a ranking.
The Princeton story is very probably apocryphal and might have begun with a comment by the dean of New York University Law School in the Dartmouth Law Journal that Princeton would appear in the top twenty law schools if a questionnaire asked about it.
This story was plausible since it was an apparent example of the halo effect with Princeton's general excellence being reflected in the perception of a school that did not exist.
The problem with Brennan's account as retold by Gladwell, which does not appear to be supported by documentary evidence, is that it requires that many lawyers should not only have mistakenly thought that Penn State had a law school (getting mixed up with the University of Pennsylvania?) but should have been in error about the general quality of the university. Penn State is nowhere near being a top ten or even a top fifty school.
Could this be another academic legend?
Sunday, February 20, 2011
Impact Assessment
The use of citations as a measure of research quality was highlighted by the remarkable performance of Alexandria University, Bilkent University, Hong Kong Baptist University and others in the 2010 Times Higher Education World University Rankings. As THE and Thomson Reuters review their methodology, perhaps they could take note of this post in Francis' World Inside Out, which refers to a paper by Arnold and Fowler.
'“Goodhart’s law warns us that “when a measure becomes a target, it ceases to be a good measure.” The impact factor has moved in recent years from an obscure bibliometric indicator to become the chief quantitative measure of the quality of a journal, its research papers, the researchers who wrote those papers and even the institution they work in. The impact factor for a journal in a given year is calculated by ISI (Thomson Reuters) as the average number of citations in that year to the articles the journal published in the preceding two years. It is widely used by researchers deciding where to publish and what to read, by tenure and promotion committees laboring under the assumption that publication in a higher impact-factor journal represents better work. However, it has been widely criticized on a variety of grounds (it does not determine a paper’s quality, it is a crude and flawed statistic, etc.). Impact factor manipulation can take numerous forms. Let us follow Douglas N. Arnold and Kristine K. Fowler, “Nefarious Numbers,” Notices of the AMS 58: 434-437, March 2011 [ArXiv, 1 Oct 2010].
Editors can manipulate the impact factor by means of the following practices: (1) “canny editors cultivate a cadre of regulars who can be relied upon to boost the measured quality of the journal by citing themselves and each other shamelessly;” (2) “authors of manuscripts under review often were asked or required by editors to cite other papers from the journal; this practice borders on extortion, even when posed as a suggestion;” and (3) “editors raise their journals’ impact factors is by publishing review items with large numbers of citations to the journal.” “These unscientific practices wreak upon the scientific literature have raised occasional alarms. A counterexample should confirm the need for alarm.” '
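To make the quoted definition concrete, here is a minimal sketch of the two-year impact factor calculation, with invented counts:

```python
# Impact factor of a journal for year Y, as defined above: citations received
# in Y to items published in Y-1 and Y-2, divided by the number of articles
# published in Y-1 and Y-2. All counts below are invented for illustration.
articles_published = {2009: 120, 2010: 140}
citations_in_2011_to = {2009: 310, 2010: 250}

impact_factor_2011 = sum(citations_in_2011_to.values()) / sum(articles_published.values())
print(f"2011 impact factor: {impact_factor_2011:.2f}")  # 560 / 260 = 2.15
```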
Looking East
Shanghai is planning to persuade two Ivy League schools, Cornell and Columbia, to set up branch campuses there. They already have a branch of New York University.
Would anybody like to make a prediction when a new Oxford or Cambridge college will be established in Shanghai (or Singapore or Hong Kong)?
Or when an entire American university will move to China?
More Dumbing Down
De Paul University will make it optional for applicants to submit SAT or ACT scores. Instead they can write short essays that demonstrate non-cognitive traits such as "commitment to service", "leadership" and "ability to meet long term goals".
The university says:
'"Admissions officers have often said that you can't measure heart," said Jon Boeckenstedt, associate vice president for enrollment management. "This, in some sense, is an attempt to measure that heart."
Mr. Boeckenstedt expects the change to encourage applicants with high grade-point averages but relatively low ACT and SAT scores to apply—be they low-income students, underrepresented minorities, or otherwise. Moreover, he and his colleagues believe the new admissions option will allow them to better select applicants who are most likely to succeed—and graduate.'
De Paul's administrators are being extremely naive if they think that these attributes cannot be easily coached or faked. Bluntly, how much effort does it take to teach a student what to say on one of these essays compared to squeezing out a few more points on the SAT?
Wednesday, February 16, 2011
Another US News Ranking
This one is about the schools where congressmen received their bachelor's degrees.
Here are the top 10. What might be more interesting is the party affiliation of the congressmen. D = Democrat, R = Republican, I = Independent.
1. Harvard D 13, R 2
2. Stanford D 9, R 2
3. Yale D 8, R 1, I 1
4. UCLA D 6, R 3
5= Georgetown D 5, R 2
5= Florida D 2, R 5
5= Georgia D 1, R 6
5= Wisconsin -- Madison D 6, R 1
9. North Carolina -- Chapel Hill D 5, R 1
10= Brigham Young R 5
10= George Washington D 2, R 5
10= Louisiana State D 1, R 4
10= Berkeley D 4, R 1
10= Missouri D 4, R 1
10= Tennessee D 2, R 3