Keeping up with the current surge of global university rankings is becoming next to impossible. Still, there are a few niches that have remained unoccupied. One might be a ranking of universities according to their ability to spread new knowledge around the world. So it might be a good idea to have a Research Influence Ranking based on the citations indicator in the Times Higher Education -- Thomson Reuters World University Rankings.
Thomson Reuters are the world's leading collectors and analysts of citations data, so such an index ought to provide an invaluable data source for governments, corporations and other stakeholders deciding where to place research funding. Data for 400 universities can be found on the THE iPhone/iPad app.
The top place in the world would be jointly held by Rice University in Texas and Moscow State Engineering Physics Institute, closely followed by MIT and the University of California Santa Cruz.
Then there are the first places in various regions and countries. (MEPhI would be first in Europe and Rice in the US and North America.)
Canada: University of Toronto
Latin America: University of the Andes, Colombia
United Kingdom (and Western Europe): Royal Holloway, University of London
Africa: University of Cape Town
Middle East: Koc University, Turkey
Asia (and Japan): Tokyo Metropolitan University
ASEAN: King Mongkut's University of Technology, Thailand
Australia and the Pacific: University of Melbourne
On second thoughts, perhaps not such a good idea.
Monday, June 24, 2013
Bad Mood Rising
In 2006 I tried to get an article published in the Education section of the Guardian, that fearless advocate of radical causes and scourge of the establishment, outlining the many flaws and errors in the Times Higher Education Supplement -- Quacquarelli Symonds (as they were then) World University Rankings, especially its "peer review". Unfortunately, I was told that they would be wary of publishing an attack on a direct rival. That was how University Ranking Watch got started.
Since then QS and Times Higher Education have had an unpleasant divorce, with the latter now teaming up with Thomson Reuters. New rankings have appeared, some of them only to disappear rapidly -- there was one from Wuhan and another from Australia, but they seem to have vanished. The established rankings are spinning off subsidiary rankings at a bewildering rate.
As the higher education bubble collapses in the West, everything is getting more competitive, including rankings, and everybody -- except ARWU -- seems to be getting rather bad-tempered.
Rankers and academic writers are no longer wary about "taking a pop" at each other. Recently, there has been an acrimonious exchange between Ben Sowter of QS and Simon Marginson of Melbourne University. This has gone so far as to include the claim that QS has used the threat of legal action to try to silence critics.
"[Ben] Sowter [of QS] does not mention that his company has twice threatened publications with legal action when publishing my bona fide criticisms of QS. One was The Australian: in that case QS prevented my criticisms from being aired. The other case was University World News, which refused to pull my remarks from its website when threatened by QS with legal action.
If Sowter and QS would address the points of criticism of their ranking and their infamous star system (best described as 'rent a reputation'), rather than attacking their critics, we might all be able to progress towards better rankings. That is my sole goal in this matter. As long as the QS ranking remains deficient in terms of social science, I will continue to criticise it, and I expect others will also continue to do so."
Meanwhile, the Leiter Reports blog has a letter from "a reader in the UK":
THES DID drop QS for methodological reasons. The best explanation is here: http://www.insidehighered.com/views/2010/03/15/baty
But there may have been more to it? Clearly QS's business practices leave an awful lot to be desired. See: http://www.computerweekly.com/news/1280094547/Quacquarelli-Symonds-pays-80000-for-using-unlicensed-software
Also I understand that the "S" from QS -- Matt Symonds -- walked out on the company due to exasperation with the business practices. He has been airbrushed from QS history, but can be found at: https://twitter.com/SymondsGSB
And as for the reputation survey, there was also this case of blatant manipulation: http://www.insidehighered.com/news/2013/04/08/irish-university-tries-recruit-voters-improve-its-international-ranking
And of course there's the high-pressure sales: http://www.theinternationalstudentrecruiter.com/how-to-become-a-top-500-university/
And the highly lucrative "consultancy" to help universities rise up the rankings: http://www.iu.qs.com/projects-and-services/consulting/
There are "opportunities" for branding -- a snip at just $80,000 -- with QS Showcase: http://qsshowcase.com/main/branding-opportunities/
Or what about some relaxing massage, or a tennis tournament and networking with the staff who compile the rankings: http://www.qsworldclass.com/6thqsworldclass/
Perhaps most disturbing of all is the selling of dubious Star ratings: http://www.nytimes.com/2012/12/31/world/europe/31iht-educlede31.html?pagewanted=all&_r=0
Keep up the good work. It's an excellent blog.
All of this is true, although I cannot get very excited about the use of pirated software, and the bit about relaxing massage is rather petty -- I assume it is something to do with having a conference in Thailand. Incidentally, I don't think anyone from THE sent this, since the reader refers to THES (the S, for Supplement, was dropped in 2008).
This is all a long way from the days when journalists refused to take pops at their rivals, even when they knew the rankings were a bit rum.
Sunday, June 23, 2013
Times Higher Education Under 50s Rankings
Times Higher Education has now published its ranking of universities less than fifty years old.
The top five are:
1. Pohang University of Science and Technology
2. EPF Lausanne
3. Korea Advanced Institute of Science and Technology
4. Hong Kong University of Science and Technology
5. University of California, Irvine
They are quite a bit different from the QS young universities rankings. In a while I hope to provide a detailed comparison.
Saturday, June 22, 2013
Citation Cartels
An article by Paul Jump in Times Higher Education describes how Thomson Reuters have been excluding an increasing number of journals from their Journal Citation Reports for "anomalous citation patterns", which now include not just self-citation but also excessive mutual citation.
Surely it is now time for Thomson Reuters to stop counting self-citations for the Research Influence indicator in the THE World University Rankings. The threat of the self-citations of Dr El Naschie "of" Alexandria University has receded, but there are others who would have a big impact on the rankings if they ever moved to a university with a low volume of publications.
TR may not want to follow QS, who no longer count self-citations in their rankings, but excluding excessive mutual citation as well would put them one up again.
Wednesday, June 12, 2013
Uncanny Insight into Ranker Psychology
I just said that QS would announce its Young University Rankings now that THE has indicated the launch date for its rankings at Wellington College next week.
Actually it was just a few hours.
Anyway, here are the top five.
1. Hong Kong University of Science and Technology
2. Nanyang Technological University
3. Warwick
4. KAIST
5. City University of Hong Kong
Tuesday, June 11, 2013
Prestigious Ranking Watch
Times Higher Education will be launching their Top 100 Under-50 Universities Rankings, which, in case you have forgotten, are prestigious, at Wellington College, which is a school not a college, in another eight days.
Does anybody want to bet on the QS under-50 rankings appearing in a few days?
Meanwhile, the THE World Rankings will be published at the THE World Academic Summit in Singapore in October. See here. And yes, they are prestigious.
Monday, June 10, 2013
The QS Latin American Rankings
The QS Latin American Rankings show some interesting variations in methodology. The academic survey has a weight of 30%, compared to 40% in the World Rankings, and the employer survey a weight of 20%, compared to 10%.
Instead of 20% for citations per faculty there is 10% for papers per faculty and 10% for citations per paper. Since there are great variations according to the measures used to count research output and influence, as shown by the recent Leiden Ranking, this is very sensible.
Faculty-student ratio is reduced from 20% to 10%, and international students and international faculty are removed. There is now 10% for the proportion of staff with PhDs and 10% for web impact.
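To make the arithmetic concrete, here is a minimal sketch of how a composite score would be built from the weights just described, assuming the indicators are combined as a simple weighted sum of normalised 0-100 scores. The institution and the indicator scores below are invented purely for illustration.

```python
# Weights as described above for the QS Latin American Rankings; the
# indicator scores below are invented and only illustrate the arithmetic.
WEIGHTS = {
    "academic_survey": 0.30,
    "employer_survey": 0.20,
    "papers_per_faculty": 0.10,
    "citations_per_paper": 0.10,
    "faculty_student_ratio": 0.10,
    "staff_with_phd": 0.10,
    "web_impact": 0.10,
}

def composite_score(indicator_scores):
    """Weighted sum of normalised indicator scores (0-100 scale)."""
    return sum(WEIGHTS[name] * score for name, score in indicator_scores.items())

example = {
    "academic_survey": 95.0,
    "employer_survey": 88.0,
    "papers_per_faculty": 70.0,
    "citations_per_paper": 65.0,
    "faculty_student_ratio": 80.0,
    "staff_with_phd": 90.0,
    "web_impact": 85.0,
}
print(round(composite_score(example), 1))  # 85.1 for this invented example
```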
Here are the top five.
1. Universidade de São Paulo
2. Pontificia Universidad Católica de Chile
3. Universidade Estadual de Campinas
4. Universidad de los Andes, Colombia
5. Universidad de Chile
Sunday, June 02, 2013
The Global Gender Index
Times Higher Education has just published an article on the Global Gender Index produced in collaboration with Thomson Reuters. This consists of calculating the percentage of female academics among those universities included in the top 400 of the Times Higher Education World University Rankings and producing a percentage for each country.
This is a rather dubious exercise. The data is reported by the institutions themselves and, as the UK HE International Unit has pointed out, such data may not always be reliable. In addition, the universities that volunteer to be ranked by THE and Thomson Reuters may not necessarily be representative of the higher education sector in general. The global research-orientated universities that make it into the top 400 may be even less so.
The report finds that everywhere women make up less than half of the academic workforce and that the numbers are lowest in Japan, followed by Taiwan. Numbers are nearly equal in Turkey.
Predictably, the article includes a call for universities to be ranked according to how far they have achieved gender equity among academic staff and a suggestion that East Asian countries should learn from Turkey.
There is a question that needs to be considered. If universities in countries like Taiwan and Japan are poised to overtake the West, as QS and THE are constantly warning us, should we be so eager to conclude that they have something, or indeed anything, to learn from Turkey or from Northern Europe?
Wednesday, May 29, 2013
Here is the full text of my article on the QS Subject Rankings published in the Philippine Daily Inquirer.
The QS university rankings by subject: Warning needed
By Richard Holmes, Philippine Daily Inquirer, Monday, May 27, 2013
It is time for the Philippines to think about constructing its own objective and transparent ranking or rating systems for its colleges and universities that would learn from the mistakes of the international rankers.
The ranking of universities is getting to be big business these days. There are quite a few rankings appearing from Scimago, Webometrics, University Ranking of Academic Performance (from Turkey), the Taiwan Rankings, plus many national rankings.
No doubt there will be more to come.
In addition, the big three of the ranking world—Quacquarelli Symonds (QS), Times Higher Education and Shanghai Jiao Tong University’s Academic Ranking of World Universities—are now producing a whole range of supplementary products, regional rankings, new university rankings, reputation rankings and subject rankings.
There is nothing wrong, in principle, with ranking universities. Indeed, it might be in some ways a necessity. The problem is that there are very serious flaws in the rankings produced by QS, even though they seem to be better known in Southeast Asia than any of the others.
This is especially true of the subject rankings.
No new data
The QS subject rankings, which have just been released, do not contain new data. They are mostly based on data collected for last year’s World University Rankings—in some cases extracted from the rankings and, in others, recombined or recalculated.
There are four indicators used in these rankings. They are weighted differently for the different subjects and, in two subjects, only two of the indicators are used.
The four indicators are:
A survey of academics or people who claim to be academics or used to be academics, taken from a variety of sources. This is the same indicator used in the world rankings. Respondents were asked to name the best universities for research.
A survey of employers, which seems to comprise anyone who chooses to describe himself or herself as an employer or a recruiter.
The number of citations per paper. This is a change from the world rankings, where the calculation was citations per faculty.
H-index. This is easier to illustrate than to define. If a university publishes one paper and that paper is cited once, it gets an index of one. If it publishes two or more papers and at least two of them are cited at least twice each, the index is two, and so on: the h-index is the largest number h such that h of the university's papers have each been cited at least h times. This is a way of combining quantity of research with quality as measured by influence on other researchers.
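For readers who prefer an example in code, here is a minimal sketch of the h-index calculation just described; the citation counts are invented.

```python
def h_index(citation_counts):
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented example: five papers cited 10, 4, 3, 1 and 0 times give an h-index
# of 3, because three papers have at least three citations each.
print(h_index([10, 4, 3, 1, 0]))  # 3
```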
Out of these four indicators, three are about research and one is about the employability of a university’s graduates.
These rankings are not at all suitable for use by students wondering where they should go to study, whether at undergraduate or graduate level.
The only part that could be of any use is the employer review and that has a weight ranging from 40 percent for accounting and politics to 10 percent for arts and social science subjects, like history and sociology.
But even if the rankings are to be used just to evaluate the quantity or quality of research, they are frankly of little use. They are dominated by the survey of academic opinion, which is not of professional quality.
There are several ways in which people can take part in the survey. They can be nominated by a university, they can sign up themselves, they can be recommended by a previous respondent or they can be asked because they have subscribed to an academic journal or an online database.
Apart from checking that they have a valid academic e-mail address, it is not clear whether QS makes any attempt to check whether the survey respondents are really qualified to make any judgements about research.
Not plausible
The result is that the academic survey and also the employer survey have produced results that do not appear plausible.
In recent years, there have been some odd results from QS surveys. My personal favorite is the New York University Tisch School of the Arts, which set up a branch in Singapore in 2007 and graduated its first batch of students from a three-year Film course in 2010. In the QS Asian University Rankings of that year, the Singapore branch got zero for the other criteria (presumably the school did not submit data) but it was ranked 149th in Asia for academic reputation and 114th for employer reputation.
Not bad for a school that had yet to produce any graduates when the survey was taken early in the year.
In all of the subject rankings this year, the two surveys account for at least half of the total weighting and, in two cases, Languages and English, all of it.
Consequently, while some of the results for some subjects may be quite reasonable for the world top 50 or the top 100, after that they are sometimes downright bizarre.
The problem is that although QS has a lot of respondents worldwide, when it gets down to the subject level there can be very few. In pharmacy, for example, there are only 672 for the academic survey and in materials science 146 for the employer survey. Since the leading global players will get a large share of the responses, this means that universities further down the list will be getting a handful of responses for the survey. The result is that the order of universities in any subject in a single country like the Philippines can be decided by just one or two responses to the surveys.
Another problem is that, after a few obvious choices like Harvard, MIT (Massachusetts Institute of Technology), Tokyo, most respondents probably rely on a university’s general reputation and that can lead to all sorts of distortions.
Many of the subject rankings at the country level are quite strange. Sometimes they even include universities that do not offer courses in that subject. We have already seen that there are universities in the Philippines that are ranked for subjects that they do not teach.
Somebody might say that maybe they are doing research in a subject while teaching in a department with a different name, such as an economic historian teaching in the economics department but publishing in history journals and getting picked up by the academic survey for history.
Maybe, but it would not be a good idea for someone who wants to study history to apply to that particular university.
Another example is from Saudi Arabia, where King Fahd University of Petroleum and Minerals was apparently top for history, even though it does not have a history department or indeed anything where you might expect to find a historian. There are several universities in Saudi Arabia that may not teach history very well but at least they do actually teach it.
These subject rankings may have a modest utility for students who can pick or choose among top global universities and need some idea whether they should study engineering at SUNY (State University of New York) Buffalo (New York) or Leicester (United Kingdom) or linguistics at Birmingham or Michigan.
But they are of very little use for anyone else.
Tuesday, May 28, 2013
The QS university rankings by subject: Warning needed
My article in the Philippine Daily Inquirer can be accessed here.
Sunday, May 26, 2013
The QS Subject Rankings: Not Everybody is Impressed
The subject rankings just released by QS seem to be a shrewd marketing move. Dozens of universities around the world have learnt that they have been ranked for something by the renowned and revered QS, which will look good in their promotional literature.
Some people are not impressed. Brian Leiter, the law scholar and philosopher, asks whether they are a fraud on the public. See here for his answer.
Why are they so worried?
The UK HE International Unit represents the views of the British university sector and is cooperating with Times Higher Education and Thomson Reuters in the organising of this week's Global University Summit in London, a Prestigious Event in a Spectacular Setting.
It has just issued a policy statement about the slowly emerging U-Multirank project, which is also discussed by David Jobbins in University World News.
The Unit has expressed a number of concerns. These include the overcrowding of the league table market, reliance on self-reported data which lack validity, the combining of incommensurate variables to create a league table, the risk of "becoming a blunt instrument that would not allow different strengths across an institution to be recognised" and diverting EU funds from other priorities. It claims that U-Multirank may "harm rather than benefit the sector."
It is difficult to see why the International Unit is getting so concerned. I agree that self-reported data may lack validity, but the QS and Times Higher Education global rankings also include such data. The combining of incommensurate variables is the essence of ranking. Sometimes blunt instruments are appropriate. A scalpel is of little use for hammering nails, and a tool like U-Multirank may have uses which existing rankings do not.
As for the 2 million Euros, this is trivial compared with some of the things the EU has been wasting money on in recent years.
Saturday, May 25, 2013
The Efficiency Rankings
Times Higher Education has a story about a study by Dirk Van Damme, head of the Centre for Educational Research and Innovation at the OECD. This will be presented at the Global University Summit held in Whitehall, London from the 28th to the 30th May.
The Summit "brings an invitation-only audience of leaders from the world’s foremost universities, senior policy-makers and international business executives to London in 2013." It is a "prestigious event" held in a "spectacular setting" and is sponsored by the University of Warwick, Times Higher Education, Thomson Reuters and UK Universities International Unit. Speakers include Vince Cable, Boris Johnson, the Russian ambassador and heads of various universities from around the world.
What Professor Van Damme has done is to treat the THE World University Rankings Research Indicator scores as an input and the Research Influence (Citations) scores as an output. The output scores are divided by the input scores and the result is a measure of the efficiency with which the inputs are turned into citations, which, as we all know, is the main function of the modern university.
According to THE:
"The input indicator takes scaled and normalised measures of research income and volume into account, and also considers reputation, while the output indicator looks at citations to institutional papers in Thomson Reuters’ Web of Science database, normalised for subject differences.
Professor van Damme said that the results - which show that university systems outside the Anglo-American elite are able to realise and increase outputs with much lower levels of input - did not surprise him.
“For example, Switzerland really invests in the right types of research. It has a few universities in which it concentrates resources, and they do very well,” he said.
Previous studies have found the UK to have the most efficient research system on measures of citation per researcher and per unit of spending.
But Professor van Damme explained that under his approach, productivity - output per staff member - was included as an input.
“With efficiency I mean the total research capacity of an institution, including its productivity, divided by its impact. The UK is not doing badly at all, but other countries are doing better, such as Ireland, which has a very low research score but a good citations score,” he said.
Given the severity of the country’s economic crisis, Ireland’s success was particularly impressive, he said.
“I think it is really conscious of the effort it has to make to maintain its position and is doing so.”
Low efficiency scores for China and South Korea reflected the countries’ problems in translating their huge investment into outputs, he added."
One hesitates to be negative about a paper presented at a prestigious event in a spectacular setting to an invitation only audience but this is frankly rather silly.
I would accept that income can be regarded as an input, but surely not reputation and surely not volume of publications. Also, unless Van Damme's methodology has undisclosed refinements, he is treating research scores as having the same value regardless of whether they are composed mainly of scores for reputation, for number of publications or for research income.
Then there is the time period concerned. Research income is income for one year; publications are drawn from a five-year period. These are then compared with citations over a six-year period. So the paper is asking how research income for 2010 produces citations in the years 2006-2011 of papers published in the years 2006-2010. A university is certainly being remarkably efficient if its 2010 income is producing citations in 2006, 2007, 2008 and 2009.
Turning to the citations side of the equation, it should be recalled that the THE citations indicator includes an adjustment by which the citation impact score for universities is divided by the square root of the citation impact score for the country as a whole. In other words a university located in a country where papers are not cited very much gets a big boost and the lower the national citation impact score the bigger the boost for the university. This is why Hong Kong universities suffered reduced scores when Thomson Reuters took them out of China when counting citations and put them in their own separate category.
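A minimal sketch of that adjustment, with invented numbers, may make the effect clearer; the actual Thomson Reuters normalisation is more elaborate, so this only illustrates the square-root step described above.

```python
import math

def regionally_adjusted_impact(university_impact, country_impact):
    """Divide a university's citation impact score by the square root of the
    national citation impact, as described above. All figures are invented."""
    return university_impact / math.sqrt(country_impact)

# Two universities with the same raw impact, in countries with different impact:
print(round(regionally_adjusted_impact(1.2, 1.5), 2))  # 0.98 in a high-impact country
print(round(regionally_adjusted_impact(1.2, 0.5), 2))  # about 1.7 in a low-impact country: a large boost
```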
So, it is not surprising that universities from outside the Anglo-Saxon elite do well for citations and thus appear to be very efficient. Thomson Reuters methodology gives such universities a very substantial weighting just for being located in countries that are less productive in terms of citations.
None of this is new. In 2010 Van Damme did something similar at a seminar in London.
Van Damme is just analysing the top 200 universities in the THE rankings. It would surely be more interesting to analyse the top 400 whose scores are obtainable from an iPad/iPhone app.
So here are the top ten universities in the world according to the efficiency with which they turn income, reputation and publications into citations. The procedure is to divide the citations score from the 2012 THE rankings by the research indicator score.
1. Tokyo Metropolitan University
2. Moscow State Engineering Physics Institute
3. Florida Institute of Technology
4. Southern Methodist University
5. University of Hertfordshire
6. University of Portsmouth
7. King Mongkut's University of Technology
8. Vigo University
9. Creighton University
10. Fribourg University
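For anyone who wants to reproduce this kind of table, the calculation is nothing more than the ratio of two indicator scores. Here is a minimal sketch with invented scores; the real figures come from the 2012 THE rankings via the iPad/iPhone app.

```python
# Invented (citations score, research score) pairs on the THE 0-100 scale.
scores = {
    "University A": (98.0, 20.0),
    "University B": (85.0, 90.0),
    "University C": (60.0, 75.0),
}

# "Efficiency" here is simply the citations score divided by the research score.
efficiency = {name: cites / research for name, (cites, research) in scores.items()}

for name, ratio in sorted(efficiency.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: {ratio:.2f}")
# University A: 4.90  -- a modest research score and a high citations score
# University B: 0.94
# University C: 0.80
```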
No doubt the good and the great of the academic world assembled in Whitehall will make a trip to Portsmouth or even to Vigo or Creighton if they can find them on the map.
And now for the hall of shame. Here are the bottom ten of the THE top 400, ranked according to efficiency as measured by citations indicator scores divided by research scores. The heads of these failing institutions will no doubt be packing their bags and looking for jobs as junior administrative assistants at technical colleges in Siberia or the upper Amazon.
391. Tsinghua University
392. Chinese University of Hong Kong
393. National Taiwan University
394. National Chiao Tung University
395. Tilburg University
396. Delft University of Technology
397. Seoul National University
398. State University of Campinas
399. Sao Paulo University
400. Lomonosov Moscow State University
In a little while I hope to publish the full 400 after I have finished being sarcastic about the QS subject rankings.
Friday, May 24, 2013
Update on IREG Approval
- The International Ranking Experts Group has also given its approval to the national ranking produced by the Perspektywy Education Foundation of Poland.
- The approval given to the QS World, Asian and Latin American Rankings does not apply to the QS Stars.
Saturday, May 18, 2013
The First IREG Audit
QS is the first ranking organisation to get the seal of approval from the International Ranking Experts Group (IREG) for its World, Asian and Latin American rankings.
The IREG audit process would appear on the surface to be quite rigorous. Take a look at the audit manual. There are a number of criteria, some of which sound quite daunting but are not really so. For example, Criterion 8 says:
"If rankings are using composite indicators the weights of the individual indicators have to be published. Changes in weights over time should be limited and due to methodological or conception-related considerations."
Fair enough, but there is nothing about how weighting should be distributed across the indicators in the first place. Forty per cent for the academic survey in the QS rankings?
Some criteria are obvious -- providing a contact address. Others are so vague that they mean very little -- organisational measures that enhance the credibility of rankings.
The basic principle of the audit is that ranking organisations are given scores ranging from 1 (not sufficient/not applied) to 6 (distinguished) for the various criteria, with a double weighting for core criteria. The maximum score is 180 and, according to the manual:
"On the bases of the assessment scale described
above, the threshold for a positive audit decision will
be 60 per cent of the maximum total score. This
means the average score on the individual criteria
has to be slightly higher than “adequate”. In order
to establish the IREG Ranking Audit as a quality
label none of the core criteria must be assessed
with a score lower than three."
So a positive result could mean that an organisation is distinguished in everything. It could also mean that it is on average slightly higher than adequate. It would be interesting to know which applies to QS.
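To see how wide that passing band is, here is a minimal sketch of the scoring rule quoted above. The split between core and other criteria is an assumption on my part (ten of each, with the core double-weighted, gives the stated maximum of 180), and the scores are invented.

```python
# Assumed structure: 10 core criteria (double-weighted) and 10 other criteria,
# each scored from 1 (not sufficient) to 6 (distinguished),
# giving a maximum of 6 * (10 * 2 + 10) = 180.
MAX_TOTAL = 180

def audit_passes(core_scores, other_scores):
    """Pass if the weighted total reaches 60 per cent of the maximum and no
    core criterion scores below three, per the rule quoted above."""
    total = 2 * sum(core_scores) + sum(other_scores)
    return total >= 0.6 * MAX_TOTAL and min(core_scores) >= 3

# A ranker "distinguished in everything" and one "slightly higher than adequate":
print(audit_passes([6] * 10, [6] * 10))  # True -- total 180
print(audit_passes([4] * 10, [3] * 10))  # True -- total 110, just above the 108 threshold
```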
I do not know whether the auditors had any criticisms to make. If not it is difficult to see the point of the exercise. If they did it would be nice to know what they were.
QS are to be commended for submitting to the audit, although it probably was not very searching; it still seems that the ranking world needs more and better monitoring and observation.
Wednesday, May 15, 2013
QS Rankings by Subject
QS have produced their annual subject rankings. At the top there are no real surprises and, while there is certainly room for argument, I do not think that anyone will be shocked by the top ten or twenty in each subject.
The university with the most first places is Harvard, with ten:
Medicine
Biology
Psychology
Pharmacy and Pharmacology
Earth and Marine Sciences
Politics and International Studies
Law
Economics and Econometrics
Accounting and Finance
Education
MIT has seven:
Computer Science
Chemical Engineering
Electrical Engineering
Mechanical Engineering
Physics and Astronomy
Chemistry
Materials Science
Then there is Berkeley with exactly the four you would expect:
Environmental Science
Statistics and Operational Research
Sociology
Communication and Media Studies
Oxford has three:
Philosophy
Modern Languages
Geography
Cambridge another three:
History
Linguistics
Mathematics
Imperial College London is top for Civil Engineering and University of California, Davis for Agriculture and Forestry.
These rankings are based on the academic opinion survey, the employer survey, citations per paper and the h-index, a measure of both output and influence that limits the effect of outliers, in proportions that vary for each subject. They are very research-focused, which is unfortunate since there seems to be a consensus emerging at conferences and seminars that the THE-TR rankings are for policy makers, the Shanghai ARWU for researchers and the QS rankings for undergraduate students.
Outside the top fifty or top one hundred there are some oddities resulting from the small number of responses. I will leave it to specialists to find them.
Tuesday, May 07, 2013
Unsolicited Advice
Next, the respondents should be divided into clearly defined categories, presented with appropriate questions and appropriately verified.
There has been a lot of debate recently about the reputation survey
component in the QS World University Rankings.
The president of University College Cork asked faculty to find friends at other universities who "understand the importance of UCC
improving its university world ranking". The reason for the reference to other universities is that the QS survey very sensibly does not permit respondents to vote for their own universities, those that they
list as their affiliation.
This request appears to violate QS's guidelines which permit universities to
inform staff about the survey but not to encourage them to nominate or refrain from nominating any particular university. According to an article in Inside Higher Ed QS are considering whether
it is necessary to take any action.
This report has given Ben Sowter of QS sufficient concern to argue that it is not possible to effectively manipulate the survey. He has set out a reasonable case why it is unlikely that any institution could succeed in marching graduate students up to their desktops to vote for favoured institutions to avoid being sent to a reeducation camp or to teach at a community college.
However, some of his reasons sound a little unconvincing: signing up, screening, an advisory board with years of experience. It would help if he were a little more specific, especially about the sophisticated anomaly detection algorithm, which sounds rather intimidating.
The problem with the academic survey is not that an institution like University College Cork is going to push its way into the global top twenty or top one hundred but that there could be a systematic bias towards those who are ambitious or from certain regions. It is noticeable that some universities in East and Southeast Asia do very much better on the academic survey than on other indicators.
The QS academic survey is getting overly complicated and incoherent. It began as a fairly simple exercise. Its respondents were at first drawn from the subscription lists of World Scientific, an academic publishing company based in Singapore. Not surprisingly, the first academic survey produced a strong, perhaps too strong, showing for Southeast and East Asia and Berkeley.
The survey turned out to be unsatisfactory, not least because of an extremely small response rate. In succeeding years QS has added respondents drawn from the subscription lists of Mardev, an academic database, largely replacing those from World Scientific, along with lists supplied by universities, academics nominated by respondents to the survey and those joining the online sign-up facility. It is not clear how many academics are included in these groups or what the various response rates are. In addition, counting responses for three years unless overwritten by the respondent might enhance the stability of the indicator, but it also means that some of the responses might be from people who have died or retired.
The reputation survey does not have a good reputation and it is time for QS to think about revamping the methodology. But changing the methodology means that rankings cannot be used to chart the progress or decline of universities over time. The solution to this dilemma might be to launch a new ranking and keep the old one, perhaps issuing it later in the year or giving it less prominence.
My suggestion to QS is that they keep the current methodology but call it the Original QS Rankings or the QS Classic Rankings. Then they could introduce the QS Plus or New QS Rankings or something similar, which would address the issues about the academic survey and introduce some other changes. Since QS are now offering a wide range of products, Latin American rankings, Asian rankings, subject rankings, best student cities and probably more to come, this should not impose an undue burden.
First, starting with the academic survey, 40 per cent is too much for any indicator. It should be reduced to 20 per cent.
Next, the respondents should be divided into clearly defined categories, presented with appropriate questions and appropriately verified.
It should be recognised that subscribing to an online database or being recommended by another faculty member is not really a qualification for judging international research excellence. Neither is getting one’s name listed as corresponding author: these days that can have as much to do with faculty politics as with ability. I suggest that the academic survey should be sent to:
(a) highly cited researchers or those with a high h-index, who should be asked about international research excellence;
(b) researchers drawn from the Scopus database, who should be asked to rate the regional or national research standing of universities.
Responses should be weighted according to the number of researchers per country.
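As a rough illustration of the sort of weighting I have in mind, with entirely invented figures, each country's responses could be scaled by the ratio of its share of the world's researchers to its share of survey responses:

```python
# Hypothetical figures for illustration only.
researchers = {"US": 1_250_000, "UK": 260_000, "Malaysia": 52_000}
responses   = {"US": 18_000,    "UK": 6_500,   "Malaysia": 3_200}

total_researchers = sum(researchers.values())
total_responses = sum(responses.values())

# Weight for each country's respondents: share of researchers / share of responses.
weights = {
    country: (researchers[country] / total_researchers)
             / (responses[country] / total_responses)
    for country in researchers
}

for country, w in weights.items():
    print(f"{country}: each response counts as {w:.2f} votes")
```

Countries that supply a disproportionate number of responses relative to their research workforce would see each response count for less, and vice versa.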
This could be supplemented with a survey of student satisfaction with teaching based on a student version of the sign-up facility and requiring a valid academic address with verification.
Also, a sign-up facility could be established for anyone interested, asking a question about general perceived quality.
If QS ever do change the academic survey they might as well review the other indicators. Starting with the employer review, this should be kept since, whatever its flaws, it is an external check on universities. But it might be easier to manipulate than the academic survey. Something was clearly going on in the 2011 ranking when there appeared to be a disproportionate number of respondents from some Latin American countries, leading QS to impose caps on universities exceeding the national average by a significant amount.
"QS received a dramatic level of response from Latin America in
2011, these counts and all subsequent analysis have been adjusted by applying a
weighting to responses from countries with a distinctly disproportionate level
of response."
It seems that this problem was sorted out in 2012. Even so, QS might consider giving half the weighting for this survey to an invited panel of employers. Perhaps they could also broaden their database by asking NGOs and non-profit groups about their preferences.
There is little evidence that the overall number of international students has anything to do with any measure of quality, and it may also have undesirable backwash effects as universities import large numbers of less able students. The problem is that QS are doing a good business moving graduate students across international borders, so it is unlikely that they will ever consider doing away with this indicator.
Staff student ratio is by all accounts a very crude indicator of quality of teaching. Unfortunately, at the moment there does not appear to be any practical alternative. One thing that QS could do is to remove research staff from the faculty side of the equation. At the moment a university that hires an army of underpaid research assistants and sacks a few teaching staff, or packs them off to a branch campus, would be recorded as having brought about a great improvement in teaching quality.
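A back-of-the-envelope sketch, with invented numbers, of the distortion I have in mind: a university that swaps teaching staff for research-only staff appears to improve its ratio even though teaching capacity falls.

```python
def staff_student_ratio(students, teaching_staff, research_staff, include_research=True):
    """Students per member of staff; a lower figure usually reads as 'better'."""
    staff = teaching_staff + (research_staff if include_research else 0)
    return students / staff

# Hypothetical university before and after the switch.
before = staff_student_ratio(20_000, teaching_staff=1_000, research_staff=200)
after = staff_student_ratio(20_000, teaching_staff=900, research_staff=600)

print(round(before, 1), round(after, 1))  # 16.7 then 13.3 -- an apparent 'improvement'

# Counting only teaching staff reveals the deterioration.
print(round(staff_student_ratio(20_000, 900, 600, include_research=False), 1))  # 22.2
```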
Citations are a notoriously problematical way of measuring research influence or quality. The Leiden Ranking shows that there are many ways of measuring research output and influence. It would be a good idea to combine several different ways of counting citations. QS have already started to use the h-index in their subject rankings this year and have used citations per paper in the Asian University Rankings.
With the 20 per cent left over from reducing the weighting of the academic survey, QS might consider introducing a measure of research output rather than quality, since this would help distinguish among universities outside the elite, and perhaps use internet data from Webometrics as in the Latin American rankings.
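As a purely illustrative sketch of the principle of combining citation measures, and not a description of anything QS actually do, each measure could be scaled against the best performer and then averaged with equal weights:

```python
# Invented figures for three hypothetical universities.
data = {
    "University A": {"cites_per_paper": 12.4, "h_index": 180},
    "University B": {"cites_per_paper": 9.1,  "h_index": 220},
    "University C": {"cites_per_paper": 15.0, "h_index": 140},
}

metrics = ["cites_per_paper", "h_index"]
best = {m: max(u[m] for u in data.values()) for m in metrics}

# Scale each measure against the best performer, then average the two scores.
combined = {
    name: sum(u[m] / best[m] for m in metrics) / len(metrics)
    for name, u in data.items()
}

for name, score in sorted(combined.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

The point of combining measures in some such way is that no single counting method, with its particular quirks, dominates the result.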
Thursday, April 25, 2013
Asian higher education revolution a long way off
My article in University World News can be accessed here.
Saturday, April 20, 2013
The Leiden Ranking
The Leiden ranking for 2013 is out. This is produced by the Centre for Science and Technology Studies (CWTS) at Leiden University and represents pretty much the state of the art in assessing research publications and citations.
A variety of indicators are presented with several different settings but no overall winner is declared which means that these rankings are not going to get the publicity given to QS and Times Higher Education.
Here are top universities, using the default settings provided by CWTS.
Total Publications: Harvard
Citations per Paper: MIT
Normalised Citations per Paper: MIT
Quality of Publications: MIT
There are also indicators for international and industrial collaboration that I hope to discuss later.
It is also noticeable that high flyers in the Times Higher Education citations indicator, Alexandria University, Moscow Engineering Physics Institute (MEPhI), Hong Kong Baptist University, Royal Holloway, do not figure at all in the Leiden Ranking. What happened to them?
How could MEPhI, equal first in the world for research influence according to THE and Thomson Reuters, fail to even show up in the normalised citation indicator in the Leiden Ranking?
Firstly, Leiden collect data only for the top 500 universities in the world by number of publications in the Web of Science. Falling outside that group would have been enough to keep these institutions out of the rankings.
In addition, Leiden use fractionalised counting as a default setting, so that the impact of multiple-author publications is divided by the number of university addresses. This would drastically reduce the impact of publications like the Review of Particle Physics.
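A toy example of the difference fractionalised counting makes; the figures are invented and Leiden's actual procedure is more elaborate.

```python
def credit(citations, contributing_institutions, fractional=True):
    """Citation credit assigned to each contributing institution."""
    return citations / contributing_institutions if fractional else citations

# One heavily cited paper co-authored at 100 institutions (a Review of
# Particle Physics style collaboration) versus a solo-institution paper.
print(credit(120, 100))   # 1.2 citations credited to each institution
print(credit(30, 1))      # 30 citations credited to the single university

# With full (non-fractional) counting, each of the 100 institutions would
# bank all 120 citations, which is how one mega-paper can dominate a
# normalised citations indicator.
print(credit(120, 100, fractional=False))  # 120
```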
Also, by "field" Leiden mean five broad subject groups, whereas Thomson Reuters appear to use a larger number (21, if they use the same system as they do for highly cited researchers). There is accordingly more chance of anomalous cases having a great influence in the THE rankings.
THE and Thomson Reuters would do well to look at the multi-authored, and most probably soon to be multi-cited, papers that were published in 2012 and look at the universities that could do well in 2014 if the methodology remains unchanged.
Tuesday, April 02, 2013
Combining Rankings
Meta University Ranking has combined the latest ARWU, QS and THE world rankings. Universities are ordered by average place, so Harvard comes top with the lowest average of 2.67 (1st in ARWU, 3rd in QS and 4th in THE).
After that there is MIT, Cambridge, Caltech and Oxford.
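The arithmetic is trivial; here is a minimal sketch using the Harvard placings quoted above plus an invented second university for comparison.

```python
# Places in ARWU, QS and THE respectively.
placings = {
    "Harvard": [1, 3, 4],               # figures quoted by Meta University Ranking
    "Example University": [10, 6, 8],   # invented for comparison
}

# A lower average place means a higher position in the combined table.
meta_rank = {name: sum(ranks) / len(ranks) for name, ranks in placings.items()}

for name, score in sorted(meta_rank.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.2f}")   # Harvard: 2.67, Example University: 8.00
```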
Tuesday, March 19, 2013
Some More on the THE World University Rankings 2
I have calculated the mean scores for the indicator groups in the 2012-13 Times Higher Education World University Rankings. The mean scores for the 400 universities included in the published rankings are:
Teaching 41.67
International Outlook 52.35
Industry Income 50.74
Research 40.84
Citations 65.25
For Industry Income, N is 363 since 37 universities, mainly in the US, did not submit data. This might be a smart move if the universities realized that they were likely to receive a low score. N is 400 for the others.
There are considerable differences between the indicators which are probably due to Thomson Reuters' methodology. Although THE publishes data for 200 universities on its website and another 200 on an iPad/iPhone app there are in fact several hundred more universities that are not included in the published rankings but whose scores are used to calculate the overall mean from which scores for the ranked universities are derived.
A higher score on an indicator means a greater distance from all the institutions in the Thomson Reuters database.
The high scores for citations mean that there is a large gap between the top 400 and the lesser places outside the top 400.
I suspect that the low scores for teaching and research are due to the influence of the academic survey which contributes to both indicator clusters. We have already seen that after the top six, the curve for the survey is relatively flat.
The citations indicator already has a disproportionate influence, contributing 30 per cent of the overall weighting. That 30 per cent is of course a maximum. Since universities on average score more for citations than for the other indicators, it has in practice a correspondingly greater weight.
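The mechanics can be illustrated with a toy simulation on invented data. I am assuming, as I understand Thomson Reuters' published methodology to suggest, that indicator scores are z-scores converted to a 0-100 scale by a cumulative probability function; the details may well differ. The point is simply that when scores are scaled against several hundred unranked universities with low citation impact, the published top 400 can easily average well above 50 on that indicator.

```python
import random
import statistics

random.seed(1)

# Invented raw citation-impact values for 700 universities: a long tail of
# low scorers plus a smaller group of high performers.
raw = ([random.gauss(1.0, 0.3) for _ in range(300)]
       + [random.gauss(2.5, 0.8) for _ in range(400)])

mu = statistics.mean(raw)
sigma = statistics.stdev(raw)

# Standardise against the whole database, then map to a 0-100 scale
# via the normal cumulative distribution function.
scores = [100 * statistics.NormalDist().cdf((x - mu) / sigma) for x in raw]

top_400 = sorted(scores, reverse=True)[:400]
print(round(statistics.mean(scores), 1))   # roughly 50 across all 700
print(round(statistics.mean(top_400), 1))  # well above 50 for the 'published' top 400
```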
Friday, March 15, 2013
Some More on the THE World University Rankings 2012-13
Here are some observations based on a simple analysis of the Times Higher Education World University Rankings of 2012-13.
First, calculating the Pearson correlation between the indicator groups produces some interesting points. If a ranking is valid we would expect the correlations between indicators to be fairly high but not too high. If the correlations between indicators are above .800 this suggests that they are basically measuring the same thing and that there is no point in having more than one indicator. On the other hand it is safe to assume that if an indicator does measure quality or desired characteristics in some way it will have a positive relationship with other valid indicators.
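For anyone who wants to repeat the exercise, the calculation itself is straightforward once the indicator scores are lined up; here is a minimal sketch with invented figures.

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Invented indicator scores for five universities.
research = [85.2, 70.1, 64.3, 55.0, 40.7]
citations = [90.0, 60.5, 75.2, 58.1, 52.3]
print(round(pearson(research, citations), 3))
```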
One thing about the 2012-13 rankings is that the relationship between international outlook (international faculty, students and research collaboration) and the other indicators is negative or very slight. With teaching it is .025 (not significant), with industry income .003 (not significant), with research .156 and with citations .158. This adds to my suspicion that internationalisation, at least among those universities that get into the world rankings, does not per se say very much about quality.
Industry income correlates modestly with teaching (.350) and research (.396), insignificantly with international outlook (.003) and negatively and insignificantly with citations (-.008).
The correlation between research and teaching is very high at .905. This may well be because the survey of academic opinion contributes to the teaching and the research indicators. There are different questions -- one about research and one about postgraduate supervision -- but the difference between the responses is probably quite small.
It is also very interesting that the correlation between scores for research and citations is rather modest at .410. Since volume of publications, funding and reputation should contribute to research influence, which is what citations are supposed to measure, this suggests that the citations indicator needs a careful review.
Teaching, research and international outlook are composites of several indicators. It would be very helpful if THE or Thomson Reuters released the scores for the separate indicators.
Sunday, March 10, 2013
The California Paradox
Looking at the Times Higher Education reputation rankings, I noticed that there were two Californian universities in the superbrand six and seven in the top 50. This is not an anomaly. A slightly different seven can be found in the THE World University Rankings. California does even better in the Shanghai ARWU with three in the top six and 11 in the top 50. This is a slight improvement on 2003 when there were ten. According to ARWU, California would be the second best country in the world for higher education if it became independent.
California’s performance is not so spectacular according to QS, who have just four Californian institutions in their top fifty, a fall from 2004 when they had five (I am not counting the University of California at San Francisco which, being a single subject medical school, should not have been there). Even so it is still a creditable performance.
But, if we are to believe many commentators, higher education in California, at least public higher education, is dying if not already dead. According to Andy Kroll in Salon:
"California’s public higher education system is, in other words, dying a slow death. The promise of a cheap, quality education is slipping away for the working and middle classes, for immigrants, for the very people whom the University of California’s creators held in mind when they began their grand experiment 144 years ago. And don’t think the slow rot of public education is unique to California: that state’s woes are the nation’s".
The villains, according to Kroll, are Californian taxpayers who refuse to accept adding to a tax burden that is among the highest in the world.
It is surprising that the death throes of higher education in California have gone unnoticed by the well known international rankers.
It is also surprising that public and private universities that are still highly productive and by international standards still lavishly funded exist in the same state as secondary and elementary schools that are close to being the worst in the nation in terms of student performance. The relative and absolute decline in educational achievement is matched by a similar decline in the overall economic performance of the state.
It may be just a matter of time before, in the coming decades, Californian universities follow primary and secondary education into irreversible decline.
Preserving data
Times Higher and QS have both renovated their ranking pages recently and both seem to have removed access to some data from previous years. THE used to provide links to the Times Higher Education (Supplement) - Quacquarelli Symonds rankings of 2004-2010 but apparently not any more. QS do not seem to give access to these rankings before 2007. In both cases, I will update if it turns out that there is a way to get to these rankings.
There is, however, a site which has the rankings for the top 200 of the THES - QS Rankings of 2004-2007.