Showing posts sorted by relevance for query MIT.

Wednesday, May 04, 2011

New QS Rankings

QS have just released their Life Sciences rankings based on their employer and academic surveys and citations per paper.

Here are the top five for medicine, biology and psychology.

Medicine

1.  Harvard
2.  Cambridge
3.  MIT
4.  Oxford
5.  Stanford

Biological Sciences

1.  Harvard
2.  MIT
3.  Cambridge
4.  Oxford
5.  Stanford

Psychology

1.  Harvard
2.  Cambridge
3.  Stanford
4.  Oxford
5.  UC Berkeley

Sunday, September 09, 2012

Will There be a New Number One?

One reason why QS and Times Higher Education get more publicity than the Shanghai ARWU, HEEACT and other rankings is that they periodically produce interesting surprises. Last year Caltech replaced Harvard as number one in the THE rankings and Tokyo overtook Hong Kong as the best Asian university. Two years ago Cambridge pushed Harvard aside at the top of the QS rankings.

Will there be another change this year?

There is an intriguing interview with Les Ebdon, the UK government's "university access tsar", in the Daily Telegraph. Ebdon claims that leading British universities are in danger of losing their world-class status unless they start admitting more students from state schools who may be somewhat less academically qualified. Perhaps he knows something.

So if Cambridge slips and is replaced by Harvard, MIT or Yale as QS number one (if it is Oxford or Imperial, QS will lose all credibility), we can expect comments that Cambridge should start listening to him before it's too late.

I suspect that if there is a new number one it might have something to do with the QS employer review. Since this is a sign-up survey and the numbers are quite small, it would not take many additional responses to push Harvard or MIT into first place.

With regard to THE, the problem there is that normalising everything by country, year and/or field is a potential source of instability. If there is a vigorous debate with lots of citations about an obscure article by a Harvard researcher in a little-cited field, it could dramatically boost the score on the citations indicator.
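To see the arithmetic, here is a minimal sketch in Python (my own toy example with invented numbers, not THE's code), assuming the usual approach of scoring each paper as its citations divided by the world average for its field and year and then averaging over a university's papers.

# Toy illustration of normalised citation impact: one heavily cited paper in a
# little-cited field can dominate the average when the paper count is small.

def normalised_impact(papers):
    # papers: list of (citations, world_average_citations_for_field_and_year)
    ratios = [cites / world_avg for cites, world_avg in papers]
    return sum(ratios) / len(ratios)

ordinary_output = [(4, 8.0), (6, 8.0), (2, 8.0), (5, 8.0)]    # close to the world average
print(round(normalised_impact(ordinary_output), 2))           # 0.53

# Add one obscure-field paper (world average 0.5 citations) that attracts 60 citations
# because of a controversy: the university's score jumps by an order of magnitude.
print(round(normalised_impact(ordinary_output + [(60, 0.5)]), 2))   # roughly 24.4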

Getting a good score in the THE rankings also depends on what a school is being compared to. Last year, Hong Kong universities slumped because they were taken out of China (with low average scores) and classified as a separate country (with high average scores), so that their relative scores were lower. If they are put back in China they will go up this year and there will be a new number one Asian university.

So anybody want to bet on Harvard making a come back this year? Or Hong Kong regaining the top Asian spot from Tokyo in the THE rankings?

Wednesday, September 22, 2010

Selected Comments from Times Higher Education


Mike Reddin 17 September, 2010
World university rankings take national ranking systems from the ridiculous to the bizarre. Two of the most glaring are made more so by these latest meta analyses.
Number One: R&D funding is scored not by its quality or contribution to learning or understanding but by the amount of money spent on that research; it ranks expensive research higher than cheap research; it ranks a study of 'many things' better than the study of a 'few things'; it ranks higher the extensive and expensive pharmacological trial than the paper written in tranquility over the weekend. I repeat, it does not score 'contribution to knowledge'.

Number Two. Something deceptively similar happens in the ranking of citations. We rank according to number alone - not 'worth' - not whether the paper merited writing in the first place, not whether we are the better for or the worse without it, not whether it adds to or detracts from the sum of human knowledge. Write epic or trash .... as long as it is cited, you score. Let me offer utter rubbish - the more of you that denounce me the better; as long as you cite my name and my home institution.

Which brings me full circle: the 'rankings conceit' equates research / knowledge / learning / thinking / understanding with institutions - in this case, universities and universities alone. Our ranking of student 'outcomes' (our successes/failure as individuals on many scales) wildly presumes that they flow from 'inputs' (universities). Do universities *cause* these outcomes - do they add value to those they have admitted? Think on't. Mike Reddin http://www.publicgoods.co.uk/



jorge Sanchez 18 September, 2010
this is ridiculous~ LSE was placed 67 in the previous year and THE decided to end relations with QS because of this issue. now since THE is no longer teaming up with QS, how could you possibly explain this anomaly by placing LSE ranked 86 in the table????


Mark 18 September, 2010
where is the "chinese university of Hong Kong in the table??? it is no longer in the top 200 best universities....

last year was in the top 50 now is off the table??? is this a serious ranking?????


Of course it's silly 18 September, 2010
Just look at the proposition that teaching is better if you have a higher proportion of doctoral students to undergraduate students.

This is just plainly silly, as 10 seconds thinking about the reputation of teaching in the US will tell you: liberal arts colleges offer extraordinary teaching in the absence of PhD programmes.



Matthew H. Kramer 18 September, 2010
Though some tiers of these rankings are sensible, there are some bizarre anomalies. Mirabile dictu, the University of Texas doesn't appear at all; the University of Virginia is ridiculously low at 72; NYU is absurdly low at 60; the University of Hong Kong is preposterously overrated at 21. Moreover, as has been remarked in some of the previous comments -- and as is evident from a glance at the rankings -- the criteria hugely favor technical institutes. The rank of MIT at 3 is credible, because MIT is outstanding across the board. However, Cal Tech doesn't belong at 2, and Imperial (which has no programs at all in the humanities and social sciences) certainly doesn't belong at 9. Imperial and especially Cal Tech are outstanding in what they do, but neither of them is even close to outstanding across the gamut of subjects that are covered by any full-blown university. I hope that some of these anomalies will be eliminated through further adjustments in the criteria. The exclusion of Texas is itself sufficiently outlandish to warrant some major modifications in those criteria.



Matthew H. Kramer 18 September, 2010
Weird too is the wholesale exclusion of Israeli universities. Hebrew University, Tel Aviv University, and Technion belong among the top 200 in any credible ranking of the world's universities.


Neil Fazel 19 September, 2010
No Sharif, no U. Texas, no Technion. Another ranking to be ignored.


OZ academic 20 September, 2010
While the criteria seem to be OK, although they might be debated, how to carry out the statistical analyses and how to collect the data are the issues for the validity of the poll. The omission of Chinese University of Hong Kong, in the inclusion of the Hong Kong Baptist University and Hong Kong Polytechnic University in the world's top 200 universities, seems to be very "mysterious" to me. As I understand the Chinese University of Hong Kong is more or less of a similar standard in teaching and research in comparison to the Hong Kong University and the Hong Kong University of Science and Technology, but they have some slight edges over the Hong Kong Baptist University and the Hong Kong Polytechnic University. I wonder if there are mix-ups in the data collection processes. If this is true, then there are disputes in this poll not only in the criteria of assessment but also in the accuracy in data collections and analyses.

Wednesday, November 01, 2006

The Best Universities for Biomedicine?

THES has published a list of the world's 100 best universities for biomedicine. This is based, like the other subject rankings, on peer review. Here are the top twenty according to the THES reviewers.

1. Cambridge
2. Harvard
3. Oxford
4. Imperial College London
5. Stanford
6. Johns Hopkins
7. Melbourne
8. Beijing (Peking)
9. National University of Singapore
10. Berkeley
11. Yale
12. Tokyo
13. MIT
14. University of California at San Diego
15. Edinburgh
16. University College London
17. Kyoto
18. Toronto
19. Monash
20. Sydney

Here are the top twenty according to citations per paper, a measure of the quality of research.


1. MIT
2. Caltech
3. Princeton
4. Berkeley
5. Stanford
6. Harvard
7. Oxford
8. University of California at San Diego
9. Cambridge
10. Yale
11. Washington (St Louis)
12. Johns Hopkins
13. ETH Zurich
14. Duke
15. Dundee
16. University of Washington
17. Chicago
18. Vanderbilt
19. Columbia
20. UCLA

The two lists are quite different. Here are the positions according to citations per paper of some of the universities that were in the top twenty for the peer review;

University College London -- 24
Edinburgh -- 25
Imperial College London -- 28
Tokyo -- 34
Toronto -- 35
Kyoto -- 36
Monash -- 52
Melbourne -- 58
Sydney -- 67
National University of Singapore -- 74
Beijing -- 78=

Again, there is a consistent pattern of British, Australian and East Asian universities doing dramatically better in the peer review than in citations per paper. How did they acquire such a remarkable reputation if their research was of such undistinguished quality? Did they acquire a reputation for producing a large quantity of mediocre research?

Notice that Cambridge, despite having the top score for peer review, produces research that is, according to QS's own data, of lower quality than that of eight universities, seven of them in the US and four of them in California.

There are also 23 universities that produced insufficient papers to be counted by the consultants. Thirteen are in Asia, five in Australia and New Zealand, four in Europe and one in the US. How did they acquire such a remarkable reputation while producing so little research? Was the little research they did of a high quality?

Saturday, October 28, 2006

The Best Universities for Technology?

The Times Higher Education Supplement (THES) have published a list of the supposed top 100 universities in the world in the field of technology. The list purports to be based on opinion of experts in the field. However, like the ranking for science, it cannot be considered valid. First, let us compare the top 20 universities according to peer review and then the top 20 according to the data provided by THES for citations per paper, a reasonable measure of the quality of research.

First, the peer review:

1. MIT
2. Berkeley
3. Indian Institutes of Technology (all of them)
4. Imperial College London
5. Stanford
6. Cambridge
7. Tokyo
8. National University of Singapore
9. Caltech
10. Carnegie-Mellon
11. Oxford
12. ETH Zurich
13. Delft University of Technology
14. Tsing Hua
15. Nanyang Technological University
16. Melbourne
17. Hong Kong University of Science and Technology
18. Tokyo Institute of Technology
19. New South Wales
20. Beijing (Peking University)

Now, the top twenty ranked according to citations per paper:

1. Caltech
2. Harvard
3. Yale
4. Stanford
5. Berkeley
6. University of California at Santa Barbara
7. Princeton
8. Technical University of Denmark
9. University of California at San Diego
10. MIT
11. Oxford
12. University of Pennsylvania
13. Pennsylvania State University
14. Cornell
15. Johns Hopkins
16. Boston
17. Northwestern
18. Columbia
19. Washington (St. Louis)
20. Technion (Israel)

Notice that the Indian Institutes of Technology, Tokyo, National University of Singapore, Nanyang Technological University, Tsing Hua, Melbourne, New South Wales and Beijing are not ranked in the top 20 according to quality of published research. Admittedly, it is possible that in this field a substantial amount of research consists of unpublished reports for state organizations or private companies but this would surely be more likely to affect American rather than Asian or Australian universities.

Looking a bit more closely at some of the universities in the top twenty for technology according to the peer review, we find that, when ranked for citations per paper, Tokyo is in 59th place, National University of Singapore 70th, Tsing Hua 86th, Indian Institutes of Technology 88th, Melbourne 35th, New South Wales 71st, and Beijing 76th. Even Cambridge, sixth in the peer review, falls to 29th.

Again, there are a large number of institutions that did not even produce enough papers to be worth counting, raising the question of how they could be sufficiently well known for there to be peers to vote for them. This is the list:

Indian Institutes of Technology
Korean Advanced Institute of Science and Technology
Tokyo Institute of Technology
Auckland
Royal Institute of Technology Sweden
Indian Institutes of Management
Queensland University of Technology
Adelaide
Sydney Technological University
Chulalongkorn
RMIT
Fudan
Nanjing

Once again there is a very clear pattern of the peer review massively favoring Asian and Australasian universities. Once again, I can see no explanation other than an overrepresentation of these regions (and, somewhat less glaringly, of Europe) among the survey respondents, combined with questions that allow or encourage respondents to nominate universities from their own regions or countries.

It is also rather disturbing that once again Cambridge does so much better on the peer review than on citations. Is it possible that THES and QS are manipulating the peer review to create an artificial race for supremacy – “Best of British Closing in on Uncle Sam’s finest”. Would it be cynical to suspect that next year Cambridge and Harvard will be in a circulation-boosting race for the number one position?

According to citations per paper, Harvard was 4th for science, second for technology and 6th for biomedicine, while Cambridge was 19th, 29th and 9th.

For the peer review, Cambridge was first for science, 6th for technology and first for biomedicine. Harvard was 4th, 23rd and second.

Overall, there is no significant relationship between the peer review and research quality as measured by citations per paper. The correlation between the two is .169, which is statistically insignificant. For the few Asian universities that produced enough research to be counted, the correlation is .009, effectively no better than chance.

At the risk of being boringly repetitive, it is becoming clearer and clearer that the THES rankings, especially the peer review component, are devoid of validity.

Thursday, October 26, 2006

The World’s Best Science Universities?

The Times Higher Education Supplement (THES) has now started to publish lists of the world’s top 100 universities in five disciplinary areas. The first to appear were those for science and technology.

For each disciplinary area, THES publishes scores for its peer review, conducted by people described variously as "research-active academics" or just as "smart people", along with the number of citations per paper. The ranking is, however, based solely on the peer review, although a careless reader might conclude that the citations were considered as well.

We should ask for a moment what a peer review, essentially a measure of a university’s reputation, can accomplish that an analysis of citations cannot. A citation is basically an indication that another researcher has found something of interest in a paper. The number of citations of a paper indicates how much interest a paper has aroused among the community of researchers. It coincides closely with the overall quality of research, although occasionally a paper may attract attention because there is something very wrong with it.

Citations then are a good measure of a university’s reputation for research. For one thing, votes are weighted. A researcher who publishes a great deal has more votes and his or her opinion will have more weight than someone who publishes nothing. There are abuses of course. Some researchers are rather too fond of citing themselves and journals have been known to ask authors to cite papers by other researchers whose work they have published but such practices do not make a substantial difference.

In providing the number of citations per paper as well as the score for peer review, THES and their consultants, QS Quacquarelli Symonds, have really blown their feet off. If the scores for peer review and the citations are radically different it almost certainly means that there is something wrong with the review. The scores are in fact very different and there is something very wrong with the review.

This post will review the THES rankings for science.

Here are the top twenty universities for the peer review in science:

1. Cambridge
2. Oxford
3. Berkeley
4. Harvard
5. MIT
6. Princeton
7. Stanford
8. Caltech
9. Imperial College, London
10. Tokyo
11. ETH Zurich
12. Beijing (Peking University)
13. Kyoto
14. Yale
15. Cornell
16. Australian National University
17. Ecole Normale Superieure, Paris
18. Chicago
19. Lomonosov Moscow State University
20. Toronto


And here are the top 20 universities ranked by citations per paper:


1. Caltech
2. Princeton
3. Chicago
4. Harvard
5. Johns Hopkins
6. Carnegie-Mellon
7. MIT
8. Berkeley
9. Stanford
10. Yale
11. University of California at Santa Barbara
12. University of Pennsylvania
13. Washington (Saint Louis?)
14. Columbia
15. Brown
16. University of California at San Diego
17. UCLA
18. Edinburgh
19. Cambridge
20. Oxford


The most obvious thing about the second list is that it is overwhelmingly dominated by American universities, with the top 17 places going to the US. Cambridge and Oxford, first and second in the peer review, are 19th and 20th by this measure. Imperial College London, Beijing, Tokyo, Kyoto and the Australian National University are in the top 20 for peer review but not for citations.

Some of the differences are truly extraordinary. Beijing is 12th for peer review and 77th for citations, Kyoto 13th and 57th, the Australian National University 16th and 35th, Ecole Normale Superieure, Paris 17th and 37th, Lomonosov Moscow State University 18th and 82nd, the National University of Singapore 25th and 75th, Sydney 35th and 70th, and Toronto 20th and 38th. Bear in mind that there are almost certainly several universities that were not in the peer review top 100 but have more citations per paper than some of these institutions.

It is no use saying that citations are biased against researchers who do not publish in English. For better or worse, English is the lingua franca of the natural sciences and technology, and researchers and universities that do not publish extensively in English will simply not be noticed by other academics. Moreover, a bias towards English cannot explain why Sydney, ANU and the National University of Singapore rank so high on the peer review while performing comparatively poorly on citations.

Furthermore, there are some places for which no citation score is given. Presumably, they did not produce enough papers to be even considered. But if they produce so few papers, how could they become so widely known that their peers would place them in the world’s top 100? These universities are:

Indian Institutes of Technology (all of them)
Monash
Auckland
Universiti Kebangsaan Malaysia
Fudan
Warwick
Tokyo Institute of Technology
Hong Kong University of Science and Technology
Hong Kong
St. Petersburg
Adelaide
Korean Advanced Institute of Science and Technology
New York University
King’s College London
Nanyang Technological University
Vienna Technical University
Trinity College Dublin
Universiti Malaya
Waterloo

These universities are overwhelmingly East Asian, Australian and European. None of them appear to be small, specialized universities that might produce a small amount of high quality research.

The peer review and citations per paper thus give a totally different picture. The first suggests that Asian and European universities are challenging those of the United States and that Oxford and Cambridge are the best in the world. The second indicates that the quality of research of American universities is still unchallenged, that the record of Oxford and Cambridge is undistinguished and that East Asian and Australian universities have a long way to go before being considered world class in any meaningful sense of the word.

A further indication of how different the two lists are can be found by calculating their correlation. Overall, the correlation is, as expected, weak (.390). For Asia-Pacific (.217) and for Europe (.341) it is even weaker and statistically insignificant. If we exclude Australia from the list of Asia-Pacific universities and just consider the remaining 25, there is almost no association at all between the two measures. The correlation is .099, for practical purposes no better than chance. Whatever criteria the peer reviewers used to pick Asian universities, quality of research could not have been among them.
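For anyone who wants to check figures like these, the calculation is just a Pearson correlation between the two sets of scores. Here is a small self-contained Python sketch; the numbers are invented for illustration, not the actual THES data.

import math

def pearson_r(xs, ys):
    # Plain Pearson product-moment correlation between two equal-length lists.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical peer review scores and citations-per-paper figures for seven universities.
peer_review = [100, 97, 88, 75, 60, 55, 40]
citations_per_paper = [25.8, 27.5, 34.6, 18.7, 32.8, 10.4, 20.1]
print(round(pearson_r(peer_review, citations_per_paper), 3))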

So has the THES peer review found out something that is not apparent from other measures? Is it possible that academics around the world are aware of research programmes that have yet to produce large numbers of citations? This, frankly, is quite implausible since it would require that nascent research projects have an uncanny tendency to concentrate in Europe, East Asia and Australia.

There seems to be no other explanation for the overrepresentation of Europe, East Asia and Australia in the science top 100 than a combination of a sampling procedure that included a disproportionate number of respondents from these regions, questions that allowed or encouraged respondents to nominate universities in their own regions or even countries, and a disproportionate distribution of forms to certain countries within regions.

I am not sure whether this is the result of extreme methodological naivety, with THES and QS thinking that they are performing some sort of global affirmative action by rigging the vote in favour of East Asia and Europe or whether it is a cynical attempt to curry favour with those regions that are involved in the management education business or are in the forefront of globalization.

Whatever is going on, the peer review gives a very false picture of current research performance in science. If people are to apply for universities or accept jobs or award grants in the belief that Beijing is better at scientific research than Yale, ANU than Chicago, Lomonosov than UCLA, Tsinghua than Johns Hopkins then they are going to make bad decisions.

If this is unfair then there is no reason why THES or QS should not indicate the following:

The universities and institutions to which the peer review forms were sent.
The precise questions that were asked.
The number of nominations received by universities from outside their own regions and countries.
The response rate.
The criteria by which respondents were chosen.

Until THES and/or QS do this, we can only assume that the rankings are an example of how almost any result can be produced with the appropriate, or inappropriate, research design.

Tuesday, September 05, 2006

The Fastest Way into the THES TOP 200

In a little while the latest edition of the THES rankings will be out. There will be protests from those who fail to make the top 200, 300 or 500 and much self-congratulation from those included. Also, of course, THES and QS, THES’s consultants, directly or indirectly, will make a lot of money from the whole business.

If you search through the web you will find that QS and THES have been quite busy over the last year or so promoting their rankings and giving advice about what to do to get into the top 200. Some of their advice is not very helpful. Thus, Nunzio Quacquarelli, director of QS, told a seminar in Kuala Lumpur in November 2005 that producing more quality research was one way of moving up in the rankings. This is not necessarily a bad thing, but it will be at least a decade before any quality research can be completed, written up, submitted for publication, revised, finally accepted, published, and then cited by another researcher whose work goes through the same processes. Only then will research start to push a university into the top 200 or 100 by boosting its score for citations per faculty.

Something less advertised is that once a university has got onto the list of 300 universities (so far this has been decided by peer review) there is a very simple way of boosting a university’s position in the rankings. It is also not unlikely that several universities have already realized this.

Pause for a minute and review the THES methodology. They gave a weighting of 40 per cent to a review of universities by other academics, 10 per cent to a rating by employers, 20 per cent to the ratio of faculty to students, 10 per cent to the proportion of international faculty and students, and 20 per cent to the number of citations per faculty. In 2005 the top scoring institution in each category was given a score of 100 and then the scores of the others were calibrated accordingly.

Getting back to boosting ratings, first take a look at the 2004 and 2005 scores for citations per faculty. Comparison is a bit difficult because the top scorer was given a score of 400 in 2004 but one of 100 in 2005 (it is MIT in both cases). What immediately demands attention is that there are some very dramatic changes between 2004 and 2005.

For example Ecole Polytechnique in Paris fell from 14.75 (dividing the THES figures by four because top ranked MIT was given a score of 400 in 2004) to 4, ETH Zurich from 66.5 to 8, and McGill in Canada from 21 to 8.

This at first sight is more than a bit strange. The figures are supposed to refer to ten-year periods, so that in 2005 citations for the earliest year would be dropped and then those for another year added. You would not expect very much change from year to year since the figures for 2004 and 2005 overlap a great deal.

But it is not only citations that we have to consider. The score is actually based on citations per faculty member. So, if the number of faculty goes up and the number of citations remains the same then the score for citations per faculty goes down.

This in fact is what happened to a lot of universities. If we look at the score for citations per faculty and then the score for faculty-student ratio there are several cases where they change proportionately but in opposite directions.

So, going back to the three examples given above, between 2004 and 2005 Ecole Polytechnique went up from 23 to 100 to become the top scorer for faculty-student ratio, ETH Zurich from 4 to 37, and McGill from 23 to 42. Notice that the rise in the faculty-student ratio score is roughly proportionate to the fall in the score for citations per faculty.

I am not the first person to notice the apparent dramatic collapse of research activity at ETH Zurich. Norbert Staub in ETH Life International was puzzled by this. It looks as though ETH Zurich did not stop doing research; rather, it apparently acquired something like eight times as many teachers.

It seems pretty obvious that what happened to these institutions is that the apparent number of faculty went up between 2004 and 2005. This led to a rise in the score for faculty student ratio and a fall in the number of citations per faculty.

You might ask, so what? If a university goes up on one measure and goes down on another surely the total score will remain unchanged.

Not always. THES has indexed the scores to the top scoring university so that in 2005 the top scorer gets 100 for both faculty-student ratio and citations per faculty. But the gap between the top university for faculty-student ratio and run-of-the-mill places in, say, the second hundred is much less than it is for citations per faculty. For example, take a look at the faculty-student scores of the universities starting at position number 100. We have 15, 4, 13, 10, 23, 16, 13, 29, 12, 23. Then look at the scores for citations per faculty: 7, 1, 8, 6, 0, 12, 9, 14, 12, 7.

That means that many universities can, like Ecole Polytechnique, gain much more by increasing their faculty-student ratio score than they lose by reducing their citations per faculty. Not all of course. ETH Zurich suffered badly as a result of this faculty inflation.
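A back-of-the-envelope calculation with the Ecole Polytechnique figures quoted above and the 20 per cent weightings for the two indicators shows how large the net gain can be. This is a rough sketch of the arithmetic, not QS's actual scoring code.

# Weighted effect of "faculty inflation" on the two affected indicators, using the
# 2005 THES weights: 20% faculty-student ratio, 20% citations per faculty.

weights = {"faculty_student": 0.20, "citations_per_faculty": 0.20}

before = {"faculty_student": 23.0, "citations_per_faculty": 14.75}   # 2004 (rescaled to 100)
after = {"faculty_student": 100.0, "citations_per_faculty": 4.0}     # 2005

net_change = sum(weights[k] * (after[k] - before[k]) for k in weights)
print(round(net_change, 2))   # +13.25 points of overall weighted score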

So what is going on? Are we really to believe that in 2005 Ecole Polytechnique quadrupled its teaching staff, ETH Zurich increased its eightfold and McGill's nearly doubled? This is totally implausible. The only explanation that makes any sort of sense is that either QS or the institutions concerned were counting their teachers differently in 2004 and 2005.

The likeliest explanation for Ecole Polytechnique's remarkable change is simply that in 2004 only full-time staff were counted but in 2005 part-time staff were counted as well. It is well known that many staff of the Grandes Ecoles of France are employed by neighbouring research institutes and universities, although exactly how many is hard to find out. If anyone can suggest any other explanation please let me know.

Going through the rankings we find that there are quite a few universities affected by what we might call "faculty inflation": EPF Lausanne went from 13 to 64, Eindhoven from 11 to 54, the University of California at San Francisco from 39 to 91, Nagoya from 19 to 35, and Hong Kong from 8 to 17.

So, having got through the peer review, this is how to get a boost in the rankings. Just inflate the number of teachers and deflate the number of students.

Here are some ways to do it. Wherever possible, hire part-time teachers but don't differentiate between full- and part-time. Announce that every graduate student is a teaching assistant, even if they just have to do a bit of marking, and count them as teaching staff. Make sure anyone who leaves is designated emeritus or emerita and kept on the books. Never sack anyone but keep him or her suspended. Count everybody in branch campuses and off-campus programmes. Classify all administrative appointees as teaching staff.

It will also help to keep the official number of students down. A few possible ways are not counting part-time students, not counting branch campuses, and counting student numbers at the end of the semester, when some have dropped out.

Tuesday, February 21, 2017

Never mind the rankings, THE has a huge database



There has been a debate, or perhaps the beginnings of a debate, about international university rankings following the publication of Bahram Bekhradnia's report to the Higher Education Policy Institute, with comments in University World News by Ben Sowter, Phil Baty, Frank Ziegele and Frans van Vought, Philip Altbach and Ellen Hazelkorn, and a guest post by Bekhradnia in this blog.

Bekhradnia argued that global university rankings were damaging and dangerous because they encourage an obsession with research, rely on unreliable or subjective data, and emphasise spurious precision. He suggests that governments, universities and academics should just ignore the rankings.

Times Higher Education (THE) has now published a piece by THE rankings editor Phil Baty that does not really deal with the criticism but basically says that it does not matter very much because the THE database is bigger and better than anyone else's. This, he claims, is "the true purpose and enduring legacy" of the THE world rankings.

Legacy? Does this mean that THE is getting ready to abandon rankings, or maybe just the world rankings, and go exclusively into the data refining business? 

Whatever Baty is hinting at, if that is what he is doing, it does seem a rather insipid defence of the rankings to say that all the criticism is missing the point because they are the precursor to a big and sophisticated database.

The article begins with a quotation from Lydia Snover, Director of Institutional Research, at MIT:

“There is no world department of education,” says Lydia Snover, director of institutional research at the Massachusetts Institute of Technology. But Times Higher Education, she believes, is helping to fill that gap: “They are doing a real service to universities by developing definitions and data that can be used for comparison and understanding.”

This sounds as though THE is doing something very impressive that nobody else has even thought of doing. But Snover's elaboration of this point in an email gives equal billing to QS and THE as definition developers and suggests the definitions and data that they provide will improve and expand in the future, implying that they are now less than perfect. She says:

"QS and THE both collect data annually from a large number of international universities. For example, understanding who is considered to be “faculty” in the EU, China, Australia, etc.  is quite helpful to us when we want to compare our universities internationally.  Since both QS and THE are relatively new in the rankings business compared to US NEWS, their definitions are still evolving.  As we go forward, I am sure the amount of data they collect and the definitions of that data will expand and improve."

Snover, by the way, is a member of the QS advisory board, as is THE's former rankings "masterclass" partner, Simon Pratt.

Baty offers a rather perfunctory defence of the THE rankings. He talks about rankings bringing great insights into the shifting fortunes of universities. If we are talking about year to year changes then the fact that THE purports to chart shifting fortunes is a very big bug in their methodology. Unless there has been drastic restructuring universities do not change much in a matter of months and any ranking that claims that it is detecting massive shifts over a year is simply advertising its deficiencies.

The assertion that the THE rankings are the most comprehensive and balanced is difficult to take seriously. If by comprehensive it is meant that the THE rankings have more indicators than QS or Webometrics that is correct. But the number of indicators does not mean very much if they are bundled together and the scores hidden from the public and if some of the indicators, the teaching survey and research survey for example, correlate so closely that they are effectively the same thing. In any case, the Russian Round University Rankings have 20 indicators compared with THE's 13 in the world rankings.

As for being balanced, we have already seen Bekhradnia's analysis showing that even the teaching and international outlook criteria in the THE rankings are really about research. In addition, THE gives almost a third of its weighting to citations. In practice that is often even more because the effect of the regional modification, now applied to half the indicator, is to boost in varying degrees the scores of everybody except those in the best performing country. 
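For readers unfamiliar with the regional modification, here is a minimal sketch of the mechanism as I understand it to have been described, namely dividing a university's citation impact score by the square root of its country's average; since the modification is now applied to only half the indicator, the sketch blends modified and unmodified scores. Treat this as my reading of the method, not THE's actual code.

import math

def citation_score(university_impact, country_average_impact, modified_share=0.5):
    # Dividing by the square root of the country average lifts universities in
    # countries with a low national average; the modified and unmodified versions
    # are blended because only part of the indicator is adjusted.
    modified = university_impact / math.sqrt(country_average_impact)
    return modified_share * modified + (1 - modified_share) * university_impact

# Two hypothetical universities with identical raw impact scores:
print(round(citation_score(1.2, 1.4), 2))   # strong national average -> about 1.11
print(round(citation_score(1.2, 0.4), 2))   # weak national average   -> about 1.55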

After offering a scaled down celebration of the rankings, Baty then dismisses critics while announcing that THE "is quietly [seriously?] getting on with a hugely ambitious project to build an extraordinary and truly unique global resource." 


Perhaps some elite universities, like MIT, will find the database and its associated definitions helpful but whether there is anything extraordinary or unique about it remains to be seen.







Monday, September 05, 2011

Commentary on the 2011 QS World University Rankings

From India

"University of Cambridge retains its number one spot ahead of Harvard, according to the QS World University Rankings 2011, released today. Meanwhile, MIT jumps to the third position, ahead of Yale and Oxford.

While the US continues to dominate the world ranking scenario, taking 13 of top 20 and 70 of top 300 places, 14 of 19 Canadian universities have ranked lower than 2010. As far as Europe is concerned, Germany, one of the emerging European destinations in recent times, has no university making it to the top 50 despite its Excellence Initiative.

Asian institutions - particularly those from Japan, Korea, Singapore, Hong Kong and China - have fared well at a discipline level in subject rankings produced by QS this year - this is particularly true in technical and hard science fields.

Despite the Indian government's efforts to bring about a radical change in the Indian higher education sector, no Indian university has made it to the top 200 this year. However, China has made it to the top 50 and Middle East in the top 200 for the first time.

According to Ben Sowter, QS head of research, "There has been no (relative) improvement from any Indian institution this year. The international higher education scene is alive with innovation and change, institutions are reforming, adapting and revolutionising. Migration amongst international students and faculty continues to grow with little sign of slowing. Universities can no longer do the same things they have always done and expect to maintain their position in a ranking or relative performance.""

Friday, May 11, 2018

Ranking Insights from Russia

The ranking industry is expanding and new rankings appear all the time. Most global rankings measure research publications and citations. Others try to add to the mix indicators that might have something to do with teaching and learning. There is now a  ranking that tries to capture various third missions.

The Round University Rankings published in Russia are in the tradition of holistic rankings. They give a 40% weighting to research, 40% to teaching, 10% to international diversity and 10% to financial sustainability. Each group contains five equally weighted indicators. The data is derived from Clarivate Analytics, which also contributes to the US News Best Global Universities Rankings.
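In outline, then, an RUR-style overall score is just a weighted average of group averages. Here is a small sketch with placeholder figures; the indicator values are invented and this is my reconstruction of the arithmetic, not RUR's own code.

group_weights = {"teaching": 0.40, "research": 0.40,
                 "international_diversity": 0.10, "financial_sustainability": 0.10}

def overall_score(scores_by_group):
    # Each group is the simple average of its five equally weighted indicators.
    return sum(weight * (sum(scores_by_group[group]) / len(scores_by_group[group]))
               for group, weight in group_weights.items())

example_university = {
    "teaching": [70, 65, 80, 60, 75],
    "research": [85, 90, 80, 70, 88],
    "international_diversity": [40, 55, 60, 45, 50],
    "financial_sustainability": [65, 70, 60, 75, 80],
}
print(round(overall_score(example_university), 1))   # 73.0 with these made-up numbers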

These rankings are similar to the THE rankings in that they attempt to assess quality rather than quantity but they have 20 indicators instead of 13 and assign sensible weightings. Unfortunately, they receive only a fraction of the attention given to the THE rankings.

They are, however, very valuable since they dig deeper into the data than other global rankings. They also show that there is a downside to measures of quality and that data submitted directly by institutions should be treated with caution and perhaps scepticism.

Here are the top universities for each of the RUR indicators.

Teaching
Academic staff per students: VIB (Flemish Institute of Biotechnology), Belgium
Academic staff per bachelor degrees awarded: University of Valladolid, Spain
Doctoral degrees per academic staff: Kurdistan University of Medical Science, Iran
Doctoral degrees per bachelor degrees awarded:  Jawaharlal Nehru University, India
World teaching reputation: Harvard University, USA.

Research
Citations per academic and research staff: Harvard
Doctoral degrees per admitted PhD: Al Farabi Kazakh National University
Normalised citation impact: Rockefeller University, USA
Share of international co-authored papers: Free University of Berlin
World research reputation: Harvard.

International diversity
Share of international academic staff: American University of Sharjah, UAE
Share of international students: American University of Sharjah
Share of international co-authored papers: Innopolis University, Russia
Reputation outside region: Voronezh State Technical University, Russia
International Level: EPF Lausanne, Switzerland.

Financial sustainability:
Institutional income per academic staff: Universidade Federal Do Ceara, Brazil
Institutional income per student: Rockefeller University
Papers per research income: Novosibirsk State University of Economics and Management, Russia
Research income per academic and research staff: Istanbul Technical University, Turkey
Research income per institutional income: A C Camargo Cancer Center, Brazil.

There are some surprising results here. The most obvious is Voronezh State Technical University which is first for reputation outside its region (Asia, Europe and so on), even though its overall scores for reputation and for international diversity are very low. The other top universities for this metric are just what you would expect, Harvard, MIT, Stanford, Oxford and so on. I wonder whether there is some sort of bug in the survey procedure, perhaps something like the university's supporters being assigned to Asia and therefore out of region. The university is also in second place in the world for papers per research income despite very low scores for the other research indicators.

There are other oddities such as Novosibirsk State University of Economics and Management placed first for papers per research income and Universidade Federal Do Ceara for institutional income per academic staff. These may result from anomalies in the procedures for reporting and analysing data, possibly including problems in collecting data on income and staff.

It also seems that medical schools and specialist or predominantly postgraduate institutions such as Rockefeller University, the Kurdistan University of Medical Science, Jawaharlal Nehru University and VIB have a big advantage with these indicators since they tend to have favourable faculty-student ratios, sometimes boosted by large numbers of clinical and research-only staff, and a large proportion of doctoral students.

Jawaharlal Nehru University is a mainly postgraduate university so a high placing for academic staff per bachelor degrees awarded is not unexpected although I am surprised that it is ahead of Yale and Princeton. I must admit that the third place here for the University of Baghdad needs some explanation.

The indicator doctoral degrees per admitted PhD might identify universities that do a good job of selection and training and get large numbers of doctoral candidates through the system. Or perhaps it identifies universities where doctoral programmes are so lacking in rigour that nearly everybody can get their degree once admitted. The top ten of this indicator includes De Montfort University, Shakarim University, Kingston University, and the University of Westminster, none of which are famous for research excellence across the range of disciplines.

Measures of international diversity have become a staple of global rankings since they are fairly easy to collect. The problem is that international orientation may have something to do with quality but it may also simply be a necessary attribute of being in a small country next to larger countries with the same or similar language and culture. The top ten for the international students indicator includes the Central European University and the American University of Sharjah. For international faculty it includes the University of Macau and Qatar University.

To conclude, these indicators suggest that self submitted institutional data should be used sparingly and that data from third party sources may be preferable. Also, while ranking by quality instead of quantity is sometimes advisable it also means that anomalies and outliers are more likely to appear.







Wednesday, July 29, 2009

Highlights from the New Webometrics Rankings

Top USA and Canada

1. MIT
2. Harvard
3. Stanford
4. Berkeley
5. Cornell


Top Europe

1. Cambridge
2. Oxford
3. Federal Institute of Technology ETH Zurich
4. University College London
5. University of Helsinki

Top Oceania

1. Australian National University
2. University of Queensland
3. Monash University
4. University of Melbourne
5. University of Sydney

Top South East Asia

1. National University of Singapore
2. Prince of Songkhla University
3. Chulalongkorn University
4. Kasetsart University
5. Mahidol University

Top Arab World

1. King Saud University
2. King Fahd University of Petroleum and Minerals
3. Imam Muhamed bin Saud University
4. King Faisal University
5. King Abdulaziz University

Wednesday, October 05, 2011

THE Rankings Out


Here is the top 10.

1. Caltech
2. Harvard
3. Stanford
4. Oxford
5. Princeton
6. Cambridge
7. MIT
8. Imperial College London
9. Chicago
10. Berkeley

Tuesday, September 05, 2017

Highlights from THE citations indicator


The latest THE world rankings were published yesterday. As always, the most interesting part is the field- and year-normalised citations indicator that supposedly measures research impact.

Over the last few years, an array of implausible places have zoomed into the top ranks of this metric, sometimes disappearing as rapidly as they arrived.

The first place for citations this year goes to MIT. I don't think anyone would find that very controversial.

Here are some of the institutions that feature in the top 100 of THE's most important indicator which has a weighting of 30 per cent.

2nd     St. George's, University of London
3rd=    University of California Santa Cruz, ahead of Berkeley and UCLA
6th =   Brandeis University, equal to Harvard
11th=   Anglia Ruskin University, UK, equal to Chicago
14th=   Babol Noshirvani University of Technology, Iran, equal to Oxford
16th=   Oregon Health and Science University
31st     King Abdulaziz University, Saudi Arabia
34th=   Brighton and Sussex Medical School, UK, equal to Edinburgh
44th     Vita-Salute San Raffaele University, Italy, ahead of the University of Michigan
45th=   Ulsan National Institute of Science and Technology, best in South Korea
58th=   University of Kiel, best in Germany and equal to King's College London
67th=   University of Iceland
77th=   University of Luxembourg, equal to University of Amsterdam












Wednesday, August 15, 2007

News from Shanghai

Shanghai Jiaotong University has just released its 2007 Academic Ranking of World Universities. The top 100 can be found here. The top 500 are here.

I shall add a few comments in a day or so. Meanwhile here are the top 20.

1 Harvard Univ
2 Stanford Univ
3 Univ California - Berkeley
4 Univ Cambridge
5 Massachusetts Inst Tech (MIT)
6 California Inst Tech
7 Columbia Univ
8 Princeton Univ
9 Univ Chicago
10 Univ Oxford
11 Yale Univ
12 Cornell Univ
13 Univ California - Los Angeles
14 Univ California - San Diego
15 Univ Pennsylvania
16 Univ Washington - Seattle
17 Univ Wisconsin - Madison
18 Univ California - San Francisco
19 Johns Hopkins Univ
20 Tokyo Univ

Thursday, December 13, 2007

Comment on the THES-QS Rankings

There is an excellent article by Andrew Oswald of Warwick University in yesterday's Independent. It is worth quoting a large chunk of it here.

First, 2007 saw the release, by a UK commercial organisation, of an unpersuasive world university ranking. This put Oxford and Cambridge at equal second in the world. Lower down, at around the bottom of the world top-10, came University College London, above MIT. A university with the name of Stanford appeared at number 19 in the world. The University of California at Berkeley was equal to Edinburgh at 22 in the world.

Such claims do us a disservice. The organisations who promote such ideas should be unhappy themselves, and so should any supine UK universities who endorse results they view as untruthful. Using these league table results on your websites, universities, if in private you deride the quality of the findings, is unprincipled and will ultimately be destructive of yourselves, because if you are not in the truth business what business are you in, exactly?

Worse, this kind of material incorrectly reassures the UK government that our universities are international powerhouses.

Let us instead, a bit more coolly, do what people in universities are paid to do. Let us use reliable data to try to discern the truth. In the last 20 years, Oxford has won no Nobel Prizes. (Nor has Warwick.) Cambridge has done only slightly better. Stanford University in the United States, purportedly number 19 in the world, garnered three times as many Nobel Prizes over the past two decades as the universities of Oxford and Cambridge did combined. Worryingly, this period since the mid 1980s coincides precisely with the span over which UK universities have had to go through government Research Assessment Exercises (RAEs). To hide away from such inconvenient data is not going to do our nation any good. If John Denham, the Secretary of State for Innovation, Universities and Skills, is reading this, perhaps, as well as doing his best to question the newspapers that print erroneous world league tables, he might want to cut out these last sentences, blow them up to 100 point font, and paste them horizontally in a red frame on his bedroom ceiling, so that he sees them every time he wakes up or gets distracted from other duties. In his shoes, or out of them, this decline would be my biggest concern.

Since the 1980s the UK's Nobel-Prize performance has fallen off. Over the last 20 years, the US has been awarded 126 Nobel Prizes compared to Britain's nine.


The THES-QS rankings have done great damage to university education in Asia and Australia where they have distorted national education policies, promoted an emphasis on research at the expense of teaching and induced panic about non-existent decline in some countries while encouraging false complacency about quality in others.

In the United Kingdom they have generally been taken as proof that British universities are the equals of the Ivy League and Californian universities, a claim that is plausible only if the rankings' numerous errors, biases and fluctuations are ignored.

I hope that Chris Patten and others who are in denial about the comparative decline of British universities will read Professor Oswald's article.

Monday, March 05, 2007

Top Universities Ranked by Research Impact

The THES – QS World Universities Rankings, and its bulky offspring, Guide to the World’s Top Universities (London: QS Quacquarelli Symonds), are strange documents, full of obvious errors and repeated contradictions. Thus, we find that the Guide has data about student-faculty ratios that are completely different from those used in the top 200 rankings published in the THES, while talking about how robust such a measure is. Also, if we look at the Guide we notice that for each of the top 100 universities it provides a figure for research impact, that is, the number of citations divided by the number of papers. In other words it indicates how interesting other researchers found the research of each institution. These figures completely undermine the credibility of the “peer review” as a measure of research expertise.

The table below is a re-ranking of the THES top 100 universities for 2006 by research impact and therefore by overall quality of research. This is not by any means a perfect measure. For a start, the natural sciences and medicine do a lot more citing than other disciplines and this might favor some universities more than others. Nonetheless it is very suggestive, and it is so radically different from the THES-QS peer review and the overall ranking that it provides further evidence of the invalidity of the latter.
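The re-ranking below is nothing more sophisticated than dividing citations by papers and sorting. Here is a minimal Python sketch; the citation and paper counts are invented, chosen only so the ratios match the figures in the table.

# Research impact = citations per paper; rank universities by it in descending order.

universities = [
    ("Harvard", 413_000, 10_000),     # (name, citations, papers) -- hypothetical counts
    ("Oxford", 275_000, 10_000),
    ("Cambridge", 258_000, 10_000),
]

by_impact = sorted(((name, cites / papers) for name, cites, papers in universities),
                   key=lambda pair: pair[1], reverse=True)

for rank, (name, impact) in enumerate(by_impact, start=1):
    print(rank, name, round(impact, 1))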

Cambridge and Oxford, ranked second and third by THES-QS, only manage to achieve thirtieth and twenty-first places for research impact.

Notice that in comparison to their research impact scores the following universities are overrated by THES-QS: Imperial College London, Ecole Normale Superieure, Ecole Polytechnique, Peking, Tsing Hua, Tokyo, Kyoto, Hong Kong, Chinese University of Hong Kong, National University of Singapore, Nanyang Technological University, Australian National University, Melbourne, Sydney, Monash, Indian Institutes of Technology, Indian Institutes of Management.

The following are underrated by THES-QS: Washington University in St Louis, Pennsylvania State University, University of Washington, Vanderbilt, Case Western Reserve, Boston, Pittsburgh, Wisconsin, Lausanne, Erasmus, Basel, Utrecht, Munich, Wageningen, Birmingham.

The number on the left is the ranking by research impact, i.e. the number of citations divided by the number of papers. The number to the right of the universities is the research impact. The number in brackets is the overall ranking in the THES-QS 2006 rankings.

1 Harvard 41.3 (1 )
2 Washington St Louis 35.5 (48 )
3 Yale 34.7 (4 )
4 Stanford 34.6 (6 )
5 Caltech 34 (7 )
6 Johns Hopkins 33.8 (23 )
7 UC San Diego 33 (44)
8 MIT 32.8 (4)
9= Pennsylvania State University 30.8 (99)
9= Princeton 30.8 (10)
11 Chicago 30.7 (11)
12= Emory 30.3 (56)
12= Washington 30.3 (84)
14 Duke 29.9 (13 )
15 Columbia 29.7 (12 )
16 Vanderbilt 29.4 (53)
17 Lausanne 29.2 (89 )
18 University of Pennsylvania 29 (26)
19 Erasmus 28.3 (92)
20 UC Berkeley 28 (8)
21= UC Los Angeles 27.5 (31)
21= Oxford 27.5 (3)
23 Case Western Reserve 27.4 (60)
24 Boston 27.2 (66)
25 Pittsburgh 27.1 (88 )
26 Basel 26.7 (75 )
27= New York University 26.4 (43)
27= Texas at Austin 26.4 (32 )
29 Geneva 26.2 (39 )
30= Northwestern 25.8 (42 )
30= Cambridge 25.8 (2)
32 Dartmouth College 25.6 (61)
33 Cornell 25.5 (15 )
34 Rochester 25.1 (48 )
35 Michigan 25 (29)
36 University College London 24.9 (25 )
37 Brown 24.1 (54)
38 McGill 23.6 (21)
39 Edinburgh 23.4 (33 )
40 Toronto 23 (27 )
41 Amsterdam 21.6 (69 )
42 Wisconsin 21.5 (79 )
43= Utrecht 21.4 (95)
43= Ecole Normale Superieure Lyon 21.4 (72)
45 ETH Zurich 21.2 (24 )
46 Heidelberg 20.8 (58 )
47 British Columbia 20.6 (50 )
48 Carnegie Mellon 20.5 (35 )
49= Imperial College London 20.4 (9)
49= Ecole Normale Superieure Paris 20.4 (18 )
51 King’s College London 20.1 (48 )
52 Bristol 20 (64)
53= Trinity College Dublin 19.9 (78 )
53= Copenhagen 19.9 (54 )
53= Glasgow 19.9 (81 )
56 Munich 19.8 (98)
57 Technical University Munich 19.4 (82 )
58= Birmingham 19.1 (90)
58= Catholic University of Louvain 19.1 (76 )
60 Tokyo 18.7 (19)
61 Illinois 18.6 (77 )
62 Osaka 18.4 (70)
63 Wageningen 18.1 (97 )
64 Kyoto 18 (29 )
65 Australian National University 17.9 (16 )
66 Vienna 17.9 (87)
67 Manchester 17.3 (40 )
68 Catholic University of Leuven 17 (96)
69= Melbourne 16.8 (22 )
69= New South Wales 16.8 (41 )
71 Nottingham 16.6 (85 )
72 Sydney 15.9 (35)
73= Pierre-et-Marie-Curie 15.7 (93 )
73= Monash 15.7 (38)
75 Otago 15.5 (79 )
76 Queensland 15.3 (45)
77 Auckland 14.8 (46 )
78= EPF Lausanne 14.3 (64 )
78= MacQuarie 14.3 (82 )
78= Leiden 14.3 (90 )
81 Eindhoven University of Technology 13.4 (67)
82= Warwick 13.3 (73 )
82= Delft University of Technology 13.3 (86)
84 Ecole Polytechnique 13.2 (37 )
85 Hong Kong 12.6 (33 )
86 Hong Kong Uni Science and Technology 12.2 (58)
87 Chinese University of Hong Kong 11.9 (50 )
88 Seoul National University 10.9 (63)
89 National University of Singapore 10.4 (19 )
90 National Autonomous University of Mexico 9.8 (74)
91 Peking 8 (14)
92 Lomonosov Moscow State 6 (93 )
93 Nanyang Technological University 5.6 (61)
94 Tsing Hua 5.4 (28 )
95 LSE 4.4 (17 )
96 Indian Institutes of Technology 3 (57 )
97 SOAS 2.5 (70 )
98 Indian Institutes of Management 1.9 (68)
Queen Mary London -- (99 )
Sciences Po -- (52)

Wednesday, August 10, 2011

America's Best Colleges

As the world waits anxiously for the publication of Princeton Review's Stone Cold Sober School Rankings (like everybody else I am praying the US Air Force Academy stays in the top ten), there are a few less important rankings like Shanghai's ARWU or the Forbes/CCAP (Center for College Affordability and Productivity) Rankings to study.

The latter, which have just been released, are designed for student consumers. The components are student satisfaction, post-graduate success, student debt, four-year graduation rate and competitive awards. They clearly fulfill a need that other rankings do not. It is possible that some of the indicators could be adopted for an international ranking. The top 10, a mix of Ivy League schools, small liberal arts colleges and service academies, are:

1.  Williams College
2.  Princeton
3.  US Military Academy
4.  Amherst College
5. Stanford
6.  Harvard
7.  Haverford College
8.  Chicago
9.  MIT
10. US Air Force Academy

Wednesday, May 29, 2013

Here is the full text of my article on the QS Subject Rankings published in the Philippine Daily Inquirer.

 

The QS university rankings by subject: Warning needed

It is time for the Philippines to think about constructing its own objective and transparent ranking or rating systems for its colleges and universities that would learn from the mistakes of the international rankers.

The ranking of universities is getting to be big business these days. There are quite a few rankings appearing from Scimago, Webometrics, University Ranking of Academic Performance (from Turkey), the Taiwan Rankings, plus many national rankings.

No doubt there will be more to come.

In addition, the big three of the ranking world—Quacquarelli Symonds (QS), Times Higher Education and Shanghai Jiao Tong University’s Academic Ranking of World Universities—are now producing a whole range of supplementary products, regional rankings, new university rankings, reputation rankings and subject rankings.

There is nothing wrong, in principle, with ranking universities. Indeed, it might in some ways be a necessity. The trouble is that there are very serious problems with the rankings produced by QS, even though they seem to be better known in Southeast Asia than any of the others.
This is especially true of the subject rankings.

No new data

The QS subject rankings, which have just been released, do not contain new data. They are mostly based on data collected for last year’s World University Rankings—in some cases extracted from the rankings and, in others, recombined or recalculated.

There are four indicators used in these rankings. They are weighted differently for the different subjects and, in two subjects, only two of the indicators are used.

The four indicators are:

1. A survey of academics, or at least of people who claim to be academics or used to be academics, drawn from a variety of sources. This is the same indicator used in the world rankings. Respondents were asked to name the best universities for research.
2. A survey of employers, who appear to be anyone who chooses to describe himself or herself as an employer or a recruiter.
3. The number of citations per paper. This is a change from the world rankings, where the calculation was citations per faculty member.
4. The h-index. This is easier to illustrate than to define. If a university publishes one paper and that paper is cited once, it gets an index of one. If it publishes two or more papers and at least two of them are cited at least twice each, the index is two, and so on. It is a way of combining the quantity of research with its quality, as measured by influence on other researchers (see the sketch after this list).
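For anyone who wants to see the definition in action, here is a minimal sketch in Python. The citation counts are invented for illustration and the code is my own toy example, not anything QS publishes.

# Minimal sketch: computing an h-index from a list of per-paper citation counts.
def h_index(citations):
    # Largest h such that at least h papers have at least h citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([1]))               # one paper cited once -> 1
print(h_index([2, 2]))            # two papers cited twice each -> 2
print(h_index([10, 5, 3, 1, 0]))  # mixed record -> 3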

Out of these four indicators, three are about research and one is about the employability of a university’s graduates.

These rankings are not at all suitable for use by students wondering where they should go to study, whether at undergraduate or graduate level.

The only part that could be of any use is the employer review and that has a weight ranging from 40 percent for accounting and politics to 10 percent for arts and social science subjects, like history and sociology.

But even if the rankings are to be used just to evaluate the quantity or quality of research, they are frankly of little use. They are dominated by the survey of academic opinion, which is not of professional quality.

There are several ways in which people can take part in the survey. They can be nominated by a university, they can sign up themselves, they can be recommended by a previous respondent or they can be asked because they have subscribed to an academic journal or an online database.
Apart from checking that they have a valid academic e-mail address, it is not clear whether QS makes any attempt to check whether the survey respondents are really qualified to make any judgements about research.

Not plausible

The result is that the academic survey and also the employer survey have produced results that do not appear plausible.

In recent years, there have been some odd results from QS surveys. My personal favorite is the New York University Tisch School of the Arts, which set up a branch in Singapore in 2007 and graduated its first batch of students from a three-year film course in 2010. In the QS Asian University Rankings of that year, the Singapore branch got zero for the other criteria (presumably the school did not submit data), but it was ranked 149th in Asia for academic reputation and 114th for employer reputation.

Not bad for a school that had yet to produce any graduates when the survey was taken early in the year.

In all of the subject rankings this year, the two surveys account for at least half of the total weighting and, in two cases, Languages and English, all of it.

Consequently, while some of the results for some subjects may be quite reasonable for the world top 50 or the top 100, after that they are sometimes downright bizarre.

The problem is that, although QS has a lot of respondents worldwide, at the subject level there can be very few. In pharmacy, for example, there are only 672 respondents for the academic survey and, in materials science, only 146 for the employer survey. Since the leading global players will take a large share of the responses, universities further down the list may be getting only a handful of responses each. The result is that the order of universities in any subject in a single country like the Philippines can be decided by just one or two responses to the surveys.
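To get a sense of how fragile an ordering based on a handful of responses can be, here is a toy simulation in Python. It is my own illustration, not QS's method: the 60 percent preference figure and the sample sizes are invented for the example.

# Toy illustration: how often does the genuinely less-preferred of two
# hypothetical universities come out ahead when only a few respondents
# are sampled from a population in which 60% prefer university A?
import random

random.seed(0)  # reproducible runs

def a_share(p_a, n_respondents):
    # Each respondent names A with probability p_a, otherwise B;
    # return A's share of the mentions.
    mentions_a = sum(random.random() < p_a for _ in range(n_respondents))
    return mentions_a / n_respondents

trials = 10_000
for n in (5, 20, 200):
    flips = sum(a_share(0.6, n) < 0.5 for _ in range(trials))
    print(f"{n:>3} respondents: B ranked above A in {100 * flips / trials:.1f}% of trials")

With five respondents the "wrong" university comes out on top in roughly three trials out of ten; with a couple of hundred respondents it almost never does.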

Another problem is that, after a few obvious choices like Harvard, MIT (Massachusetts Institute of Technology) and Tokyo, most respondents probably rely on a university’s general reputation, and that can lead to all sorts of distortions.

Many of the subject rankings at the country level are quite strange. Sometimes they even include universities that do not offer courses in that subject. We have already seen that there are universities in the Philippines that are ranked for subjects that they do not teach.

Somebody might say that maybe they are doing research in a subject while teaching in a department with a different name, such as an economic historian teaching in the economics department but publishing in history journals and getting picked up by the academic survey for history.
Maybe, but it would not be a good idea for someone who wants to study history to apply to that particular university.

Another example is from Saudi Arabia, where King Fahd University of Petroleum and Minerals was apparently top for history, even though it does not have a history department or indeed anything where you might expect to find a historian. There are several universities in Saudi Arabia that may not teach history very well but at least they do actually teach it.

These subject rankings may have a modest utility for students who can pick and choose among top global universities and need some idea whether they should study engineering at SUNY (State University of New York) Buffalo or Leicester (United Kingdom), or linguistics at Birmingham or Michigan.

But they are of very little use for anyone else.


Thursday, August 24, 2017

Guest Post by Pablo Achard

This post is by Pablo Achard of the University of Geneva. It refers to the Shanghai subject rankings. However, the problem of outliers in subject and regional rankings is one that affects all the well-known rankings and will probably become more important over the next few years.


How a single article is worth 60 places

We can’t repeat it enough: an indicator is bad when a small variation in the input is overly amplified in the output. This is the case when indicators are based on very few events.

I recently came across this issue (again) with Shanghai’s subject ranking of universities. The universities of Geneva and Lausanne (Switzerland) share the same School of Pharmacy, and a huge share of the published articles in this discipline carry the names of both institutions. But in the “Pharmacy and pharmaceutical sciences” ranking, one is ranked between the 101st and 150th position while the other is 40th. Where does this difference come from?

Comparing the scores obtained under each category gives a clue:

Indicator       Geneva    Lausanne    Weight in the final score
PUB             46        44.3        1
CNCI            63.2      65.6        1
IC              83.6      79.5        0.2
TOP             0         40.8        1
AWARD           0         0           1
Weighted sum    125.9     166.6


So the main difference between the two institutions is the score for “TOP”. In fact, the difference in the weighted sums (40.7) is almost equal to the value of that score (40.8). If Geneva and Lausanne had the same TOP score, they would be ranked 40th and 41st.
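For readers who want to check the arithmetic, here is a minimal sketch in Python that reproduces the weighted sums from the table above. The scores and weights are those quoted in this post; the code itself is mine, not ShanghaiRanking's.

# Reproduce the weighted sums for the Pharmacy subject ranking example.
weights  = {"PUB": 1, "CNCI": 1, "IC": 0.2, "TOP": 1, "AWARD": 1}
geneva   = {"PUB": 46.0, "CNCI": 63.2, "IC": 83.6, "TOP": 0.0,  "AWARD": 0.0}
lausanne = {"PUB": 44.3, "CNCI": 65.6, "IC": 79.5, "TOP": 40.8, "AWARD": 0.0}

def weighted_sum(scores):
    return sum(weights[k] * scores[k] for k in weights)

print(round(weighted_sum(geneva), 1))    # 125.9
print(round(weighted_sum(lausanne), 1))  # 166.6

# Take away Lausanne's single top-journal article (TOP back to 0)
# and the difference in weighted sums all but disappears:
print(round(weighted_sum(dict(lausanne, TOP=0.0)), 1))  # 125.8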

Surprisingly, a look at other institutions for that TOP indicator shows only five different values: 0, 40.8, 57.7, 70.7 and 100. According to the methodology page of the ranking, “TOP is the number of papers published in Top Journals in an Academic Subject for an institution during the period of 2011-2015. Top Journals are identified through ShanghaiRanking’s Academic Excellence Survey […] The list of the top journals can be found here […] Only papers of ‘Article’ type are considered.”
Looking deeper, there is just one journal in this list for Pharmacy: NATURE REVIEWS DRUG DISCOVERY. As its name indicates, this recognized journal mainly publishes ‘reviews’. A search on Web of Knowledge shows that in the period 2011-2015, only 63 ‘articles’ were published in this journal. That means a small variation in the input is overly amplified.

I searched for several institutions and rapidly found the rule: Harvard published 4 articles during these five years and got a score of 100; MIT published 3 articles and got a score of 70.7; 10 institutions published 2 articles and got 57.7; and about 50 institutions published 1 article and got 40.8.

I still don’t get why this score is so nonlinear. But Lausanne published a single article in NATURE REVIEWS DRUG DISCOVERY and Geneva none (it published ‘reviews’ and ‘letters’ but no ‘articles’), and that small difference led to a gap of at least 60 places between the two institutions.


This is of course just one example of what happens too often: rankers want to publish sub-rankings and end up with indicators whose outliers cannot be absorbed into large distributions. One article, one prize or one co-author in a large and productive collaboration suddenly makes a very large difference in final scores and ranks.