Tuesday, September 05, 2006

The Fastest Way into the THES TOP 200

In a little while the latest edition of the THES rankings will be out. There will be protests from those who fail to make the top 200, 300 or 500 and much self-congratulation from those included. Also, of course, THES and QS, THES’s consultants, directly or indirectly, will make a lot of money from the whole business.

If you search through the web you will find that QS and THES have been quite busy over the last year or so promoting their rankings and giving advice about what to do to get into the top 200. Some of their advice is not very helpful. Thus, Nunzio Quacquarelli, director of QS, told a seminar in Kuala Lumpur in November 2005 that producing more quality research was one way of moving up in the rankings. This is not necessarily a bad thing, but it will be at least a decade before any quality research can be completed, written up, submitted for publication, revised, finally accepted, published, and then cited by another researcher whose work goes through the same process. Only then will research start to push a university into the top 200 or 100 by boosting its score for citations per faculty.

Something less advertised is that once a university has got onto the list of 300 universities (so far this has been decided by peer review) there is a very simple way of boosting a university’s position in the rankings. It is also not unlikely that several universities have already realized this.

Pause for a minute and review the THES methodology. They gave a weighting of 40 per cent to a review of universities by other academics, 10 per cent to a rating by employers, 20 per cent to the ratio of faculty to students, 10 per cent to the proportion of international faculty and students, and 20 per cent to the number of citations per faculty. In 2005 the top-scoring institution in each category was given a score of 100 and the scores of the others were calibrated accordingly.
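To make the arithmetic concrete, here is a minimal sketch (in Python) of how a weighted, top-indexed composite of this kind might be computed. The weights are the ones just listed; the university names and raw indicator values are invented purely for illustration, and folding international faculty and students into a single indicator is a simplifying assumption.

```python
# Sketch of a THES-style composite: each indicator is rescaled so the
# top scorer gets 100, then the indicators are combined with the
# published weights. All raw figures below are invented for illustration.

WEIGHTS = {
    "peer_review": 0.40,
    "employer_review": 0.10,
    "faculty_student": 0.20,
    "international": 0.10,   # international faculty + students, combined here for simplicity
    "citations_per_faculty": 0.20,
}

# Invented raw indicator values for three hypothetical universities.
raw = {
    "Univ A": {"peer_review": 95, "employer_review": 80, "faculty_student": 0.12,
               "international": 0.30, "citations_per_faculty": 9.0},
    "Univ B": {"peer_review": 60, "employer_review": 70, "faculty_student": 0.20,
               "international": 0.10, "citations_per_faculty": 2.5},
    "Univ C": {"peer_review": 40, "employer_review": 50, "faculty_student": 0.09,
               "international": 0.05, "citations_per_faculty": 1.0},
}

def composite(raw_scores, weights):
    """Rescale each indicator so the top scorer gets 100, then take the weighted sum."""
    tops = {ind: max(u[ind] for u in raw_scores.values()) for ind in weights}
    return {name: sum(weights[ind] * 100 * vals[ind] / tops[ind] for ind in weights)
            for name, vals in raw_scores.items()}

for name, score in sorted(composite(raw, WEIGHTS).items(), key=lambda x: -x[1]):
    print(f"{name}: {score:.1f}")
```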

Getting back to boosting ratings, first take a look at the 2004 and 2005 scores for citations per faculty. Comparison is a bit difficult because the top scorer was given a score of 400 in 2004 but 100 in 2005 (it is MIT in both cases). What immediately demands attention is that there are some very dramatic changes between 2004 and 2005.

For example, Ecole Polytechnique in Paris fell from 14.75 (dividing the THES figures by four because top-ranked MIT was given a score of 400 in 2004) to 4, ETH Zurich from 66.5 to 8, and McGill in Canada from 21 to 8.

This at first sight is more than a bit strange. The figures are supposed to refer to ten-year periods, so that in 2005 citations for the earliest year would be dropped and those for another year added. You would not expect very much change from year to year, since the figures for 2004 and 2005 overlap a great deal.

But it is not only citations that we have to consider. The score is actually based on citations per faculty member. So, if the number of faculty goes up and the number of citations remains the same then the score for citations per faculty goes down.

This in fact is what happened to a lot of universities. If we look at the score for citations per faculty and then the score for faculty-student ratio there are several cases where they change proportionately but in opposite directions.

So, going back to the three examples given above, between 2004 and 2005 Ecole Polytechnique went up from 23 to 100 to become the top scorer for faculty-student ratio, ETH Zurich from 4 to 37, and McGill from 23 to 42. Notice that the rise in the faculty-student ratio score is roughly proportionate to the fall in the score for citations per faculty.
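A toy calculation shows why the two scores move in opposite directions when the reported faculty count changes. The numbers below are invented; only the arithmetic matters.

```python
# Toy illustration of "faculty inflation": if reported faculty doubles while
# citations and students stay fixed, the raw citations-per-faculty figure
# halves and the raw faculty-student ratio doubles. All numbers are invented.

citations = 20_000
students = 10_000

for faculty in (1_000, 2_000):   # before and after inflating the reported count
    cites_per_faculty = citations / faculty
    faculty_per_student = faculty / students
    print(f"faculty={faculty}: citations/faculty={cites_per_faculty:.1f}, "
          f"faculty/student ratio={faculty_per_student:.2f}")
```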

I am not the first person to notice the apparent dramatic collapse of research activity at ETH Zurich. Norbert Staub in ETH Life International was puzzled by this. It looks as though it was not that ETH Zurich stopped doing research but that it apparently acquired something like eight times as many teachers.

It seems pretty obvious that what happened to these institutions is that the apparent number of faculty went up between 2004 and 2005. This led to a rise in the score for faculty-student ratio and a fall in the score for citations per faculty.

You might ask, so what? If a university goes up on one measure and goes down on another surely the total score will remain unchanged.

Not always. THES indexed the scores to the top-scoring university, so that in 2005 the top scorer gets 100 for both faculty-student ratio and citations per faculty. But the gap between the top university for faculty-student ratio and run-of-the-mill places in, say, the second hundred is much smaller than it is for citations per faculty. For example, take a look at the faculty-student scores of the universities starting at position 100: 15, 4, 13, 10, 23, 16, 13, 29, 12, 23. Then look at the scores for citations per faculty: 7, 1, 8, 6, 0, 12, 9, 14, 12, 7.

That means that many universities can, like Ecole Polytechnique, gain much more by increasing their faculty-student ratio score than they lose by reducing their score for citations per faculty. Not all, of course. ETH Zurich suffered badly as a result of this faculty inflation.
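Since both indicators carry a 20 per cent weight, the net effect on the composite is roughly 0.2 times the rise in the faculty-student score minus 0.2 times the fall in the citations score. Here is a rough check with the figures quoted above (2004 citation scores rescaled by dividing by four, and ignoring the fact that the index bases and the other indicators also shifted between years):

```python
# Net effect of "faculty inflation" on the composite, using the figures
# quoted above. Both indicators carry a 20% weight, so the net change is
# 0.2 * (rise in faculty-student score) + 0.2 * (change in citations score).

WEIGHT = 0.20

cases = {
    # name: (citations 2004 rescaled, citations 2005, fac-student 2004, fac-student 2005)
    "Ecole Polytechnique": (14.75, 4, 23, 100),
    "ETH Zurich":          (66.5,  8,  4,  37),
    "McGill":              (21,    8, 23,  42),
}

for name, (c04, c05, f04, f05) in cases.items():
    net = WEIGHT * (f05 - f04) + WEIGHT * (c05 - c04)
    print(f"{name}: net change in composite of roughly {net:+.1f} points")

# Ecole Polytechnique comes out well ahead, McGill slightly ahead,
# and ETH Zurich clearly behind, matching the pattern described above.
```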

So what is going on? Are we really to believe that in 2005 Ecole Polytechnique quadrupled its teaching staff, ETH Zurich increased its eightfold, and McGill's nearly doubled? This is totally implausible. The only explanation that makes any sort of sense is that either QS or the institutions concerned were counting their teachers differently in 2004 and 2005.

The likeliest explanation for Ecole Polytechnique's remarkable change is simply that in 2004 only full-time staff were counted but in 2005 part-time staff were counted as well. It is well known that many staff of the Grandes Ecoles of France are employed by neighbouring research institutes and universities, although exactly how many is hard to find out. If anyone can suggest any other explanation, please let me know.

Going through the rankings, we find that there are quite a few universities affected by what we might call “faculty inflation”, their faculty-student ratio scores jumping accordingly: EPF Lausanne from 13 to 64, Eindhoven from 11 to 54, the University of California at San Francisco from 39 to 91, Nagoya from 19 to 35, and Hong Kong from 8 to 17.

So, having got through the peer review, this is how to get a boost in the rankings. Just inflate the number of teachers and deflate the number of students.

Here are some ways to do it. Wherever possible, hire part-time teachers but do not differentiate between full-time and part-time. Announce that every graduate student is a teaching assistant, even if they just have to do a bit of marking, and count them as teaching staff. Make sure anyone who leaves is designated emeritus or emerita and kept on the books. Never sack anyone; keep him or her suspended instead. Count everybody in branch campuses and off-campus programmes. Classify all administrative appointees as teaching staff.

It will also help to keep the official number of students down. A few possible ways: do not count part-time students, do not count branch campuses, and count enrolment at the end of the semester, when some students have dropped out.

Wednesday, August 30, 2006

Comparing the Newsweek and THES Top 100 Universities

It seems to be university ranking season again. Shanghai Jiao Tong University has just come out with their 2006 edition and it looks like there will be another Times Higher Education Supplement (THES) ranking quite soon. Now, Newsweek has joined in with its own list of the world’s top 100 universities.

The Newsweek list is, for the most part, not original but it does show something extremely interesting about the THES rankings.

What Newsweek did was to combine bits of the THES and Shanghai rankings (presumably for 2005, although Newsweek does not say). They took three components from the Shanghai index: the number of highly cited researchers, the number of articles in Nature and Science, and the number of articles in the ISI Social Sciences and Arts and Humanities Indices (the SJTU ranking actually also included the Science Citation Index). These were given a weighting of 50 per cent. Then they took four components from the THES rankings: percentage of international faculty, percentage of international students, faculty-student ratio and citations per faculty. They also added a score derived from the number of books in the university library.
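For readers who want to see how such a combination works mechanically, here is a sketch. The 50 per cent weight for the three Shanghai components is stated above; the equal split within each group and all of the scores are assumptions made purely for illustration.

```python
# Sketch of a Newsweek-style combination as described above. The 50% weight
# for the three Shanghai components is from the text; splitting the remaining
# 50% equally across the four THES components and the library count is an
# assumption here, and all the scores are invented.

shanghai_components = ["highly_cited", "nature_science", "ssci_ahci_articles"]
other_components = ["intl_faculty", "intl_students", "faculty_student",
                    "citations_per_faculty", "library_volumes"]

weights = {c: 0.50 / len(shanghai_components) for c in shanghai_components}
weights.update({c: 0.50 / len(other_components) for c in other_components})

# Invented, already-normalised (0-100) scores for one hypothetical university.
scores = {"highly_cited": 70, "nature_science": 55, "ssci_ahci_articles": 60,
          "intl_faculty": 40, "intl_students": 35, "faculty_student": 25,
          "citations_per_faculty": 50, "library_volumes": 80}

overall = sum(weights[c] * scores[c] for c in weights)
print(f"combined score: {overall:.1f}")
```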

Incidentally, it is a bit irritating that Newsweek, like some other commentators, refers to the THES as The Times of London. The THES has in fact long been a separate publication and is no longer even owned by the same company as the newspaper.

The idea of combining data from different rankings is not bad, although Newsweek does not indicate why they assign the weightings that they do. It is a shame, though, that they keep THES’s data on international students and faculty and faculty-student ratio, which do not show very much and are probably easy to manipulate.

Still, it seems that this ranking, as far as it goes, is probably better than either the THES or the Shanghai ones, considered separately. The main problem is that it includes only 100 universities and therefore tells us nothing at all about the thousands of others.

The Newsweek ranking is also notable for what it leaves out. It does not include the THES peer review, which accounted for 50 per cent of the ranking in 2004 and 40 per cent in 2005, or the rating by employers, which contributed 10 per cent in 2005. If we compare the top 100 universities in the THES ranking with Newsweek's top 100, some very interesting patterns emerge. Essentially, the Newsweek ranking tells us what happens if we take the THES peer review out of the equation.

First, a lot of universities have a much lower position on the Newsweek ranking than they do on the THES's, and some even disappear altogether from the former. But the decline is not random by any means. All four French institutions suffer a decline. Of the 14 British universities, 2 go up, 2 keep the same place and 10 go down. Altogether 26 European universities fall and five (three of them from Switzerland) rise.

The four Chinese (PRC) universities in the THES top 100 disappear altogether from the Newsweek top 100 while most Asian universities decline. Ten Australian universities go down and one goes up.


There are some truly spectacular tumbles. Peking University (which THES likes to call Beijing University), the best university in Asia and number 15 in the world according to THES, is out altogether. The Indian Institutes of Technology have also gone. Monash falls from 33 to 73, Ecole Polytechnique in Paris from 10 to 43, and Melbourne from 19 to 53.

So what is going on? Basically, it looks as though the function of the THES peer and employer reviews was to allow universities from Australia, Europe (especially France and the United Kingdom) and Asia (especially China) to do much better than they would on any other possible measure or combination of measures.

Did THES see something that everybody else was missing? It is unlikely. The THES peer reviewers are described as experts in their fields and as research-active academics. They are not described as experts in teaching methodology or as involved in teaching or curricular reform. So it seems that this is supposed to be a review of the research standing of universities, not of teaching quality or anything else. And for some countries it is quite a good one. For North America, the United Kingdom, Germany, Australia and Japan, there is a high correlation between the scores for citations per faculty and the peer review. For other places it is not so good. There is no correlation between the peer review and citations for Asia overall, China, France, or the Netherlands. For the whole of the THES top 200 there is only a weak correlation.
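The check being described is simply a correlation between two published score columns. Here is a sketch of the calculation; the score pairs below are invented placeholders, since the point is only to show what one would run on the real THES columns for each country or region.

```python
# Sketch of the kind of check described above: the Pearson correlation between
# peer-review scores and citations-per-faculty scores for a group of universities.
# The score pairs are invented placeholders; the real exercise would use the
# published THES columns for each country or region.

from statistics import correlation  # available in Python 3.10+

peer_review = [95, 80, 60, 55, 40, 30, 25, 20]
citations_per_faculty = [90, 70, 65, 40, 35, 20, 15, 10]

r = correlation(peer_review, citations_per_faculty)
print(f"Pearson r = {r:.2f}")  # a value near +1 means the peer review tracks citations
```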

So a high score on the peer review does not necessarily reflect a high research profile and it is hard to see that it reflects anything else.

It appears that the THES peer review, and therefore the ranking as a whole, was basically a kind of ranking gerrymandering in which the results were influenced by the method of sampling. QS took about a third of its peers each from North America, Europe and Asia and then asked them to name the top universities in their geographic areas. No wonder we have large numbers of European, Asian and especially Australian universities in the top 200. Had THES surveyed an equal number of reviewers from Latin America and Africa (“major cultural regions”?), the results would have been different. Had they asked reviewers to nominate universities outside their own countries (surely quality means being known in other countries or continents?), they would have been even more different.

Is it entirely a coincidence that the regions disproportionately favoured by the peer review, namely the UK, France, China and Australia, are precisely those where QS, the consultants who carried out the survey, have offices, and precisely those that are active in the production of MBAs and the lucrative globalised trade in students, teachers and researchers?

Anyway, it will be interesting to see if THES is going to do the same sort of thing this year.