Sunday, September 19, 2010

More on the THE Citations Indicator

See this comment on a previous post:


As you can see from the following paragraph (http://www.timeshighereducation.co.uk/world-university-rankings/2010-2011/analysis-methodology.html), Thomson has normalised citations against each of their 251 subject categories (it's extremely difficult to get this data directly from WOS). They have great experience in this kind of analysis. To get an idea, check their in-cites website http://sciencewatch.com/about/met/thresholds/#tab3 where they have citation thresholds for the last 10 years against broad fields.

Paragraph mentioned above:
"Citation impact: it's all relative
Citations are widely recognised as a strong indicator of the significance and relevance — that is, the impact — of a piece of research.
However, citation data must be used with care as citation rates can vary between subjects and time periods.
For example, papers in the life sciences tend to be cited more frequently than those published in the social sciences.
The rankings this year use normalised citation impact, where the citations to each paper are compared with the average number of citations received by all papers published in the same field and year. So a paper with a relative citation impact of 2.0 is cited twice as frequently as the average for similar papers.
The data were extracted from the Thomson Reuters resource known as Web of Science, the largest and most comprehensive database of research citations available.
Its authoritative and multidisciplinary content covers more than 11,600 of the highest-impact journals worldwide. The benchmarking exercise is carried out on an exact level across 251 subject areas for each year in the period 2004 to 2008.
For institutions that produce few papers, the relative citation impact may be significantly influenced by one or two highly cited papers and therefore it does not accurately reflect their typical performance. However, institutions publishing fewer than 50 papers a year have been excluded from the rankings.
There are occasions where a groundbreaking academic paper is so influential as to drive the citation counts to extreme levels — receiving thousands of citations. An institution that contributes to one of these papers will receive a significant and noticeable boost to its citation impact, and this reflects such institutions' contribution to globally significant research projects."
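The normalisation described in the quoted paragraph can be sketched in a few lines. This is my own illustrative reading, not Thomson Reuters' actual code, and the field/year averages are invented:

```python
# Sketch of "relative citation impact": a paper's citations are compared
# with the world average for papers published in the same field and year.
# The averages below are invented for illustration.

def relative_impact(citations, field, year, world_averages):
    """Citations to a paper divided by the average for its field and year."""
    return citations / world_averages[(field, year)]

world_averages = {
    ("life sciences", 2006): 10.0,
    ("social sciences", 2006): 2.5,
}

# A life-sciences paper with 20 citations scores 2.0: cited twice as
# often as the average for similar papers.
life = relative_impact(20, "life sciences", 2006, world_averages)
# A social-science paper needs only 5 citations for the same score,
# which is the point of normalising by field.
social = relative_impact(5, "social sciences", 2006, world_averages)
print(life, social)  # 2.0 2.0
```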


The quotation is from the bottom of the methodology page. It is easy to miss since it is separate from the general discussion of the citations indicator.

I will comment on Simon Pratt's claim that "An institution that contributes to one of these papers will receive a significant and noticeable boost to its citation impact, and this reflects such institutions' contribution to globally significant research projects."

First, were self-citations included in the analysis?

Second, do institutions receive the same credit for contributing to a research project by providing one out of twenty co-authors as they would for contributing all of them?

Third, since citation scores vary from one subject field to another, a paper will get a higher impact score if it is classified under a subject that typically receives few citations than under one in which citations are plentiful.

Fourth, the obvious problem that undermines the entire indicator is that the impact scores are divided by the total number of papers. A groundbreaking paper with thousands of citations would make little difference to Harvard. Change the affiliation to a small college somewhere and it would stand out (provided the college could reach 50 papers a year).

This explains something rather odd about the data for Alexandria University. Mohamed El Naschie has published many papers with several different affiliations. Yet the many citations to these papers produced a dramatic effect only for Alexandria, it seems, because Alexandria's total number of papers was so low.
Highlights from the Research Impact (Citations) Indicator


One very good thing to emerge from the current round of rankings is the iPhone/iPad apps from THE and QS. The THE app is especially helpful since it contains the scores for the various indicators for each of 400 universities. It is possible then to construct a ranking for research impact as measured by citations, which gets nearly one third of the weighting.

Some highlights

1st Caltech
4th Alexandria University
9th Harvard
10th UC Santa Barbara
13th Hong Kong Baptist University
20th Bilkent
23rd Oxford
27th Royal Holloway
31st Johns Hopkins
41st University of Adelaide
45th Imperial College
65th Australian National University
84th Kent State
110th McGill
143rd Tokyo Metropolitan University
164th Tokyo University
285th Warwick
302nd Delft University of Technology
368th South Australia
Perverse Incentives

Until we get a clear statement from Thomson Reuters we have to assume that the citations indicator in the recent THE rankings was constructed by counting citations to articles published in the period 2004 - 2008, dividing these by the expected number of citations and then dividing again by the total number of articles.

It seems then that universities could improve their score on this indicator by getting cited more often or by reducing the number of papers published in ISI indexed journals. Doing both could bring remarkable results.
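On the assumption stated above, and it is only an assumption until Thomson Reuters confirms it, the arithmetic would look something like this; the figures are invented:

```python
# Sketch of the assumed formula: citations divided by expected citations,
# divided again by the total number of papers. Not a confirmed TR method;
# all numbers here are invented.

def assumed_indicator(citations, expected_citations, total_papers):
    return citations / expected_citations / total_papers

# A large university: 1,000 papers cited at twice the expected rate.
big = assumed_indicator(20_000, 10_000, 1000)
# A small one: 60 papers, a few of them spectacularly cited.
small = assumed_indicator(3_000, 300, 60)
print(small > big)  # True: fewer papers, far higher score
```

If this is the formula, shrinking the denominator (publishing less) is as effective as getting cited more.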

This seems to be what has happened in the case of Alexandria University, which according to the new THE ranking, is fourth best in the world for research impact.

The university has accumulated a large number of citations to papers published by Mohamed El Naschie, mainly in two journals: Chaos, Solitons and Fractals, published by Elsevier, and the International Journal of Nonlinear Sciences and Numerical Simulation, published by the Israeli company Freund. El Naschie was editor of the first until recently and is co-editor of the second. Many of the citations are by himself.

I am unable to judge the merits of El Naschie's work. I assume that since he has been a professor at Cambridge, Cornell and the University of Surrey and publishes in journals produced by two very reputable companies, his papers are of a very high quality.

It is not enough, however, simply to get lots of citations. The actual/expected citation ratio will -- if this is what happened -- be divided by the total number of papers. And this is where there is a problem. If a university has very few papers in ISI journals in the relevant period, it will end up with a very good score. Publish a lot of papers and the score goes way down. This probably explains why Warwick ranks 285th for research impact and LSE 193rd: there were just too many people writing papers that were above average but not way above average.

An article by David Glenn in the Chronicle of Higher Education talks about the perverse incentives of the new rankings. Here, we have another. If a university simply stopped publishing for a year its score on this indicator would go up since it would still be accumulating citations for articles published in previous years.

Saturday, September 18, 2010

More on the Citations Indicator in the THE Rankings
I am copying the whole of a comment to the previous post since it might be the key to the strange results of the citations indicator.

Perhaps someone from Thomson Reuters can confirm that this is the method they were using.
Pablo said...

My bet is that TR uses the Leiden "Crown indicator", since this is what is embodied in their product InCites.

To cut it short, each paper is linked to a subdiscipline, a type of publication (letter, review, ...) and a year of publication. With this data for the whole world, it is easy to calculate the expected number of citations for a paper of a given type, in a given discipline, in a given year.
For a set of papers (e.g. all the papers of Alexandria University), the indicator is calculated as Sum(received citations)/Sum(expected citations).

This number can become very high if you have a small number of papers or if you look only at recent papers (if, on average, you expect 0.1 citations for a recent paper in maths, a single citation will give you a score of 10 for this paper!)

Note that Leiden has recently decided to change its favoured indicator to a mean(citations received/citations expected), which gives less weight to a few highly cited papers in a set. But it seems that TR has not yet implemented this new indicator.

Note also that, in order to avoid the overweighting of a few papers in a small set, Leiden publishes its own ranking of universities with thresholds on the total number of papers published.
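Pablo's description can be put into code. Everything here is an illustrative sketch of the two indicators as he describes them, with invented numbers:

```python
# The "Crown indicator" as described in the comment: for a set of papers,
# sum of received citations over sum of expected citations. All the
# numbers below are invented for illustration.

def crown(papers):
    """papers: list of (received, expected) citation pairs."""
    return sum(r for r, _ in papers) / sum(e for _, e in papers)

def mean_ratio(papers):
    """Leiden's newer indicator: mean of per-paper received/expected."""
    return sum(r / e for r, e in papers) / len(papers)

# Two recent maths papers, each with 0.1 expected citations: a single
# citation to one of them gives the pair a crown score of 5.
print(round(crown([(1, 0.1), (0, 0.1)]), 2))  # 5.0

# One blockbuster among 49 ordinary papers dominates crown() far more
# than it dominates mean_ratio().
papers = [(1000, 10.0)] + [(5, 5.0)] * 49
print(round(crown(papers), 2))       # 4.88
print(round(mean_ratio(papers), 2))  # 2.98
```

The smaller the set, the more one unusual paper moves the crown ratio, which is exactly the Alexandria pattern.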

Friday, September 17, 2010

The Citations Indicator in the THE World University Rankings

I am sure that many people waited for the new Times Higher Education rankings in the hope that they would be a significant improvement over the old THE-QS rankings.

In some ways there have been improvements and if one single indicator had been left out the new rankings could have been considered a qualified success.

However there is a problem and it is a big problem. This is the citations indicator, which consists of the number of citations to articles published between 2004 and 2008 in ISI indexed journals divided by the number of articles. It is therefore a measure of the average quality of articles since we assume that the more citations a paper receives the better it is.

Giving nearly a third of the total weighting to research impact is questionable. Giving nearly a third to just one of several possible indicators of research impact is dangerous. Apart from anything else, it means that any errors or methodological flaws might undermine the entire ranking.

THE have been at pains to suggest that one of the flaws in the old rankings was that the failure to take account of different citation patterns in different disciplines meant that universities with strengths in disciplines such as medicine, where citation is frequent, did much better than those that are strong in disciplines such as philosophy, where citation is less common. We were told that the new data would be normalized by disciplinary group, so that a university with a small number of citations in the arts and humanities could still do well if that number was relatively high compared to the number of citations for the highest scorer in that disciplinary cluster.

I think we can assume that this means that in each of the six disciplinary groups, the number of citations per paper was calculated for each university. Then the mean for all universities in the group was calculated. Then the top scoring university was given a score of 100. Then Z scores were calculated, that is the number of standard deviations from the mean. Then the score for the whole indicator was found by calculating the mean score for the six disciplinary groups.
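That guessed procedure might be sketched as follows. The rescaling step in particular is one possible reading, not a published method, and the citations-per-paper figures are purely illustrative:

```python
# A sketch of the guessed procedure: within each disciplinary group,
# convert citations per paper to Z scores, rescale so the top university
# gets 100, then average a university's six group scores. The exact
# rescaling is my assumption; TR has not published the details.

from statistics import mean, pstdev

def group_scores(cpp):
    """cpp: {university: citations per paper} for one disciplinary group."""
    mu, sd = mean(cpp.values()), pstdev(cpp.values())
    z = {u: (v - mu) / sd for u, v in cpp.items()}
    lo, hi = min(z.values()), max(z.values())
    return {u: 100 * (zv - lo) / (hi - lo) for u, zv in z.items()}

def overall_score(six_group_scores):
    """Average of a university's six disciplinary-group scores."""
    return mean(six_group_scores)

one_group = group_scores({"Cornell": 16.3, "Alexandria": 6.5, "HKBU": 10.8})
print(round(one_group["Cornell"]))  # 100: the top scorer in a group
```

Whatever the exact scaling, the ceiling of 100 per group is what makes a near-100 overall score so hard to explain from strength in a single field.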

The crucial point here is the rather obvious one that no university can get more than 100 for each disciplinary group. If it were otherwise then Harvard, MIT and Caltech would be getting scores well in excess of 100.

So, let us look at some of the highest scores for citations per paper. First the University of Alexandria, which is not listed in the ARWU top 500, is not ranked by QS, and is ranked 5,882nd in the world by Webometrics.

The new rankings put Alexandria in 4th place in the world for citations per paper. This meant that with the high weighting given to the citations indicator the university achieved a very respectable overall place of 147th.

How did this happen? For a start I would like to compare Alexandria with Cornell, an Ivy League university with a score of 88.1 for citations, well below Alexandria's.

I have used data from the Web of Science to analyse citation patterns according to the disciplinary groups indicated by Thomson Reuters. These scores may not be exactly those calculated by TR since I have made some instantaneous decisions about allocating subjects to different groups and TR may well have done it differently. I doubt though that it would make any real difference if I put biomedical engineering in clinical and health subjects and TR put it in engineering and technology or life sciences. Still I would welcome it if Thomson Reuters could show how their classification of disciplines into various groups produced the score that they have published.

So where did Alexandria's high score come from? It was not because Alexandria does well in the arts and humanities. Alexandria had an average of 0.44 citations per paper and Cornell 0.85.

It was not because Alexandria is brilliant in the social sciences. It had 4.21 citations per paper and Cornell 7.98.

Was it medicine and allied disciplines? No. Alexandria had 4.97 and Cornell 11.53.

Life sciences? No. Alexandria had 5.30 and Cornell 13.49.

Physical Sciences? No. Alexandria had 6.54 and Cornell 16.31.

Engineering, technology and computer science? No. Alexandria had 6.03 and Cornell 9.59.

In every single disciplinary group Cornell is well ahead of Alexandria. Possibly, TR did something differently. Maybe they counted citations to papers in conference proceedings but that would only affect papers published in 2008 and after. At the moment, I cannot think of anything that would substantially affect the relative scores.

Some further investigation showed that while Alexandria's citation record is less than stellar in most respects, there is precisely one discipline, or subdiscipline or even subsubdiscipline, where it does very well: mathematical physics. Here it has 11.52 citations per paper, well ahead of Cornell with 6.36.

Phil Baty in THE states:

“Alexandria University is Egypt's only representative in the global top 200, in joint 147th place. Its position, rubbing shoulders with the world's elite, is down to an exceptional score of 99.8 in the "research-influence" category, which is virtually on a par with Harvard University.

Alexandria, which counts Ahmed H. Zewail, winner of the 1999 Nobel Prize for Chemistry, among its alumni, clearly produces some strong research. But it is a cluster of highly cited papers in theoretical physics and mathematics - and more controversially, the high output from one scholar in one journal - that gives it such a high score.

Mr Pratt said: "The citation rates for papers in these fields may not appear exceptional when looking at unmodified citation counts; however, they are as high as 40 times the benchmark for similar papers. The effect of this is particularly strong given the relatively low number of papers the university publishes overall."

This is not very convincing. Does Alexandria produce strong research? Overall, no. It is ranked 1014th in the world for total papers over a ten year period by SCImago.

Let us assume, however, that Alexandria's citations per paper were such that it was the top scorer not just in mathematical or interdisciplinary physics, but also in physics in general and in the physical sciences, including maths (which, as we have seen, it was not anyway).

Even if the much cited papers in mathematical physics did give a maximum score of 100 for the physical sciences and maths group, how could that compensate for the low scores that the university should be getting for the other five groups? To attain a score of 99.8 Alexandria would have to be near the top for each of the six disciplinary groups. This is clearly not the case. I would therefore like to ask someone from Thomson Reuters to explain how they got from the citation and paper counts in the ISI database to an overall score.

Similarly we find that Bilkent University in Turkey had a score for citations of 91.7, quite a bit ahead of Cornell.

The number of citations per paper in each disciplinary group is as follows:

Arts and Humanities: Bilkent 0.44, Cornell 0.85
Social Sciences: Bilkent 2.92, Cornell 7.98
Medicine etc: Bilkent 9.42 Cornell 11.53
Life Sciences: Bilkent 5.44 Cornell 13.49
Physical Sciences: Bilkent 8.75 Cornell 16.31
Engineering and Computer Science: Bilkent 6.15 Cornell 9.59

Again, it is difficult to see how Bilkent could have surpassed Cornell. I did notice that one single paper in Science had received over 600 citations. Would that be enough to give Bilkent such a high score?

It has occurred to me that since this paper was listed under “multidisciplinary sciences” that maybe its citations have been counted more than once. Again, it would be a good idea for TR to explain step by step exactly what they did.

Now for Hong Kong Baptist University. It is surprising that this university should be in the top 200 since in all other rankings it has lagged well behind other Hong Kong universities. Indeed it lags behind on the other indicators in this ranking.

The number of citations per paper in the various disciplinary groups is as follows:

Arts and Humanities: HKBU 0.34, Cornell 0.85
Social Sciences: HKBU 4.50 Cornell 7.98
Medicine etc: HKBU 7.82 Cornell 11.53
Life Sciences: HKBU 10.11 Cornell 13.49
Physical Sciences: HKBU 10.78 Cornell 16.31
Engineering and Computer Science: HKBU 8.61 Cornell 9.59

Again, there seems to be a small group of prolific, highly accomplished and reputable researchers, especially in chemistry and engineering, who have boosted HKBU's citations. But again, how could this affect the overall score? Isn't this precisely what normalization by discipline was supposed to prevent?

There are other universities with suspiciously high scores for this indicator. Also, one wonders whether among the universities that did not make it into the top 200 there were some unfairly penalized. Now that the iPhone app is being downloaded across the world this may soon become apparent.

Once again I would ask TR to go step by step through the process of calculating these scores and to assure us that they are not the result of an error or series of errors. If they can do this I would be very happy to start discussing more fundamental questions about these rankings.
From the Chronicle of Higher Education

An article by Aisha Labi contains this news that I have not heard anywhere else:

Quacquarelli Symonds has continued to produce those rankings, now called the QS World University Rankings, and is partnering with U.S. News and World Report for their publication in the United States.

The relationship between the former collaborators has deteriorated into barely veiled animosity. QS has accused Times Higher Education of unfairly disparaging the tables they once published together. This week the company threatened legal action against the magazine over what Simona Bizzozero, a QS spokeswoman, described as "factually inaccurate" and misleading statements by representatives of Times Higher Education. She said THE's role in the collaboration was limited to publishing the rankings based on a methodology that QS had developed. "What they're producing now is a brand-new exercise. A totally brand-new exercise, with absolutely no links whatsoever to what QS produced and is producing," she said. "So when they refer to their old methodology, that is not correct."

Phil Baty, editor of the rankings for Times Higher Education, declined to respond to QS's complaints: "We are now looking forward, not looking backward."

I didn't know that the animosity was veiled, even barely.

There are some comments from Ellen Hazelkorn:

"Really, nothing has changed," said Ellen Hazelkorn, executive director of the Higher Education Policy Research Unit at the Dublin Institute of Technology, whose book "Rankings and the Battle for Worldclass Excellence: The Reshaping of Higher Education" is due to be published in March.

Despite Times Higher Education's assurances that the new tables represent a much more rigorous and reliable guide than the previous rankings, the indicators on which the new rankings are based are as problematic in their own way, she believes. The heavily weighted measure of teaching, which she described as subjective and based on reputation, introduces a new element of unreliability.

Gauging research impact through a subjective, reputation-based measure is troublesome enough, and "the reputational aspect is even more problematic once you extend it to teaching," she said.

Ms. Hazelkorn is also troubled by the role Thomson Reuters is playing through its Global Institutional Profiles Project, to which institutions provide the data used in the tables. She dislikes the fact that institutions are going to great effort and expense to compile data that the company could then sell in various ways.

"This is the monetarization of university data, like Bloomberg made money out of financial data," she said.


Powered by Thomson Reuters

Thanks to Kris Olds in Global Higher Ed for noticing the above in the THE World University Rankings banner.

A quotation from his article:

Thomson Reuters is a private global information services firm, and a highly respected one at that. Apart from ‘deep pockets’, they have knowledgeable staff, and a not insignificant number of them. For example, on 14 September Phil Baty, of Times Higher Education, sent out this fact via their Twitter feed:

2 days to #THEWUR. Fact: Thomson Reuters involved more than 100 staff members in its global profiles project, which fuels the rankings

The incorporation of Thomson Reuters into the rankings games by Times Higher Education was a strategically smart move for this media company for it arguably (a) enhances their capacity (in principle) to improve ranking methodology and implementation, and (b) improves the respect the ranking exercise is likely to get in many quarters. Thomson Reuters is, thus, an analytical-cum-legitimacy vehicle of sorts.

Thursday, September 16, 2010

Comment on the THE rankings

From The Age (Australia)

Les Field, the deputy vice-chancellor (research) at the University of NSW, said the new Times methodology had produced some curious results, such as Hong Kong Baptist University ranking close behind Harvard on citations.

''There are some anomalies which to my mind don't pass the reasonableness test,'' he said.

And Alexandria University, UC Santa Cruz, UC Santa Barbara, Pohang University of Science and Technology, Bilkent University, William & Mary, Royal Holloway, University of Barcelona, University of Adelaide.
Alexandria University

According to the THE rankings Alexandria University in Egypt (no. 147 overall) is the fourth university in the world for research impact, surpassed only by Caltech, MIT and Princeton.

Alexandria is not ranked by Shanghai Jiao Tong University or HEEACT. It is way down the SCImago rankings. Webometrics puts it in 5,882nd place and 7,253rd for the "Scholar" indicator.

That is not the only strange result for this indicator, which looks as though it will spoil the rankings as a whole.

More on Alexandria and some other universities in a few hours.
The Good News

There are some worthwhile improvements in the new THE World University Rankings.

First, the weighting given to the subjective opinion survey has been reduced, although probably not by enough. Very sensibly, the survey asked respondents to evaluate teaching as well as research.

The task ahead for THE now is to refine the sample of respondents and the questions they are invited to answer. It would make sense to exclude those with a non-university affiliation from answering questions about teaching. Similarly, there ought to be some way of eliciting the views of university teachers who do not do research, perhaps by some sort of rigorously validated sign up system. Something like this might also be developed to discover the views of students, at least graduate students.

The weighting given to international students has been reduced from five to two per cent.

There is a substantial weighting for a mixed bag of teaching indicators, including the survey. Some of these are questionable, though, such as the ratio of doctoral to undergraduate students.

For most indicators, the present rankings represent a degree of progress.

The problem with these rankings is the Citations Indicator, which has produced results that, to say the least, are bizarre.
First the Bad News about the THE Rankings

There is something seriously wrong with the citations indicator data. I am doing some checking right now.
Highlights of the THE Rankings

The top ten are:
1. Harvard
2. Caltech
3. MIT
4. Stanford
5. Princeton
6. Cambridge
6. Oxford
8. UC Berkeley
9. Imperial College
10. Yale

The best Asian university is the University of Hong Kong. Sao Paulo is best in South America and Melbourne in Australia. Cape Town is top in Africa followed by the University of Alexandria which is ranked 147th, a rather surprising result.

Wednesday, September 15, 2010

The new THE World University Rankings are out. Discussion follows in a few hours.
HEEACT Rankings Out

The Taiwan Rankings are out. They are based on articles and citations over the last eleven years and the last year, highly cited articles, articles in high impact journals and the H-index. Essentially they measure research productivity, research excellence and research impact.

The top 10 are:

1. Harvard
2. Stanford
3. Johns Hopkins
4. University of Washington -- Seattle
5. UCLA
6. UC Berkeley
7. MIT
8. University of Michigan -- Ann Arbor
9. Toronto
10. Oxford

Tokyo is 14th, Cambridge 16th, University College London 17th, Yale 18th, Imperial 21st, Caltech 31st, Melbourne 43rd, Seoul National University 67th.
Academic Fraud in China

An article by Sam Geall in The New Humanist shows something of the other side of China's rapid scientific development in recent years.

Tuesday, September 14, 2010

Access to Rankings Data

Global Higher Ed has an excellent article by Kris Olds and Susan Robertson about the need for transparency in the collection and distribution of ranking data.
Yet More Reactions to the QS

As the world holds its breath waiting for the THE World University Rankings here are a few more reactions to the QS World University Rankings.

AUSTRALIAN universities have responded with a deafening silence to their contentious downgrading in last week's Quacquarelli Symonds World University Rankings.

The Australian


World's top universities: Four IITs slip in rankings

Sify Finance


Ranking is not everything

The Nation (Thailand)


University climbs fourteen places in world rankings

leedsstudent


A slow but steady climb

Malaysia Star Online

Saturday, September 11, 2010

Ranking Research Impact

A team at the University of Western Australia has ranked the world's top 500 universities by research impact. The table is based on citations data derived from Scopus and covers the period 2000 to 2009. The ranking seems technically very competent.

The top five are:

1. Harvard
2. Stanford
3. MIT
4. UCLA
5. UC Berkeley

Cambridge is 13th and University College London 36th.


Thursday, September 09, 2010

More Reactions to the QS Rankings

Australian Higher Education Sector Down in Rankings and Nervous on International Enrolments
AIEC QUEST Australian International Education

4 Chinese universities rank among world's top 50
Peoples Daily Online

Wednesday, September 08, 2010

Some Reactions to the 2010 QS Rankings

Cambridge Knocks Harvard Off Top in University League

Nine Taiwan universities listed among the world's top 500
Radio Taiwan International


Israeli universities drop in international rankings



Trinity and UCD slip down rankings of top universities


Cambridge Beats Harvard -- Sort of

The big news from the QS World University Rankings today is that Cambridge is finally top after trailing Harvard for six years.

This seems a little odd since Cambridge is way behind Harvard, and a few other places, on all the indicators in the Shanghai rankings. So what happened? Looking at the indicator scores we find that on the "Academic Peer Review" -- more accurately called an Academic Reputation Index elsewhere on the site -- Cambridge is first and Harvard second. For the Employer Review Cambridge is third and Harvard first, reversing their places last year. For citations per faculty Harvard was third and Cambridge 36th, behind Tufts, Emory and UC Santa Cruz among others. For student faculty ratio, Cambridge was 18th and Harvard 40th. At the time of writing data was not available for International Faculty and Students.

It seems that the main factor in Cambridge's success was the academic survey. QS indicates the sources of the survey.
  • 1,648 previous respondents who returned. If QS have continued the practice of previous years, they also counted respondents from 2009 and 2008 even if they did not submit a form.
  • 180,000 out of 300,000 persons on the mailing list of World Scientific, a Singapore-based publishing company with links to Imperial College London. World Scientific, by the way, claim to have 400,000 subscribers.
  • 48,125 records from Mardev-DM2
  • 2,000 academics who signed up at the QS site
  • Lists provided by institutions. In 2010 160 universities provided more than 40,000 names.

I will let readers decide how representative or accurate such a survey can be.

Incidentally, QS should be given credit for the detailed description of the methodology of this criterion.

Saturday, September 04, 2010

QS Announces Date

Times Higher Education have already announced that their World University Rankings will be published on September 16th.

This morning QS indicated on their topuniversities site that theirs will be out on September 8th.

Friday, September 03, 2010

World Class Universities as a Measure of System Quality

This is a list of the percentage of each country's universities that are included in the top 500 of the Academic Ranking of World Universities produced by Shanghai Jiao Tong University. It might be considered a limited indicator of the overall quality of a country's higher education system.

The number of universities in each country included in the ARWU Top 500 is from ARWU. The total number of universities in each country is from Webometrics. A university is simply defined by the possession of a distinct URL.

It is of course easier to start a university in the US than in Israel where the country's first Arabic speaking university has only just been approved. However, this table does put the large number of American universities in global rankings in a different perspective.


1. Israel 21.21
2. Sweden 20.37
3. Australia 19.77
4. UK 16.17
5= Finland 11.76
5= Singapore 11.76
7. South Africa 11.54
8. Canada 11.33
9. New Zealand 11.11
10. Italy 10.89
11. Austria 10.61
12. Germany 9.75
13. Netherlands 8.21
14. Belgium 7.14
15. Switzerland 6.67
16. Ireland 6.00
17. Norway 5.89
18. USA 4.70
19. Spain 4.59
20. Saudi Arabia 4.44
21. Hungary 3.85
22. France 3.77
23. Denmark 3.57
24. Japan 3.50
25= Greece 3.125
25= Slovenia 3.125
27. South Korea 2.55
28. China 2.52
29. Chile 2.47
30. Portugal 1.79
31. Czech Republic 1.75
32. Argentina 0.95
33. Turkey 0.67
34. Poland 0.46
35. Brazil 0.40
36. Russia 0.30
37. Iran 0.19
38. India 0.13
39. Mexico 0.11
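The figures above are simple percentages, and the calculation can be sketched as follows; the counts used here are hypothetical, chosen only to show the arithmetic:

```python
# Each entry in the table is (universities in the ARWU top 500) divided
# by (total universities per Webometrics), times 100. The counts below
# are hypothetical, for illustration only.

def top500_share(in_top500, total_universities):
    return round(100 * in_top500 / total_universities, 2)

# A small system with 7 of its 33 universities in the top 500 outscores
# a huge one with 154 of 3,276 -- which is the point of the table.
print(top500_share(7, 33))      # 21.21
print(top500_share(154, 3276))  # 4.7
```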

Thursday, September 02, 2010

New Rankings on the Way

Times Higher Education have announced that their new rankings will be published on September 16th and have revealed the outline of their methodology.

The rankings will include five groups of indicators as follows:


A new broad category, called "Teaching - the learning environment", will be given a weighting of 30 per cent.

Using five separate indicators, this category will use data on an institution's income, staff-student ratios and undergraduate-postgraduate mix, as well as the results of the first-ever global academic reputation survey examining the quality of teaching.

A further 30 per cent of the final rankings score will be based on another new indicator, "Research - volume, income and reputation".

This category will use four separate indicators, including data on research income, research output (measured by publications in leading peer-reviewed journals) and the results of the academic reputation survey relating to research.

The highest-weighted category is "Citations - research influence".

This category will examine a university's research influence, measured by the number of times its published work is cited in other academics' papers.

Based on the 12,000 journals indexed by Thomson Reuters' Web of Science, and taken over a five-year period, the citations data will be normalised to take account of different volumes of citations between disciplines.

Reflecting the high levels of correlation between citations data and research excellence, this category will be given a weighting of 32.5 per cent.

A fourth category, "International mix - staff and students", will use data on the proportion of international staff and students on campus. This indicator will be given a 5 per cent weighting.

Knowledge transfer activities will be reflected in "Industry income - innovation", a new category worth 2.5 per cent of the total rankings score. This will be based on just one measure in 2010 - research income from industry.
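The five weightings above sum to 100 per cent (30 + 30 + 32.5 + 5 + 2.5). A minimal sketch of how such category weightings would combine into an overall score — the weightings come from the announcement, but the category scores below are hypothetical:

```python
# Weightings taken from the THE announcement; the category scores are hypothetical.
WEIGHTS = {
    "teaching": 0.30,           # Teaching - the learning environment
    "research": 0.30,           # Research - volume, income and reputation
    "citations": 0.325,         # Citations - research influence
    "international_mix": 0.05,  # International mix - staff and students
    "industry_income": 0.025,   # Industry income - innovation
}

def overall_score(scores):
    """Weighted sum of category scores, each on a 0-100 scale."""
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

# Hypothetical university: 0.30*70 + 0.30*80 + 0.325*90 + 0.05*60 + 0.025*50 = 78.5
example = {"teaching": 70, "research": 80, "citations": 90,
           "international_mix": 60, "industry_income": 50}
print(round(overall_score(example), 2))
```

Note how the sketch makes the post's concern concrete: the citations term alone contributes nearly a third of the total.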

There is still a lot apparently left undecided, such as the distribution of indicators within the groups and exactly which faculty will count for scaling. In general, though, the broad outlines of the new ranking look promising, with the exception of the large weighting -- nearly one third -- assigned to a single indicator, citations. Certainly citations are a good measure of research impact and more difficult to manipulate than some others, but putting so much emphasis on just one indicator will be a problem for face validity and will also amplify any data entry errors should they occur.

Finally, I wonder if it is a good idea to refer to the "seventh annual survey". Wouldn't it be better to start all over again with the First THE Rankings?

Saturday, August 28, 2010

From THE

I am reproducing Phil Baty's column from Times Higher Education in its entirety


One of the things that I have been keen to do as editor of the Times Higher Education World University Rankings is to engage as much as possible with our harshest critics.

Our editorial board was trenchant in its criticism of our old rankings. In particular, Ian Diamond, principal of the University of Aberdeen and former chief executive of the Economic and Social Research Council, was scathing about our use of research citations.

The old system failed to normalise data to take account of the dramatically different citation volumes between different disciplines, he said - unfairly hitting strong work in fields with lower average figures. We listened, learned and have corrected this weakness for the 2010 rankings.

Another strong critic is blogger Richard Holmes, an academic at the Universiti Teknologi MARA in Malaysia. Through his University Ranking Watch blog, he has perhaps done more than anyone to highlight the weaknesses in existing systems: indeed, he highlighted many of the problems that helped convince us to develop a new methodology with a new data provider, Thomson Reuters.

He has given us many helpful suggestions as we develop our improved methodology. For example, he advised that we should reduce the weighting given to the proportion of international students on campus, and we agreed. He added that we should increase the weighting given to our new teaching indicators, and again we concurred.

Of course, there are many elements that he and others will continue to disagree with us on, and we welcome that. We are not seeking anyone's endorsement. We simply ask for open engagement - including criticism - and we expect that process will continue long after the new tables are published.


There are still issues to be resolved but it does appear that the new THE rankings are making progress on several fronts. There is a group of indicators that attempts to measure teaching effectiveness. The weighting given to international students, an indicator that is easily manipulable and that has had very negative backwash effects, has been reduced. The inclusion of funding as a criterion, while obviously favouring wealthy regions, does measure an important input. The weighting assigned to the subjective academic survey has been reduced and it is now drawn from a clearly defined and at least moderately qualified set of respondents.



There are still areas where questions remain. I am not sure that citations per paper should be the only way to measure impact. At the very least, the h-index could be added, which would bring another ingredient to the mix.
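For what it is worth, the h-index is simple to compute: a body of work has index h when h of its papers have at least h citations each. A minimal sketch, using a hypothetical citation list:

```python
# Minimal h-index sketch: largest h such that h papers have >= h citations each.
def h_index(citations):
    """Compute the h-index of a list of per-paper citation counts."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical paper citation counts: three papers have at least 3 citations.
print(h_index([10, 8, 5, 3, 1]))  # 3
```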



Also, there are details that need to be sorted out. Exactly what sort of faculty will be counted in the various scalings? Will self-citation be counted? I also suspect that not everybody will be enthusiastic about using statistics from UNESCO for weighting the results of the reputational survey. That is not exactly the most efficient organization in the world. There is also a need for a lot more information about the workings of the reputational survey. What was the response rate and exactly how many responses were there from individual countries?

Something that may well cause problems in the future is the proposed indicator of the ratio of doctoral degrees to undergraduate degrees. If this is retained, it is easy to predict that universities everywhere will be encouraging or coercing applicants to master's programs to switch to doctoral programs.

Still, it does seem that THE is being more open and honest about the creation of the new rankings than other ranking organizations and that the final result will be a significant improvement.