Oxford and Cambridge and THES
George Best used to tell a story about being asked by a waiter in a five-star hotel, "Where did it all go wrong?" Best, who was signing a bill for champagne, with his winnings from the casino scattered around the room and Miss World waiting for him, remarked, "He must have seen something that I'd missed." It looks as though The Times Higher Education Supplement (THES) has seen something about Oxford and Cambridge that everybody else has missed.
The THES world university rankings have proved to be extraordinarily influential. One example is the criticism directed at the president of Yonsei University in Korea for his institution's poor performance in the rankings.
Another is the belief of Terence Kealey, Vice-Chancellor of the University of Buckingham, that since Oxford and Cambridge are, according to THES, the best universities in the world apart from Harvard, they are in no need of reform. He argues that Oxford should reject proposals for administrative change since Oxford and Cambridge are the best-run universities in the world:
"Oxford's corporate madness" by Terence Kealey

THIS YEAR'S rankings of world universities reveal that Oxford is one of the three best in the world. The other two are Cambridge and Harvard. It is obvious that Oxford and Cambridge are the best managed universities in the world when you consider that Harvard has endowments of $25 billion (many times more than Oxford or Cambridge's); that Princeton, Yale and Stanford also have vast endowments; and that US universities can charge huge fees, which British universities are forbidden to do by law.
Kealey evidently has complete confidence in the reliability of the THES rankings, and if they were indeed reliable he would have a very good point. But if they are not, the rankings will have done an immense disservice to British higher education by promoting a false sense of superiority and encouraging the rejection of reforms that might reverse a steady decline.
Let's have a look at the THES rankings. On most components the record of Oxford and Cambridge is undistinguished. For international faculty they score 54 and 58, for international students 39 and 43, and for faculty-student ratio 61 and 64, against top scores of 100, although these components are perhaps not very significant and are easily manipulated. More telling is the score for citations per faculty, a measure of the significance of an institution's research output. Here the record is rather miserable, with Oxford and Cambridge coming behind many institutions, including the Indian Institutes of Technology, Helsinki and the University of Massachusetts at Amherst.
I would be the first to admit that the latter measure has to be taken with a pinch of salt. Science and technology are more citation-heavy than the humanities and social sciences, which would help to explain why the Indian Institutes of Technology apparently do so well. Even so, the figures are suggestive.
Of course, this measure depends on the number of faculty as well as the number of citations. If there has been an error in counting faculty, the citations-per-faculty score would be inflated accordingly. I wonder whether something like that happened with the Indian Institutes. THES refers to the Institutes in the plural, but its consultants, QS, refer to a single Institute and provide a link to the Institute in Delhi. Can we be confident that QS did not count the faculty of Delhi alone while counting the citations of all the IITs?
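To see how damaging such a denominator error would be, here is a minimal sketch with hypothetical figures (the citation and faculty counts below are invented for illustration; they are not taken from THES or QS data):

```python
# Hypothetical illustration of a denominator error in a citations-per-faculty
# score. All numbers below are invented for the sake of the example.

citations_per_campus = {
    "Delhi": 9000, "Bombay": 8500, "Madras": 7000,
    "Kanpur": 6500, "Kharagpur": 6000,
}
faculty_per_campus = {
    "Delhi": 450, "Bombay": 430, "Madras": 400,
    "Kanpur": 380, "Kharagpur": 420,
}

total_citations = sum(citations_per_campus.values())
total_faculty = sum(faculty_per_campus.values())

# Correct score: all citations divided by all faculty.
correct = total_citations / total_faculty

# Erroneous score: all citations divided by the faculty of one campus only.
inflated = total_citations / faculty_per_campus["Delhi"]

print(f"correct score:  {correct:.1f} citations per faculty member")
print(f"inflated score: {inflated:.1f} citations per faculty member")
# Here the inflated score is roughly five times too high -- easily enough to
# lift an institution far above rivals whose faculty were counted in full.
```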
When we look at the data provided by THES for citations per paper, a measure of research quality, we find that the record of Oxford and Cambridge is equally unremarkable. In science, Oxford is 20th and Cambridge 19th. In technology, Oxford is 11th and Cambridge 29th. In biomedicine, Oxford is 7th and Cambridge 9th. In social sciences, Oxford is 19th and Cambridge 22nd.
The comparative performance of Oxford and Cambridge is just as unimpressive when we look at the data provided by Shanghai Jiao Tong University. Cambridge is 2nd on alumni and awards, getting credit for Nobel prizes awarded early in the last century, but 15th for highly cited researchers, 6th for publications in Nature and Science and 12th for citations in the Science Citation Index and Social Science Citation Index. Oxford is 9th for awards, 20th for highly cited researchers, 7th for papers in Nature and Science and 13th for citations in the SCI and SSCI.
So how did Oxford and Cambridge do so well in the overall THES rankings? It was solely because of the peer review. Even on the recruiter ratings they were only 8th and 6th. On the peer review, however, Cambridge was first and Oxford second. How is this possible? How can reviewers give such a high rating to universities whose research in most fields is inferior in quality to that of a dozen or more US universities, and which now produce relatively few Nobel prize winners, citations or papers in leading journals?
Perhaps, like the waiter in the George Best story, the THES reviewers have seen something that everybody else has missed.
Or is it simply a product of poor research design? I suspect that QS sent out a disproportionate number of survey forms to European researchers, and also to those in East Asia and Australia. We know that respondents were invited to pick universities in the geographical areas with which they were familiar. This in itself is enough to render the peer review invalid as a survey of international academic opinion, even if we could be sure that an appropriate selection procedure was used.
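The effect of such a skew is easy to demonstrate. Here is a minimal sketch, assuming made-up regional sample sizes and a simple home-region preference; none of the numbers below are real survey figures:

```python
# Hypothetical illustration of how a geographically skewed survey sample
# biases an aggregate "peer review" score. All figures are invented.

HOME_REGION_SHARE = 0.7  # assumed fraction of votes going to local universities

def votes_for_region(samples: dict[str, int], region: str) -> float:
    """Expected votes for universities in `region` under the home bias."""
    local = samples.get(region, 0) * HOME_REGION_SHARE
    # Remaining votes are spread evenly over the other regions.
    others = sum(n for r, n in samples.items() if r != region)
    return local + others * (1 - HOME_REGION_SHARE) / (len(samples) - 1)

balanced = {"Europe": 1000, "North America": 1000, "Asia-Pacific": 1000}
skewed = {"Europe": 1800, "North America": 400, "Asia-Pacific": 800}

for name, samples in [("balanced", balanced), ("skewed", skewed)]:
    total = sum(samples.values())
    share = votes_for_region(samples, "Europe") / total
    print(f"{name} sample: Europe's vote share = {share:.0%}")
# With the balanced sample Europe gets a third of the nominations; with the
# skewed sample its share rises to nearly half, without any change in
# underlying quality.
```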
It is surely time for THES to provide more information about how the peer review was conducted.