Wednesday, July 16, 2008

Comments on Ben Sowter's Comments

This is in reply to comments by Ben Sowter, head of research at QS, on my post of July 10th.

First of all, I was mistaken about the BlackBerry. At the end of the survey there is a box to tick if respondents want a chance to win one.

Next, I am glad to learn that Ben is a frequent reader of this blog. Thank you also for the comment that "Much of your analysis is detailed and shrewd. In fact, whilst I have declined to comment in the past we have kept an eye on your blog and, in some areas, chosen to make improvements based on your observations".

The clarification that the current survey is of prior respondents only and that later e-mails will also be sent to World Scientific subscribers is also welcome.

As for wondering why Malaysian universities were not on the first list, the introductory message says that "You'll notice some slight differences to the survey this year - mainly in that we have included a lot more universities - as a result there are two questions about universities - one asking about those around the world, and a second asking about your own country specifically". I assumed that "around the world" meant that every country in the world would be included. The information should have read "around the world excluding your own country". Still, I accept that I jumped to an unwarranted conclusion.

I think that there are serious questions remaining about the implications of this change. If QS are going to use only the responses to the first list when calculating the score for the academic opinion criterion, then this would surely mean a noticeable reduction in the number of votes for universities that previously received a large number of responses from within their own country. I suspect that this would have a disproportionate effect on American and Japanese universities. Wouldn't it also have a relatively beneficial impact on British universities with lots of graduates in the Commonwealth, and on Australian universities with graduates in South East Asia and China, countries with a lot of names in the World Scientific database?

Ben's comment that the purpose of having two lists is so that "we can do some more analysis on response dynamics and, potentially, more effectively counteract country biases" suggests that this might be the case since the implication is that the second list will not be used to construct the scores for the "peer review".

Is it possible that such a change would give Cambridge, Oxford and Imperial College London a further advantage over Harvard and push one of the former three into the number one spot?
Comments on "What's Happened to Malaysia?"

I recently posted on what I thought was the removal of Malaysian universities from the THES-QS survey of academic opinion.

It appears, as comments on the post and my own fiddling about with the survey indicate, that what has happened is that the survey is divided into two parts: the first in which all universities in the country where the respondent works are excluded and a second in which only those universities -- minus the one where the respondent works -- are presented.

It seems that I have been unfair to QS on this occasion. There are, however, some issues about the survey that I will discuss in a little while.

Two comments refer to the absence of the State University of New York from the list. In fact, Stony Brook is there but not the other three campuses at Binghamton, Albany and Buffalo.

Thursday, July 10, 2008


How to get into the Top 50

Malaysia has declared that it wants to get a couple of local universities into the world's top 50. This obviously means the THES-QS and not the Shanghai Jiao Tong index.

So what should a Malaysian university do to get into the 50 on the THES-QS ranking? I assume that the likeliest candidate is Universiti Malaya (UM) and that it should try to equal the score of the University of Auckland, which was 50th in last year's index.

In 2007, Auckland got a total score of 77.5 made up of 95 for the "peer review", 83 for the employer review, 38 for student faculty ratio, 61 for citations per faculty, 100 for international faculty, and 99 for international students.

UM got a total of 49.4 made up of 66 for the "peer review", 66 for the employer review, 38 for the student faculty ratio, 14 for citations per faculty, 63 for international faculty and 41 for international students.

So what does UM have to do to get a score equal to Auckland's?

  • First of all, it should make sure that UM is actually included in the survey of academic opinion (see previous post). If it is not, then UM will probably not even make it into the top 500. Then it has to improve its score on this criterion by about half (the use of Z scores means that we can't be more exact than this).
  • Then the score on the employer review would have to increase by about a third.
  • No need to worry about the student faculty ratio. UM is as good as Auckland.
  • The number of citations per faculty would have to improve four-fold.
  • The proportion of international faculty would have to increase by two thirds.
  • The proportion of international students would have to more than double.
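As a rough sanity check, the gaps listed above can be computed directly from the 2007 criterion scores. This is only a back-of-the-envelope sketch: the Z-score standardisation means the real targets would differ somewhat from these simple ratios.

```python
# Ratio of Auckland's 2007 criterion scores to UM's, as a rough guide to
# how much UM would need to improve on each criterion. Z-scoring means
# these are approximations, not exact targets.
auckland = {"peer review": 95, "employer review": 83, "student faculty": 38,
            "citations per faculty": 61, "international faculty": 100,
            "international students": 99}
um = {"peer review": 66, "employer review": 66, "student faculty": 38,
      "citations per faculty": 14, "international faculty": 63,
      "international students": 41}

for criterion in auckland:
    factor = auckland[criterion] / um[criterion]
    print(f"{criterion}: x{factor:.2f}")
```

The ratios bear out the list: roughly half again on the "peer review", about a third on the employer review, parity on student faculty ratio, more than fourfold on citations, and well over double on international students.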

I doubt that it is worth trying to do better on the "peer review" and the "employer review" since these are opaque and biased. Increasing the number of international faculty and students would probably cause more trouble than it's worth without strict controls over quality.

Could the citations per faculty be improved? This is not totally impossible. Over ten years Cardiff University managed to triple total research output. According to an interview with the Star, noted in an earlier post, this is how it was done.

To encourage productivity, Prof Smith switched the promotion system from a quota-based system (where the total number of professorial positions in a faculty were pre-determined) to a performance-based one.
He even offered an attractive retirement package to faculty members who were not producing much research.
However, in order for universities to be able to do that, Prof Smith said they need autonomy.
“The university has to be free to offer different contracts (to academics and scientists).
“And within the university, a lot of power needs to be devolved to the young people.
“It's all about having decisions taken at the lowest level practicable.
“That’s a major change,” he said.




What's Happened to Malaysia?


It's that time of year again. A few days ago I received an invitation to take part in the THES-QS "peer review". Last year it came via World Scientific, the Singapore-based publishing company whose subscription list has been used by QS to construct their list of "smart people" who pass judgement on the world's universities. This year it came directly from QS. I do not know whether this means that QS is now using the THES subscription list as its database. If so, we can expect to see some wild fluctuations in the "peer review" and hence the overall rankings in October.



Anyway, here is the message from QS.





It's that time of year again. Each year the response to the academic peer review for the Times Higher - QS World University Rankings goes from strength to strength.
QS and Times Higher Education are committed to making these rankings as strong and robust as they can be. Many enhancements have been made to the system in the last 12 months (you can read about them on http://www.topuniversities.com/) but amongst the most important has been your help in increasing the response to our academic peer review questionnaire.
Put simply... your opinion counts. Please share it with us.
You'll notice some slight differences to the survey this year - mainly in that we have included a lot more universities - as a result there are two questions about universities - one asking about those around the world, and a second asking about your own country specifically.
Please be as accurate and honest as possible. Help us make sure that your university contributes a representative response to the survey this year.
http://research.qsnetwork.com/academic
The deadline for response is July 15th.
At the end of the survey, from a selection of offers, you will have the chance to either...
Opt for a $100 discount on delegate's fee for QS APPLE (Asia Pacific Professional Leaders in Education Conference & Exhibition) 2009 in Kuala Lumpur
Enter a draw to win your university a free exhibition table at the QS World Grad School Tour (the world's leading circuit of Masters & PhD recruitment fairs) in a city of your choice
Receive a one-month trial subscription to Times Higher Education
Thank you for taking the time to share your opinions with us, and please look out for the results of the Times Higher - QS World University Rankings 2008 - due to be published on October 9th.
Many thanks,
Ben Sowter, Head of Research, QS



I was disappointed that I would not have a chance of getting a BlackBerry this time around. I started to fill out the form but stopped when I noticed something very odd. There are no Malaysian universities listed this year. Possible explanations are:



A. For some reason, QS have decided that no Malaysian university is of sufficient quality to even be included in the survey. This is unlikely considering some of the others that are included.



B. QS state at the start of the survey that the respondent's own university will be excluded from consideration. Since I work at a Malaysian university it is to be expected that that particular university would not show up. Perhaps some sort of error has meant that all Malaysian universities have been excluded from the list presented to me.



C. QS have made a principled decision that respondents are not allowed to choose any university in the country in which they work. This would be a good idea and therefore can probably be ruled out straightaway.



D. QS just forgot about Malaysia.



E. A computer error that affected me and nobody else.



Based on past experience, D seems the most likely, followed by B. If D, then in October Malaysian universities are going to get zero on this year's "peer review" and therefore will fall even further in the rankings. There will no doubt be a mass outbreak of soul searching in Malaysian universities and jeering by the opposition. If B, the fall will not be so great but Malaysian universities would still suffer if they are the only ones who cannot receive votes from within the country.



I appeal to any reader of this blog who has completed or is about to complete the THES-QS survey to let me know whether they have also noticed the omission of Malaysian universities or of any other country.

Friday, June 27, 2008

Why are German universities so bad?

A recent post by Steve Sailer compares the Shanghai Jiao Tong and the QS-Times Higher Education Supplement rankings (although he refers to the London Times, a completely different publication).

He asserts

"This Chinese list seems less chauvinistically biased than the London Times rankings I cited in tonight's VDARE article (Harvard #1 in both, but Stanford is #2 on the Chinese list vs. #19 on the English list, behind a number of obscure provincial colleges in England). Because it's a better list, it supports the point I made in VDARE even more strongly than the previous list did: that America's exclusive universities are now enormously prestigious relative to Germany's and the rest of the world's.

German colleges that would have dominated the list 100 years ago have been hit hard by sincere, leftist anti-elitism"

That the Chinese rankings are less chauvinistic than QS-THES is absolutely correct. And if he is being deliberately offensive by describing Oxford, Cambridge and Imperial and University Colleges, London as "obscure provincial colleges in England", I suppose that I couldn't really argue.

But surely the decline of German universities began well before leftist anti-elitism appeared on the scene? Didn't it begin with the mass expulsion of Jewish students and academics after 1933?

Tuesday, June 24, 2008

Resumption of Posting

Teaching and family affairs have kept me away from this blog for a few months. I hope to start posting regularly again soon.


QS’s Greatest Hits: Part One

For the moment, it might be interesting to review some of the more spectacular errors of QS Quacquarelli Symonds Ltd (QS), the consultants who collect the data for the Times Higher Education Supplement’s (THES) World University Rankings.

During its short venture into the ranking business QS has shown a remarkable flair for error. In terms of quantity and variety they have no peers. All rankers make mistakes now and then but so far there has been nobody quite like QS.

Here is a list of what I think are QS's ten best errors, mainly from the rankings and their book, Guide to the World's Top Universities (2007). Most of them have been discussed in earlier posts on this blog. The date of these posts is given in brackets. There is one error, relating to Washington University in St. Louis, from last year's rankings.

It has to be admitted that QS seem to be doing better recently. Or perhaps I have not been looking as hard as I used to. I hope that another ten errors will follow shortly.


One: Faculty Student Ratio in Guide to the World’s Top Universities (2007). (July 27, 2007; May 11, 2007)

This is a beautiful example of the butterfly effect, with a single slip of the mouse leading to literally hundreds of mistakes.

QS’s book, Guide to the World’s Top Universities, was produced at the end of 2006 after the publication of the rankings for that year and contained data about student faculty ratios of over 500 ranked universities. It should have been obvious immediately that there was something wrong with this data. Yale is given a ratio of 34.1, Harvard 18, Cambridge 18.9 and Pretoria 590.3. On the other hand, there are some ridiculously low figures such as 3.5 for Dublin Institute of Technology and 6.1 for the University of Santo Tomas (Philippines).

Sometimes the ratios given flatly contradict information given on the same page. So, on page 127 in the FACTFILE, we are told that Yale has a student faculty ratio of 34.3. Over on the left we are informed that Yale has around 10,000 students and 3,333 faculty.

There is also no relationship between the ratios and the scores out of 100 in the THES QS rankings for student faculty ratio, something that Matt Rayner asked about, without ever receiving a reply, on QS’s topuniversities web site.

So what happened? It’s very simple. Someone slipped three rows when copying and pasting data and every single student faculty ratio in the book, over 500 of them, is wrong. Dublin Institute of Technology was given Duke’s ratio (more about that later), Pretoria got Pune’s, RWTH Aachen got Aberystwyth’s (Wales). And so on. Altogether over 500 errors.


Two: International Students and Faculty in Malaysian Universities.

In 2004 there was great jubilation at Universiti Malaya (UM) in Malaysia. The university had reached 89th place in the THES-QS world rankings. Universiti Sains Malaysia (USM) also did very well. Then in 2005 came disaster. UM crashed 100 places, seriously damaging the Vice-Chancellor’s career, and USM disappeared from the top 200 altogether. The Malaysian political opposition had a field day blasting away at the supposed incompetence of the university leadership.

The dramatic decline should have been no surprise at all. A Malaysian blogger had already noticed that the figures for international students and faculty in 2004 were unrelated to reality. What happened was that in 2004 QS were under the impression that large numbers of foreigners were studying and teaching at the two Malaysian universities. Actually, there were just a lot of Malaysian citizens of Indian and Chinese descent. In 2005 the error was corrected, causing the scores for international faculty and students to fall precipitously.

Later, THES referred to this as “a clarification of data”, a piece of elegant British establishment obfuscation that is almost as good as “being economical with the truth”.


Three: Duke’s student faculty ratio 2005 ( October 30, 2006 )

Between 2004 and 2005 Duke rose dramatically in the rankings. It did so mainly because it had been given a very low and incredible student faculty ratio in the latter year, less than two students per faculty. This was not the best ratio in the rankings. That supposedly belonged to Ecole Polytechnique in Paris (more of that later). But it was favourable enough to give Duke a powerful boost in the rankings.

The ratio was the result of a laughable error. QS listed Duke as having 6,244 faculty, well in excess of anything claimed on the university’s web site. Oddly enough, this was exactly the number of undergraduate students enrolled at Duke in the fall of 2005. Somebody evidently had copied down the figure for undergraduate students and counted them as faculty, giving Duke four times the number of faculty it should have.


Four: Duke’s student faculty ratio 2006 (December 16, 2006)

Having made a mess of Duke’s student faculty ratio in 2005, QS pulled off a truly spectacular feat in 2006 by making an even bigger mess. The problem, I suspect, was that Duke’s public relations office had its hands full with the lacrosse rape hoax and that the web site had not been fully updated since the fall of 2005. For students, QS apparently took undergraduate student enrollment in the fall of 2005, subtracted the number of undergraduate degrees awarded and added the 2005 intake. This is a bit crude because some students would leave without taking a degree, Reade Seligmann and Collin Finnerty for example, but probably not too inaccurate. Then there was a bit of a problem because while the number of postgraduate degrees awarded was indicated on the site there was no reference to postgraduate admissions. So QS seem to have deducted the degrees awarded and added what they thought was the number of postgraduate students admitted, 300 of them, to the Pratt School of Engineering, which is an undergraduate, not a graduate, school. Then, in a final flourish, they calculated the number of faculty by doubling the figure on the Duke site, apparently because Duke listed the same faculty twice, classified first by department and then by status.

The result was that the number of students was undercounted and the number of faculty seriously overcounted, giving Duke the best student faculty ratio for the year. Although the ratio was higher than in 2005 Duke was now in first place for this section because QS had calculated more realistic ratios for the Ecole Polytechnique and the Ecole Normale Superieure.


Five: Omission of Kenan Flagler from the Fortune business school rankings. (March 05, 2007)

On the surface this was a trivial error compared to some that QS has committed. They got the business school at the University of North Carolina mixed up with that of North Carolina State University. What made the error serious is that while most American universities seem unconcerned about the things that QS writes or does not write about them, business schools evidently feel that more is at stake and also have considerable influence over the magazines and newspapers that publish rankings. Kenan-Flagler protested vociferously over its omission, Fortune pulled the ranking off its site, and Nunzio Quacquarelli, director of QS, explained that it was the result of a lapse by a junior employee and stated that this sort of thing had never happened before and would never happen again.


Six: "Beijing University"

China’s best or second best university is Peking University. The name has not been changed to Beijing University, apparently to avoid confusion with Beijing Normal University. There are also over twenty specialist universities in Beijing: Traditional Chinese Medicine, Foreign Languages, Aeronautics and so on.

In 2004 and 2005 THES and QS referred to “Beijing University”, finally correcting it to “Peking University” in 2006.

This was perhaps not too serious an error except that it revealed something about QS’s knowledge of its own sources and procedures.

In November 2005, Nunzio Quacquarelli went to a meeting in Kuala Lumpur, Malaysia. Much of the meeting was about the international students and faculty at UM and USM. There was apparently also a question about how Beijing University could have got such a magnificent score on the peer review while apparently producing almost no research. The correct answer would have been that QS was trying to find research written by scholars at “Beijing University”, an institution that does not exist. Quacquarelli, however, answered that “we just couldn’t find the research” because Beijing University academics published in Mandarin (Kuala Lumpur New Straits Times 20/11/05).

This is revealing because QS’s “peer review” is actually nothing more than a survey of the subscribers to World Scientific, a Singapore-based company that publishes academic books and journals, many of them Asia-orientated and mostly written in English. World Scientific has very close ties with Peking University. If Quacquarelli knew very much about the company that produces his company’s survey he would surely have known that it had a cozy relationship with Peking University and that Chinese researchers, in the physical sciences at least, do quite a lot of publishing in English.


Seven: Student faculty ratios at Yonsei and Korea universities (November 08, 2006)

Another distinguished university administrator whose career suffered because of a QS error was the president of Yonsei University. This university is a rival of Korea University and was on most measures its equal or superior. But on the THES-QS rankings it was way behind, largely because of a poor student faculty ratio. As it happened, the figure given for Korea University was far too favourable and much better even than the ratio admitted by the university itself. This did not, however, help Jung Chang-Young, who had to resign.


Eight: Omission of SUNY – Binghamton, Buffalo and Albany

THES and QS have apologized for omitting the British universities of Lancaster, Essex and Royal Holloway. A more serious omission is that of the State University of New York’s (SUNY) University Centres at Buffalo, Albany and Binghamton. SUNY has four autonomous university centres which are normally treated as independent and are now often referred to as the Universities of Buffalo and Albany and Binghamton University. THES-QS does refer to one university centre, Stony Brook University, probably being under the impression that this is the entirety of the SUNY system. Binghamton is ranked 82nd according to the USNWR and 37th among public national universities (2008). It can boast several internationally known scholars such as Melvin Dubofsky in labour history and Immanuel Wallerstein in sociology. To exclude it from the rankings while including the likes of Dublin Institute of Technology and the University of Pune is ridiculous.


Nine: Student faculty ratio at Ecole Polytechnique (September 08, 2006)

In 2005 the Ecole Polytechnique went zooming up the rankings to become the best university in continental Europe. Then in 2006 it went zooming down again. All this was because of extraordinary fluctuations in the student faculty ratio. What happened could be determined by looking at the data on QS’s topgraduate site. Clicking on the rankings for 2005 led to the data that was used for that year (it is no longer available). There were two sets of data for students and faculty for that year, evidently one containing part-time faculty and another with only full-time faculty. It seems that in 2005 part-time faculty were counted but not in 2006.


Ten: Washington University in St Louis (November 11, 2007)

This is a leading university in every respect. Yet in 2007, QS gave it a score of precisely one for citations per faculty, behind Universitas Gadjah Mada, the Dublin Institute of Technology and Politecnico di Milano and sent it falling from 48th to 161st in the overall rankings. What happened was that QS got mixed up with the University of Washington (in Seattle) and gave all WUSL’s citations to the latter school.

Friday, February 22, 2008

More on Cambridge and Harvard

Alejandro Pisanty has an interesting comment on the previous post. I will reproduce a large part of it here.

“In particular for Harvard it's darn tricky. "Harvard University" will
yield only a fraction of the production and the citations from there.
There's also Harvard Medical School, Harvard Business School, Harvard
Law School, etc., and nifty arrangements like Harvard-Smisthsonian
Astronomy Project (also in Cambridge MA; surely a half-floor of a physics or
astronomy unit in the best of cases) and so on.

QS and THES admit quite cynically that they don't really know too well
how to treat "children institutions". One can be sure that officials from
Harvard and Cambridge, and all British universities, have been well on
top of this by constant contact with QS and their staff. And, it all
happens in English.

One would reasonably excpect that QS does not apply the same care to
Malaysian or Mexican universities...”


The number of papers produced by the Harvard Business and Law Schools is relatively small although still a lot more than the Judge School of Business at Cambridge or Addenbrookes Hospital. Harvard Medical School, however, does produce a massive number of papers, over 35,000 according to Scopus between 2002 and 2006. Compare this with 12,736 for "Harvard University" over the same period.

If QS did indeed count the papers produced by authors with a Harvard Medical School affiliation this would be an adequate -- probably more than adequate -- explanation for Harvard’s superiority over Cambridge in terms of citations. But another problem now arises. Harvard's number of citations per faculty would then be much larger than Caltech's, yet Caltech does a bit better than Harvard in the THES-QS citations per faculty section.

It is possible that QS included the papers produced by HMS and then also counted "about [sic] 10,674 medical school faculty". Not to do so would be absurd since any other procedure would mean that linguists, sociologists and engineers were getting credit for producing medical research.

But if QS counted papers with a Harvard Medical School affiliation and also counted all the medical faculty then we would be back where we started.

It still seems to me that the most plausible reconstruction of Harvard’s citations per faculty score is that QS did not count papers produced by the various schools, or at least not by the Harvard Medical School, and that for the faculty figure they used the number given on the Harvard website or in QS’s school profile.

All this speculation would be unnecessary if QS told us exactly what they did but I wouldn’t bother waiting for that to happen.

Monday, December 24, 2007

Cambridge and Harvard

The THES-QS rankings can be viewed as a collection of complex interweaving narratives. There is the rise of China and its diaspora, the successful response of Australian universities to financial crisis, the brave attempts of Africa, spearheaded by the University of Cape Town, to break into the top 200.

The most interesting narrative is that of British universities -- Oxford, Cambridge and Imperial and University Colleges, London -- steadily coming closer to Harvard and pulling ahead of Princeton, Caltech and the rest.

This particular narrative requires rather more suspension of disbelief than most. By all accounts, including the Shanghai rankings and THES’s own count of citations per faculty, the research record of Cambridge and Oxford has been less than spectacular for several years.

Until this year Cambridge’s apparent near equality with Harvard was largely the result of its performance on QS’s survey of academic opinion, the so-called peer review. Since this has such an astonishingly low response rate, since it is noticeably biased against the US, since its relationship with research proficiency measured by citations per faculty or per paper is very limited, it should not be taken seriously.

This year methodological changes mean that the differences between Cambridge and Harvard on most measures are virtually obliterated. Both universities get 100 or 99 for the “peer review”, employer review and student faculty ratio. Both get 91 for international students.

Harvard stays ahead of Cambridge because of a much better performance on citations per faculty. I thought it might be interesting to see how this margin was achieved.

QS is now using the Scopus database, for which a 30-day free trial is available. THES states that the consultants counted the number of citations of papers published between 2002 and 2006 and then divided the total by the number of faculty. I have tried to reproduce QS's scores for Cambridge and Harvard.

First, here is the number of papers published by authors with an affiliation to “Cambridge University” between 2002 and 2006 and the number of citations of those papers. The number of documents in the Scopus database is increasing all the time so a count done today would yield different results. These numbers are from two weeks ago.

CAMBRIDGE (“Cambridge University”) 2002-2006


Life sciences: 7,614 documents; 116,875 citations
Health sciences: 4,406 documents; 65,211 citations
Physical sciences: 11,514 documents; 100,225 citations
Social sciences: 2,636 documents; 24,292 citations
Total: 26,170 documents; 306,603 citations

Using the FTE faculty figure of 3,765 provided by QS on their website, we have 83 citations per faculty.

I noticed that a number of authors gave their affiliation as “University of Cambridge”. This added 26,710 citations to make a total of 333,313 citations and 89 citations per faculty.
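The arithmetic above can be written out explicitly. The Scopus totals are a snapshot (the database grows constantly) and QS's exact procedure is unknown, so small discrepancies from the figures quoted in the text are to be expected.

```python
# Reproducing the Cambridge citations-per-faculty calculation described above.
# Totals are the Scopus snapshot quoted in the text; the division itself is
# an assumption about QS's method, so expect minor rounding differences.
citations = 306_603        # "Cambridge University" affiliation, 2002-2006
extra = 26_710             # additional "University of Cambridge" citations
fte_faculty = 3_765        # FTE faculty figure given by QS

per_faculty_low = citations / fte_faculty
per_faculty_high = (citations + extra) / fte_faculty
print(round(per_faculty_low), round(per_faculty_high))  # roughly 81 and 89
```

The lower figure comes out at about 81 rather than the 83 in the text, presumably because of slightly different totals at the time of the original count; the higher figure matches the 89 quoted.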

Now for Harvard. Searching the Scopus database reveals the following totals of papers and citations for “Harvard University”.

HARVARD ("Harvard University") 2002-2006

Life sciences: 4,003 documents; 79,663 citations
Health sciences: 2,577 documents; 47,486 citations
Physical sciences: 6,429 documents; 91,154 citations
Social sciences: 3,686 documents; 48,844 citations
Total: 16,695 documents; 267,147 citations

I suspect that most observers would consider Cambridge's superiority to Harvard in number of publications and citations indicative more of the bias of the database than anything else.


If we use QS’s faculty headcount figure for Harvard of 3,389 and assume that 8 per cent of these are part-timers with a quarter-time teaching load then we have 3,167 FTE faculty. This would give us 84 citations per faculty, slightly better than Cambridge if citations of “University of Cambridge” publications are excluded and somewhat worse if they are included.
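The FTE adjustment can be sketched as follows. The 8 per cent part-time share and quarter-time load are the guesses made in the text, not known QS parameters; minor rounding aside (this formula gives about 3,186 FTE against the text's 3,167), the citations-per-faculty figure comes out at about 84 either way.

```python
# Sketch of the Harvard FTE adjustment described above: a 3,389 headcount
# with an assumed 8 per cent of part-timers on a quarter-time teaching load.
headcount = 3_389
part_time_share = 0.08     # assumed, as in the text
part_time_load = 0.25      # quarter-time, as in the text

# Each part-timer counts as 0.25 FTE instead of 1.0.
fte = headcount * (1 - part_time_share * (1 - part_time_load))
per_faculty = 267_147 / fte   # Harvard's 2002-2006 Scopus citation total
print(round(fte), round(per_faculty))
```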


The problem, though, is that QS give Harvard a score of 96 for citations per faculty and Cambridge a score of 83. The only plausible way I can think of for Harvard to do so much better when they have fewer citations is that a smaller faculty figure was used to calculate the citations per faculty number for Harvard than was used to calculate the student faculty ratio. The Harvard web site refers to "about [sic] 2,497 non-medical faculty" and in QS’s school profile of Harvard there is a reference to "more than 2,000 faculty". I suspect that this number was used to calculate the citations per faculty score while the larger number was used to calculate the student faculty ratio. Had the former been used for both criteria, then Cambridge and Harvard would have been virtually equal for citations and Cambridge would have moved into the lead by virtue of a better international faculty score.

There may be some other explanation. If so, then I would be glad to hear it.

If this is what happened then it would be interesting to know whether there was simply another run-of-the-mill error, with that ubiquitous junior staff member using two different faculty figures to calculate the two components, or a cynical ploy to prevent Cambridge moving into the lead too early.


Sunday, December 16, 2007

What Happened to Cardiff?

The Malaysian Star (print edition 16/12/07, E11) has a feature on Brian Smith, Vice-Chancellor of Cardiff University from 1993 to 2001. (The Star reports that he was appointed in 2001)

Professor Smith is reputed to have revitalised the research capability of Cardiff. According to the Star:

Said Prof Smith: “Cardiff offered a fantastic opportunity.

“Here was a university that had been through very difficult times; it was the perfect opportunity to try out my theories.

“And they worked because the people at Cardiff were ready for change and ready to change dramatically.”

The main problem faced by the university at that time was that it had not yet re-established itself as a research university.

According to Prof Smith, there are a number of factors involved in the move to regain a university's research strength.

“A very big factor is research staff.

“Because British universities have a great deal of autonomy and flexibility, we were able to go out and recruit.”

And that was how Prof Sir Martin Evans, one of this year's Nobel Prize in Medicine recipients, came to join the university.

“He came to a department that was not strong but actually managed to increase its number of publications in top journals 11-fold,” said Prof Smith.

...........................................................................

Asked how he managed to attract top people like Prof Evans to join him at Cardiff, Prof Smith said he believed what counted was not just a lucrative contract but the whole package.

“I don't think it's entirely about money. I feel that Prof Evans was equally attracted by the opportunity to unify the entire biology department and direct its vision,” he observed.

To encourage productivity, Prof Smith switched the promotion system from a quota-based system (where the total number of professorial positions in a faculty were pre-determined) to a performance-based one.

He even offered an attractive retirement package to faculty members who were not producing much research.

However, in order for universities to be able to do that, Prof Smith said they need autonomy.

“The university has to be free to offer different contracts (to academics and scientists).

“And within the university, a lot of power needs to be devolved to the young people.

“It's all about having decisions taken at the lowest level practicable.

“That’s a major change,” he said.


The article proceeds:

Due in large part to these strategies, Cardiff has risen from a ranking of 241 in the THES-QS World University Rankings in 2005 to 99 this year.

It may well be true that Cardiff researchers became more productive because of Professor Smith's policies. A quick look at the Scopus database indicates that from 1997 to 2007 the total output of research papers rose threefold.

It is also undeniable that Cardiff rose to 99th place in the THES-QS rankings this year.


It does not, however, follow that those two facts had anything to do with each other. For a start, one wonders why the rankings should detect the improvement in research only in 2007 and not in 2005 or 2006.

What really happened?

In 2006 Cardiff scored reasonably well on the "peer review" (151st out of the overall top 400 universities), the employer review (91st), student faculty ratio (111th), international faculty (116th) and international students (111th), but miserably on citations per faculty (253rd).

In 2007 Cardiff did better on the "peer review", rising to 129th, but worse on the employer review, falling to 250th. The other criteria were pretty much the same: 138th for student faculty ratio, 106th for international faculty, 110th for international students and 269th for citations per faculty.

It seems that Cardiff's remarkable improvement between 2006 and 2007 resulted from getting many more points for citations, 65 in 2007 as against 6 in 2006. This is far greater than any improvement resulting from a new database and is almost certainly caused by the introduction of Z scores this year.

What happened was that in 2006 Cardiff was doing OK on most measures but badly on research. In 2007 it was still doing OK on most measures, except for the employer review, and still doing badly on research. But in 2007, because of the smoothing of the curve, it got a lot more points for the limited amount of research that it did.

The rise of Cardiff is largely an illusion created by a change in method.
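QS have not published their exact z-score procedure, but a toy sketch shows why a laggard's score jumps when raw counts are replaced by z-scores. All the numbers below are invented, and the normal CDF stands in for whatever rescaling QS actually used:

```python
import math

def max_scaled(values):
    """Old-style scaling: the top scorer gets 100, others pro rata."""
    top = max(values)
    return [100 * v / top for v in values]

def zscore_scaled(values):
    """Z-score each value, then map through the normal CDF to 0-100.

    A stand-in for QS's undisclosed 2007 rescaling.
    """
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [100 * 0.5 * (1 + math.erf((v - mean) / (std * math.sqrt(2))))
            for v in values]

# Hypothetical citations-per-faculty counts: one outlier, a pack of laggards.
raw = [1000, 100, 90, 80, 70]
old = max_scaled(raw)       # laggards score in single digits
new = zscore_scaled(raw)    # laggards are pulled up toward the middle
```

In this sketch the laggards move from single digits to around thirty points while the outlier's score barely changes, the same shape of jump as Cardiff's 6 to 65.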

Thursday, December 13, 2007

Comment on the THES-QS Rankings

There is an excellent article by Andrew Oswald of Warwick University in yesterday's Independent. It is worth quoting a large chunk of it here.

First, 2007 saw the release, by a UK commercial organisation, of an unpersuasive world university ranking. This put Oxford and Cambridge at equal second in the world. Lower down, at around the bottom of the world top-10, came University College London, above MIT. A university with the name of Stanford appeared at number 19 in the world. The University of California at Berkeley was equal to Edinburgh at 22 in the world.

Such claims do us a disservice. The organisations who promote such ideas should be unhappy themselves, and so should any supine UK universities who endorse results they view as untruthful. Using these league table results on your websites, universities, if in private you deride the quality of the findings, is unprincipled and will ultimately be destructive of yourselves, because if you are not in the truth business what business are you in, exactly?

Worse, this kind of material incorrectly reassures the UK government that our universities are international powerhouses.

Let us instead, a bit more coolly, do what people in universities are paid to do. Let us use reliable data to try to discern the truth. In the last 20 years, Oxford has won no Nobel Prizes. (Nor has Warwick.) Cambridge has done only slightly better. Stanford University in the United States, purportedly number 19 in the world, garnered three times as many Nobel Prizes over the past two decades as the universities of Oxford and Cambridge did combined. Worryingly, this period since the mid 1980s coincides precisely with the span over which UK universities have had to go through government Research Assessment Exercises (RAEs). To hide away from such inconvenient data is not going to do our nation any good. If John Denham, the Secretary of State for Innovation, Universities and Skills, is reading this, perhaps, as well as doing his best to question the newspapers that print erroneous world league tables, he might want to cut out these last sentences, blow them up to 100 point font, and paste them horizontally in a red frame on his bedroom ceiling, so that he sees them every time he wakes up or gets distracted from other duties. In his shoes, or out of them, this decline would be my biggest concern.

Since the 1980s the UK's Nobel-Prize performance has fallen off. Over the last 20 years, the US has been awarded 126 Nobel Prizes compared to Britain's nine.


The THES-QS rankings have done great damage to university education in Asia and Australia, where they have distorted national education policies, promoted an emphasis on research at the expense of teaching and induced panic about non-existent decline in some countries while encouraging false complacency about quality in others.

In the United Kingdom they have generally been taken as proof that British universities are the equals of the Ivy League and Californian universities, a claim that is plausible only if the rankings' numerous errors, biases and fluctuations are ignored.

I hope that Chris Patten and others who are in denial about the comparative standing of British universities will read Professor Oswald's article.

Wednesday, December 12, 2007

Student Faculty Ratios

Something especially striking about the THES-QS rankings this year is that British universities have done spectacularly well overall while getting miserable scores, comparatively speaking, on the citations section. We have to remember that this component does not measure the absolute number of citations but the number per faculty. It is then worth investigating whether the high scores for student faculty ratios are the result of inflated faculty numbers which have also led to reduced scores for citations per faculty. First, I want to look at the faculty data for the top British and American universities.

Cambridge

Looking at the QS website we find that they claim that Cambridge has a total of 3,765 Full Time Equivalent (FTE) faculty. The data was entered on 23/8/07 by Saad Shabbir, presumably an employee of QS.

Going to the Cambridge site we find that as of July, 2005, Cambridge had 1,558 academic staff, 1,167 academic-related staff (presumably in computers, administration, libraries and so on and probably also research) and 2,497 contract research staff. Adding the first and third categories and leaving out the second, gives us 4,055, close to QS’s figure for total faculty.

It seems reasonable then to conclude that QS added academic staff to research contract staff and made an adjustment to arrive at a Full Time Equivalent (FTE) number to come up with the total faculty. No doubt they got more up to date information than is available on the university website.

With 18,309 FTE students this gives us a student faculty ratio of 4.9. This is much better than the data from third party sources. The Higher Education Statistics Agency (HESA) provides a figure of 11.9.

It looks like QS have counted both teaching staff and contract research staff who do little or no teaching as faculty members.

Oxford

According to QS Oxford has 3,942 FTE faculty (data entered by Saad Shabbir 21/08/07) and 18,667 FTE students, a ratio of 4.7 students per faculty.

According to Oxford there were (July 2006) 1,407 academic staff, 612 in administration and computing, 169 library and museum staff, 753 in university funded research, 2,138 in externally funded research and 15 in self-funded research (all FTE). All this adds up to 4,094, very close to QS’s figure. It seems that for Oxford, QS has included research and other staff in the total faculty.

According to HESA Oxford has 13 students per faculty.


Imperial College London

The QS site indicates that Imperial has 2,963 FTE faculty and 12,025 FTE students (data entered by Saad Shabbir 21/08/07), a ratio of 4.06.

The Imperial site indicates 1,114 academic staff and 1,856 research staff (FTE 2006-7), a total of 2,970 academic and research staff combined. It would seem that QS have again counted research staff as faculty. This site refers to a 12,509 student load and a student staff ratio of 11.2. The HESA ratio is 9.4.

Harvard

According to QS, the Harvard faculty headcount is 3,369 (data entered by Baerbel Eckelmann 8/07/07). There were 29,000 students by headcount (FTE 16,520). The headcount student faculty ratio is 8.6.

According to the United States News and World Report (USNWR), 8% of Harvard’s faculty are part-time. If part time means doing a quarter of a full time teaching load, this means that Harvard’s FTE faculty would be about 3,167. The FTE student faculty ratio would then be 5.2.

The Harvard site, however, refers to a much smaller number of faculty, 2,497 non-medical faculty, and to 20,042 students, making a ratio of 8.0. The USNWR indicates a ratio of 7 for Harvard (2005).


Something strange about QS’s data is that it refers to a headcount of 13,078 and 3,593 FTE undergraduates. This is something that definitely needs explaining.


Yale

According to QS, the number of faculty by headcount is 3,248. The number of students is 11,851 by headcount and 10,845 FTE. The headcount student faculty ratio is then 3.6.

According to the Yale site, there are 3,384 faculty and 11,358 students, a ratio of 3.4. (All figures from the 2006-7 academic year.)

For the fall of 2006 the faculty headcount included:

Tenured faculty 906

Term 966

Nonladder 903

Research 609

The USNWR ratio for Yale is 6.

Princeton

According to QS, the faculty headcount was 1,263 (entered by Baerbel Eckelmann 09/07/07). The number of students was 6,708 by headcount and 6,795 FTE. The headcount ratio is 5.3.

According to the Princeton site, there are more than 850 FTE faculty and 7,055 students, a ratio of 8.3. USNWR has a ratio of 5.

Conclusion

It seems that QS’s policy is to include any sort of research staff, whether or not they do any teaching, in the category of faculty. In some cases, other professional non-teaching staff are also included. This produces student faculty ratios that are noticeably better than those that can be calculated from, and sometimes specifically stated in, the universities’ web sites or that are provided by other sources. It looks as though British universities have benefited from this more than their American counterparts.

This means, very ironically, that this measure, which is supposed to be a proxy for teaching quality, is to a large extent a reflection of a university’s commitment to research since the employment of large numbers of researchers, or even librarians and computer programmers, would lead to an improvement in this ratio.
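A schematic illustration of that irony, with all figures invented for the purpose, shows how the same broad faculty definition helps one indicator while hurting the other:

```python
# Hypothetical university: every number below is invented for illustration.
students, citations = 18000, 250000
teaching_staff, research_staff = 1500, 2500

# Counting only teaching staff as faculty:
ratio_narrow = students / teaching_staff      # 12.0   (looks worse)
cites_narrow = citations / teaching_staff     # ~166.7 (looks better)

# Counting research staff as faculty too, as QS appears to do:
faculty_broad = teaching_staff + research_staff
ratio_broad = students / faculty_broad        # 4.5  (looks better)
cites_broad = citations / faculty_broad       # 62.5 (looks worse)
```

The broad definition improves the student faculty ratio and simultaneously depresses citations per faculty, exactly the pattern noted above for the leading British universities.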


It also looks as though leading British universities are favoured disproportionately by this procedure, although a definite conclusion would have to await more extensive analysis.


I think that we can put forward a working hypothesis that British universities have been ascribed inflated faculty numbers and that this contributes to high scores for teaching quality as measured by student faculty ratio and to low scores for research as measured by citations per faculty.

Sunday, December 09, 2007

Macquarie Update

Here is a little bit about Steven Schwartz, Vice-Chancellor of Macquarie University, that I just came across at Wikipedia.

Schwartz was named one of the 100 highest cited researchers in his field and he received many recognitions including a World Health Organisation Fellowship, a NATO fellowship and the Australian Academy of Science-Royal Society (London) Exchange Fellowship. He was elected by his peers to the Academy of Social Sciences and he was elected Morris Leibovitz Fellow at the University of Southern California. Schwartz is a Fellow of the Royal Society of the Arts, the Australian Institute of Company Directors, and the Australian Institute of Management. He was a visiting Fellow of Wolfson College, Oxford and he won the Brain Research Award of the British Red Cross Society. He was elected the first President of Sigma Xi, The Scientific Research Society in Australia and was awarded the distinguished Career Scientist Award by the National Institutes of Health. He served on the editorial boards of many scientific journals and was a fellow of many learned societies.

I assume that Wikipedia is not in error and that Dr. Schwartz does in fact have a highly distinguished research and academic record.

It is therefore very surprising that Dr. Schwartz has apparently shown an extreme degree of carelessness. He has stated in the Sydney Morning Herald (SMH) that Macquarie University's fall in this year's THES-QS rankings was because there was a change in the weighting that the rankings gave to the international students section. He also said that this was the reason for LSE's fall.

At the risk of being repetitive, let me point out again that the weighting of the international students section has nothing to do with Macquarie's fall. It was five per cent in 2004, 2005 and 2006, and it is five per cent in 2007. LSE fell because the consultants began using Z scores this year. This is a common statistical technique that has the effect of smoothing out scores. LSE fell, not because of any change in the weighting, but because other universities lagging behind on this measure got more points this year and therefore overtook LSE in the overall ranking.

I will repeat again that Macquarie fell in the rankings firstly because of a poor showing, like several other Australian universities, on the "peer review". This might have resulted from fewer responses from Australian universities this year or from respondents not being allowed to vote for their own universities or a combination of the two.

There was also a fall in its placing for international faculty. The overall effect of this was limited by the small weighting given to this criterion.

There was a fall in the citations per faculty section matched by a similar rise in the student faculty ratio. These two changes, which effectively cancelled each other out, might have been caused by a decrease in the reported number of faculty which would have a good effect on citations per faculty and a bad effect on the student faculty ratio.

It is also possible that the high score for international faculty in 2006 might also have resulted from a low reported figure for total faculty.

I would like to ask a few questions.

Did Dr Schwartz read the THES's description of its methodology?

If he did, did he really misunderstand the description?

Dr Schwartz is reported to get a bonus of A$100,000 when Macquarie rises in the rankings. Why did this not encourage him to read about the methodology of the rankings carefully?

Why did SMH allow Dr Schwartz to publish an article in which he criticised the newspaper for not referring to this change in the rankings' weighting, when there is in fact no such change?

Will SMH point out to Dr Schwartz that there was no change in the weighting and request an apology from him?

Will Dr Schwartz investigate how QS gave Macquarie such a high and presumably incorrect score for international faculty in 2006?

Thursday, December 06, 2007

The Politics of Rankings: The Case of Macquarie

The Sydney Morning Herald has an article by Steven Schwartz, Vice-Chancellor of Macquarie University. He begins by arguing that university rankings cannot capture the full complexity of a large modern university. A good point, although it would have been more convincing had it been made before rather than after Macquarie's spectacular fall in the THES-QS rankings.

Schwartz goes on to say that:

Although those who work in universities know the pace of change is glacial, university rankings can change dramatically. For example, the Times Higher Education Supplement in Britain dropped Macquarie more than 80 places down the ranks in one year - front-page news in this newspaper. Was the previous ranking incorrect? Is the present one more accurate? The answer in both cases is no.

The changed ranking resulted from a decision by the publication to reduce the weight given to international students, so that many universities with large international enrolments dropped down the rankings. The prestigious London School of Economics dropped from 17 to 59. By omitting mention of this change in method, the Herald's report on November 9 produced more heat than light.

This is an extraordinary claim. There has been no change whatsoever in the weighting given to international students in the THES-QS rankings. It is five per cent this year just as it has been since 2004.

Macquarie has fallen in the rankings for two reasons. First it fell from 93rd position in the survey of academic opinion to 142nd (among the overall top 400 universities). This could be because QS, the consultants who collect the data for the rankings, did not allow respondents to vote for their own institutions this year or because the number of respondents from Australia was lower.

Second, in 2006 Macquarie was in first place for international faculty, meaning that QS must have thought that at least half of Macquarie's faculty were international. This year the rankings have Macquarie in 55th place for international faculty. This represents, according to the QS website (registration required), a figure of 25% for international faculty.

Dr Schwartz would be well advised to find out how QS received incorrect information about international faculty in 2006.

The international students section had nothing at all to do with Macquarie's fall.

Dr Schwartz would probably claim that he has better things to do than read about the methodology of the rankings. I would entirely sympathise with him, although perhaps he should be more careful when writing about them or hire an assistant who would read them carefully.

Wednesday, December 05, 2007

Reactions to the Rankings

There is a very interesting article by Moshidi Sirat at GlobalHigherEd. He notes that reactions to the latest THES-QS rankings in the UK have varied widely. There is a lot of scepticism there but many universities are developing explicit strategies to boost their performance, with the aim of recruiting more international students. In Australia, there has been much debate, especially among universities that did not do so well. Brazilian universities do not seem to have shown much interest.

Sirat also notes:

A colleague in France noted that the manner Malaysia, especially the Malaysian Cabinet of Ministers and the Parliament, reacted to Times Higher rankings is relatively harsh. It appears that, in the specific case of Malaysia, the ranking outcome is being used by politicians to ‘flog’ senior officials governing higher education systems and/or universities. And yet critiques of such ranking schemes and their methodologies (e.g., via numerous discussions in Malaysia, or via the OECD or University Ranking Watch) go unnoticed. Malaysia better watch out, as the world is indeed watching us.


In a little while I hope to comment on the relative performance of Malaysian universities over the last few years. Reality is very different from the alleged ongoing decline presented by THES-QS.

Tuesday, December 04, 2007

Something about the QS "Peer Review"

QS's topuniversities site (registration required) has some information about the 2007 survey of academic opinion, which they insist on calling a peer review and which carries a 40% weighting in the THES-QS world university rankings.


First, there is a list of the subject areas and the number of respondents in 2007:

All areas 43
Arts and Humanities 312
Engineering and IT 810
Life Sciences and Biomedicine 339
Natural Sciences 776
Social Sciences 715

There is also a section that cross-references the respondents' chosen geographical region of expertise and their subject area.

An interesting item is the current location of the respondents:


United States 307
Italy 174
United Kingdom 171
New Zealand 125
Canada 123
Australia 108
India 106
Malaysia 99
Germany 92
Belgium 79
Singapore 76
France 74
Spain 70
Japan 58
Hong Kong 57
Philippines 56
Sweden 52
China 50
Ireland 47
Switzerland 46
Austria 41
Denmark 37
Indonesia 36
Brazil 33
Turkey 33
Portugal 27
Mexico 26
Poland 26
South Korea 25
Argentina 23
South Africa 23
Greece 22
Iran 22
Russia 21
Taiwan 21
Netherlands 19
Thailand 19
Finland 16
Other 586

Is it possible to keep a straight face while maintaining that this is a representative sample of international academic opinion? More respondents from the UK and Italy combined than from the US. More from New Zealand than from Germany. Almost the same number from Hong Kong as from Japan. More from Ireland than from Russia. More from Belgium than from France.

Monday, December 03, 2007

Manipulating the Rankings?

There is a very interesting post at Wouter on the Web. I thought it worth pasting all of it.

From the university newspaper of Groningen we get some interesting insights in the way Groningen University has optimized their data for submission to the THES rankings. Deemed not to be important, the rector nevertheless wanted Groningen University to score better in the THES-QS rankings. For the rector, the first notation in the top 200 of the THES rankings, 173 to be exactly, was a good reason to celebrate with his subordinates.

What did they do? They concentrated on the questions of the most favourable number of students. The number of PhD students was a number they could play with. In the Netherlands PhD students are most often employed as faculty, albeit they are students as well to international standards. They contemplated on the position of the researchers in the University hospital. This would increase the number of staff considerably and thus lower the student/faculty ratio, but on the other hand this could have an important effect on the number of citations per research staff as well. Increases in staff number will lower the citations per staff. Which is detrimental to the overall performance. However, if they only could guarantee that citations to hospital staff were included in the citation counts as well?

So in Groningen they have exercised through some scenarios of number of students, number of staff, student/staff ratio and citations/staff ratio to arrive at the best combination to enhance their performance. I really do wonder if the contact between Groningen and QS -the consultants establishing the rankings- did also lead to the improvement of the search for citations by including the University Hospital for the university results. It is known from research by CWTS that searches for papers from all parts of the university are notoriously difficult. Especially to include the papers produced by staff from the teaching hospitals. In Groningen they have the feeling that it helped what they did in their contacts with QS. Well, at least it resulted in a nice picture on their university profile page.

Optimization or manipulation? It is only a thin line. If you only could make sure that all staff of your university would use the proper name of the institution in the authors affiliation. The university would gain a lot.

Chris Patten, Oxford and the Rankings

On 21st November, the London Spectator had an article by Chris Patten, former governor of Hong Kong, that referred to the THES-QS international university rankings and the high place that they give to Oxford and Cambridge.

In the last month, another respected international survey placed Oxford and Cambridge joint second to Harvard in the league table of world-class universities. This confirms what others have suggested in recent years. Moreover, other British universities — most notably London’s Imperial College and University College — came out high on the list. There are, alas, too few areas of our national life — the armed forces, the City of London, our diplomatic service — where we do as well in global comparisons. And it matters.


Patten suggests that the strength of Oxford and Cambridge lies in the balance between the colleges and the universities and that this is reflected in their performance in the rankings.

Patten is not the first to comment on the apparently excellent performance of British universities, Oxford and Cambridge especially, compared to other national institutions, not least the increasingly pathetic football team. There is a slight touch of desperation here. Even if we can’t beat Croatia or Macedonia, at least Cambridge and Oxford can still run rings around the Universities of Zagreb or Skopje or even Berkeley or Johns Hopkins. Nor is he the first to refer to the THES-QS rankings when commenting on the question of reorganising major British universities. The rankings have, for example, been used to bolster Imperial College London’s claim to become fully independent of the University of London.

But there are some very dubious claims here. I am not sure what Patten is referring to when he talks about another respected international survey. It is certainly not the Shanghai rankings which have Oxford in tenth place and Cambridge in second overall by virtue of long dead Nobel laureates and much lower down by more contemporary criteria.

As for being respected, while the THES-QS rankings are avidly followed in Australia and Southeast and East Asia and routinely used in advertising by British universities, they are usually ignored or politely dismissed by American schools. Washington University in St Louis has apparently not even noticed that QS think that they have done almost no research at all over the last few years.

In fact, even QS does not provide much evidence that Oxford is a world-beater. In 2006 it did extremely well on the peer review and very well on the recruiter review but posted a mediocre performance on everything else, especially research as measured by citations per faculty (63rd), behind the Hebrew University of Jerusalem, the Tokyo Institute of Technology and the University of Naples 2.

In fact, it isn’t that Oxford is that bad at research but that QS apparently inflated the numbers of faculty to get a good faculty student ratio at the cost of an unrealistically bad score for research. Still, it looks as though for research Oxford is now trailing around the middle to bottom of the Ivy League.

And this year? QS has introduced a new scoring system that in effect compresses scores at the top. So, Oxford did well on nearly everything, with 100 for peer review, recruiter review and student faculty ratio, 97 for international faculty and 96 for international students. This does not mean very much. The better universities now get high marks for just about everything. So does Oxford, but again, according to QS, it lags behind on citations per faculty, in 85th place behind Colorado State University, Showa University and the Georgia Institute of Technology.

Again, the problem probably is not that Oxford researchers are doing little research or not getting cited enough but that QS is using an inflated faculty figure.

Still it seems clear that Oxford’s position in the rankings is derived from a dubious “peer review” and from a scoring system that blurs differences at the top of the scale. It is not a result of measured research excellence. The THES-QS rankings are simply covering up the relative decline of Oxford and Cambridge.

Tuesday, November 27, 2007

Aston Business School

And what is Aston Business School doing in the THES-QS rankings at 266th place?

The THES-QS Rankings: Citations per Faculty

This year there have been two main changes in this section of the World University Rankings. First, a new database has been used. Second, as in the other sections, scores have been converted into Z scores.

The use of the Scopus database, which is run by the Dutch-based publishing company Elsevier, is questionable. QS correctly state that, with over 15,000 journals and many other sources, it is generally more inclusive than the ESI database, which was used in previous years. A more comprehensive database is, however, not necessarily a better one if the objective is to evaluate quality as well as quantity of research. The Scopus database includes 785 conference proceedings and 703 trade journals out of 25,483 titles. Such items are likely to be subject to a much less rigorous process of review, or perhaps to none at all. Furthermore, 7,972 of the titles are listed as inactive.

It is possible therefore that the shift to Scopus means that a lot of mediocre or inferior research is being counted. Whether this is desirable in a measure of quality is debatable.

The most obvious feature of the Scopus database is its geographical bias. Here are the numbers of titles from selected countries:

US 8,090
UK 4,968
Netherlands 2,184
Germany 1,878
Japan 1,174
France 748
Australia 667
Canada 576
Switzerland 491
Russia 429
Korea 208
Belgium 39
Singapore 121
Taiwan 119
Hong Kong 59

In relation to population, number of universities, output of research, quality of research or almost anything else the UK appears overrepresented in relation to the USA. There are also, perhaps not surprisingly, many more journals from the Netherlands than from Belgium or countries with a similar population.

The citations per faculty section is now as biased towards the UK as the “peer review”. With a 40% weighting given to the "peer review", in which in 2006 UK respondents alone amounted to 71% of the number from the US, and a 20% weighting given to a citations count in which UK titles alone amount to 61% of the number from the USA, it is difficult to avoid the conclusion that this is a blatant exercise in academic gerrymandering.

What is more, this measure seems to have little validity. Looking at the rankings in relation to the other criteria, we find that the correlations are very low and usually insignificant.

“Peer review” .260
Employer review -.008
Faculty student ratio .088
International faculty .018
International students .039

The only significant correlation, and a slight one, is with the "peer review". There is, then, no association between a university's performance on this criterion and four of the five others. In 2006, when the ESI database was used, the correlations were much stronger:

"Peer review” .480
Employer review .348
Faculty student ratio .135
International faculty -.045
International students .094

There is also a very modest correlation of .467 between the citations per faculty in 2006 and in 2007 (among the 174 universities that were in the top 200 in both years). It seems that Alejandro Pisanty is quite correct when he says that these look like two completely different sets of data.

“The canvassing of publications and citations seems to bring results which are so different, using Scopus instead of the Thomson/ISI products of the last three years, and the changes are in such way non-uniform among institutions, that it seems appropriate to consider the new version really a new ranking. There will hardly be any comparability with the previous years.”
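The post does not say how these correlations were computed; assuming they are Pearson coefficients over per-criterion scores, a minimal sketch of the calculation looks like this (the input numbers below are invented for illustration, not the actual ranking data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-university scores on two criteria -- illustrative
# numbers only, not taken from the rankings.
citations   = [100, 96, 84, 53, 14]
peer_review = [90, 100, 70, 60, 40]
print(round(pearson(citations, peer_review), 3))
```

A coefficient near zero, as with four of the five criteria above, means knowing a university's citations score tells you essentially nothing about its score on the other measure.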



Furthermore, there are some entries here that look a bit strange. Is the University of Alabama really the fifth best university in the world on this measure, Pohang University of Science and Technology the 12th, Rensselaer Polytechnic Institute the 36th? And do leading British universities really deserve to be so low, with Cambridge in 80th place and Imperial College in 86th? Certainly, they are grossly overrated in the “peer review”, but are they as bad at research as this data suggests?

There are also dramatic and suspicious changes from 2006. Cambridge is down from 47th place to 80th, the National University of Singapore up from 160th to 74th, Kuopio (Finland) down from 14th to 70th, and the Tokyo Institute of Technology up from 58th to 29th.

So, we now have a database that emphasises quantity rather than quality, which has an even more pronounced pro-British and anti-American bias, and which is noticeably lacking in validity.

I will conclude by returning to the extraordinarily poor performance of Cambridge, Oxford and Imperial on this criterion, below previous years and well below their performance on the more contemporary parts of the Shanghai Jiao Tong rankings.

I wonder whether at least a part of the answer can be found in the faculty part of the equation. Is it possible that faculty numbers in British universities have been inflated to give a high score for student faculty ratio at the price, probably a very acceptable one, of driving down the score for citations per faculty?

Wednesday, November 21, 2007

What has Really Happened to Malaysian universities?

Three years ago the administration at Universiti Malaya (UM) was celebrating getting into the world's top 100 universities according to the THES-QS rankings. A year later it turned out that it was just the result of one of many errors by QS Quacquarelli Symonds, the consultants who produced the rankings.

Now it looks as though the same thing is happening all over again, but in the opposite direction.

This year four Malaysian universities have fallen in the rankings. UM and Universiti Kebangsaan Malaysia (UKM) went right out of the top 200.

Commentators have listed several factors responsible for the apparent slide and proposed remedies. Tony Pua at Education in Malaysia says that

our universities are not competitive, are not rigourous in nature, do not promote and encourage merit and the total lack of transparency in admission and recruitment exercises served the perfect recipe for continual decline in global recognition and quality

According to the Malaysian opposition leader, Lim Kit Siang

JUST as Vice Chancellors must be held responsible for the poor rankings of their universities, the Higher Education Minister, Datuk Mustapha Mohamad must bear personal responsibility for the dismal international ranking of Malaysian universities - particularly for Malaysia falling completely out of the list of the world’s Top 200 Universities this year in the 2007 Times Higher Education Supplement (THES)-Quacquarelli Symonds (QS) World University Rankings.

An article in the Singapore Straits Times reported that

eminent academician Khoo Kay Kim felt there was too much emphasis on increasing the number of PhD holders, instead of producing quality doctorate graduates. 'If this goes on, next year I expect the rankings to slip further,' he said.

Everyone seems to assume that the decline in the rankings reflects a real decline or at least a lack of quality that the rankings have finally exposed.

But is this in fact the case?

To understand what really happened it is necessary to look at the methodological changes that have been introduced this year.

QS have done four things. They have stopped respondents to their "peer review" from selecting their own institutions. They are using full-time equivalent (FTE) numbers for staff and students instead of counting heads. They now use the Scopus database instead of ISI. And they use Z scores: the mean of all scores is subtracted from each raw score, the result is divided by the standard deviation, and the resulting figures are then normalised so that the mean score becomes 50.
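The Z-score procedure just described can be sketched in a few lines. This is a minimal illustration of the method as summarised here, not QS's actual code; the raw figures are invented, and the rescaled spread (here 10 points) is an assumption, since QS only say the mean becomes 50:

```python
from statistics import mean, pstdev

def z_normalise(raw_scores, target_mean=50, target_sd=10):
    """Convert raw scores to Z scores, then rescale so the mean is 50.

    target_sd is an assumption -- the published description only says
    the mean score is converted to 50.
    """
    m = mean(raw_scores)
    sd = pstdev(raw_scores)
    return [target_mean + target_sd * (x - m) / sd for x in raw_scores]

# Invented raw citation counts: one extreme outlier and a middling pack.
raw = [5000, 600, 550, 500, 120]
print([round(s, 1) for s in z_normalise(raw)])
```

Run on figures like these, the outlier ends up only a few standard deviations above the pack rather than ten times its score, which is exactly the compression at the top discussed below.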

The prohibition on voting for one's own university would seem like a good idea if it were also extended to voting for one's alma mater. I suspect that Oxford and Cambridge are getting, and will continue to get, many votes from their graduates in Asia and Australia, which would seem just as bad as picking one's current employer.

Using FTEs is not a bad idea in principle. But note that QS are still apparently counting research staff as faculty, giving a large and undeserved boost to places like Imperial College London.

I am sceptical about the shift to Scopus. This database includes a lot of conference papers, reviews and so on, and therefore not all of the items included will have been subject to rigorous peer review. It might therefore include research of a lower quality than that in the ISI database. There are also some strange things in the citations section this year. The fifth best university for citations is the University of Alabama. (You have to look at the details for the top 400 to see this because it is not in the overall top 200.) According to QS's data, the Ecole Normale Superieure in Paris is better than Harvard. Oxford is down at number 85, which seems a bit too low. It could be that the database is measuring quantity more than quality, or perhaps there have been a number of errors.

Using Z scores is a standard practice among other rankers but it does cause huge fluctuations when introduced for the first time. What Z scores do is, in effect, to compress scores at the top and stretch them out lower down the rankings. They make it easier to distinguish among the mediocre universities at the price of blurring differences among the better ones.

So how did Malaysian universities do in 2007?

There is no point in looking at the scores for the various criteria. The introduction of Z scores means that scores in 2006 and 2007 cannot be compared. What we have to do is to work out the relative position of the universities on each component.

[Two days ago the scores for the six components of the rankings were available (registration required) at the QS topuniversities site for the top 400 universities. They could not be accessed today, which might mean that the details for the 400-500 universities are being prepared or, just possibly, that errors are being corrected.]

Peer review
In 2006 UM was 90th for peer review among the top 400 in that year. In 2007 it fell to 131st position among the top 400.

Recruiter rating
In 2006 it was 238th for recruiter rating. In 2007 it rose to 159th place.

Student faculty ratio
In 2006 it was in 274th place for student faculty ratio. In 2007 it rose to 261st place.

International faculty
In 2006 it was 245th for international faculty. In 2007 it rose to 146th place.

International students
In 2006 it was 308th for international students. In 2007 it rose to 241st place.

Citations per faculty
In 2006 it was 342nd for citations per faculty. In 2007 it fell to 377th place.


This means that UM did much better compared to other universities on the following measures:

  • Recruiter rating
  • Student faculty ratio
  • International students
  • International faculty

It did somewhat worse on two items, peer review and citations. But notice that the numbers of places by which it fell are much smaller than the numbers of places by which it rose, except in the case of student faculty ratio, where the rise was slight.

The peer review was given a weighting of forty per cent and this meant that the modest fall here cancelled out the greater rises on the other sections.

It was, however, the citations part that scuppered UM this year. Without this, it would have remained roughly where it was. Basically, falling from position 342 to 377 meant losing a bit more than 30 points on this section, or about six points on the total score, sufficient to eject UM from the top two hundred.
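The arithmetic can be checked against the component weightings. The 40% (peer review) and 20% (citations) figures appear in this post; the remaining weights below are the commonly reported THES-QS figures and should be treated as assumptions:

```python
# THES-QS component weightings: 40% and 20% are stated in this post;
# the other figures are the commonly reported weights and are assumptions.
WEIGHTS = {
    "peer_review": 0.40,
    "recruiter_rating": 0.10,
    "student_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
    "citations": 0.20,
}

def overall(scores):
    """Weighted total from per-component scores on a 0-100 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A loss of just over 30 points on citations alone costs about six
# points overall: 30 * 0.20 = 6.
print(30 * WEIGHTS["citations"])
```

Six points is a large margin near the cut-off, where Z-scoring packs universities closely together.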

Why should such a modest fall have such dire consequences?

Basically, what happened is that Z scores, as noted earlier, compress scores at the top and stretch them out over the middle. Since the mean for the whole group is normalised at fifty, and since the maximum score is a hundred, an institution like Caltech will never get more than twice as many points as a university scoring around the mean, even if it produces, as in fact it does, ten times as much research.

So, in 2006 Caltech scored 100, Harvard 55, the National University of Singapore (NUS) 8, Peking University 2 and UM 1.

Now in 2007, Caltech gets 100, Harvard 96, NUS 84, Peking 53 and UM 14.

The scores have been stretched out at the bottom and compressed at the top. But there has almost certainly been no change in the underlying reality.

So what is the real position? UM, it seems, has, relative to other universities, recruited more international staff and admitted more international students. Its faculty student ratio has also improved very slightly. The employers contacted by QS think more highly of its graduates this year.

This was all cancelled out by the fall in the "peer review", which may in part have been caused by the prohibition on voting for the respondent's own institution.

The real killer for UM, however, was the introduction of Z scores. I'll leave it to readers to decide whether a procedure that represents Caltech as only slightly better than Pohang University of Science and Technology and the University of Helsinki is superior to one that gives Peking only twice as many points as UM.

The pattern for the other Malaysian universities was similar, although less pronounced. It is also unfortunately noticeable that UKM got a very high score for international faculty, suggesting that an error similar to that of 2004 has occurred.

What is the real situation with regard to Malaysian universities? Frankly, I consider the peer review a dubious exercise and the components relating to student faculty ratio and internationalisation little better.

Counts of research and citations produced by third parties are, however, fairly reliable. Looking at the Scopus database (don't trust me -- get a 30-day free trial), I found that 1,226 research papers (defined broadly to include things like reviews and conference papers) by researchers affiliated to Malaysian universities and other institutions were published in 2001 and 3,372 in 2006. This is an increase of 175% over five years.

For Singapore the figures are 5,274 and 9,630, an increase of 83%.

For Indonesia they are 511 and 958, an increase of 87%.

For Australia they are 25,939 and 38,852, an increase of about 50%.

For Japan they are 89,264 and 103,428, an increase of 16%.
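These percentages follow directly from the paired counts and can be checked with a few lines (the counts are as quoted above; the rounding convention is mine):

```python
# Scopus paper counts for 2001 and 2006, as quoted in the post.
counts = {
    "Malaysia":  (1226, 3372),
    "Singapore": (5274, 9630),
    "Indonesia": (511, 958),
    "Australia": (25939, 38852),
    "Japan":     (89264, 103428),
}

for country, (y2001, y2006) in counts.items():
    # Percentage growth over the five-year span.
    growth = (y2006 - y2001) / y2001 * 100
    print(f"{country}: {growth:.0f}% increase")
```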

It is of course easy to grow rapidly when you start from a low base and these statistics say nothing about the quality of the research. Nor do they distinguish between a conference paper with a graduate student as sixth author and an article in a leading indexed journal.

Still, the picture is clear. The amount of research done in Malaysia has increased rapidly over the last few years, and has increased more rapidly than in Singapore, Japan and Australia. Maybe Malaysia is not improving fast enough, but it is clear that there has been no decline, either relative or absolute, and that the THES-QS rankings have, once again, given a false impression of what has happened.





