Friday, January 07, 2011

Value for Money

An article by Richard Vedder describes how data published by Texas A and M University reveal enormous variation in what individual faculty members cost per student taught.

I recently asked my student research assistant to explore this data by choosing, more or less at random, 40 professors at the university's main campus at College Station — in each department, one highly paid professor with a very modest teaching load and one modestly paid instructor who teaches many students. The findings were startling, even for a veteran professor like myself.


The 20 highly paid professors made, on average, over $200,000 each, costing the university a little over $5 million annually. They collectively taught 125 students last year, which works out at roughly $40,000 per student taught. Since a typical student takes about 10 courses a year, the cost of educating a student exclusively with this group of professors would be about $400,000, excluding all costs beyond faculty salaries.
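For what it is worth, the arithmetic can be laid out in a few lines of Python. This is purely illustrative and simply restates the figures quoted above.

```python
# A rough reconstruction of the arithmetic in the passage above; the salary total,
# student count and ten-course load are the figures quoted there.
total_salaries = 5_000_000        # "a little over $5 million" for the 20 professors
students_taught = 125             # students they collectively taught last year
courses_per_year = 10             # a typical student's annual course load

cost_per_enrolment = total_salaries / students_taught            # roughly $40,000 per student taught
cost_per_student_year = cost_per_enrolment * courses_per_year    # roughly $400,000 if taught only by this group

print(f"${cost_per_enrolment:,.0f} per student taught")
print(f"${cost_per_student_year:,.0f} per student per year")
```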
There are, of course, questions to be asked about whether the data include dissertation supervision and the difficulty of the courses taught. Even so, the results deserve close scrutiny and might even serve as a model for some sort of international comparison.

Tuesday, January 04, 2011

Dumbing Down of University Grades

An article in the London Daily Telegraph shows that the number of first and upper second class degrees awarded by British universities has risen steadily over the last few decades. Their value to employers as an indicator of student quality has accordingly diminished.

David Barrett reports that:


The latest data shows that the criteria for awarding degrees has changed dramatically - despite complaints from many universities that grade inflation at A-level has made it hard for them to select candidates.

Traditionally, first class honours have been awarded sparingly to students who show exceptional depth of knowledge and originality.


But the new figures add further weight to a report by MPs last year which found that "inconsistency in standards is rife" and accused vice-chancellors of "defensive complacency".

We might note that the THE-QS rankings up to 2009, and the QS rankings of last year, have probably done quite a lot to encourage complacency by consistently overrating British universities, especially Oxbridge and the London colleges.

Thursday, December 23, 2010

Top US Colleges by Salary

PayScale has published its ranking of US colleges by mid-career median salary. The top five are:

1. Harvey Mudd College
2. Princeton
3. Dartmouth
4. Harvard
5. Caltech

The top schools in various categories are:

Engineering: Harvey Mudd
Ivy League: Princeton
Liberal Arts: Harvey Mudd
Party Colleges: Union College, NY
Private Research Universities: Princeton
State Universities: Colorado School of Mines

Saturday, December 04, 2010

Can 25 Million Citations be Wrong?

Perhaps not, but a few hundred might be.

University World News has an article by Phil Baty, deputy editor of Times Higher Education, that discusses the recent THE World University Rankings. He is mainly concerned with the teaching component of the rankings, which I hope to discuss in a little while. However, there are some remarks about the citation component that are worth commenting on. He says:

"We look at research in a number of different ways, examining research reputation, income and research volume (through publication in leading academic journals indexed by Thomson Reuters). But we give the highest weighting to an indicator of 'research influence', measured by the number of times a university's published research is cited by academics around the globe.


We looked at more than 25 million citations over a five-year period from more than five million articles. All the data were normalised to reflect variations in citation volume between different subject areas.


This indicator has proved controversial, as it has shaken up the established order, giving high scores to some smaller institutions with clear pockets of research excellence, often at the expense of the larger research-intensive universities.


We make no apology for recognising quality over quantity, but we concede that our decision to openly include in the tables the two or three extreme statistical outliers, in the interests of transparency, has given some fuel for criticism, and has given us some food for thought for next year's table."
First, normalisation of the data means that the number of citations is compared to a benchmark derived from the world average number of citations for a subject area. A large number of citations might mean that an article has been warmly received. It might equally well mean that the article was in a field where articles are typically cited a lot. Comparing simple numbers of citations in a field like literary studies to those in medical research would be unfair to the former, since citations there are relatively scarce. So part of the reason for Alexandria University's remarkable success in the recent THE WUR was not just the number of citations of Mohamed El Naschie's papers but also that he was publishing in a field with a low frequency of citations. Had he published papers on medicine, nobody would have noticed.
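To see why normalisation has this effect, here is a minimal sketch in Python. The benchmark figures are invented for illustration and are not Thomson Reuters' actual world averages.

```python
# A minimal sketch of field-normalised citation impact, assuming (as described above)
# that each paper's citations are divided by the world-average citation count
# for its subject area and publication year. The benchmark numbers are invented.

world_baseline = {
    ("mathematical physics", 2008): 1.5,   # hypothetical: low-citation field, recent year
    ("clinical medicine", 2008): 20.0,     # hypothetical: high-citation field
}

def normalised_impact(citations, field, year):
    """Citations relative to the world average for that field and year."""
    return citations / world_baseline[(field, year)]

# Ten citations to a recent mathematical-physics paper...
print(normalised_impact(10, "mathematical physics", 2008))   # 6.67 times the world average
# ...outweigh a hundred citations to a medical paper of the same year.
print(normalised_impact(100, "clinical medicine", 2008))     # 5.0 times the world average
```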

It is also very likely, although I cannot recall seeing direct confirmation, that Thomson Reuters benchmarked by year as well, so that a citation of a recently published article would count for more than a citation of an article published four years ago.

In the case of Alexandria and other universities that scored unexpectedly well we are not talking about millions of citations. We are talking about dozens of papers and hundreds of citations, the effect of which has been enormously magnified because the papers were in a low-citation field and were cited within months of publication.

Remember also that we are talking about averages. This means that the fewer papers a university publishes in ISI-indexed journals, the higher its score will be for any given set of highly cited papers. Alexandria did not do so well just because El Naschie published a lot and was cited a lot. It also did well because, overall, it published few articles. Had Alexandria researchers published more, its score would have been correspondingly lower.

Perhaps El Naschie constitutes a clear pocket of excellence, although that is not entirely uncontroversial. But he is a clear pocket of excellence who only became visible because of the modest achievement of the rest of the university. Conversely, there are probably many modestly cited researchers in Europe and the USA who might welcome a move to a university in Asia or Latin America where a few papers and citations in a low-citation discipline would blossom into such a pocket.

Is Alexandria one of two or three anomalies? There are in fact many more anomalies, perhaps not quite so obvious, and this can be seen by comparing scores for research impact with other rankings of research output and impact, such as HEEACT and Scimago, or with the scores for research in the THE rankings themselves. It would also be interesting if THE released the results of the academic reputational survey.

Consider what would happen if we had a couple of universities that were generally similar, with the same income, staff-student ratio and so on. One, however, had published two or three times as many ISI-indexed articles as the other. Both had a few researchers who had been cited more frequently than is usual for their discipline. Under the current system, the first university would get a much lower score than the second. Can this really be considered a preference for quality over quantity? Only if we think that publishing in ISI journals adds to quantity but does not indicate quality.

I hope that 'food for thought' means a radical revision of the citations indicator.

A minimal list of changes would include adding more markers of research impact, removing self-citations and citations from within the same university and the same journal, and combining the scores for the various disciplinary fields.

If this can be done, then the THE rankings may become what was promised.

Friday, December 03, 2010

Is there a Future for Citations?

simplification administrative has some caustic comments on the role of self-citation and reciprocal citation in the remarkable performance of Alexandria University in the 2010 THE rankings.

The title, 'Bibliometry -- already broken', is perhaps unduly pessimistic, but THE and Thomson Reuters are going to have to move quickly if they are to rescue their rankings. An obvious remedy would include removing self-citations and intra-university and intra-journal citations from the count.
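For illustration, here is a minimal sketch of such a filter, assuming a hypothetical record format with author, institution and journal fields; this is not Thomson Reuters' actual data model.

```python
# A sketch of the remedy suggested above: drop citations where the citing and cited
# papers share an author, an institution or a journal. The record format is hypothetical.

def is_independent(citing, cited):
    """True if the citation is not a self-, intra-university or intra-journal citation."""
    shared_authors = set(citing["authors"]) & set(cited["authors"])
    shared_institutions = set(citing["institutions"]) & set(cited["institutions"])
    same_journal = citing["journal"] == cited["journal"]
    return not (shared_authors or shared_institutions or same_journal)

def independent_citations(cited_paper, citing_papers):
    """Count only citations that pass the independence test."""
    return sum(1 for c in citing_papers if is_independent(c, cited_paper))

# Example: a shared author means the citation would be excluded from the count.
cited = {"authors": ["A. Author"], "institutions": ["Univ X"], "journal": "Journal A"}
citing = {"authors": ["A. Author"], "institutions": ["Univ Y"], "journal": "Journal B"}
print(is_independent(citing, cited))  # False
```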

Monday, November 29, 2010

Priorities

An article by William Patrick Leonard in the Korea Herald discusses how this year's rankings by Shanghai Jiao Tong University, THE and QS, whatever their faults, indicate a long-term shift in academic excellence from the USA and Europe to China and other parts of Asia.

"All three release their findings as the school year begins. Each employs a similar blend of indicators, which purportedly measure the relative quality of the institutions surveyed. All emphasize institutional reputation expressed in the quality and quantity of faculty publications, peer assessments, faculty/student ratios, budgets and other input quality measures.



These rankings are clearly flawed; for example, it is not evident that the volume of scholarly publications or peer assessments reflects quality in the classroom. Nevertheless, the rankings show that other fast-growing countries are willing to apply their resources to higher education, just as the United States has been doing for years."
He then observes how in the US sport seems to take precedence over education.

"Yet, instead of strengthening our academic programming, some are planning costly recreational diversions. In May, the National Football Foundation & College Hall of Fame announced that “six new college football teams are set to take the field for the first time this season with 11 more programs set to launch between 2011 and 2013.” Such an announcement is simultaneously sad and humorous. The resources spent to implement and subsequently prop up these programs could be used to improve technology, science, and engineering programs. Sadly, some institutions have opted for stadiums over instructional infrastructure."

Saturday, November 20, 2010

Comment on the New York Times Article

This is from Paul Wouters at CWTS, Leiden University

"However, the reason for this high position is the performance of exactly one (1) academic: Mohamed El Naschie, who published 323 articles in the Elsevier journal Chaos, Solitons and Fractals of which he is the founding editor. His articles frequently cite other articles in the same journal by the same author. On many indicators, the Alexandrian university does not score very high, but on the number of citations indicator the university scores 99.8 which puts it as the 4th most highly cited in the world. This result clearly does not make any sense at all. Apparently, the methodology used by the THES is not only problematic because it puts a high weight on surveys and perceived reputation. It is also problematic because the way the THES survey counts citations and makes them comparable across fields (in technical terms, the way these counts are normalized) is not able to filter out these forms of self-promotion by self-citations. In other words: the way the THES uses citation analysis does not meet one of the requirements of sound indicators: robustness against simple forms of manipulation."

BTW, to be a bit pedantic, it's not THES any more.
The Influence of Rankings

From an announcement about merit scholarships from The Islamic Development Bank.

"The successful candidate must secure admission to one universities listed in the Times Higher Education Supplement (THES)."

That obviously needs updating but does it mean Alexandria but not Texas?

Wednesday, November 17, 2010

Correction

A previous post, "Debate, anyone?", contained some data about comparative rates of self-citation among various universities. The methodology used was not appropriate (calculating the number of articles that contained self-citations as a percentage of the sum of citations). I am recalculating, although the relative level of self-citation among universities is most unlikely to be affected.
Article in New York Times

The New York Times has a long article, which is being quoted globally, about the THE World University Rankings. So far it has been cited in newspapers in Italy, Spain, France and Egypt, and no doubt there are more to come.

Sunday, November 14, 2010

Another Ranking

An organisation called Eduroute has produced a ranking of universities by "web quality". The idea sounds interesting, but we need to be given some more information. The top five are:

1. Harvard
2. MIT
3. Cornell
4. Stanford
5. UC Berkeley

After that there are some surprises, with National Taiwan University in 7th place, the University of the Basque Country 16th, Sofia University 44th and Yildiz Technical University 57th.

The structure of these rankings is as follows:

Volume (20%): "The volume of information published is measure by a set of commands that run on the major search engines."

Online scientific information (10%): "Eduroute measures this aspect through the search engines which specialize in publishing researches and scholarly articles and which search a university's website for all available publications."

Links quantity (30%): "Here Eduroute measures the number of incoming links whether these links are from academic or nonacademic websites."

Quality of links and contents (40%): "it was of great importance to measure this aspect of any website in order to reflect the true size of a university's website on the internet and to measure the degree in which the university is concerned with the quality of content it provides on its website."

This is rather vague and, since only rank order is given, there is no explanation for the high position of the University of the Basque Country, Yildiz Technical University and so on.

The location and personnel of Eduroute also remain mysterious. I did, however, receive this message:

"Eduroute is an organization involved in determining rankings for universities. We pride ourselves in offering people with a true collection of information that will assist them when it comes to classifying universities and their rankings.


When coming up with rankings for universities we put into consideration several parameters in order to come up with as accurate a conclusion as possible. This methodology is frequently evaluated and improved on to obtain a solid benchmark that can cast a true reflection of rankings for universities. Eduroute focuses on studying universities’ websites. We believe that the support and investment a university inputs into its website is proportional to the degree of interaction of the website and its users (students, staff and lecturers). The volume and content of the university’s website is analysed while also putting into consideration the traffic flow to the website. The number of external links leading to the university’s website is also a key factor as it is a reflection of how popular the site is. Such parameters and more are useful while determining the rankings for universities. Educational institutions also have the opportunity of registering with us so as to be included in the rankings.

At Eduroute we put all our energies in ensuring the rankings for universities we offer are as accurate as possible. We believe this information is of vital importance both to the general public and to the universities and as such we offer a professional service that satisfies both parties.


Regards,

May Attia


Project Manager

www.eduroute.info"

Friday, November 12, 2010

Article by Philip Altbach

Inside Higher Education has a substantial and perceptive article on international university rankings by Philip Altbach.

Towards the end there is a round-up of the current season. Some quotations:

On QS
Forty percent of the QS rankings are based on a reputational survey. This probably accounts for the significant variability in the QS rankings over the years. Whether the QS rankings should be taken seriously by the higher education community is questionable.

On ARWU

Some of ARWU’s criteria clearly privilege older, prestigious Western universities — particularly those that have produced or can attract Nobel prizewinners. The universities tend to pay high salaries and have excellent laboratories and libraries. The various indexes used also heavily rely on top peer-reviewed journals in English, again giving an advantage to the universities that house editorial offices and key reviewers. Nonetheless, ARWU’s consistency, clarity of purpose, and transparency are significant advantages.


On THE

Some of the rankings are clearly inaccurate. Why do Bilkent University in Turkey and the Hong Kong Baptist University rank ahead of Michigan State University, the University of Stockholm, or Leiden University in Holland? Why is Alexandria University ranked at all in the top 200? These anomalies, and others, simply do not pass the "smell test." Let it be hoped that these, and no doubt other, problems can be worked out.

Saturday, October 23, 2010

Navarra Round Table

Presentations by Nian Cai Liu, Zhuo Lin Feng, Daniel Torres-Salinas, Jean Rapp, Phil Baty and Isidro Aguillo at the Ranking Round Table in Navarra can be found here.

Monday, October 18, 2010

The THE-QS World Universities Rankings, 2004-2009

See here for a draft of an article on the THE-QS rankings.

Sunday, October 17, 2010

Debate, anyone?

Times Higher Education and Thomson Reuters have said that they wish to engage and that they will be happy to debate their new rankings methodology. So far we have not seen much sign of a debate, although I will admit that perhaps more was said at the recent seminars in London and Spain than got into print. In particular, they have been rather reticent about defending the citations indicator, which gives the whole ranking a very distinctive cast and which is likely to drag down what could have been a promising development in ranking methodology.

First, let me comment on the few attempts to defend this indicator, which accounts for nearly a third of the total weighting, and for more in some of the subject rankings. It has been pointed out that David Willetts, British Minister for Universities and Science, has congratulated THE on its new methodology.


“I congratulate THE for reviewing the methodology to produce this new picture of the best in higher education worldwide. It should prompt all of us who care about our universities to see how we can improve the range and quality of the data on offer. Prospective students — in all countries — should have good information to hand when deciding which course to study, and where. With the world to choose from, it is in the interests of universities themselves to publish figures on graduate destinations as well as details of degree programmes.”
Willetts has praised THE for reviewing its methodology. So have many of us, but that is not quite the same as endorsing what has emerged from that review.

Steve Smith, President of Universities UK and Vice-Chancellor of Exeter University is explicit in supporting the new rankings, especially the citations component.


But, as we shall see in a moment, there are serious issues with the robustness of citations as a measure of research impact and, if used inappropriately, they can become indistinguishable from a subjective measure of reputation.

The President of the University of Toronto makes a similar point, praising the new rankings’ reduced emphasis on subjective reputational surveys and referring to the citations (knowledge transfer?) indicator.


It might be argued that this indicator is noteworthy for revealing that some universities possess hitherto unsuspected centres of research excellence. An article by Phil Baty in THE of 16 September refers to the most conspicuous case, a remarkably high score for citations by Alexandria University, which according to the THE rankings has had a greater research impact than any university in the world except Caltech, MIT and Princeton. Baty suggests that there is some substance to Alexandria University’s extraordinary score. He refers to Ahmed Zewail, a Nobel prize winner who left Alexandria with a master’s degree some four decades ago. Then he mentions some frequently cited papers by a single author in one journal.

The author in question is Mohamed El Naschie, who writes on mathematical physics and the journals – there are two that should be given the credit for Alexandria’s performance, not one – are Chaos, Solitons and Fractals and the International Journal of Nonlinear Sciences and Numerical Simulation. The first is published by Elsevier and was until recently edited by El Naschie. It has published a large number of papers by El Naschie and these have been cited many times by himself and by some other writers in CSF and IJNSNS.

The second journal is edited by Ji-Huan He of Donghua University in Shanghai, China with El Naschie as co-editor and is published by the Israeli publishing company, Freund Publishing House Ltd of Tel Aviv.

An amusing digression. In the instructions for authors in the journal the title is given as International Journal of Nonlinear Sciences and Numerical Stimulation. This could perhaps be described as a Freundian slip.

Although El Naschie has written a large number of papers and these have been cited many times, his publication and citation record is far from unique. He is not, for example, found in the ISI list of highly cited researchers. His publications and citations were perhaps necessary to push Alexandria into THE’s top 200 universities but they were not enough by themselves. This required a number of flaws in TR’s methodology.

First, TR assigned a citation impact score that compares actual citations of a paper with a benchmark score based on the expected number of citations for a specific subject in a specific year. Mathematics is a field where citations are relatively infrequent and usually occur a few years after publication. Since El Naschie published in a field in which citations are relatively scarce and published quite recently this boosted the impact score of his papers. The reason for using this approach is clear and sensible, to overcome the distorting effects of varying citation practices in different disciplines when comparing individual researchers or departments. But there are problems if this method is used to compare whole universities. A great deal depends on when the cited and citing articles are published and in which subject they were classified by TR.

A question for TR. How are articles classified? Is it possible to influence the category in which they are placed by the use of key words or the wording of the title?

Next, note that TR were measuring average citation impact. A consequence of this is that the publication of large numbers of papers that are cited less frequently than the high fliers could drag down the score. This explains an apparent oddity of the citation scores in the 2010 THE rankings. El Naschie listed nine universities as his affiliation, in varying combinations, between 2004 and 2008, yet it was only Alexandria that managed to leave the Ivy League and Oxbridge standing in the research impact dust. Recently, El Naschie’s list of affiliations has consisted of Alexandria, Cairo, Frankfurt University and Shanghai Jiao Tong University.

What happened was quite simply that all the others were producing so many papers that El Naschie’s made little or no difference. For once, it would be quite correct if El Naschie announced that he could not have done it without the support of his colleagues. Alexandria University owes its success not only to El Naschie and his citers but also to all those researchers who refrained from submitting articles to ISI–indexed journals or conference proceedings.
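A small illustration of this dilution effect, with invented numbers: the same cluster of highly cited papers is added to a small and to a large publication base, and the average impact is compared.

```python
# A sketch of the dilution effect described above. The figures are hypothetical:
# 30 papers scoring 50 times the world average are added to a university with a
# small ISI-indexed output and to one with a large output.

def average_impact(ordinary_papers, ordinary_score, star_papers, star_score):
    total_papers = ordinary_papers + star_papers
    return (ordinary_papers * ordinary_score + star_papers * star_score) / total_papers

small_university = average_impact(ordinary_papers=300, ordinary_score=0.5,
                                  star_papers=30, star_score=50.0)
large_university = average_impact(ordinary_papers=20_000, ordinary_score=0.8,
                                  star_papers=30, star_score=50.0)

print(round(small_university, 2))  # 5.0: the star papers dominate the average
print(round(large_university, 2))  # 0.87: the same papers barely move it
```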

TR have some explaining to do here. If an author lists more than one affiliation, are they all counted? Or are fractions awarded for each paper? Is there any limit on the number of affiliations that an author may have? I think that it is two, but would welcome clarification.

As for the claim that Alexandria is strong in research, a quick look at the Scimago rankings is enough to dispose of that. It is ranked 1,047th in the 2010 rankings, which admittedly include many non-university organizations, for total publications over a decade. Also, one must ask how much of El Naschie’s writing was actually done in Alexandria, seeing that he had eight other affiliations between 2004 and 2008.

It has to be said that even if El Naschie is, as has been claimed in comments on Phil’s THE article and elsewhere, one of the most original thinkers of our time, it is strange that THE and TR should use a method that totally undermines their claim that the new methodology is based on evidence rather than reputation. By giving any sort of credence to the Alexandria score, THE are asking us to believe that Alexandria is strong in research because precisely one writer is highly reputed by himself and a few others. Incidentally, will TR tell us what score Alexandria got in the research reputation survey?

I am not qualified to comment on the scientific merits of El Naschie’s work. At the moment it appears, judging from the comments in various physics blogs, that among physicists and mathematicians there are more detractors than supporters. There are also few documented signs of conventional academic merit in recent years, such as permanent full-time appointments or research grants. None of his papers between 2004 and 2008 in ISI-indexed journals, for example, apparently received external funding. His affiliations, if documented, turn out to be honorary, advisory or visiting. To be fair, readers might wish to visit El Naschie’s site. I will also publish any comments of a non-libellous nature that support or dispute the scientific merits of his writings.

Incidentally, it is unlikely that Alexandria’s score of 19.3 for internationalisation was faked. TR use a logarithm, so a university with zero international staff and students would get a score of 1, and a score of 19.3 actually represents a small percentage. On the other hand, I do wonder whether Alexandria counted the students in its branch campuses in Lebanon, Sudan and Chad.

Finally, TR did not take the very simple and obvious step of not counting individual self-citations. Had they done so, they would have saved everybody, including themselves, a lot of trouble. It would have been even better if they had excluded intra-institutional and intra-journal citations. See here for the role of citations among the editorial board of IJNSNS in creating an extraordinarily high Journal Impact Factor.

THE and TR have done everyone a great service by highlighting the corrosive effect of self citation on the citations tracking industry. It has become apparent that there are enormous variations in the prevalence of self citation in its various forms and that these have a strong influence on the citation impact score.

Professor Dirk Van Damme is reported to have said at the London seminar that the world’s elite universities were facing a challenge from universities in the bottom half of the top 200. If this were the case then THE could perhaps claim that their innovative methodology had uncovered reserves of talent ignored by previous rankings. But what exactly was the nature of the challenge? It seems that it was the efficiency with which the challengers turned research income into citations. And how did they do that?

I have taken the simple step of dividing the score for citations by the score for the research indicator (which includes research income) and then sorting the resulting values. The top ten are Alexandria, Hong Kong Baptist University, Barcelona, Bilkent, William and Mary, ENS de Lyon, Royal Holloway, Pompeu Fabra, University College Dublin and the University of Adelaide.

Seriously, these are a threat to the world’s elite?

The high scores for citations relative to research were the result of a large number of citations or a small number of total publications or both. It is of interest to note that in some cases the number of citations was the result of assiduous self-citation.
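For anyone who wants to replicate the exercise, here is a minimal sketch of the calculation, with placeholder scores rather than the actual 2010 indicator values.

```python
# A sketch of the calculation described above: the citations score divided by the
# research score, sorted in descending order. The figures are placeholders.

scores = [
    ("University A", 99.8, 12.0),   # (name, citations score, research score)
    ("University B", 62.0, 90.0),
    ("University C", 95.0, 40.0),
]

by_ratio = sorted(scores, key=lambda row: row[1] / row[2], reverse=True)
for name, citations, research in by_ratio:
    print(f"{name}: {citations / research:.2f}")
```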

This section of the post contained comments about comparative rates of self citation among various universities. The method used was not correct and I am recalculating.

As noted already, using the THE iPad app to change the importance attached to various indicators can produce very different results. This is a list of universities that rise more than a hundred places when the citations indicator is set to ‘not important’. They have suffered perhaps because of a lack of super-cited papers, perhaps also because they simply produced too many papers.

Loughborough
Kyushu
Sung Kyun Kwan
Texas A and M
Surrey
Shanghai Jiao Tong University
Delft University of Technology
National Chiao Tung University (Taiwan)
Royal Institute of Technology Sweden
Tokushima
Hokkaido

Here is a list of universities that fall more than 100 places when the citations indicator is set to ‘not important’. They have benefitted from a few highly cited papers or low publication counts or a combination of the two.

Boston College
University of California Santa Cruz
Royal Holloway, University of London
Pompeu Fabra
Bilkent
Kent State University
Hong Kong Baptist University
Alexandria
Barcelona
Victoria University Wellington
Tokyo Metropolitan University
University of Warsaw

There are many others that rise or fall seventy, eighty or ninety places when citations are taken out of the equation. This is not a case of a few anomalies. The whole indicator is one big anomaly.

Earlier, in a column that has attracted one comment, Jonathan Adams said:

"Disciplinary diversity is an important factor, as is international diversity. How would you show the emerging excellence of a really good university in a less well known country such as Indonesia? This is where we would be most controversial, and most at risk, in using the logic of field-normalisation to add a small weighting in favour of relatively good institutions in countries with small research communities. Some may feel that we got that one only partially right."

The rankings do not include universities in Indonesia, really good or otherwise. The problem is with good, mediocre and not very good universities in the US, UK, Spain, Turkey, Egypt, New Zealand, Poland and so on. It is a huge weighting, not a small one; the universities concerned range from relatively good to relatively bad; in one case the research community seems to consist of one person; and many are convinced that TR got that one totally wrong.


Indonesia may be less well known to TR but it is very well known to itself and neighbouring countries.


I will publish any comments by anyone who wishes to defend the citation indicator of the new rankings. Here are some questions they might wish to consider.


Was it a good idea to give such a heavy weighting to research impact: 32.5% in the overall rankings and 37.5% in at least two subject rankings? Is it possible that commercial considerations, citations data being a lucrative business for TR, had something to do with it?

Are citations such a robust indicator? Is there not enough evidence now to suggest that manipulation of citations, including self-citation, intra-institutional citation and intra-journal citation, is so pervasive that the robustness of this measure is very slight?

Since there are several ways to measure research impact, would it not have been a good idea to have used several methods? After all, Leiden University has several different ways of assessing impact. Why use only one?


Why set the threshold for inclusion so low, at 50 papers per year?

Monday, October 11, 2010

Auditing University Rankings

The Chronicle of Higher Education reports on a meeting of the International Rankings Expert Group's (IREG) Observatory on Academic Ranking and Excellence.

The organisation has set up a mechanism to audit the various university rankings.

"The audit project, which he [Gero Federkeil] is helping to manage, will be based closely on IREG's principles, which emphasize clarity and openness in the purposes and goals of rankings, the design and weighting of indicators, the collection and processing of data, and the presentation of results.

"We all say that rankings should aim at delivering transparency about higher-education institutions, but we think there should be transparency about rankings too," Mr. Federkeil said. The audit process could eventually give rise to an IREG quality label, which would amount to an identification of trustworthy rankings, thereby enhancing the credibility of rankings and improving their quality, Mr. Federkeil said.

At the Berlin meeting last week, Mr. Federkeil and Ying Cheng, of the Center for World-Class Universities at Shanghai Jiao Tong University, which produces the best-known and most influential global ranking of universities, outlined the proposed methodology and procedure for the audit. The IREG executive committee will nominate audit teams consisting of three to five people. The chair of each team must not have any formal affiliation with a ranking organization, and at least one member of the audit team must be a member of the IREG executive committee. Audits will be based on self-reported data as well as possible on-site visits, and each full audit is expected to take about five months to complete."

The executive committee of IREG includes Liu Nian Cai from Shanghai Rankings Consultancy, Bob Morse from US News & World Report and Gero Federkeil from CHE, the German ranking body.

Members of the Observatory include HEEACT (Taiwan), the Kazakhstan and Slovak ranking agencies and QS Intelligence Agency.

Would anybody like to guess who will be the first to be audited?

Thursday, September 30, 2010

Return of the British Gift for Understatement

One of the tiresome things about the ranking season is the mania for inflated adjectives that readers of THE and other publications and sites have had to endure: rigorous, innovative, transparent, robust, sophisticated and so on.

It was a relief to read a column by Jonathan Adams in today's THE which uses words like "small" and "good". Here is an example.

Disciplinary diversity is an important factor, as is international diversity. How would you show the emerging excellence of a really good university in a less well known country such as Indonesia? This is where we would be most controversial, and most at risk, in using the logic of field-normalisation to add a small weighting in favour of relatively good institutions in countries with small research communities. Some may feel that we got that one only partially right.

I think the bit about "using the logic of field-normalisation to add a small weighting in favour of relatively good institutions in countries with small research communities" refers to the disconcerting presentation of Alexandria University as the 4th best in the world for research impact.
The THE Life Sciences Ranking (subscription required)

First is MIT, then Harvard, then Stanford. Nothing to argue about there.

For research impact (i.e. citations) the order is:

1. MIT
2. University of Barcelona
3. Harvard
4. Princeton
5. Stanford
6. Oxford
7. Dundee
8. Hong Kong
9. Monash
10. Berkeley

In this subject group, the citations indicator gets 37.5%.
Rankings Undermined by Flawed Indicator

See commentary in University World News

Tuesday, September 28, 2010

From the Straits Times of Singapore

Students at Nanyang Technological University in Singapore have been dismayed by the university's poor performance in the THE World University Rankings. Apparently they are concerned about future job prospects. The university was ranked 174th, much lower than it is used to. An apparently poor performance for citations (222nd) was at the root of the problem.

Su Guaning, the president, has written to the Straits Times:

"The QS 2010 Ranking is today the most widely-used world university ranking.
Nanyang Technological University was placed 74th in the list and the National
University of Singapore 31st, both down by one position from last year. The
performance of both universities has been consistent over the last four years.

NTU started off as a practice-oriented, teaching university in 1991 and
is in fact the youngest university in the Top 100 ranked in the QS 2010.

The Times Higher Education 2010 rankings is entirely new. Its criteria
have yet to be accepted by many universities. A detailed analysis reveals it is
88 per cent computed from research-related indicators with unusual normalization
of data resulting in some bizarre results. In a joint article in Edmonton
Journal, Indira Samarasekera, the President and Vice-Chancellor of the
University of Alberta, and Carl Amrhein, the Provost and Vice-President
(Academic) of the same university, wrote that the Times ranking has very
peculiar outcomes that do not pass the “reasonableness test”. They advised the
public to take the “rankings with a truckload of salt”.

Malcolm Grant, President and Provost of University College London pointed out to The Guardian that research citations, if not intelligently applied, can lead to bizarre
results. He cited the example of Egypt’s Alexandria University being ranked
above Harvard and Stanford universities in research influence in the Times
Higher Education 2010. "

Sunday, September 26, 2010

Something is Awry

A comment by the President of the University of Alberta.


"This year, Times Higher Education partnered with Thompson
Reuters to compile a new ranking, which was released a few days ago. This
ranking is also based on objective and subjective measures. A school’s final
score is based on an aggregate of teaching, research reputation/income,
citations/research influence, industry income and international mix. Academics are surveyed to rate teaching and research quality, a precarious exercise since teaching is difficult, if not impossible, for academics to assess at institutions other than their own. At first blush, this ranking appears to capture a broader range of indicators and does a good job of identifying the top 20. We applaud the University of Toronto for being named among the top 20; this is very good for Canada and Canadian universities.


However, when one looks under the hood, beyond the top 20 universities, the new ranking has some very peculiar outcomes that does not pass the “reasonableness test.” Alexandria University in Egypt received one of the highest scores for citations while also receiving the lowest score for research reputation. By comparison #1 Harvard received near perfect scores on both measures. Logic would suggest that the aggregate score for research reputation should correlate with the score for citations, which measure research influence. However, in spite of the vast discrepancy between its scores, Alexandria, which has never been on any other international ranking, came in ahead of many top universities. Furthermore many universities considered by peers as among the best do not appear in the top 200.
Something is awry.

Completely Obscure

The President of the University of Groningen has commented on the THE 2010-11 World University rankings:


"President of the Board of the University of Groningen Prof. Sibrand Poppema is
very critical of the way that THE has determined the quality of university
education and of the way the number of citations per academic publication has
been calculated. ‘There’s definitely something wrong in that’, said Poppema
about the latter in an interview for the Groningen university paper UK. The
number of citations is a measure of the quality of the academic research.
However, because not all academic fields are cited as frequently as some, THE
uses a calculation method to be able to compare the results even so. Poppema
calls this method ‘completely obscure’."

Friday, September 24, 2010

Here We Go Again

Times Higher Education have released the ranking of the world's top universities in engineering and technology (subscription required). Caltech is number 1 and MIT is second. So far, nothing strange.

But one wonders about the Ecole Normale Superieure in Paris in 34th place and Birkbeck College London in 43rd.

Clicking on the citations indicator, we find that the Ecole has a score of 99.7 and Birkbeck 100.

So Birkbeck has the highest research impact for engineering and technology in the world and ENS Paris the second highest?

Thursday, September 23, 2010

Missing the Point

One of the most depressing things about university rankings is that whenever rankers make an error or use inappropriate methodologies that push a university up the table, administrators produce detailed justifications of the ascent, usually avoiding the real reason. Here the head of Alexandria University explains why her university is in the top 200 without mentioning the real reason, namely the deficiencies of the citations indicator.
From the Economist



But I suspect that today's league tables say as much about the motives behind those who compile them (and, indeed, those who laud their findings) as they do about the true global standing of the institutions concerned. Britain is poised to slash its public services, and the axe hangs over its universities just as surely as it does over almost all other area of public life (only the National Health Service and the overseas aid budget have won reprieves). Other countries with rickety public finances are nevertheless splurging on universities, including America, Canada, France and Germany. Even Australia and China, which avoided recession, have big plans.

In such circumstances, it makes sense for British universities to present themselves as a national treasure whose crown is slipping for want of investment. Universities UK, which represents vice-chancellors, issued a statement from Steve Smith, its president, saying, "The tables may show that the UK remains the second-strongest university system in the world, but the most unmistakable conclusion is that this position is genuinely under threat. The higher education sector is one of the UK's international success stories, but it faces unprecedented competition. Our competitors are investing significant sums in their universities, just when the UK is contemplating massive cuts in its expenditure on universities and science."

He may be right, but the evidence he uses to support his conclusion is far from objective.


And a comment on the article:


deanquill wrote (Sep 20th 2010, 7:31 GMT): It's notable that Israel, which had two universities just outside The Times' top 100 last year, had none in the list at all this year. The universities apparently failed to return statistical data and were excluded; they say they never received the request from The Times' new survey organizers. Incompetence is more likely than conspiracy but it doesn't say much about the list's accuracy.

Good point, but the magazine in question is not The Times and the data was collected by Thomson Reuters.