
Sunday, November 05, 2017

Ranking debate: What should Malaysia do about the rankings?


A complicated relationship

Malaysia has had a complicated relationship with global university rankings. There was a moment back in 2004 when the first Times Higher Education Supplement-Quacquarelli Symonds (THES-QS) world rankings put the country's flagship, Universiti Malaya (UM), in the top 100. That was the result of an error, one of several QS made in its early days. Over the following years UM went down and up in the rankings, but the general trend was upwards, with other Malaysian universities following behind. This year it is 114th in the QS world rankings and the top 100 seems in sight once again.

There has been a lot of debate about the quality of the various ranking systems, but it does seem that UM and some other universities have been steadily improving, especially with regard to research, although, as the recent Universitas 21 report shows, output and quality are still lagging behind the provision of resources.  

There is, however, an unfortunate tendency in many places, including Malaysia, for university rankings to get mixed up with local politics. A good ranking performance is proclaimed a triumph by the government and a poor one is deemed by the opposition to be punishment for failed policies.

QS rankings criticised

Recently Ong Kian Ming, a Malaysian opposition MP, said that it was a mistake for the government to use the QS world rankings as a benchmark to measure the quality of Malaysian universities and that the ranking performance of UM and other universities is not a valid measure of quality.

"Serdang MP Ong Kian Ming today slammed the higher education ministry for using the QS World University Rankings as a benchmark for Malaysian universities.
In a statement today, the DAP leader called the decision “short-sighted” and “faulty”, pointing out that the QS rankings do not put much emphasis on the criteria of research output.

According to the QS World University Rankings  for 2018, released on June 8, five Malaysian varsities were ranked in the top 300, with Universiti Malaya (UM) occupying 114th position."

The article went on to say that:


"However, Ong pointed to the Times Higher Education (THE) World University Rankings for 2018, which he said painted Malaysian universities in a different light.

According to the THE rankings, which were released earlier this week, none of Malaysia’s universities made it into the top 300.



Ong suggests that the ministry should instead rely on locally developed measures.

"Instead of being 'obsessed' with the ranking game, he added, the ministry should work to improve the existing academic indicators and measures which have been developed locally by the ministry and the Malaysian Qualifications Agency to assess the quality of local public and private universities."

Multiplication of rankings

It is certainly not a good idea for anyone to rely on any single ranking. There are now over a dozen global rankings and several regional ones that assess universities according to a variety of criteria. Universities in Malaysia and elsewhere could make more use of these rankings, some of which are technically much better than the well-known big three or four: QS, THE, the Shanghai Academic Ranking of World Universities (ARWU) and, sometimes, the US News Best Global Universities.

Dr. Ong is also quite right to point out that the QS rankings have methodological flaws. However, the THE rankings are not really any better, and they are certainly not superior in the measurement of research quality. They also have the distinctive attribute that 11 of their 13 indicators are not presented separately but are bundled into three groups, so that the public cannot, for example, tell whether a good score for research is the result of an increase in research income, more publications, an improvement in reputation for research, or a reduction in the number of faculty.

The important difference between the QS and THE rankings is not that the latter are focused on research. QS's academic survey is specifically about research, and its faculty-student ratio, unlike THE's, includes research-only staff. The salient difference is that the THE academic survey is restricted to published researchers while QS's allows universities to nominate potential respondents, something that gives an advantage to upwardly mobile institutions in Asia and Latin America.


Ranking vulnerabilities
All three of the well-known rankings, THE, QS and ARWU, now have vulnerabilities: metrics that can be influenced by institutions and where a modest investment of resources can produce a disproportionate and implausible rise in the rankings.

In the Shanghai rankings the loss or gain of a single highly cited researcher can make a university go up or down dozens of places in the top 500. In addition the recruitment of scientists whose work is frequently cited, even for adjunct positions, can help universities excel in ARWU’s publications and Nature and Science indicators.

The THE citations indicator has allowed a succession of institutions to over-perform in the world or regional rankings: Alexandria University, Anglia Ruskin University in Cambridge, Moscow Engineering Physics Institute, Federico Santa Maria Technical University in Chile, Middle East Technical University, Tokyo Metropolitan University, Veltech University in India, and Universiti Tunku Abdul Rahman (UTAR) in Malaysia. The indicator officially has a 30% weighting but in reality it counts for even more because of THE's "regional modification", which gives a boost to every university except those in the top-scoring country. The modification used to apply to all of the citations but now covers half.
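One published description of the adjustment is that a university's field-normalised citation score is divided by the square root of its country's average, with the adjusted and unadjusted figures then blended. The sketch below assumes that form; the function name and all the numbers are invented for illustration and are not THE's own data.

```python
from math import sqrt

def the_style_citation_score(university_score, country_average, blend=0.5):
    """Illustrative sketch of a 'regional modification' (country adjustment).

    university_score: field-normalised citation impact of the university
    country_average:  average field-normalised citation impact of its country
    blend:            share of the score to which the adjustment is applied
                      (reportedly all of it at first, half more recently)
    """
    adjusted = university_score / sqrt(country_average)
    return blend * adjusted + (1 - blend) * university_score

# Hypothetical numbers: a university with impact 0.8 in a country averaging 0.5
# gets a larger boost than an identical university in a country averaging 1.0.
print(the_style_citation_score(0.8, 0.5))  # ~0.97
print(the_style_citation_score(0.8, 1.0))  # 0.80
```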

The vulnerability of the QS rankings lies in the two survey indicators, accounting for 50% of the total weighting, which allow universities to propose their own respondents. In recent years some Asian and Latin American universities such as Kyoto University, Nanyang Technological University (NTU), the University of Buenos Aires, the Pontifical Catholic University of Chile and the National University of Colombia have received scores for research and employer reputation that are out of line with their performance on any other indicator.

QS may have discovered a future high flyer in NTU but I have my doubts about the Latin American places. It is also most unlikely that Anglia Ruskin, UTAR and  Veltech will do so well in the THE rankings if they lose their highly cited researchers.

Consequently, there are limits to the reliability of the popular rankings and none of them should be considered the only sign of excellence. Ong is quite correct to point out the problems of the QS rankings but the other well known ones also have defects.


Beyond the Big Four


Ong points out that if we look at "the big four" then the high position of UM in the QS rankings is anomalous.  It is in 114th place in the QS world rankings (24th in the Asian rankings), 351-400 in THE, 356 in US News global rankings and 401-500  in ARWU.

The situation looks a little different when you consider all of the global rankings. Below is UM's position in a range of global rankings. The QS world rankings are still where UM does best, but here it is at the end of a curve. UM is 135th for publications in the Leiden Ranking, generally considered by experts to be the best technically, although it is lower for high-quality publications; 168th in the Scimago Institutions Rankings, which combine research and innovation; and 201-250 in the QS Graduate Employability Rankings.

The worst performance is in the uniRank rankings (formerly 4icu), based on web activity, where UM is 697th.

The Shanghai rankings are probably a better guide to research prowess than either QS or THE since they deal only with research and, with one important exception, have a generally stable methodology. UM is 402nd overall, having fallen from 353rd in 2015 because of changes in the list of highly cited researchers used by the Shanghai rankers.  UM does better for publications, 143rd this year and 142nd in 2015.

QS World University Rankings: 114 [general, mainly research]
CWTS Leiden Ranking:  publications 135,  top 10% of journals 195 [research]
Scimago Institutions Rankings:  168 [research and innovation]
QS Graduate Employability Rankings: 201-250 [graduate outcomes]
Round University Ranking: 268 [general]
THE World University Rankings: 351-400 [general, mainly research]
US News Best Global Universities: 356 [research]
Shanghai ARWU: 402 [research]
Webometrics: overall 418 (excellence 228) [mainly web activity]
Center for World University Rankings: 539 [general, quality of graduates]
Nature Index: below 500 [high impact research]
uniRank: 697 [web activity]


The QS rankings are not such an outlier, then. Looking at the indicators in other rankings devoted to research gives fairly similar results. Malaysian universities would, however, be wise to avoid concentrating on any single ranking; they should look at the specific indicators that measure the features they consider important.


Universities with an interest in technology and innovation could look at the Scimago rankings which include patents. Those with strengths in global medical studies might find it beneficial to go for the THE rankings but should always watch out for changes in methodology. 

Using local benchmarks is not a bad idea, and it can be valuable for those institutions that are not so concerned with research. But many Malaysian institutions are now competing on the global stage and are subject to international assessment, and that, whether they like it or not, means assessment by rankings. It would be an improvement if benchmarks and targets were expressed as reaching a certain level in two or three rankings, not just one. They should also focus on specific indicators rather than the overall score, and different rankings and indicators should be used to assess and compare different places.


For example, the Round University Rankings from Russia, which include five of the six metrics in the QS rankings plus others but with sensible weightings, could be used to supplement the QS world rankings.


For measuring research output and quality, the Leiden Ranking might be a better alternative for universities than either the QS or the THE rankings. Those with an innovation mission could refer to the innovation knowledge metric in the Scimago Institutions Rankings.

When we come to measuring teaching and the quality of graduates there is little of value from the current range of global rankings. There have been some interesting initiatives such as the OECD's AHELO project and U-Multirank but these have yet to be widely accepted. The only international metric that even attempts to directly assess graduate quality is QS's employer survey.

So, universities, governments and stakeholders need to stop using a single ranking as a benchmark for everyone, and to look at specific indicators rather than just the overall rankings.

Tuesday, February 21, 2017

Never mind the rankings, THE has a huge database



There has been a debate, or perhaps the beginnings of a debate, about international university rankings following the publication of Bahram Bekhradnia's report to the Higher Education Policy Institute, with comments in University World News by Ben Sowter, Phil Baty, Frank Ziegele and Frans van Vught, Philip Altbach and Ellen Hazelkorn, and a guest post by Bekhradnia in this blog.

Bekhradnia argued that global university rankings were damaging and dangerous because they encourage an obsession with research, rely on unreliable or subjective data, and emphasise spurious precision. He suggested that governments, universities and academics should simply ignore the rankings.

Times Higher Education (THE) has now published a piece by THE rankings editor Phil Baty that does not really deal with the criticism but basically says that it does not matter very much because the THE database is bigger and better than anyone else's. This he claims is "the true purpose and enduring legacy" of the THE world rankings.

Legacy? Does this mean that THE is getting ready to abandon rankings, or maybe just the world rankings, and go exclusively into the data refining business? 

Whatever Baty is hinting at, if that is what he is doing, it does seem a rather insipid defence of the rankings to say that all the criticism is missing the point because they are the precursor to a big and sophisticated database.

The article begins with a quotation from Lydia Snover, Director of Institutional Research, at MIT:

“There is no world department of education,” says Lydia Snover, director of institutional research at the Massachusetts Institute of Technology. But Times Higher Education, she believes, is helping to fill that gap: “They are doing a real service to universities by developing definitions and data that can be used for comparison and understanding.”

This sounds as though THE is doing something very impressive that nobody else has even thought of doing. But Snover's elaboration of this point in an email gives equal billing to QS and THE as definition developers and suggests the definitions and data that they provide will improve and expand in the future, implying that they are now less than perfect. She says:

"QS and THE both collect data annually from a large number of international universities. For example, understanding who is considered to be “faculty” in the EU, China, Australia, etc.  is quite helpful to us when we want to compare our universities internationally.  Since both QS and THE are relatively new in the rankings business compared to US NEWS, their definitions are still evolving.  As we go forward, I am sure the amount of data they collect and the definitions of that data will expand and improve."

Snover, by the way, is a member of the QS advisory board, as is THE's former rankings "masterclass" partner, Simon Pratt.

Baty offers a rather perfunctory defence of the THE rankings. He talks about rankings bringing great insights into the shifting fortunes of universities. If we are talking about year to year changes then the fact that THE purports to chart shifting fortunes is a very big bug in their methodology. Unless there has been drastic restructuring universities do not change much in a matter of months and any ranking that claims that it is detecting massive shifts over a year is simply advertising its deficiencies.

The assertion that the THE rankings are the most comprehensive and balanced is difficult to take seriously. If by comprehensive it is meant that the THE rankings have more indicators than QS or Webometrics, that is correct. But the number of indicators does not mean very much if they are bundled together and the scores hidden from the public, and if some of the indicators, the teaching survey and research survey for example, correlate so closely that they are effectively the same thing. In any case, the Russian Round University Rankings have 20 indicators compared with THE's 13 in the world rankings.

As for being balanced, we have already seen Bekhradnia's analysis showing that even the teaching and international outlook criteria in the THE rankings are really about research. In addition, THE gives almost a third of its weighting to citations. In practice that is often even more because the effect of the regional modification, now applied to half the indicator, is to boost in varying degrees the scores of everybody except those in the best performing country. 

After offering a scaled down celebration of the rankings, Baty then dismisses critics while announcing that THE "is quietly [seriously?] getting on with a hugely ambitious project to build an extraordinary and truly unique global resource." 


Perhaps some elite universities, like MIT, will find the database and its associated definitions helpful but whether there is anything extraordinary or unique about it remains to be seen.







Monday, June 02, 2014

What should India do about the rankings?

India seems to be suffering from ranking fever. This is a serious problem that periodically sweeps across countries, with the national media echoing statements from university heads and bureaucrats about being in the top one hundred or two hundred of something in the next few years, or passionate claims that rankings do not reflect the needs of local society or that the uniquely transformative features of this or that institution -- how dare they ignore our sensitivity training or sustainability programs! -- are not recognised by the rankers.

There is  now a lot of debate about which is the best university in India and also about why Indian institutions, especially the sometimes lauded Indian Institutes of Technology (IITs), have a modest impact on the international rankings.

So what do the various rankings say about the quality of Indian universities (counting the IITs and other Institutes)? Starting with Webometrics, which measures the Internet presence of universities,  first place in India goes to IIT Bombay, 517th in the world, followed by IIT Madras, The Indian Institute of Science (IISc) in Bangalore, IIT Kanpur and the University of Delhi.

Moving on to research based rankings, only one Indian university is ranked in  Shanghai Jiao Tong University's Academic Ranking of World Universities (ARWU) top 500, and that is the IISc in the 301-400 band.

The Scimago Institutions Rankings, in their 2013 default list ranked by number of publications, also put IISc in first place in India, followed by IIT Kharagpur, IIT Delhi, the University of Delhi, and IIT Madras.

The Leiden Ranking has the IISc in first place for number of publications although IIT Roorkee is first for publications in high quality journals.

Looking at the research-only rankings then, the best bet for top place would be the IISc which is ranked first in India by ARWU, Scimago, and, for number of publications, by Leiden Ranking, although for quality of research the IITs at Roorkee, Delhi and Guwahati perform comparatively well.

Moving on to rankings that attempt to assess factors other than research, we find that in the most recent QS World and Asian University Rankings first place in India goes to IIT Delhi with IIT Bombay second and IIT Kanpur third.

Last year's Times Higher Education world rankings produced an unusual result. Panjab University (PU) was ranked in the 226-250 band, well ahead of the IITs at Delhi, Kanpur, Kharagpur and Roorkee in the 350-400 band. In this case, Panjab University's feat was entirely due to its massive score for citations, 84.7 compared to IIT Delhi's 38.5, a score that was in stark contrast to a very poor 14 for research.

The main reason for PU's whopping score for citations appears to be that a few of its physicists are involved in the Large Hadron Collider project, which involves more than 2000 physicists in more than 150 research centers and 37 countries and consequently produces a huge number of citations. PU gets the credit for all of those citations even though its contribution to the cited papers is extremely small.

This only works because the overall number of papers produced is low. Hundreds or even thousands of citations are of little incremental value if they are spread out over thousands or tens of thousands of papers.
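A rough back-of-the-envelope calculation, using invented numbers rather than PU's actual counts, shows the mechanism: when citation impact is computed as an average over a university's papers, a handful of heavily cited collaboration papers dominates a small output but barely moves a large one.

```python
def average_citation_impact(citations_per_paper):
    """Mean citations per paper, the kind of average that normalised
    citation-impact indicators are built on."""
    return sum(citations_per_paper) / len(citations_per_paper)

# Hypothetical small university: 200 papers, 5 of them LHC-style collaboration
# papers with 1,000 citations each, the rest cited twice on average.
small = [1000] * 5 + [2] * 195
# Hypothetical large university: the same 5 blockbuster papers diluted
# among 10,000 papers cited twice on average.
large = [1000] * 5 + [2] * 9995

print(average_citation_impact(small))  # ~26.95 citations per paper
print(average_citation_impact(large))  # ~2.5 citations per paper
```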

It would be unwise for other Indian universities to emulate PU's approach to get into the THE rankings. For one thing they would have to keep total publications low. For another, they may find that highly cited researchers might be a tempting target for universities in the US or Australia. And it does not work for any other ranking.

It is noticeable that the IISc is not included in the QS or THE rankings, presumably as a result of its own choice.

Should India's universities try to improve their ranking performance? Perhaps, but it would be better if they focused on improving their research performance, admissions policies, administration and selection processes. And here there is a much bigger problem for India, the utterly dismal performance of the country's school system.

In 2009, students from Tamil Nadu and Himachal Pradesh, which do better than the Indian average on social and economic development measures, took the PISA test. They finished just ahead of the lowest-ranked participant, Kyrgyzstan.

Riaz Haq writes:

"In Tamil Nadu, only 17% of students were estimated to possess proficiency in reading that is at or above the baseline needed to be effective and productive in life. In Himachal Pradesh, this level is 11%. “This compares to 81% of students performing at or above the baseline level in reading in the OECD countries, on an average,” said the study. 
The average Indian child taking part in PISA2009+ is 40 to 50 points behind the worst students in the economic superstars. Even the best performers in Tamil Nadu and Himachal Pradesh - the top 5 percent who India will need in science and technology to complete globally - were almost 100 points behind the average child in Singapore and 83 points behind the average Korean - and a staggering 250 points behind the best in the best.
The average child in HP & TN is right at the level of the worst OECD or American students (only 1.5 or 7.5 points ahead). Contrary to President Obama's oft-expressed concerns about American students ability to compete with their Indian counterparts, the average 15-year-old Indian placed in an American school would be among the weakest students in the classroom, says Lant Pritchett of Harvard University. Even the best TN/HP students are 24 points behind the average American 15 year old."

If this does not change, there is very little that anyone can do to improve the status of India's universities, apart from importing large numbers of Finnish, Chinese or Korean students, teachers and researchers.









Tuesday, May 07, 2013

Unsolicited Advice



There has  been a lot of debate recently about the reputation survey component in the QS World University Rankings.

The president of University College Cork asked faculty to find friends at other universities who "understand the importance of UCC improving its university world ranking". The reason for the reference to other universities is that the QS survey very sensibly does not permit respondents to vote for their own universities, those that they list as their affiliation.  

This request appears to violate QS's guidelines which permit universities to inform staff about the survey but not to encourage them to nominate or refrain from nominating any particular university. According to an article in Inside Higher Ed QS are considering whether it is necessary to take any action.

This report has given Ben Sowter of QS sufficient concern to argue that it is not possible to effectively manipulate the survey.  He has set out a reasonable case why it is unlikely that any institution could succeed in marching graduate students up to their desktops to vote for favoured institutions to avoid being sent to a reeducation camp or to teach at a community college.

However, some of his reasons sound a little unconvincing: signing up, screening, an advisory board with years of experience. It would help if he were a little more specific, especially about the sophisticated anomaly detection algorithm, which sounds rather intimidating.

The problem with the academic survey is not that an institution like University College Cork is going to push its way into the global  top twenty or top one hundred  but that there could be a systematic bias towards those who are ambitious or from certain regions. It is noticeable that some universities in East and Southeast Asia do very much better on the academic survey than on other indicators. 

The QS academic survey is getting overly complicated and incoherent. It began as a fairly simple exercise. Its respondents were at first drawn from the subscription lists of World Scientific, an academic publishing company based in Singapore. Not surprisingly, the first academic survey produced a strong, perhaps too strong, showing for Southeast and East Asia and Berkeley.

The survey turned out to be unsatisfactory, not least because of an extremely small response rate. In succeeding years QS has added respondents drawn from the subscription lists of Mardev, an academic database, largely replacing those from World Scientific, lists supplied by universities, academics nominated by respondents to the survey and those joining the online sign up facility. It is not clear how many academics are included in these groups or what the various response rates are. In addition, counting responses for three years unless overwritten by the respondent might enhance the stability of the indicator but it also means that some of the responses might be from people who have died or retired.

The reputation survey does not have a good reputation and it is time for QS to think about revamping the methodology. But changing the methodology means that rankings cannot be used to chart the progress or decline of universities over time. The solution to this dilemma might be to launch a new ranking and keep the old one, perhaps issuing it later in the year or giving it less prominence.

My suggestion to QS is that they keep the current methodology but call it the Original QS Rankings or the QS Classic Rankings. Then they could introduce the  QS Plus or New QS rankings or something similar which would address the issues about the academic survey and introduce some other changes. Since QS are now offering a wide range of products, Latin American Rankings, Asian Rankings, subject rankings, best student cities and probably more to come, this should  not impose an undue burden.

First, starting with the academic survey, 40 per cent is too much for any indicator. It should be reduced to 20 per cent.

Next, the respondents should be divided into clearly defined categories, presented with appropriate questions and appropriately verified.

It should be recognised that subscribing to an online database or being recommended by another faculty member is not really a qualification for judging international research excellence. Neither is getting one’s name listed as corresponding author. These days that  can have as much to do with faculty politics as with ability.  I suggest that the academic survey should be sent to:

(a) highly cited researchers  or those with a high h-index who should be asked about international research excellence;
(b) researchers drawn from the Scopus database who should be asked to rate the regional or national research standing of universities.

Responses should be weighted according to the number of researchers per country.
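A minimal sketch of what that weighting might look like, with made-up researcher and response counts: each country's responses are scaled so that its combined influence on the survey matches its share of the world's researchers rather than its share of the respondents.

```python
# Hypothetical reweighting of survey responses so that each country's combined
# influence is proportional to its share of the world's researchers.
researchers = {"US": 1_400_000, "China": 1_700_000, "Malaysia": 70_000}  # invented counts
responses   = {"US": 20_000,    "China": 9_000,     "Malaysia": 4_000}   # invented counts

total_researchers = sum(researchers.values())

per_response_weight = {
    country: (researchers[country] / total_researchers) / responses[country]
    for country in responses
}

# Each country's responses now sum to its researcher share, so 4,000 Malaysian
# replies no longer count for more than Malaysia's research base warrants.
for country, weight in sorted(per_response_weight.items()):
    print(country, round(weight, 8))
```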

This could be supplemented with a survey of student satisfaction with teaching based on a student version of the sign up facility and requiring a valid academic address with verification.

Also, a sign up facility could be established for anyone interested and asking a question about general perceived quality.

If QS ever do change the academic survey they might as well review the other indicators. Starting with the employer review, this should be kept since, whatever its flaws, it is an external check on universities. But it might be easier to manipulate than the academic survey. Something was clearly going on in the 2011 ranking when there appeared to be a disproportionate number of respondents from some Latin American countries, leading QS to impose caps on universities exceeding the national average by a significant amount. 

"QS received a dramatic level of response from Latin America in 2011, these counts and all subsequent analysis have been adjusted by applying a weighting to responses from countries with a distinctly disproportionate level of response."

It seems that this problem was sorted out in 2012. Even so, QS might consider giving half the weighting for this survey to an invited panel of employers. Perhaps they could also broaden their database by asking NGOs and non-profit groups about their preferences.

There is little evidence that the number of international students, in itself, has anything to do with any measure of quality, and it may also have undesirable backwash effects as universities import large numbers of less able students. The problem is that QS are doing good business moving graduate students across international borders, so it is unlikely that they will ever consider doing away with this indicator.

Staff-student ratio is by all accounts a very crude indicator of teaching quality. Unfortunately, at the moment there does not appear to be any practical alternative.
One thing that QS could do is to remove research staff from the faculty side of the equation. At the moment a university that hires an army of underpaid research assistants and sacks a few teaching staff, or packs them off to a branch campus, would be recorded as having brought about a great improvement in teaching quality.

Citations are a notoriously problematical way of measuring research influence or quality. The Leiden Ranking shows that there are many ways of measuring research output and influence. It would be a good idea to combine several different ways of counting citations. QS have already started to use the h-index in their subject rankings this year and have used citations per paper in the Asian University Rankings.
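As a hedged illustration, the sketch below computes three common citation statistics (total citations, citations per paper and the h-index) for a hypothetical publication record; a composite indicator could then combine them with whatever weights seemed sensible. The record and the choice of metrics are assumptions for the example, not anything QS has announced.

```python
from typing import Dict, List

def h_index(citations: List[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def citation_metrics(citations: List[int]) -> Dict[str, float]:
    """Several citation counts for one publication record, side by side."""
    return {
        "total_citations": sum(citations),
        "citations_per_paper": sum(citations) / len(citations),
        "h_index": h_index(citations),
    }

# Hypothetical publication record of one institution's output
example = [120, 45, 30, 12, 9, 7, 5, 3, 1, 0]
print(citation_metrics(example))
# {'total_citations': 232, 'citations_per_paper': 23.2, 'h_index': 6}
```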

With the 20 per cent left over from reducing the weighting for the academic survey QS might consider introducing a measure of research output rather than quality since this would help distinguish among universities outside the elite and perhaps use internet data from Webometrics as in the Latin American rankings.

Sunday, November 18, 2012

Article in University World News

Ranking’s research impact indicator is skewed

Tuesday, July 12, 2011

This WUR had such promise

The new Times Higher Education World University Rankings of 2010 promised much: new indicators based on income, a reformed survey that included questions on postgraduate teaching, and a reduction in the weighting given to international students.

But the actual rankings that came out in September were less than impressive.  Dividing the year's intake of undergraduate students by the total of academic faculty looked rather odd. Counting the ratio of doctoral students to undergraduates, while omitting masters programs, was an invitation to the herding of marginal students into substandard doctoral degree programmes.

The biggest problem though was the insistence on giving a high weighting -- somewhat higher than originally proposed -- to citations. Nearly a third of the total weighting was assigned to the average citations per paper, normalised by field and year. The collection of statistics about citations is the bread and butter of Thomson Reuters (TR), THE's data collector, and one of their key products is the InCites system, which apparently was the basis for their procedure during the 2010 ranking exercise. This compares the citation records of academics with international scores benchmarked by year and field. Of course, those who want to find out exactly where they stand have to find out what the benchmark scores are, and that is something that cannot be easily calculated without Thomson Reuters.
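A minimal sketch of that kind of benchmarking, with invented benchmark values (loosely echoing the field differences quoted later in this post): each paper's citation count is divided by the world average for papers of the same field and year, and the institution's score is the mean of those ratios. The real InCites calculation is more elaborate; this is only meant to show why the benchmarks matter.

```python
# Illustrative field-and-year normalisation of citation counts.
benchmarks = {
    # (field, year): average citations per paper worldwide -- invented numbers
    ("molecular biology", 2008): 11.0,
    ("mathematics", 2008): 1.5,
}

papers = [
    {"field": "mathematics", "year": 2008, "citations": 6},
    {"field": "mathematics", "year": 2008, "citations": 0},
    {"field": "molecular biology", "year": 2008, "citations": 11},
]

def normalised_impact(papers, benchmarks):
    """Mean ratio of each paper's citations to its field-and-year benchmark."""
    ratios = [p["citations"] / benchmarks[(p["field"], p["year"])] for p in papers]
    return sum(ratios) / len(ratios)

# The maths paper with 6 citations scores 4x the world average for its field,
# while the biology paper with 11 citations scores exactly 1.0.
print(round(normalised_impact(papers, benchmarks), 2))  # 1.67
```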

Over the last two or three decades the number of citations received by papers, along with the amount of money attracted from funding agencies, has become an essential sign of scholarly merit. Things have now reached the point where, in many universities, research is simply invisible unless it has been funded by an external agency and then published in a journal noted for being cited frequently by writers who contribute to journals that are frequently cited. The boom in citations has begun to resemble classical share and housing bubbles as citations acquire an inflated value increasingly detached from any objective reality.

It has become clear that citations can be manipulated as much as, perhaps more than, any other indicator used by international rankings. Writers can cite themselves, they can cite co-authors, they can cite those who cite them. Journal editors and reviewers can  make suggestions to submitters about who to cite. And so on.

Nobody, however, realized quite how unrobust citations might become until the unplanned intersection of THE’s indicator and a bit of self citation and mutual citation by two peripheral scientific figures raised questions about the whole business.

One of these two was Mohamed El Naschie, who comes from a wealthy Egyptian family. He studied in Germany and took a PhD in engineering at University College London. Then he taught in Saudi Arabia while writing several papers that appear to have been of an acceptable academic standard although not very remarkable.

But this was not enough. In 1993 he started a new journal dealing with applied mathematics and theoretical physics called Chaos, Solitons and Fractals (CSF), published by the leading academic publishers, Elsevier. El Naschie’s journal published many papers written by himself. He has, to his credit, avoided exploiting junior researchers or insinuating himself into research projects to which he has contributed little. Most of his papers do not appear to be research but rather theoretical speculations many of which concern the disparity between the mathematics that describes the universe and that which describes subatomic space and suggestions for reconciling the two.

Over the years El Naschie has listed a number of universities as affiliations. The University of Alexandria was among the most recent of them. It was not clear, however, what he did at or for the university, and it was only recently, after the publication of the 2010 THE World University Rankings, that any official connection was documented.

El Naschie does not appear to be highly regarded by physicists and mathematicians, as noted earlier in this blog,  and he has been criticized severely in the physics and mathematics blogosphere.  He has, it is true, received some very vocal support but he is not really helped by the extreme enthusiasm and uniformity of style of his admirers. Here is a fairly typical example, from the comments in Times Higher Education: 
“As for Mohamed El Naschie, he is one of the most original thinkers of our time. He mastered science, philosophy, literature and art like very few people. Although he is an engineer, he is self taught in almost everything, including politics. Now I can understand that a man with his charisma and vast knowledge must be the object of envy but what is written here goes beyond that. My comment here will be only about what I found out regarding a major breakthrough in quantum mechanics. This breakthrough was brought about by the work of Prof. Dr. Mohamed El Naschie”
Later, a professor at Donghua University, China, Ji-Huan He, an editor at El Naschie’s  journal, started a similar publication, the International Journal of Nonlinear Sciences and Numerical Simulation (IJNSNS), whose editorial board included El Naschie. This journal was published by the respectable and unpretentious Israeli company, Freund of Tel Aviv. Ji-Huan He’s journal has published 29 of his own papers and 19 by El Naschie. The  two journals have contained articles that cite and are cited by articles in the other. Since they deal with similar topics some degree of cross citation is to be expected but here it seems to be unusually large.

Let us look at how El Naschie worked. An example is his paper, ‘The theory of Cantorian spacetime and high energy particle physics (an informal review)’, published in Chaos, Solitons and Fractals,41/5, 2635-2646, in  September  2009.

There are 58 citations in the bibliography. El Naschie cites himself 24 times, 20 times to papers in Chaos, Solitons and Fractals and 4 in IJNSNS.  Ji-Huan He is cited twice along with four  other authors from CSF. This paper has been cited 11 times, ten times in CSF in issues of the journal published later in the year.

Articles in mathematics and theoretical physics do not get cited very much. Scholars in those fields prefer to spend time thinking about an interesting paper before settling down to comment. Hardly any papers get even a single citation in the same year. Here we have 10 for one paper. That might easily be 100 times the average for that discipline and that year.

The object of this exercise had nothing to do with the THE rankings. What it did do was to push El Naschie’s  journal into the top ranks of scientific journals as measured by the Journal Impact Factor, that is the number of citations per paper within a two year period. It also meant that for a brief period El Naschie was listed by Thomson Reuters’ Science Watch as a rising star of research.

Eventually, Elsevier appointed a new editorial board at CSF that did not include El Naschie. The journal did however continue to refer to him as the founding editor. Since then the number of citations has declined sharply.

Meanwhile, Ji-Huan He was also accumulating a large number of citations, many of them from conference proceedings that he had organized. He was launched into the exalted ranks of the ISI Highly Cited Researchers and his journal topped the citation charts in mathematics. Unfortunately for them, early this year Freund sold off its journals to the reputable German publisher De Gruyter, which appointed a new editorial board that did not include either him or El Naschie.

El Naschie, He and a few others have been closely scrutinized by Jason Rush, a mathematician formerly of the University of Washington. Rush was apparently infuriated by El Naschie's unsubstantiated claims to have held senior positions at a variety of universities including Cambridge, Frankfurt, Surrey and Cornell. Since 2009 he has, perhaps a little obsessively, maintained a blog, El Naschie Watch, that closely chronicles the activities of El Naschie and those associated with him; most of what is known about El Naschie and He was unearthed there.

Meanwhile, Thomson Reuters were preparing their analysis of citations for the THE rankings. They used the InCites system and compared the number of citations with benchmark scores representing the average for year and field.
This meant that for this criterion a high score did not necessarily represent a large number of citations. It could simply represent more citations than normal in a short period of time in fields where citation was infrequent and, perhaps more significantly since we are talking about averages here, a small total number of publications. Thus, Alexandria, with only a few publications but listed as the affiliation of an author who was cited much more frequently than usual in theoretical physics or applied mathematics, did spectacularly well.


This is rather like declaring Norfolk (very flat, according to Noël Coward) the most mountainous county in England because of a few hillocks that were nonetheless relatively much higher than the surrounding plains.

Thomson Reuters would have done themselves a lot of good if they had taken the sensible course of using several indicators of research impact, such as total citations, citations per faculty, the h-index or references in social media; or if they had allocated a smaller weighting to the indicator; or if they had imposed a reasonable threshold number of publications instead of just 50; or if they had not counted self-citations or citations within journals; or if they had figured out a formula to detect mutual citations.
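A minimal sketch of two of those safeguards, with made-up data structures: drop self-citations and within-journal citations before counting, and refuse to score any institution below a publication threshold. Neither the record format nor the threshold reflects what Thomson Reuters actually did.

```python
# Hypothetical citation records: each citation links a citing paper's authors
# and journal to a cited paper's authors and journal.
citations = [
    {"citing_authors": {"el_naschie"}, "cited_authors": {"el_naschie"},
     "citing_journal": "CSF", "cited_journal": "CSF"},
    {"citing_authors": {"he"}, "cited_authors": {"el_naschie"},
     "citing_journal": "IJNSNS", "cited_journal": "CSF"},
    {"citing_authors": {"smith"}, "cited_authors": {"jones"},
     "citing_journal": "PRL", "cited_journal": "Nature"},
]

def filtered_citation_count(citations, drop_self=True, drop_same_journal=True):
    """Count citations after removing self-citations and within-journal citations."""
    kept = []
    for c in citations:
        if drop_self and c["citing_authors"] & c["cited_authors"]:
            continue  # at least one author is citing their own paper
        if drop_same_journal and c["citing_journal"] == c["cited_journal"]:
            continue  # citation stays inside the same journal
        kept.append(c)
    return len(kept)

def eligible_for_ranking(num_publications, threshold=200):
    """Only score institutions above a minimum publication count (threshold assumed)."""
    return num_publications >= threshold

print(filtered_citation_count(citations))  # 2 -- the self/same-journal citation is dropped
print(eligible_for_ranking(50))            # False -- a 50-paper output would not be scored
```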

So, in September THE published its rankings with the University of Alexandria in the top 200 overall and in fourth place for research impact, ahead of Oxford, Cambridge and most of the Ivy League. Not bad for a university that had not even been counted by HEEACT, QS or the Shanghai rankings and that in 2010 had lagged behind two other institutions in Alexandria itself in Webometrics.

When the rankings were published THE pointed out that Alexandria had once had a famous library and that a former student had gone on to the USA to eventually win a Nobel prize decades later. Still, they did concede that the success of Alexandria was mainly due  to one "controversial" author.

Anyone with access to the Web of Science could determine in a minute precisely who the controversial author was. For a while it was unclear exactly how a few dozen papers and a few hundred citations could put Alexandria among the world’s elite. Some observers wasted time wondering if  Thomson Reuters had been counting papers from a community college in Virginia or Minnesota, a branch of the Louisiana State University or federal government offices in the Greater Washington area. Eventually, it was clear that El Naschie could not, as he himself asserted, have done it by himself: he needed the help of the very distinctive features of Thomson Reuters’ methodology.

There were other oddities in the 2010 rankings. Some might have accepted a high placing for Bilkent University in Turkey, well known for its academic English programs; it also had one much-cited article whose apparent impact was increased because it was classified as multidisciplinary, usually a lowly cited category, thereby scoring well above the world benchmark. However, when regional patterns were analyzed, the rankings began to look rather strange, especially the research impact indicator. In Australia, the Middle East, Hong Kong and Taiwan the order of universities looked rather different from what local experts expected. Hong Kong Baptist University the third best in the SAR? Pohang University of Science and Technology so much better than Yonsei or KAIST? Adelaide the fourth best Australian university?

In the UK or the US these placings might seem plausible or at least not worth bothering about. But in the Middle East the idea of Alexandria as top university even in Egypt is a joke and the places awarded to the others look very dubious.

THE and Thomson Reuters tried to shrug off the complaints by saying that there were just a few outliers which they were prepared to debate and that anyone who criticized them had a vested interest in the old THE-QS rankings which had been discredited. They  dropped hints that the citations indicator would be reviewed but so far nothing specific has emerged.

A few days ago, however,  Phil Baty of THE seemed to imply that there was nothing wrong with the citations indicator.
Normalised data allow fairer comparisons, and that is why Times Higher Education will employ it for more indicators in its 2011-12 rankings, says Phil Baty.
One of the most important features of the Times Higher Education World University Rankings is that all our research citations data are normalised to take account of the dramatic variations in citation habits between different academic fields.
Treating citations data in an “absolute manner”, as some university rankings do, was condemned earlier this year as a “mortal sin” by one of the world’s leading experts in bibliometrics, Anthony van Raan of the Centre for Science and Technology Studies at Leiden University. In its rankings, Times Higher Education gives most weight to the “research influence” indicator – for our 2010-11 exercise, this drew on 25 million citations from 5 million articles published over five years. The importance of normalising these data has been highlighted by our rankings data supplier, Thomson Reuters: in the field of molecular biology and genetics, there were more than 1.6 million citations for the 145,939 papers published between 2005 and 2009; in mathematics, however, there were just 211,268 citations for a similar number of papers (140,219) published in the same period.
To ignore this would be to give a large and unfair advantage to institutions that happen to have more provision in molecular biology, say, than in maths. It is for this crucial reason that Times Higher Education’s World University Rankings examine a university’s citations in each field against the global average for that subject.

Yes, but when we are assessing hundreds of universities in very narrowly defined fields we start running into quite small samples that can be affected by deliberate manipulation or by random fluctuations.

Another point is that if there are many more journals, papers, citations and grants in oncology or genetic engineering than in the spatialization of gender performativity or the influence of Semitic syntax on Old Irish then perhaps society is telling us something about what it values and that is something that should not be dismissed so easily.

So, it could be we are going to get the University of Alexandria in the top 200 again, perhaps joined by Donghua University.

At the risk of being repetitive, there are a few simple things that Times Higher and TR could do to make the citations indicator more credible, and there are also more ways of measuring research excellence. Possibly they are thinking about them, but so far there is no sign of it.

The credibility of last year's rankings has declined further with the decisions of the judge presiding over the libel case brought by El Naschie against Nature (see here for commentary). Until now it could be claimed that El Naschie was a well-known scientist by virtue of the large number of citations that he had received, or at least an interesting and controversial maverick.

El Naschie is pursuing a case against Nature for publishing an article that suggested his writings were not of a high quality and that papers published in his journal did not appear to be properly peer reviewed.

The judge has recently ruled that El Naschie cannot proceed with a claim for specific damages since he has not brought any evidence for this. He can only go ahead with a claim for general damages for loss of reputation and hurt feelings. Even here, it looks like it will be tough going. El Naschie seems to be unwilling or unable to find expert witnesses to testify to the scientific merits of his papers.

"The Claimant is somewhat dismissive of the relevance of expert evidence in this case, largely on the basis that his field of special scientific knowledge is so narrow and fluid that it is difficult for him to conceive of anyone qualifying as having sufficient "expert" knowledge of the field. Nevertheless, permission has been obtained to introduce such evidence and it is not right that the Defendants should be hindered in their preparations."

He also seems to have problems with locating records that would demonstrate that his many articles published in Chaos, Solitons and Fractals were adequately reviewed.
  1. The first subject concerns the issue of peer-review of those papers authored by the Claimant and published in CSF. It appears that there were 58 articles published in 2008. The Claimant should identify the referees for each article because their qualifications, and the regularity with which they reviewed such articles, are issues upon which the Defendants' experts will need to comment. Furthermore, it will be necessary for the Defendants' counsel to cross-examine such reviewers as are being called by the Claimant as to why alleged faults or defects in those articles survived the relevant reviews.

  2. Secondly, further information is sought as to the place or places where CSF was administered between 2006 and 2008. This is relevant, first, to the issue of whether the Claimant has complied with his disclosure obligations. The Defendants' advisers are not in a position to judge whether a proportionate search has been carried out unless they are properly informed as to how many addresses and/or locations were involved. Secondly, the Defendants' proposed expert witnesses will need to know exactly how the CSF journal was run. This information should be provided.
It would therefore  seem to be getting more and more difficult for anyone to argue that TR's methodology has uncovered a pocket of excellence in Alexandria.

Unfortunately, it is beginning to look as though THE will not only use much the same method as last time but will apply normalisation to other indicators as well.
But what about the other performance indicators used to compare institutions? Our rankings examine the amount of research income a university attracts and the number of PhDs it awards. For 2011-12, they will also look at the number of papers a university has published that are co-authored by an international colleague.
Don’t subject factors come into play here, too? Shouldn’t these also be normalised? We think so. So I am pleased to confirm that for the 2011-12 World University Rankings, Times Higher Education will introduce subject normalisation to a range of other ranking indicators.
This is proving very challenging. It makes huge additional demands on the data analysts at Thomson Reuters and, of course, on the institutions themselves, which have had to provide more and richer data for the rankings project. But we are committed to constantly improving and refining our methodology, and these latest steps to normalise more indicators evidence our desire to provide the most comprehensive and rigorous tables we can.
What this might mean is that universities that spend modest amounts of money in fields where little money is usually spent would get a huge score. So what would happen if an eccentric millionaire left millions to establish a lavishly funded research chair in continental philosophy at Middlesex University?  There are no doubt precautions that Thomson Reuters could take but will they? The El Naschie business does not inspire very much confidence that they will.

The reception of the 2010 THE WUR suggests that many in the academic world have doubts about the wisdom of using normalised citation data without considering the potential for gaming or statistical anomalies. But the problem may run deeper and involve citations as such. QS, THE's rival and former partner, have produced a series of subject rankings based on data from 2010. The overall results for each subject are based on varying combinations of the scores for academic opinion, employer opinion and citations per paper (not per faculty as in the general rankings).

The results are interesting. Looking at citations per paper alone we see that Boston College and Munich are jointly first in Sociology. Rutgers is third for politics and international studies. MIT is third for philosophy (presumably Chomsky and co). Stellenbosch is first for Geography and Area studies. Padua is first for linguistics. Tokyo Metropolitan University is second for biological sciences and Arizona State University first.


Pockets of excellence or statistical anomalies? These results may not be quite as incredible as Alexandria in the THE rankings but they are not a very good advertisement for the validity of citations as a measure of research excellence.

It appears that THE have not made their minds up yet. There is still time to produce a believable and rigorous ranking system. But whatever happens, it is unlikely that citations,  normalized or unnormalized, will continue to be the unquestionable gold standard of academic and scientific research.