
Sunday, November 18, 2012

Article in University World News

Ranking’s research impact indicator is skewed

Saturday, October 27, 2012

Dancing in the Street in Moscow

"Jubilant crowds poured into the streets of Moscow when it was announced that Moscow State Engineering Physics Institute had been declared to be the joint top university in the world, along with Rice University in Texas, for research impact".

Just kidding about the celebrations.

But the Times Higher Education - Thomson Reuters World University Rankings have given the "Moscow State Engineering Physics Institute" a score of 100 for research impact, which is measured by the number of citations per paper normalised by field, year of publication and country.

There are a couple of odd things about this.

First, "Moscow State Engineering Physics Institute" was reorganised in 2009 and its official title is now National Research Nuclear University MEPhI. It still seems to be normal to refer to MEPhI or the Moscow State Engineering Physics Institute, so I will not argue about this. But I wonder if there has been some confusion in TR's data collection.

Second, THE says that institutions are not ranked if they teach only a single narrow subject. Does the institution teach more than just physics?

So how did MEPhI do it? The answer seems to be a couple of massively cited review articles. The first, by C Amsler et (many many) alia, was published in Physics Letters B in September 2008 and entitled Review of Particle Physics. It was cited 1278 times in 2009 and 1627 times in 2010 according to the Web of Science, and even more according to Google Scholar.

Here is the abstract.

"Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2778 new measurements from 645 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, top quark, muon anomalous magnetic moment, extra dimensions, particle detectors, cosmic background radiation, dark matter, cosmological parameters, and big bang cosmology".

I have not counted the number of authors but there are 113 institutional affiliations, of which MEPhI is 84th.

The second paper is by K. Nakamura et alia. It is also entitled Review of Particle Physics and was published in the Journal of Physics G: Nuclear and Particle Physics in July 2010. It was cited 1240 times in 2011. This is the abstract.
 
"This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2158 new measurements from 551 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on neutrino mass, mixing, and oscillations, QCD, top quark, CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, fragmentation functions, particle detectors for accelerator and non-accelerator physics, magnetic monopoles, cosmological parameters, and big bang cosmology".

There are 119 affiliations, of which MEPhI is 91st.

Let me stress that there is nothing improper here. It is normal for papers in the physical sciences to include summaries or reviews of research at the beginning of a literature review. I also assume that the similarity in the wording of the abstracts would be considered appropriate standardisation within the discipline rather than plagiarism.

TR's method compares the number of citations a paper receives with the average for its field, in that year, in that country. MEPhI would not get very much credit for a publication in physics, which is a quite highly cited discipline, but it would get some for being in Russia, where citations in English are relatively sparse, and a massive boost for exceeding the average citation count within one or two years of publication many times over.

There is one other factor. MEPhI was only one of more than 100 institutions contributing to each of these papers but it got such an unusually massive score because its citations, which were magnified by region and period of publication, were divided by a comparatively small number of publications.
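As a rough illustration of how a couple of outliers can dominate such a score, here is a toy calculation. The actual TR formula and field baselines are not published in this detail, so the function and all the figures below are illustrative assumptions, not the real method:

```python
# Toy model of a citation score normalised against a field/year
# baseline, with a regional boost. Illustrative only.

def normalised_impact(citations, world_average, regional_boost):
    """Citations relative to the world baseline for comparable
    papers, multiplied by a bonus for low-citation countries."""
    return (citations / world_average) * regional_boost

# One review cited ~1300 times against a baseline of, say, 10
# citations for comparable recent physics papers:
outlier = normalised_impact(1300, 10.0, regional_boost=1.5)  # 195.0

# Averaged over a small total output, it swamps everything else:
scores = [outlier] + [normalised_impact(5, 10.0, 1.5)] * 48
average_score = sum(scores) / len(scores)  # ~4.7, mostly one paper
```

With a large denominator of ordinary papers the outlier would be diluted; with a small one it is not, which is the point made above.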

This is not as bad as Alexandria University being declared the fourth best university for research impact in 2010. MEPhI is a genuinely excellent institution which Alexandria, despite a solitary Nobel laureate and an historic library, was not.  But does it really deserve to be number one for research impact or even in the top 100? TR's methods are in need of very thorough revision.

And I haven't heard about any celebrations in Houston either.

More on MEPhI

Right after putting up the post on Moscow State Engineering Physics Institute and its "achievement" in getting the maximum score for research impact in the latest THE - TR World University Rankings, I found this exchange on Facebook.  See my comments at the end.

  • Valery Adzhiev So, the best university in the world in the "citation" (i.e. "research influence") category is Moscow State Engineering Physics Institute with maximum '100' score. This is remarkable achivement by any standards. At the same time it scored in "research" just 10.6 (out of 100) which is very, very low result. How on earth that can be?
  • Times Higher Education World University Rankings Hi Valery,

    Regarding MEPHI’s high citation impact, there are two causes: Firstly they have a couple of extremely highly cited papers out of a very low volume of papers. The two extremely highly cited papers are skewing what would ordinarily be a very good normalized citation impact to an even higher level.

    We also apply "regional modification" to the Normalized Citation Impact. This is an adjustment that we make to take into account the different citation cultures of each country (because of things like language and research policy). In the case of Russia, because the underlying citation impact of the country is low it means that Russian universities get a bit of a boost for the Normalized Citation Impact.

    MEPHI is right on the boundary for meeting the minimum requirement for the THE World University Rankings, and for this reason was excluded from the rankings in previous years. There is still a big concern with the number of papers being so low and I think we may see MEPHI’s citation impact change considerably over time as the effect of the above mentioned 2 papers go out of the system (although there will probably be new ones come in).

    Hope this helps to explain things.
    THE
  • Valery Adzhiev Thanks for your prompt reply. Unfortunately, the closer look at that case only adds rather awkward questions. "a couple of extremely highly cited papers are actually not "papers": they are biannual volumes titled "The Review of Particle Physics" that ...See More
  • Valery Adzhiev I continue. There are more than 200 authors (in fact, they are "editors") from more than 100 organisation from all over the world, who produce those volumes. Look: just one of them happened to be affiliated with MEPhI - and that rather modest fact (tha...See More
  • Valery Adzhiev Sorry, another addition: I'd just want to repeat that my point is not concerned only with MEPhI - Am talking about your methodology. Look at the "citation score" of some other universities. Royal Holloway, University of London having justt 27.7 in "res...See More
  • Alvin See Great observations, Valery.
  • Times Higher Education World University Rankings Hi Valery,

    Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for
    ...See More
  • Andrei Rostovtsev This is in fact rather philosofical point. There are also a number of very scandalous papers with definitively negative scientific impact, but making a lot of noise around. Those have also high contribution to the citation score, but negative impact t...See More

    It is true that two extremely highly cited publications combined with a low total number of publications skewed the results, but what is equally or perhaps more important is that these citations occurred in the year or two after publication, when citations tend to be relatively infrequent compared to later years. The 2010 publication is a biennial review, like the 2008 publication, that will be cited copiously for two years, after which it will no doubt be superseded by the 2012 edition.

    Also, we should note that in the ISI Web of Science, the 2008 publication is classified as "physics, multidisciplinary". Papers listed as multidisciplinary generally get relatively few citations, so if the publication were compared to other multidisciplinary papers it would get an even larger weighting.

    Valery has an excellent point when he notes that these publications have over 100 authors or contributors each (I am not sure whether they are actual researchers or administrators). Why then did the other contributors not boost their institutions' scores to similar heights? Partly because they were not in Russia and therefore did not get the regional weighting, but also because they were publishing many more papers overall than MEPhI.

    So basically, A. Romaniouk who contributed 1/173rd of one publication was considered as having more research impact than hundreds of researchers at Harvard, MIT, Caltech etc producing hundreds of papers cited hundreds of times.  Sorry, but is this a ranking of research quality or a lottery?

    The worst part of THE's reply is this:

    Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for all to see (and indeed scrutinise, which everyone is entitled to do).

    We welcome feedback, are constantly developing our system, and will definitely take your comments on board.

    The system is not balanced. Citations have a weighting of 30%, much more than any other indicator; even the research reputation survey has a weighting of only 18%. And to describe as comprehensive an indicator that allows a fraction of one or two publications to surpass massive amounts of original and influential research is really plumbing the depths of absurdity.

    I am just about to finish comparing the scores for research and research impact for the top 400 universities. There is a statistically significant correlation but it is quite modest. When research reputation, volume of publications and research income show such a modest correlation with research impact, it is time to ask whether there is a serious problem with this indicator.

    Here is some advice for THE and TR.

    • First, and surely very obvious, if you are going to use field normalisation then calculate the score for discipline groups (natural sciences, social sciences and so on) and aggregate the scores. So give MEPhI a 100 for physical or natural sciences if you think it deserves it, but not for the arts and humanities.
    • Second, and also obvious, introduce fractional counting, that is dividing the number of citations by the number of authors of the cited paper.
    • Do not count citations to summaries, reviews or compilations of research.
    • Do not count citations of commercial material about computer programs. This would reduce the very high and implausible score for Göttingen, which is derived from a single publication.
    • Do not assess research impact with only one indicator. See the Leiden ranking for the many ways of rating research.
    • Consider whether it is appropriate to have a regional weighting. This is after all an international ranking.
    • Reduce the weighting for this indicator.
    • Do not count self-citations. Better yet, do not count citations from researchers at the same university.
    • Strictly enforce your rule about not including single-subject institutions in the general rankings.
    • Increase the threshold number of publications for inclusion in the rankings from two hundred to four hundred.
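The fractional counting suggested in the second point above is trivial to implement. A minimal sketch (the author counts here are hypothetical examples, not data from any ranking):

```python
def fractional_credit(citations, n_authors):
    """Divide a paper's citations equally among its authors, so each
    contributor gets credit proportional to their share of the work."""
    return citations / n_authors

# A review with 173 contributors cited 1627 times earns each of them
# less credit than a five-author paper cited 100 times:
review_share = fractional_credit(1627, 173)    # ~9.4
small_paper_share = fractional_credit(100, 5)  # 20.0
```

Under whole counting both papers would credit every author with the full citation count, which is exactly the distortion described above.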


Saturday, June 27, 2015

Why Russia Might Rise Fairly Quickly in the Rankings After Falling a Bit

An article  by Alex Usher in Higher Education in Russia and Beyond, reprinted in University World News, suggests five structural reasons why Russian universities will not rise very quickly in the global rankings. These are:


  • the concentration of resources in academies rather than universities
  • excessive specialisation among existing universities
  • a shortage of researchers caused by the economic crisis of the nineties
  • excessive bureaucratic control over research projects
  • limited fluency in English.

Over the next couple of years things might even get a bit worse. QS are considering introducing a sensible form of field normalisation, just for the five main subject groups. This might not happen, since they are well aware of the further advantages it would give to English-speaking universities that are strong in the humanities and social sciences, especially Oxbridge and places like Yale and Princeton. But if it did happen, it would not be good for Russian universities. Meanwhile, THE has spoken about doing something about hugely cited multi-authored physics papers, and that could drastically affect institutions like MEPhI.

But after that, there are special features in the QS and THE world rankings that could be exploited by Russian universities. 

Russia is surrounded by former Soviet countries where Russian is widely used. These could provide large numbers of international research collaborators, an indicator in the THE rankings; they could also be a source of international students and faculty, indicators in the THE and QS rankings, and of respondents to the THE and QS academic surveys.

Russia might also consider tapping the Chinese supply of bright students for STEM subjects. It is likely that the red bourgeoisie will start wondering about the wisdom of sending their heirs to universities that give academic credit for things like walking around with a mattress or not shaving armpit hair, and will think instead about a degree in engineering from Moscow State or MEPhI.

Russian universities also appear to have a strong bias towards applied sciences and vocational training that should, if marketed properly, produce high scores in the QS employer survey and the THE Industry Income: Innovation indicator.


Friday, October 04, 2013

MIT and TMU are the most influential research universities in the world

I hope to comment extensively on the new Times Higher Education - Thomson Reuters rankings in a while but for the moment here is a comment on the citations indicator.

Last year Times Higher Education and Thomson Reuters solemnly informed the world that the two most influential places for research were Rice University in Texas and the Moscow State Engineering Physics Institute (MEPhI).

Now, the top two for Citations: research influence are MIT, which sounds a bit more sensible than Rice, and Tokyo Metropolitan University. Rice has slipped very slightly and MEPhI has disappeared from the general rankings because it was realised that it is a single-subject institution. I wonder how they worked that out.

That may be a bit unfair. What about that paper on opposition politics in central Russia in the 1920s?

Tokyo Metropolitan University's success at first seems rather odd because it also has a very low score for Research, which probably means that it has a poor reputation for research, does not receive much funding, has few graduate students and/or publishes few papers. So how could its research be so influential?

The answer is that it was one of scores of contributors to a couple of multi-authored publications on particle physics and a handful of widely cited papers in genetics and also produced few papers overall. I will let Thomson Reuters explain how that makes it into a pocket or a mountain of excellence.

Saturday, April 20, 2013

The Leiden Ranking

The Leiden ranking for 2013 is out. This is produced by the Centre for Science and Technology Studies (CWTS) at Leiden University and represents pretty much the state of the art in assessing research publications and citations.

A variety of indicators are presented with several different settings but no overall winner is declared which means that these rankings are not going to get the publicity given to QS and Times Higher Education.

Here are top universities, using the default settings provided by CWTS.

Total Publications: Harvard
Citations per Paper: MIT
Normalised Citations per Paper: MIT
Quality of Publications: MIT

There are also indicators for international and industrial collaboration that I hope to discuss later.

It is also noticeable that high flyers in the Times Higher Education citations indicator, Alexandria University, Moscow Engineering Physics Institute (MEPhI), Hong Kong Baptist University, Royal Holloway, do not figure at all in the Leiden Ranking. What happened to them?

How could MEPhI, equal first in the world for research influence according to THE and Thomson Reuters, fail to even show up in the normalised citation indicator in the Leiden Ranking?

Firstly, Leiden have collected data for the top 500 universities in the world according to number of publications in the Web of Science. That would have been sufficient to keep these institutions out of the rankings.

In addition, Leiden use fractionalised counting as a default setting, so that the impact of multiple-author publications is divided by the number of university addresses. This would drastically reduce the impact of publications like the Review of Particle Physics.
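In simplified form (the actual CWTS scheme has more detail, and the address count below is approximate), fractionalised counting looks like this:

```python
def fractionalised_impact(citations, n_addresses):
    """Leiden-style default counting, simplified: divide a paper's
    citation impact by the number of contributing university
    addresses, so no single address gets full credit."""
    return citations / n_addresses

# A Review of Particle Physics with ~113 institutional affiliations
# contributes only a sliver of its citations to any one of them:
per_institution = fractionalised_impact(1627, 113)  # ~14.4
```

That sliver, rather than the full 1627 citations, is what each affiliated institution would carry into its score.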

Also, by field Leiden mean five broad subject groups, whereas Thomson Reuters appears to use a larger number (21, if they use the same system as they do for highly cited researchers). There is accordingly more chance of anomalous cases having a great influence in the THE rankings.

THE and Thomson Reuters would do well to look at the multi-authored, and most probably soon to be multi-cited, papers that were published in 2012 and look at the universities that could do well in 2014 if the methodology remains unchanged.


Wednesday, September 18, 2019

Going Up and Up Down Under: the Case of the University of Canberra

It is a fact almost universally ignored that when a university suddenly rises or falls many places in the global rankings the cause is not transformative leadership, inclusive excellence, team work, or strategic planning but nearly always a defect or a change in the rankers' methodology.

Let's take a look at the fortunes of the University of Canberra (UC) which THE world rankings now have in the world's top 200 universities and Australia's top ten. This is a remarkable achievement since the university did not appear in these rankings until 2015-16 when it was placed in the 500-600 band with very modest scores of 18.4 for teaching, 19.3 for research, 29.8 for citations, which is supposed to measure research impact, 36.2 for industry income, and 54.6 for international outlook.

Just four years later the indicator scores are 25.2 for teaching, 31.1 for research, 99.2 for citations, 38.6 for industry income, and 86.9 for international orientation. 

The increase in the overall score over four years, calculated with different weightings for the indicators, was composed of 20.8 points for citations and 6.3 for the other four indicators combined. Without those 20.8 points Canberra would be in the 601-800 band.

I will look at where that massive citation score came from in a moment. 

It seems that the Australian media is reporting on this superficially impressive performance with little or no scepticism and without noting how different it is from the other global rankings. 

The university has issued a statement quoting vice-chancellor Professor Deep Saini as saying that the "result confirms the steady strengthening of the quality at the University of Canberra, thanks to the outstanding work of our research, teaching and professional staff" and that the "increase in citation impact is indicative of the quality of research undertaken at the university, coupled with a rapid growth in influence and reach, and has positioned the university as amongst the best in the world."

The Canberra Times reports that the vice-chancellor has said that part of the improvement was the result of a talent acquisition campaign, while noting that many faculty were complaining about pressure and excessive workloads.

Leigh Sullivan, DVC for research and innovation, has a piece in the Campus Morning Mail that hints at reservations about UC's apparent success, which is "a direct result of its Research Foundation Plan (2013-2017)" and of "a strong emphasis on providing strategic support for research excellence in a few select research areas where UC has strong capability." He notes that when the citation scores of research stars are excluded there has still been a significant increase in citations, and warns that what goes up can go down and that performance can be affected by changes in the ranking methodology.

The website riotact quotes the vice-chancellor on the improvement in research quality, as evidenced by the citation score, and as calling for more funding for universities: the "government has to really think and look hard at how well we support our universities. That's not to say it badly supports us, it's that the university sector deserves to be on the radar of our government as a major national asset."

The impressive ascent of UC is unique to THE. No serious ranking puts it in the top 200 or anywhere near. In the current Shanghai Rankings it is in the 601-700 band and has been falling for the last two years. In Webometrics it is 730th in the world and 947th for Excellence, that is, publications among the 10% most cited in 25 disciplines. In University Ranking by Academic Performance it is 899th, and in the CWUR Rankings it doesn't even make the top 1,000.

Round University Ranking and Leiden Ranking do not rank UC at all.

Apart from THE, UC does best in the QS rankings, where it is 484th in the world and 26th in Australia.

So how could UC perform so brilliantly in THE rankings when nobody else has recognised that brilliance? What does THE know that nobody else does? Actually, it does not perform brilliantly in the THE rankings, just in the citations indicator which is supposed to measure research influence or research impact.

This year UC has a score of 99.2 which puts it in the top twenty for citations just behind Nova Southeastern University in Florida and Cankaya University in Turkey and ahead of Harvard, Princeton and Oxford. The top university this year is Aswan University in Egypt replacing Babol Noshirvani University of Technology in Iran. 

No, THE is not copying the interesting methodology of the Fortunate 500. This is the result of an absurd methodology that THE is unable or unwilling for some reason to change.

THE has a self-inflicted problem with a small number of papers that have hundreds or thousands of "authors" and collect thousands of citations. Some of these are from the CERN project, and THE has dealt with them by using a modified form of fractional counting for papers with more than a thousand authors. That has removed the privilege of institutions that contribute to CERN projects but has replaced it with the privilege of those that contribute to the Global Burden of Disease Study (GBDS), whose papers tend to have hundreds but not thousands of contributors and sometimes receive over a thousand citations. As a result, places like Tokyo Metropolitan University, National Research Nuclear University MEPhI and Royal Holloway, London have been replaced as citation superstars by St George's, University of London, Brighton and Sussex Medical School, and Oregon Health and Science University.

It would be a simple matter to apply fractional counting to all papers, dividing the number of citations by the number of authors. After all, Leiden Ranking and Nature Index manage to do it, but THE for some reason has chosen not to follow.

The problem is compounded by counting self-citations, by hyper-normalisation so that the chances of hitting the jackpot with an unusually highly cited paper are increased, and by the country bonus that boosts the scores of universities by virtue of their location in low-scoring countries.

And so to UC's apparent success this year. This is entirely the result of its citation score, which is entirely dependent on THE's methodology.

Between 2014 and 2018 UC had 3,825 articles in the Scopus database, of which 27 were linked to the GBDS, which is funded by the Bill and Melinda Gates Foundation. Those 27 articles, each with hundreds of contributors, have received 18,431 citations, all of which are credited to UC and its contributors. The total number of citations is 53,929, so those 27 articles accounted for over a third of UC's citations. Their impact might be even greater if they were cited disproportionately soon after publication.
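The share is easy to check from the figures just given:

```python
# Scopus counts for UC, 2014-2018, as quoted above.
gbds_citations = 18_431
total_citations = 53_929
gbds_articles, total_articles = 27, 3_825

citation_share = gbds_citations / total_citations  # ~0.34: over a third
article_share = gbds_articles / total_articles     # under 1% of output
```

So under 1% of UC's output accounts for over a third of its citations.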

UC has of course improved its citation performance even without those articles but it is clear that they have made an outsize contribution. UC is not alone here. Many universities in the top 100 for citations in the THE world rankings owe their status to the GBDS: Anglia Ruskin, Reykjavik, Aswan, Indian Institute of Technology Ropar, the University of Peradeniya, Desarrollo, Pontifical Javeriana and so on.

There is absolutely nothing wrong with the GBDS nor with UC encouraging researchers to take part. The problem lies with THE and its reluctance to repair an indicator that produces serious distortions and is an embarrassment to those universities who apparently look to the THE rankings to validate their status.

Monday, October 29, 2018

Is THE going to reform its methodology?


An article by Duncan Ross in Times Higher Education (THE) suggests that the World University Rankings are due for repair and maintenance. He notes that these rankings were originally aimed at a select group of research orientated world class universities but THE is now looking at a much larger group that is likely to be less internationally orientated, less research based and more concerned with teaching.

He says that it is unlikely that there will be major changes in the methodology for the 2019-20 rankings next year but after that there may be significant adjustment.

There is a chance that the industry income indicator, income from industry and commerce divided by the number of faculty, will be changed. This is an indirect attempt to capture innovation and is unreliable, since it is based entirely on data submitted by institutions. Alex Usher of Higher Education Strategy Associates has pointed out some problems with this indicator.

Ross seems most concerned, however, with the citations indicator which at present is normalised by field, of which there are over 300, type of publication and year of publication. Universities are rated not according to the number of citations they receive but by comparison with the world average of citations to documents of a specific type in a specific field in a specific year. There are potentially over 8,000 boxes into which any single citation could be dropped for comparison.

Apart from anything else, this has resulted in a serious reduction in transparency. Checking the scores for Highly Cited Researchers or Nobel laureates and Fields medalists in the Shanghai rankings can be done in a few minutes. Try comparing thousands of world averages with the citation scores of a university.

This methodology has produced a series of bizarre results, noted several times in this blog. I hope I will be forgiven for yet again listing some of the research impact superstars that THE has identified over the last few years: Alexandria University, Moscow Nuclear Research University MEPhI, Anglia Ruskin University, Brighton and Sussex Medical School, St George's University of London, Tokyo Metropolitan University, Federico Santa Maria Technical University, Florida Institute of Technology, Babol Noshirvani University of Technology, Oregon Health and Science University, Jordan University of Science and Technology, Vita-Salute San Raffaele University.

The problems of this indicator go further than just a collection of quirky anomalies. It now accords a big privilege to medical research as it once did to fundamental physics research. It offers a quick route to ranking glory by recruiting highly cited researchers in strategic fields and introduces a significant element of instability into the rankings.

So here are some suggestions for THE should it actually get round to revamping the citations indicator.

1. The number of universities around the world that do a modest amount of research of any kind is relatively small, maybe five or six thousand. The number that can reasonably claim to have a significant global impact is much smaller, perhaps two or three hundred. Normalised citations are perhaps a reasonable way of distinguishing among the latter, but pointless or counterproductive when assessing the former. The current THE methodology might be able to tell whether a definitive literary biography by a Yale scholar has the same impact in its field as cutting-edge research in particle physics at MIT, but it is of little use in assessing the relative research output of mid-level universities in South Asia or Latin America.

THE should therefore consider reducing the weighting of citations to the same as research output or lower.

2. A major cause of problems with the citations indicator is the failure to introduce complete fractional counting, that is, distributing credit for citations proportionately among authors or institutions. At the moment THE counts every author of a paper with fewer than a thousand authors as though each of them were the sole author of the paper. As a result, medical schools that produce papers with hundreds of authors now have a privileged position in the THE rankings, something that the use of normalisation was supposed to prevent.

THE has introduced a moderate form of fractional counting for papers with over a thousand authors but evidently this is not enough.

It seems that some rankers do not like fractional counting because it might discourage collaboration. I would not dispute that collaboration might be a good thing, although it is often favoured by institutions that cannot do very well by themselves, but this is not a sufficient reason to allow distortions like those noted above to flourish.

3. THE have a country bonus or regional modification which divides a university's citation impact score by the square root of the score of the country in which the university is located. This was supposed to compensate for the lack of funding and networks that afflicts some countries, which apparently does not affect their reputation scores or publication output. The effect of this bonus is to give some universities a boost derived not from their excellence but from the mediocrity or worse of their compatriots. THE reduced the coverage of this bonus to fifty percent of the indicator in 2015. It might well be time to get rid of it altogether.
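The mechanics of the bonus are easy to sketch. The country scores below are hypothetical, chosen only to show the direction of the effect:

```python
import math

def regional_modification(university_impact, country_impact):
    """Divide a university's citation impact by the square root of
    its country's impact score: the lower the country average, the
    bigger the resulting boost."""
    return university_impact / math.sqrt(country_impact)

# Two otherwise identical universities in different citation cultures:
low_citation_country = regional_modification(1.0, 0.25)   # doubled
high_citation_country = regional_modification(1.0, 1.0)   # unchanged
```

A university surrounded by weak compatriots thus ends up with a higher score than an identical one in a strong system.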

4. Although QS stopped counting self-citations in 2011, THE continue to do so. They have said that overall self-citations make little difference. Perhaps, but as the rankings expand to include more and more universities it will become more likely that a self-citer or mutual citer will propel undistinguished schools up the charts. There could be more cases like Alexandria University or Veltech University.

5. THE need to think about what they are using citations to measure. Are they trying to assess research quality, in which case they should use citations per paper? Or are they trying to estimate overall research impact, in which case the appropriate metric would be total citations?

6. Normalisation by field and document type might be helpful for making fine distinctions among elite research universities, but lower down it creates or contributes to serious problems when a single document or an unusually productive author can cause massive distortions. Three hundred plus fields may be too many, and THE should think about reducing the number of fields.

7. There has been a proliferation in recent years in the number of secondary affiliations. No doubt most of these are making a genuine contribution to the life of both or all of the universities with which they are affiliated. There is, however, a possibility of serious abuse if the practice continues. It would be greatly to THE's credit if they could find some way of omitting or reducing the weighting of secondary affiliations.

8. THE are talking about different models of excellence. Perhaps they could look at the Asiaweek rankings, which had a separate table for technological universities, or Maclean's, with its separate rankings for doctoral/medical universities and primarily undergraduate schools. Different weightings could be given to citations for each of these categories.