The Taiwan Rankings
It is unfortunate that the "big three" of the international ranking scene -- ARWU (Shanghai), THE and QS -- receive a disproportionate amount of public attention while several research-based rankings are largely ignored. Among them is the National Taiwan University Ranking, which until this year was run by the Higher Education Evaluation and Accreditation Council of Taiwan.
The rankings, which are based on the ISI databases, assign a weighting of 25% to research productivity (number of articles over the last 11 years, number of articles in the current year), 35% to research impact (number of citations over the last 11 years, number of citations in the current year, average number of citations over the last 11 years) and 40% to research excellence (h-index over the last 2 years, number of highly cited papers, number of articles in the current year in highly cited journals).
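As a rough illustration of how those weights combine, here is a minimal sketch assuming cluster scores that have already been normalised to a 0-100 scale; the function and the sample figures are mine, not the NTU ranking's actual calculation.

```python
# Hypothetical cluster scores (0-100) for one university; the sub-indicator
# weights within each cluster are not reproduced here.
def ntu_style_composite(productivity, impact, excellence):
    """Combine the three research clusters with the weights quoted above."""
    weights = {"productivity": 0.25, "impact": 0.35, "excellence": 0.40}
    return (weights["productivity"] * productivity
            + weights["impact"] * impact
            + weights["excellence"] * excellence)

# Example: a university strong on excellence but weaker on sheer output.
print(ntu_style_composite(productivity=60.0, impact=70.0, excellence=85.0))  # 73.5
```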
Rankings by field and subject are also available.
There is no attempt to assess teaching or student quality, and publications in the arts and humanities are not counted.
These rankings are a valuable supplement to the Shanghai ARWU. The presentation of data over 11-year and one-year periods allows quick comparison of changes over the past decade.
Here are the top ten.
1. Harvard
2. Johns Hopkins
3. Stanford
4. University of Washington at Seattle
5. UCLA
6. University of Michigan Ann Arbor
7. Toronto
8. University of California Berkeley
9. Oxford
10. MIT
High-flyers in other rankings do not do especially well here. Princeton is 52nd, Caltech 34th, Yale 19th and Cambridge 15th, most probably because they are relatively small or have strengths in the humanities.
Saturday, December 15, 2012
Sunday, November 18, 2012
Article in University World News
GLOBAL
Ranking’s research impact indicator is skewed
Richard Holmes, 15 November 2012, Issue No: 247
The 2012 Times Higher Education World University Rankings consisted of 13 indicators grouped into five clusters. One of these clusters comprised precisely one indicator, research impact, which is measured by normalised citations and which THE considers to be the flagship of the rankings.
It is noticeable that this indicator, prepared for THE by Thomson Reuters, gives some universities implausibly high scores. I have calculated the world's top universities for research impact according to this indicator.
Since it accounts for 30% of the total ranking it clearly can have a significant effect on overall scores. The data are from the profiles, which can be accessed by clicking on the top 400 universities. Here are the top 20:
1. Rice University
1. Moscow (State) Engineering Physics Institute (MEPhI)
3. University of California Santa Cruz
3. MIT
5. Princeton
6. Caltech
7. University of California Santa Barbara
7. Stanford
9. University of California Berkeley
10. Harvard
11. Royal Holloway London
12. Chicago
13. Northwestern
14. Tokyo Metropolitan University
14. University of Colorado Boulder
16. University of Washington Seattle
16. Duke
18. University of California San Diego
18. University of Pennsylvania
18. Cambridge
There are some surprises here, such as Rice University in joint top place, second-tier University of California campuses at Santa Cruz (equal third with MIT) and Santa Barbara (equal seventh with Stanford) placed ahead of Berkeley and Los Angeles, Northwestern almost surpassing Chicago, and Tokyo Metropolitan University ahead of Tokyo and Kyoto universities and everywhere else in Asia.
It is not totally implausible that Duke and the University of Pennsylvania might be overtaking Cambridge and Oxford for research impact, but Royal Holloway and Tokyo Metropolitan University?
These are surprising, but Moscow State Engineering Physics Institute (MEPhI) as joint best in the world is a definite head-scratcher.
Other oddities
Going down a bit in this indicator we find more oddities.
According to Thomson Reuters, the top 200 universities in the world for research impact include Notre Dame, Carleton, William and Mary College, Gottingen, Boston College, University of East Anglia, Iceland, Crete, Koc University, Portsmouth, Florida Institute of Technology and the University of the Andes.
On the other hand, when we get down to the 300s we find that Tel Aviv, National Central University Taiwan, São Paulo, Texas A&M and Lomonosov Moscow State University are assigned surprisingly low places. The latter is actually in 400th place for research impact among the top 400.
It would be interesting to hear what academics in Russia think about an indicator that puts MEPhI in first place in the world for research impact and Lomonosov Moscow State University in 400th place.
I wonder too about the Russian reaction to MEPhI as overall second among Russian and Eastern European universities. See here, here and here for national university rankings and here and here for web-based rankings.
Déjà vu
We have been here or somewhere near here before.
In 2010 the first edition of the THE rankings placed Alexandria University in the world's top 200 and fourth for research impact. This was the result of a flawed methodology combined with diligent self-citation and cross-citation by a writer whose lack of scientific credibility has been confirmed by a British court.
Supposedly the methodology was fixed last year. But now we have an indicator as strange as in 2010, perhaps even more so.
So how did MEPhI end up as world's joint number one for research impact? It should be emphasised that this is something different from the case of Alexandria. MEPhI is, by all accounts, a leading institution in Russian science. It is, however, very specialised and fairly small and its scientific output is relatively modest.
First, let us take a look at another source, the Scimago World Report, which gives MEPhI a rank of 1,722 for total publications between 2006 and 2010, the same period that Thomson Reuters counts.
Admittedly, that includes a few non-university institutions. It has 25.9% of its publications in the top quartile of journals. It has a score of 8.8% for excellence – that is, the proportion of publications in the most highly cited 10% of publications. It gets a score of 1.0 for ‘normalised impact’, which means that it gets exactly the world average for citations adjusted by field, publication type and period of publication.
Moving on to Thomson Reuters’ database at the Web of Science, MEPhI has had only 930 publications listed under that name between 2006 and 2010, although there were some more under other name variants that pushed it over the 200 papers per year threshold to be included in the rankings.
It is true that MEPhI can claim three Nobel prize winners, but they received awards in 1958 and 1964 and one of them was for work done in the 1930s.
So how could anyone think that an institution with a modest and specialised output of publications, and a citation record that, according to Scimago, does not seem significantly different from the international average – Scimago uses the somewhat larger Scopus database, while Thomson Reuters uses the more selective ISI Web of Science – could emerge at the top of Thomson Reuters' research impact indicator?
Furthermore, MEPhI has no publications listed in the ISI Social Science Citation Index and exactly one (uncited) paper in the Arts and Humanities Citation Index, on oppositional politics in Central Russia in the 1920s.
There are, however, four publications assigned to MEPhI authors in the Web of Science that are classified under the literature of the British Isles. None of them seems to have anything to do with literature or the British Isles or any other isles, but each has a healthy handful of citations that would yield much higher normalised values if they were classified under literature rather than physics.
Citations
Briefly, the essence of Thomson Reuters' counting of citations is that a university's citations are compared to the average for a field in a particular period after publication.
So if the average for a field is 10 citations per paper one year after publication, then 300 citations of a single paper one year after publication would yield a normalised score of 30. If the average for the field were one citation, the same 300 citations would yield a score of 300.
To get a high score in the Thomson Reuters research impact indicator, it helps to get citations soon after publication, preferably in a field where citations are low or middling, rather than simply getting many citations.
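To illustrate, here is a minimal sketch of that ratio in code; the function is my own illustration of the calculation described above, not Thomson Reuters' actual procedure.

```python
def normalised_impact(citations, field_average):
    """Citations for a paper relative to the average for its field, publication
    year and citation window (the ratio described in the text)."""
    return citations / field_average

# 300 citations against a field average of 10 count as 30 ...
print(normalised_impact(300, 10))   # 30.0
# ... while the same 300 citations against an average of 1 count as 300.
print(normalised_impact(300, 1))    # 300.0
```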
The main cause of MEPhI's research impact supremacy would appear to be a biennial review that summarises research over two years in particle physics and is routinely referred to in the literature review of research papers in the field.
The life span of each review is short since it is superseded after two years by the next review so that the many citations are jammed into a two-year period, which could produce a massive score for ‘journal impact factor’.
It could also produce a massive score on the citations indicator in the THE rankings. In addition, Thomson Reuters applies a regional weighting based on each country's overall citation record: institutions in countries where citations are generally low get some extra value added.
The 2006 “Review of Particle Physics”, published in Journal of Physics G, received a total of 3,662 citations, mostly in 2007 and 2008. The 2008 review, published in Physics Letters B, had 3,841 citations, mostly in 2009 and 2010, and the 2010 review, also published in Journal of Physics G, had 2,592 citations, nearly all in 2011. Someone from MEPhI was listed as co-author of the 2008 and 2010 reviews.
It is not the total number of citations that matters, but the number of citations that occur soon after publication. So the 2008 review received 1,278 citations in 2009, but the average number of citations in 2009 to other papers published in Physics Letters B for 2008 was 4.4.
So the 2008 review received nearly 300 times as many citations in the year after publication as the mean for that journal. Add the extra weighting for Russia and there is a very large boost to MEPhI's score from just a single publication. Note that these are reviews of research so it is likely that there had already been citations to the research publications that are reviewed. Among the highlights of the 2010 review are 551 new papers and 108 mostly revised or new reviews.
If the publications had a single author or just a few authors from MEPhI then this would perhaps suggest that the institute had produced or made a major contribution to two publications of exceptional merit. The 2008 review in fact had 173 co-authors. The 2010 review listed 176 members of the Particle Data Group who are referred to as contributors.
It seems then that MEPhI was declared the joint best university for research impact largely because of two multi-author (or contributor) publications to which it had made a fractional contribution. Those four papers assigned to literature may also have helped.
As we go through the other anomalies in the indicator, we find that the reviews of particle physics contributed to other high research impact scores. Tokyo Metropolitan University, Royal Holloway London, the University of California at Santa Cruz, Santa Barbara and San Diego, Notre Dame, William and Mary, Carleton and Pisa also made contributions to the reviews.
This was not the whole story. Tokyo Metropolitan University benefited from many citations to a paper about new genetic analysis software and Santa Cruz had contributed to a massively cited multi-author human haplotype map.
Number of authors
This brings us to the total number of publications. There were more than 100 authors of, or contributors to, the reviews, but for some of their institutions the citations had no discernible effect on the indicator and for others not very much.
Why the difference? Here size really does matter and small really is beautiful.
MEPhI has relatively few publications overall. It only just managed to cross the 200 publications per year threshold to get into the rankings. This means that the massive and early citation of the reviews was averaged out over a small number of publications. For others the citations were absorbed by many thousands of publications.
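A rough sketch of the size effect, using invented figures: the same pair of outlier papers is averaged over roughly 200 papers in one case and 20,000 in the other.

```python
def average_normalised_impact(paper_scores):
    """Institutional score as a plain mean of per-paper normalised impacts
    (a simplification, not Thomson Reuters' exact aggregation)."""
    return sum(paper_scores) / len(paper_scores)

# Two outlier reviews scoring around 290, everything else at the world
# average of 1.0; all figures are invented for illustration.
small_output = [290.0, 290.0] + [1.0] * 200      # an institution near the 200-paper threshold
large_output = [290.0, 290.0] + [1.0] * 20000    # an institution with thousands of papers

print(round(average_normalised_impact(small_output), 2))   # about 3.86
print(round(average_normalised_impact(large_output), 2))   # about 1.03
```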
These anomalies and others could have been avoided by a few simple and obvious measures. After the business with Alexandria in 2010 Thomson Reuters did tweak its system, but evidently this was not enough.
First, it would help if Thomson Reuters scrutinised the criteria by which specialised institutions are included in the rankings. If we are talking about how well universities spread knowledge and ideas, it is questionable whether we should count institutions that do research in one or only a few fields.
There are many methods by which research impact can be evaluated. The full menu can be found on the Leiden Ranking site. Use a variety of methods to calculate research impact, especially those like the h-index that are specifically designed to work around outliers and extreme cases.
It would be sensible to increase the threshold of publications for inclusion in the rankings. The Leiden Ranking excludes universities with fewer than 500 publications a year. If a publication has multiple authors, divide the number of citations by the number of authors. If that is too complex, then start dividing citations only when the number of authors reaches 10 or a dozen.
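Here is a minimal sketch of the fractional counting suggested above, using the 173 co-authors of the 2008 review as an example; the threshold variant is one possible reading of "start dividing when the number reaches 10 or a dozen".

```python
def fractional_citations(citations, n_authors):
    """Credit a contributing institution with citations divided by the number of authors."""
    return citations / n_authors

# The 2008 Review of Particle Physics: 1,278 citations in 2009, 173 co-authors,
# one of them affiliated with MEPhI.
print(round(fractional_citations(1278, 173), 1))   # 7.4 credited citations instead of 1,278

def capped_fractional_citations(citations, n_authors, threshold=10):
    """A milder variant: only start dividing once the author count passes a threshold."""
    return citations / n_authors if n_authors > threshold else citations
```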
Do not count reviews, summaries, compilations or other publications that refer to papers that may have already been cited, or at least put them in a separate publication type. Do not count self-citations. Even better, do not count citations within the same institution or the same journal.
Most importantly, calculate the indicator for the six subject groups and then aggregate them. If you think that a fractional contribution to two publications justifies putting MEPhI at the top for research impact in physics, go ahead and give them 100 for natural sciences or physical sciences. But is it reasonable to give the institution any more than zero for arts and humanities, the social sciences and so on?
So we have a ranking indicator that has again yielded some very odd results.
In 2010 Thomson Reuters asserted that it had a method that was basically robust, transparent and sophisticated but which had a few outliers and statistical anomalies about which they would be happy to debate.
It is beginning to look as though outliers and anomalies are here to stay and there could well be more on the way.
It will be interesting to see if Thomson Reuters will try to defend this indicator again.
* Richard Holmes is a lecturer at Universiti Teknologi MARA in Malaysia and author of the University Ranking Watch blog.
Apology for factual error
In an earlier version of this article in University World News I made a claim that Times Higher Education, in their 2012 World University Rankings, had introduced a methodological change that substantially affected the overall ranking scores. I acknowledge that this claim was without factual foundation. I withdraw the claim and apologise without reservation to Phil Baty and Times Higher Education.
Richard Holmes
Saturday, November 03, 2012
Apology
In a recent article in University World News I made a claim that Times Higher Education in their recent World University Rankings had introduced a methodological change that substantially affected the overall ranking scores. I acknowledge that this claim was without factual foundation. I withdraw the claim and apologise without reservation to Phil Baty and Times Higher Education.
Saturday, October 27, 2012
More on MEPhI
Right after putting up the post on Moscow State Engineering Physics Institute and its "achievement" in getting the maximum score for research impact in the latest THE - TR World University Rankings, I found this exchange on Facebook. See my comments at the end.
- Valery Adzhiev So, the best university in the world in the "citation" (i.e. "research influence") category is Moscow State Engineering Physics Institute with maximum '100' score. This is remarkable achivement by any standards. At the same time it scored in "research" just 10.6 (out of 100) which is very, very low result. How on earth that can be?
- Times Higher Education World University Rankings Hi Valery,
Regarding MEPHI’s high citation impact, there are two causes: Firstly they have a couple of extremely highly cited papers out of a very low volume of papers. The two extremely highly cited papers are skewing what would ordinarily be a very good normalized citation impact to an even higher level.
We also apply "regional modification" to the Normalized Citation Impact. This is an adjustment that we make to take into account the different citation cultures of each country (because of things like language and research policy). In the case of Russia, because the underlying citation impact of the country is low it means that Russian universities get a bit of a boost for the Normalized Citation Impact.
MEPHI is right on the boundary for meeting the minimum requirement for the THE World University Rankings, and for this reason was excluded from the rankings in previous years. There is still a big concern with the number of papers being so low and I think we may see MEPHI’s citation impact change considerably over time as the effect of the above mentioned 2 papers go out of the system (although there will probably be new ones come in).
Hope this helps to explain things.
THE - Valery Adzhiev Thanks for your prompt reply. Unfortunately, the closer look at that case only adds rather awkward questions. "a couple of extremely highly cited papers are actually not "papers": they are biannual volumes titled "The Review of Particle Physics" that ...See More
- Valery Adzhiev I continue. There are more than 200 authors (in fact, they are "editors") from more than 100 organisation from all over the world, who produce those volumes. Look: just one of them happened to be affiliated with MEPhI - and that rather modest fact (tha...See More
- Valery Adzhiev Sorry, another addition: I'd just want to repeat that my point is not concerned only with MEPhI - Am talking about your methodology. Look at the "citation score" of some other universities. Royal Holloway, University of London having justt 27.7 in "res...See More
- Alvin See Great observations, Valery.
- Times Higher Education World University Rankings Hi Valery,
Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for...See More
- Andrei Rostovtsev This is in fact rather philosofical point. There are also a number of very scandalous papers with definitively negative scientific impact, but making a lot of noise around. Those have also high contribution to the citation score, but negative impact t...See More
It is true that two extremely highly cited publications combined with a low total number of publications skewed the results, but what is equally or perhaps more important is that these citations occurred in the year or two after publication, when citations tend to be relatively infrequent compared with later years. The 2010 publication is a biennial review, like the 2008 publication, that will be cited copiously for two years, after which it will no doubt be superseded by the 2012 edition.
Also, we should note that in the ISI Web of Science the 2008 publication is classified as "physics, multidisciplinary". Papers listed as multidisciplinary generally get relatively few citations, so if the publication was compared to other multidisciplinary papers it would get an even larger weighting.

Valery has an excellent point when he notes that these publications have over 100 authors or contributors each (I am not sure whether they are actual researchers or administrators). Why then did not all the other contributors boost their institutions' scores to similar heights? Partly because they were not in Russia and therefore did not get the regional weighting, but also because they were publishing many more papers overall than MEPhI.
So basically, A. Romaniouk, who contributed 1/173rd of one publication, was considered as having more research impact than hundreds of researchers at Harvard, MIT, Caltech and so on producing hundreds of papers cited hundreds of times. Sorry, but is this a ranking of research quality or a lottery?
The worst part of THE's reply is this:
Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for all to see (and indeed scrutinise, which everyone is entitled to do).
We welcome feedback, are constantly developing our system, and will definitely take your comments on board.
The system is not balanced. Citations have a weighting of 30%, much more than any other indicator. Even the research reputation survey has a weighting of only 18%. And to describe as comprehensive an indicator that allows a fraction of one or two publications to surpass massive amounts of original and influential research is really plumbing the depths of absurdity.
I am just about to finish comparing the scores for research and research impact for the top 400 universities. There is a statistically significant correlation, but it is quite modest. When research reputation, volume of publications and research income show such a modest correlation with research impact, it is time to ask whether there is a serious problem with this indicator.
Here is some advice for THE and TR.
- First, and surely very obvious, if you are going to use field normalisation, then calculate the score for discipline groups (natural sciences, social sciences and so on) and aggregate the scores (see the sketch after this list). So give MEPhI a 100 for physical or natural sciences if you think they deserve it, but not for the arts and humanities.
- Second, and also obvious, introduce fractional counting, that is dividing the number of citations by the number of authors of the cited paper.
- Do not count citations to summaries, reviews or compilations of research.
- Do not count citations of commercial material about computer programs. This would reduce the very high and implausible score for Gottingen which is derived from a single publication.
- Do not assess research impact with only one indicator. See the Leiden ranking for the many ways of rating research.
- Consider whether it is appropriate to have a regional weighting. This is after all an international ranking.
- Reduce the weighting for this indicator.
- Do not count self-citations. Better yet do not count citations from researchers at the same university.
- Strictly enforce your rule about not including single subject institutions in the general rankings.
- Increase the threshold number of publications for inclusion in the rankings from two hundred to four hundred.
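To make the first suggestion in the list above concrete, here is a rough sketch of calculating the citation indicator separately for each subject group and then aggregating; the record structure, group labels and equal weights are my own assumptions, not TR's method.

```python
# Hypothetical per-paper records; the subject groups roughly follow THE's six
# discipline clusters (my labels, not the official ones).
SUBJECT_GROUPS = ["physical sciences", "engineering and technology", "life sciences",
                  "clinical and health", "social sciences", "arts and humanities"]

def subject_group_score(papers, group):
    """Mean field-normalised citation score for one subject group (0 if no papers)."""
    scores = [p["normalised_citations"] for p in papers if p["subject_group"] == group]
    return sum(scores) / len(scores) if scores else 0.0

def aggregated_indicator(papers):
    """Average the six subject-group scores with equal weights (an assumption)."""
    return sum(subject_group_score(papers, g) for g in SUBJECT_GROUPS) / len(SUBJECT_GROUPS)

# A specialised institute with two heavily cited physics reviews and nothing in
# the other five groups scores well in physical sciences but modestly overall.
papers = [{"subject_group": "physical sciences", "normalised_citations": 290.0},
          {"subject_group": "physical sciences", "normalised_citations": 290.0}]
print(round(subject_group_score(papers, "physical sciences"), 1))  # 290.0
print(round(aggregated_indicator(papers), 1))                      # about 48.3
```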
Friday, October 26, 2012
Dancing in the Street in Moscow
"Jubilant crowds poured into the streets of Moscow when it was announced that Moscow State Engineering Physics Institute had been declared to be the joint top university in the world, along with Rice University in Texas, for research impact".
Just kidding about the celebrations.
But the Times Higher Education - Thomson Reuters World University Rankings have given the "Moscow State Engineering Physics Institute" a score of 100 for research impact, which is measured by the number of citations per paper normalised by field, year of publication and country.
There are a couple of odd things about this.
First, "Moscow State Engineering Physics Institute " was reorganised in 2009 and its official title is now National Research Nuclear University MEPhI. It still seems to be normal to refer to MEPhI or Moscow State Engineering Physics Institute so I will not argue about this. But I wonder if there has been some confusion in TR's data collection.
Second, THE says that institutions are not ranked if they teach only a single narrow subject. Does the institution teach more than just physics?
So how did MEPhI do it? The answer seems to be because of a couple of massively cited review articles. The first was by C Amsler et (many many) alia in Physics Letters B of September 2008, entitled Review of Particle Physics. It was cited 1,278 times in 2009 and 1,627 times in 2010 according to the Web of Science, even more according to Google Scholar.
Here is the abstract.
"Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions., plus 2778 new measurements from 645 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors., probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, top quark, muon anomalous magnetic moment, extra dimensions, particle detectors, cosmic background radiation, dark matter, cosmological parameters, and big bang cosmology".
I have not counted the number of authors, but there are 113 institutional affiliations, of which MEPhI is 84th.
The second paper is by K. Nakamura et alia. It is also entitled Review of Particle Physics and was published in the Journal of Physics G: Nuclear and Particle Physics in July 2010. It was cited 1,240 times in 2011. This is the abstract.
"This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2158 new measurements from 551 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on neutrino mass, mixing, and oscillations, QCD, top quark, CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, fragmentation functions, particle detectors for accelerator and non-accelerator physics, magnetic monopoles, cosmological parameters, and big bang cosmology".
There are 119 affiliations, of which MEPhI is 91st.
Let me stress that there is nothing improper here. It is normal for papers in the physical sciences to include summaries or reviews of research at the beginning of a literature review. I also assume that the similarity in the wording of the abstracts would be considered appropriate standardisation within the discipline rather than plagiarism.
TR's method counts the number of citations of a paper compared to the average for that field in that year in that country. MEPhI would not get very much credit for a publication in physics, which is a quite highly cited discipline, but it would get some for being in Russia, where citations in English are relatively sparse, and a massive boost for exceeding the average for citations within one or two years of publication many times over.
There is one other factor. MEPhI was only one of more than 100 institutions contributing to each of these papers but it got such an unusually massive score because its citations, which were magnified by region and period of publication, were divided by a comparatively small number of publications.
This is not as bad as Alexandria University being declared the fourth best university for research impact in 2010. MEPhI is a genuinely excellent institution which Alexandria, despite a solitary Nobel laureate and an historic library, was not. But does it really deserve to be number one for research impact or even in the top 100? TR's methods are in need of very thorough revision.
And I haven't heard about any celebrations in Houston either.
"Jubilant crowds poured into the streets of Moscow when it was announced that Moscow State Engineering Physics Institute had been declared to be the joint top university in the world, along with Rice University in Texas, for research impact".
Just kidding about the celebrations.
But the Times Higher Education - Thomson Reuters World University Rankings have given the "Moscow State Engineering Physics Institute" a score of 100 for research impact, which is measured by the number of citations per paper normalised by field, year of publication and country.
There are a couple of odd things about this.
First, "Moscow State Engineering Physics Institute " was reorganised in 2009 and its official title is now National Research Nuclear University MEPhI. It still seems to be normal to refer to MEPhI or Moscow State Engineering Physics Institute so I will not argue about this. But I wonder if there has been some confusion in TR's data collection.
Second, THE says that institutions are not ranked if they teach only a single narrow subject. Does the institution teach more than just physics?
So how did MEPhI do it ? The answer seems to be because of a couple of massively cited review articles. The first was by C Amsler et (many many) alia in Physics Letters B of September 2008, entitled Review of Particle Physics. It was cited 1278 times in 2009 and 1627 times in 2010 according to the Web of Science, even more according to Google Scholar.
Here is the abstract.
"Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions., plus 2778 new measurements from 645 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors., probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, top quark, muon anomalous magnetic moment, extra dimensions, particle detectors, cosmic background radiation, dark matter, cosmological parameters, and big bang cosmology".
I have not counted the number of authors but there are113 institutional affiliations of which MEPhI is 84th.
The second paper is by K. Nakamura et alia. It is also entitled Review of Particle Physics and was published in the Journal Of Physics G-Nuclear and Particle Physics in July 2010 . It was cited 1240 times in 2011. This is the abstract.
"This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2158 new measurements from 551 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on neutrino mass, mixing, and oscillations, QCD, top quark, CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, fragmentation functions, particle detectors for accelerator and non-accelerator physics, magnetic monopoles, cosmological parameters, and big bang cosmology".
There are 119 affiliations of which MEPhI is 91st..
Let me stress that there is nothing improper here. It is normal for papers in the physical sciences to include summaries or reviews of research at the beginning of a literature review. I also assume that the similarity in the wording of the abstracts would be considered appropriate standardisation within the discipline rather than plagiarism.
TR 's method counts the numbers of citations of a paper compared to the average for that field in that year in that country. MEPhI would not get very much credit for a publication in physics which is a quite highly cited discipline, but it would get some for being in Russia where citations in English are relatively sparse and a massive boost for exceeding the average for citations within one or two years of publication many times over.
There is one other factor. MEPhI was only one of more than 100 institutions contributing to each of these papers but it got such an unusually massive score because its citations, which were magnified by region and period of publication, were divided by a comparatively small number of publications.
This is not as bad as Alexandria University being declared the fourth best university for research impact in 2010. MEPhI is a genuinely excellent institution which Alexandria, despite a solitary Nobel laureate and an historic library, was not. But does it really deserve to be number one for research impact or even in the top 100? TR's methods are in need of very thorough revision.
And I haven't heard about any celebrations in Houston either.
Tuesday, October 09, 2012
Saturday, October 06, 2012
What Happened in Massachusetts?
The University of Massachusetts has fallen from 64th place in the Times Higher Education World University Rankings to 72nd. Was this the result of savage spending cuts, the flight of international students or the rise of Asian universities?
Maybe, but there is perhaps another, less interesting reason.
Take a look at the THE reputation rankings for 2011 and 2012. The University of Massachusetts has fallen from 19th position in 2011 to 39th in 2012. How did that happen when there is no other remotely comparable change in the reputation rankings?
The obvious answer is that THE, like QS before them, are sometimes a bit hazy about American state university systems. They most probably counted votes for all five campuses of the university system last year but only the flagship campus at Amherst this year. The decline in the reputation score produced a smaller fall in the overall score. If there is another explanation I would be happy to hear it.
Update on My Job Application to Thomson Reuters
Looking at the Times Higher Education Reputation Rankings for 2012, I have just noticed that number 31 has gone missing, leaving 49 universities in the rankings.
Maybe not very important, but if you go around telling everybody how sophisticated and robust you are, a little bit more is expected.
My Job Application to Thomson Reuters
Looking at the 2011 Times Higher Education Reputation Ranking, prepared by Thomson Reuters, purveyors of robust and dynamic datasets, I noticed that there are only 48 universities listed (there are supposed to be 50) because numbers 25 and 34 have been omitted.
Friday, October 05, 2012
Observation on the THE Ranking
There will be some comments on the latest Times Higher Education World University Rankings over the next few days.
For the moment, I would just like to point to one noticeable feature of the rankings. The scores have improved across the board.
In 2011 the top university (Caltech) had an overall score of 94.8. This year it was 95.5.
In 2011 the 50th ranked university had a score of 64.9. This year it was 69.4.
In 2011 the 100th ranked university had a score of 53.7. This year it was 57.5 for the universities jointly ranked 99th.
In 2011 the 150th ranked university had a score of 46.7. This year it was 51.6.
In 2011 the 200th ranked university had a score of 41.4. This year it was 46.2.
The overall score is a combination of 13 different indicators, all of which are benchmarked against the highest scorer in each category, which receives a score of 100. Even if universities throughout the world were spending more money, improving staff-student ratios, producing more articles, generating more citations and so on, this would not in itself raise everybody's, or nearly everybody's, score.
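Taking that benchmarking at face value, a small sketch shows why a general improvement in raw performance should not, by itself, raise the rescaled scores (my own illustration, not THE's published scaling procedure).

```python
def benchmark_to_top(raw_scores):
    """Rescale an indicator so the highest raw score becomes 100."""
    top = max(raw_scores)
    return [round(100 * s / top, 1) for s in raw_scores]

raw_2011 = [50.0, 40.0, 30.0]                 # invented raw indicator values
raw_2012 = [s * 1.2 for s in raw_2011]        # every university improves by 20%

print(benchmark_to_top(raw_2011))   # [100.0, 80.0, 60.0]
print(benchmark_to_top(raw_2012))   # [100.0, 80.0, 60.0], identical, as argued above
```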
There are no methodological changes this year that might explain what happened.
Wednesday, October 03, 2012
Dumbing Down Watch
Is any comment needed?
Students starting university for the first time this autumn will be given a detailed breakdown of their academic achievements, exam results, extra-curricular activities and work placements, it was revealed.

More than half of universities in Britain will issue the new "Higher Education Achievement Report", with plans for others to adopt it in the future.

University leaders said the document would initially list students’ overarching degree classification.

But Prof Sir Robert Burgess, vice-chancellor of Leicester University and chairman of a working group set up to drive the reforms, said it was hoped that first, second and third-class degrees would eventually be phased out altogether.
University Migration Alert
At the time of posting, the new THE rankings listed Stellenbosch University in South Africa as one of the top European universities.
Remind me to apply for a job with Thomson Reuters sometime.
THE Rankings Out
The Times Higher Education World University Rankings 2012 are out. The top ten are:
1. Caltech (same as last year)
2. Oxford (up 2 places)
3. Stanford (down 1)
4. Harvard (down 2)
5. MIT (up 2)
6. Princeton (down 1)
7. Cambridge (down 1)
8. Imperial College London (same)
9. Berkeley (up 1)
At the top, the most important change is that Oxford has moved up two places to replace Harvard in the number two spot.
Monday, October 01, 2012
Well, they would, wouldn't they?
Times Higher Education has published the results of a survey by IDP, a student recruitment agency:
The international student recruitment agency IDP asked globally mobile students which of the university ranking systems they were aware of. The Times Higher Education World University Rankings attracted more responses than any other ranking - some 67 per cent. This was some way ahead of any others. Rankings produced by the careers information company Quacquarelli Symonds (QS) garnered 50 per cent of responses, and the Shanghai Academic Rankings of World Universities (ARWU) received 15.8 per cent. Asked which of the global rankings they had used when choosing which institution to study at, 49 per cent of students named the THE World University Rankings, compared to 37 per cent who named QS and 6.7 per cent who named the ARWU and the Webometrics ranking published by the Spanish Cybermetrics Lab, a research group of the Consejo Superior de Investigaciones Científicas (CSIC).
At the top of the page is a banner about IDP being proudly associated with the THE rankings. Also, IDP, which is in the student recruitment trade, is a direct competitor of QS.
The data could be interpreted differently. More respondents were aware of the THE rankings and had not used them than knew of the QS rankings and had not used them.
Saturday, September 29, 2012
Forgive me for being pedantic...
My respect for American conservatism took a deep plunge when I read this in an otherwise enjoyable review by Matthew Walther of Kingsley Amis's Lucky Jim:
Its eponymous hero, Jim Dixon, is a junior lecturer in history at an undistinguished Welsh college. Dixon’s pleasures are simple: he smokes a carefully allotted number of cigarettes each day and drinks a rather less measured amount of beer most nights at pubs. His single goal is to coast successfully through his two-year probation period and become a permanent faculty member in the history department.
It is well known, or ought to be, that the institution in the novel was based on University College, Leicester, which is a long way from Wales. The bit about the "Honours class over the road", a reference to the Welford Road municipal cemetery, is a dead giveaway.
Walther can be forgiven, though, since he reminded me of this description of Lucky Jim's history article:
“It was a perfect title, in that it crystallized the article’s niggling mindlessness, its funereal parade of yawn-enforcing facts, the pseudo-light it threw upon non-problems. Dixon had read, or begun to read, dozens like it, but his own seemed worse than most in its air of being convinced of its own usefulness and significance.”
Dumbing Down Watch
The New York Fire Department has announced the results of a new entrance exam. The pass mark of 70 was reached by 95.72% of applicants. Previous tests had been thrown out because insufficient numbers of African-Americans and Hispanics were able to pass.
The new exam appears to be extremely easy and seems to assume that firefighting is a job that requires minimal intelligence. Effectively, the new policy for the New York Fire Department is to select at random from those able to get themselves to a testing center and answer questions that should pose no challenge to the average junior high school student.
The change was in response to the directives of Judge Nicholas Garaufis, a graduate of Columbia Law School, which would seem to be as lacking in diversity as the NYFD. One suspects that the judge's disdain for the skills and knowledge, not to mention physical courage, of the firefighters is rooted in blatant class prejudice.
When is someone going to file a disparate impact suit against the top law schools?
Dumbing Down Watch
David Cameron, Old Etonian and Oxford graduate, apparently does not know the meaning of Magna Carta.
But, looking at the reports of the Letterman interview, he did not actually say he didn't know.
So, perhaps he was just pretending to be dumb.
Tuesday, September 25, 2012
Are Indian Universities Really Good at English?
Indian universities have never done well in any of the international university rankings. The problem is that, although there are many talented Indian students and researchers, they seem, like Indian entrepreneurs, to leave India as soon as they can.
But some observers have been taking comfort from a good showing in QS's subject ranking for English Language and Literature, which is actually based largely on the 2011 academic survey. One observer notes:
But DU's English department has made the day special as it figured in the top 100 list. Consequently, the celebratory mood was not restricted to North Campus alone but spilled over to South and off-campus colleges. Smiles could be observed on the faces of teachers as well as their pupils as their hard work paid off. "It is always pleasing to know you have done well at international level," said Ratnakar Kumar from Khalsa College. Another student from Hansraj added jokingly, "So we haven't been reading just stories because they don't fetch you such accolades." Verily, students were high on emotions.
A number of reasons can be attributed for the success that Department of English at Delhi University has witnessed. The significant factor is that students are encouraged and pushed to think outside the box, making bizarre to impressive attempts in their process of learning. So in a country in which rote-learning is the norm, DUDE has adopted a strategy to keep such evil at bay. Beside, the world-class faculty has also contributed to the improving standards of English Department.
The Times of India comments:
Teachers cite a number of reasons for the success of DU's English department. "First, it's the profile of the department in terms of research and publications. We are on top in subaltern studies, in post-colonial studies. Then, the numbers — we have 600 MA students, of whom 10-15 are as good as anybody," says a professor. He adds that India is not considered modern for technology but for the ideas of democracy and freedom, and those belong to the domain of humanities. The department is a part of the University Grants Commission's Special Assistance Programme.
The QS subject rankings for English are 90 per cent based on the academic survey, so research and publications had nothing to do with it. My suspicion is that graduates of Delhi University have fanned out across the world for postgraduate studies and have signed up for the QS survey giving an American, British or other university as their affiliation. When they fill out the form, they probably put the Ivy League and Oxbridge down first and then Delhi University after about 20 or 30 other names. The QS methodology takes no account of the order of the responses, so Harvard would get the same weighting as Delhi, or much less if the respondent gave an American affiliation, which would make Harvard a domestic vote and Delhi an international one.
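To make the suspected mechanism concrete, here is a minimal sketch of order-blind vote counting in which international nominations carry more weight than domestic ones. The weights, the function and the respondent are entirely hypothetical; QS does not publish its survey processing in this detail.

```python
# Minimal sketch of order-blind survey counting with separate weights for
# domestic and international nominations. The weights are invented for
# illustration; they are not QS's actual figures.

DOMESTIC_WEIGHT = 1.0       # assumed weight for a nomination from the respondent's own country
INTERNATIONAL_WEIGHT = 2.0  # assumed, heavier weight for a cross-border nomination

def score_survey(responses):
    """responses: list of (respondent_country, nominations), where nominations
    is an ordered list of (university, university_country). Order is ignored."""
    scores = {}
    for respondent_country, nominations in responses:
        for university, university_country in nominations:
            weight = DOMESTIC_WEIGHT if university_country == respondent_country else INTERNATIONAL_WEIGHT
            scores[university] = scores.get(university, 0.0) + weight
    return scores

# One US-based respondent who lists Harvard first and Delhi last out of twenty:
respondent = ("US", [("Harvard", "US")]
                    + [(f"University {i}", "UK") for i in range(1, 19)]
                    + [("Delhi", "IN")])
print(score_survey([respondent]))
# Harvard scores 1.0 (domestic vote); Delhi scores 2.0 (international vote),
# even though it was named twentieth.
```

Under these assumed weights, a single expatriate respondent is worth twice as much to Delhi as to Harvard, whatever position Delhi occupied on the form.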
Merging Universities
There is talk of merging Trinity College Dublin and University College Dublin. I am not sure whether there would be any benefit for faculty or students but inevitably there is a ranking angle.
The report says a UCD-TCD merger would give the merged college the critical mass and expertise needed to secure a place among the world’s best-ranked universities. At present, Ireland is not represented among the top 100 universities in the prestigious Times Higher Education World Reputation Ranking.
Monday, September 24, 2012
Are All Disciplines Equal?
Rankers, evaluators and assessors of all sorts have to face the problem that academic disciplines do not go about writing, publishing and citing in exactly the same way. Researchers in some disciplines write shorter papers that have more authors, are more easily divided into smaller units and get more citations sooner after publication than others. Medical research papers tend to be short, frequent, co-authored and cited very soon after publication compared to history or philosophy.
Different ranking organisations follow different approaches when dealing with this diversity of practice. Scimago just counts the total number of publications. ARWU of Shanghai counts publications but gives a double weighting to social sciences and none to the humanities. Thomson Reuters, who power the Times Higher Education world rankings, normalize by field and by country.
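As a rough illustration of what normalisation by field involves, here is a minimal sketch in which each paper's citations are divided by an assumed world average for its field and year; the baselines and papers are invented and are not Thomson Reuters' actual data.

```python
# Minimal sketch of field-normalised citation impact: each paper's citations are
# divided by an assumed world average for its field and year, and the resulting
# ratios are averaged. All figures below are invented for illustration.

WORLD_AVERAGE = {            # assumed average citations per paper, by (field, year)
    ("clinical medicine", 2010): 12.0,
    ("history", 2010): 1.5,
}

papers = [
    {"field": "clinical medicine", "year": 2010, "citations": 24},  # twice its field average
    {"field": "history", "year": 2010, "citations": 3},             # also twice its field average
]

def normalised_impact(papers):
    """Mean of citations divided by the world average for the paper's field and year."""
    ratios = [p["citations"] / WORLD_AVERAGE[(p["field"], p["year"])] for p in papers]
    return sum(ratios) / len(ratios)

print(normalised_impact(papers))  # 2.0: both papers count equally after normalisation
```

On this kind of measure, a history paper cited three times counts exactly the same as a medical paper cited twenty-four times, which is the equalising effect the normalisation is intended to have.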
Here are some fairly simple things that rankers might try to overcome disparities between disciplines. They could count total pages rather than the number of papers. They could look into counting citations of conference papers or books. Something worth doing might be giving a reduced weighting to co-authored papers, which would shift the balance a little bit towards the arts and humanities and might also help to discourage dubious practices like supervisors adding their names to papers written by their graduate students.
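One straightforward way to implement the reduced weighting for co-authored papers suggested above is fractional counting, in which a paper with n authors contributes 1/n to the total rather than 1. A minimal sketch, with invented papers:

```python
# Minimal sketch of fractional authorship counting: a paper with n authors
# contributes 1/n to the count instead of a full 1. The paper list is invented.

papers = [
    {"title": "Single-author history monograph chapter", "authors": 1},
    {"title": "Five-author clinical trial report", "authors": 5},
    {"title": "Three-author physics letter", "authors": 3},
]

def whole_count(papers):
    return len(papers)

def fractional_count(papers):
    return sum(1.0 / p["authors"] for p in papers)

print(whole_count(papers), round(fractional_count(papers), 2))
# 3 papers by whole counting, about 1.53 by fractional counting: the single-author
# humanities paper now outweighs the two heavily co-authored papers combined.
```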
We should also ask whether there are limits to how far field and country normalization should go. Do we really believe that someone who has received an average number of citations for political science in Belarus deserves the same score as someone with an average number of citations for chemical engineering in Germany?
It does seem that there are substantial and significant variations in the cognitive skills required to compete effectively in the academic arena. Here are combined verbal and quantitative GRE scores by intended major for selected disciplines in 2011-2012.
Physics and Astronomy 317
Economics 313
Philosophy 310
Biology 308
English Language and Literature 305
Education: Secondary 305
History 304
Education: Higher 304
Psychology 300
Sociology 300
Education: Administration 299
Education: Curriculum 299
Education: Early Childhood 294
The scores look as if they are very close together, but this is largely a consequence, perhaps intended, of the revision (dumbing down?) of the GRE.
Is it possible that one reason why physicists and economists publish more papers, and more widely read papers, than education specialists do is simply that the former have more ability and interest than the latter?
Friday, September 21, 2012
Dumbing Down Watch
There is a "cheating" scandal at Harvard. Apparently, students were given an open-book, open-anything take-home exam for 'Introduction to Congress' but were not expected to consult each other in any way.
Harvard College’s disciplinary board is investigating nearly half of the 279 students who enrolled in Government 1310: “Introduction to Congress” last spring for allegedly plagiarizing answers or inappropriately collaborating on the class’ final take-home exam.
Dean of Undergraduate Education Jay M. Harris said the magnitude of the case was “unprecedented in anyone’s living memory.”
Harris declined to name the course, but several students familiar with the investigation confirmed that Professor Matthew B. Platt's spring government lecture course was the class in question.
The professor of the course brought the case to the Administrative Board in May after noticing similarities in 10 to 20 exams, Harris said. During the summer, the Ad Board conducted a review of all final exams submitted for the course and found about 125 of them to be suspicious.
Presumably this is not the only take-home exam at Harvard, and presumably not the first for this course. So why has half the class now felt compelled to plagiarise or to collaborate inappropriately?
Has the course become more difficult than it used to be? Or are the students less capable? Or have admission standards become less rigorous?
Maybe QS and Times Higher were on to something after all when they dethroned Harvard from the number one spot in their world rankings.