Showing posts sorted by relevance for query MIT.

Thursday, September 16, 2010

Alexandria University

According to the THE rankings Alexandria University in Egypt (no. 147 overall) is the fourth university in the world for research impact, surpassed only by Caltech, MIT and Princeton.

Alexandria is not ranked by Shanghai Jiao Tong University or HEEACT. It is way down the SCImago rankings. Webometrics puts it in 5,882nd place and 7,253rd for the "Scholar" indicator.

That is not the only strange result for this indicator, which looks as though it will spoil the rankings as a whole.

More on Alexandria and some other universities in a few hours.

Tuesday, October 04, 2011

The US News rankings

The U.S. News rankings of American colleges and universities were released on September 13th. For more information go here.

The top 10 national universities are:

1.  Harvard
2.  Princeton
3.  Yale
4.  Columbia
5=  Caltech
5=  MIT
5=  Stanford
5=  Chicago
5=  University of Pennsylvania
10. Duke

Tuesday, August 05, 2014

Webometrics: Ranking Web of Universities 2nd 2014 Edition


The Webometrics rankings are based on web-derived data. They cover more than 22,000 institutions, far more than conventional rankings, and should always be consulted as a check on the plausibility of the others. They are, however, extremely volatile and that reduces their reliability considerably.
Publisher

Cybermetrics Lab, CSIC, Madrid



Scope

Global. 22,000+ institutions.


Methodology

From the Webometrics site.


The current composite indicator is now built as follows:
Visibility (50%)
IMPACT. The quality of the contents is evaluated through a "virtual referendum", counting all the external inlinks that the University webdomain receives from third parties. Those links are recognizing the institutional prestige, the academic performance, the value of the information, and the usefulness of the services as introduced in the webpages according to the criteria of millions of web editors from all over the world. The link visibility data is collected from the two most important providers of this information: Majestic SEO and ahrefs. Both use their own crawlers, generating different databases that should be used jointly for filling gaps or correcting mistakes. The indicator is the product of the square root of the number of backlinks and the number of domains originating those backlinks, so not only link popularity but, even more, link diversity matters. The maximum of the normalized results is the impact indicator.
Activity (50%)
PRESENCE (1/3). The total number of webpages hosted in the main webdomain (including all the subdomains and directories) of the university as indexed by the largest commercial search engine (Google). It counts every webpage, including all the formats recognized individually by Google, both static and dynamic pages and other rich files. It is not possible to have a strong presence without the contribution of everybody in the organization as the top contenders are already able to publish millions of webpages. Having additional domains or alternative central ones for foreign languages or marketing purposes penalizes in this indicator and it is also very confusing for external users.
OPENNESS (1/3). The global effort to set up institutional research repositories is explicitly recognized in this indicator, which takes into account the number of rich files (pdf, doc, docx, ppt) published in dedicated websites according to the academic search engine Google Scholar. Both the total records and those with correctly formed file names are considered (for example, Adobe Acrobat files should end with the suffix .pdf). The objective is to consider recent publications, which are now those published between 2008 and 2012 (new period).
EXCELLENCE (1/3). The academic papers published in high impact international journals play a very important role in the ranking of universities. Using simply the total number of papers can be misleading, so we are restricting the indicator to only those excellent publications, i.e. the university scientific output that is part of the 10% most cited papers in their respective scientific fields. Although this is a measure of the high quality output of research institutions, the data provider Scimago group supplied non-zero values for more than 5,200 universities (period 2003-2010). In future editions it is intended to match the counting periods between the Scholar and Scimago sources.
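As an illustration only, here is a minimal sketch of how the visibility indicator described above might be computed. The institutions and link counts are invented, and reading "the product of the square root of the number of backlinks and the number of domains" as sqrt(backlinks) multiplied by referring domains is my assumption, not Webometrics' published code.

```python
import math

# Hypothetical per-university link data, already merged from the two providers
# (Majestic SEO and ahrefs); names and numbers are illustrative only.
link_data = {
    "University A": {"backlinks": 4_000_000, "ref_domains": 50_000},
    "University B": {"backlinks": 9_000_000, "ref_domains": 30_000},
    "University C": {"backlinks": 1_000_000, "ref_domains": 80_000},
}

def raw_impact(backlinks, ref_domains):
    """One reading of the quoted description: sqrt(backlinks) times the number of
    referring domains, so link diversity weighs more than raw link popularity."""
    return math.sqrt(backlinks) * ref_domains

raw = {u: raw_impact(d["backlinks"], d["ref_domains"]) for u, d in link_data.items()}
best = max(raw.values())

# "The maximum of the normalized results is the impact indicator": scale to the leader.
impact = {u: v / best for u, v in raw.items()}

for u, score in sorted(impact.items(), key=lambda kv: -kv[1]):
    print(f"{u}: {score:.3f}")
```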

Top Ten

1.    Harvard University
2.    MIT
3.    Stanford University
4.    Cornell University
5.    University of Michigan
6.    University of California Berkeley
7.    Columbia University
8.    University of Washington
9.    University of Minnesota
10.  University of Pennsylvania

Countries with Universities in the Top Hundred

USA                      66
Canada                  7
UK                          4  
Germany                3
China                      3
Japan                     2
Switzerland            2
Netherlands           1
Australia                1
Italy                         1
South Korea          1
Taiwan                   1 
Belgium                 1
Hong Kong            1
Brazil                      1 
Austria                   1
Czech Republic    1
Singapore             1        
Mexico                   1



Top Ranked in Region

USA:                             Harvard
Canada:                       Toronto
Latin America:             Sao Paulo
Caribbean:                  University of the West Indies
Europe:                       Oxford
Africa:                          University of Cape Town
Asia:                            Seoul National University
South Asia:                 IIT Bombay
Southeast Asia:          National University of Singapore
Middle East:                Hebrew University of Jerusalem
Arab World:                 King Saud University
Oceania:                      Melbourne

Noise Index
Average position change of universities in the top 20 in 2013:

4.25

Comparison

Center for World University Rankings      --  0.90
Shanghai Rankings (ARWU): 2011-12         --  0.15
Shanghai Rankings (ARWU): 2012-13         --  0.25
THE WUR: 2012-13                          --  1.20
QS WUR: 2012-13                           --  1.70


Average position change of universities in the top 100 in 2013

12.08

Comparison

Center for World University Rankings      --  10.59
Shanghai Rankings (ARWU): 2011-12         --   2.01
Shanghai Rankings (ARWU): 2012-13         --   1.66
THE WUR: 2012-13                          --   5.36
QS WUR: 2012-13                           --   3.97
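For clarity, here is a minimal sketch of how an average position change of this kind can be computed; the institutions and ranks are invented.

```python
# Ranks of the same institutions in two consecutive editions (illustrative numbers).
ranks_2013 = {"Univ A": 1, "Univ B": 2, "Univ C": 3, "Univ D": 4}
ranks_2014 = {"Univ A": 2, "Univ B": 1, "Univ C": 6, "Univ D": 4}

def noise_index(old, new, top_n):
    """Average absolute change in position for institutions that were in the
    top_n of the earlier edition and appear in both editions."""
    cohort = [u for u, r in old.items() if r <= top_n and u in new]
    return sum(abs(new[u] - old[u]) for u in cohort) / len(cohort)

print(noise_index(ranks_2013, ranks_2014, top_n=4))  # (1 + 1 + 3 + 0) / 4 = 1.25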


Wednesday, September 10, 2014

America's Best Colleges

The US News & World Report's America's Best Colleges has just been published. There are no surprises at the top. Here are the top ten.

1.  Princeton
2.  Harvard
3.  Yale
4= Columbia
4= Stanford
4= Chicago
7.   MIT
8= Duke
8= University of Pennsylvania
10. Caltech

Analysis at the Washington Post indicates little movement at the top. Outside the elite there are some significant changes.

Liberal arts colleges
St. John's College, Annapolis, from 123rd to 56th.
Bennington College from 122nd to 89th.

National universities
Northeastern University from 69th to 42nd.
Texas Christian University from 99th to 46th.




Wednesday, March 06, 2013

The THE Reputation Rankings

Times Higher Education have published their reputation rankings based on data collected from the World University Rankings of 2012.

They are not very interesting. Which is exactly what they should be. When rankings show massive changes from one year to another a certain amount of scepticism is required.

The same six, Harvard, MIT, Stanford, Berkeley, Oxford and Cambridge, are well ahead of everybody else, as they were in 2012 and 2011.

Taking a quick look at the top fifty, there is little movement between 2011 and 2013. Four universities, from the US, Japan, the Netherlands and Germany, have dropped out. In their place there is one more from Korea, one more from the UK and two more from Australia.

I was under the impression that Australian universities were facing savage cuts in research funding and were going to be deserted by international students and researchers.

Maybe it is the other universities that are being cut, or maybe a bit of bloodletting is good for the health.

I also noticed that the number of respondents went down a bit in 2012. It could be that the academic world is beginning to suffer from ranking fatigue.

Thursday, September 16, 2010

Highlights of the THE Rankings

The top ten are:
1. Harvard
2. Caltech
3. MIT
4. Stanford
5. Princeton
6. Cambridge
6. Oxford
8. UC Berkeley
9. Imperial College
10. Yale

The best Asian university is the University of Hong Kong. Sao Paulo is best in South America and Melbourne in Australia. Cape Town is top in Africa followed by the University of Alexandria which is ranked 149th, a rather surprising result.

Monday, November 19, 2007

The Imperial Ascendancy

One of the most remarkable things about the THES-QS rankings is the steady rise of Imperial College London. It has now reached 5th place, just behind Oxford, Cambridge and Yale and ahead of Princeton, MIT, Stanford and Tokyo.

How did this happen? Imperial's research performance is rather lacklustre compared with many American universities. The Shanghai Jiao Tong index puts it at 23rd overall, 33rd for highly cited researchers, 28th for publications in Science and Nature, and 29th for citations in the Science Citation Index.

Google Scholar also indicates that Imperial does much worse than many other places. A quick search comes up with 22,500 items for research published since 2002, compared to 22,700 for Seoul National University, 25,800 for McGill, 44,000 for Tokyo and 151,000 for Princeton.

Imperial does well on the THES QS rankings partly because of outstanding scores on the peer review (99 out of 100), employer review (99) and international students (100).

It also comes first (along with 15 others with scores of 100) for student faculty ratio. Is this justified?

On its web site QS indicates that Imperial has 2,963 full time equivalent (FTE) faculty and 12,025 FTE students, a ratio of 4.06.

However, if we look at Imperial's site we find that the college claims 12,129 FTE students and 1,114 academic and 1,856 research staff.

It appears that QS has counted both academic and research staff when calculating Imperial's ratio. Looking at other universities, it appears that it is QS's standard practice to count research staff who do not teach as part of the faculty total. In contrast, Imperial itself calculates the ratio by dividing students by academic staff to produce a ratio of 11.2. If that ratio had been applied, Imperial would have been many places lower.
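A minimal sketch of the arithmetic, using the FTE figures quoted above; the only assumption is that QS's denominator lumps academic and research staff together, which is what the numbers suggest.

```python
# FTE figures quoted in this post; the discrepancy turns on whether non-teaching
# research staff are counted as "faculty".
students_qs = 12_025          # students according to QS
faculty_qs = 2_963            # "faculty" according to QS
students_imperial = 12_129    # students according to Imperial's own site
academic_staff = 1_114        # Imperial's academic (teaching) staff
research_staff = 1_856        # Imperial's research staff

qs_ratio = students_qs / faculty_qs                    # ~4.06, as used in the ranking
imperial_ratio = students_imperial / academic_staff    # ~10.9, close to the 11.2 Imperial reports

# QS's 4.06 is only reproducible from Imperial's own figures if research staff
# are added to the denominator:
combined_ratio = students_imperial / (academic_staff + research_staff)  # ~4.08

print(qs_ratio, imperial_ratio, combined_ratio)
```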


If QS has been counting research staff in the total faculty score it would lead to the truly bizarre result that universities could hire a large number of researchers and get a substantial boost for the student faculty score.

So far it looks as though this is a general procedure and not a special privilege granted to Imperial alone, but it would introduce a definite bias in favour of those universities that, like Imperial, employ large numbers of non-teaching researchers.

Sunday, August 23, 2015

Changes in Ranking Methodology

This year and next the international university rankings appear to be set for more volatility with unusually large upward and downward movement, partly as a result of changes to the methodology for counting citations in the QS and THE rankings.

ARWU

The global ranking season kicked off last week with the publication of the latest edition of the Academic Ranking of World Universities from the ShanghaiRanking Consultancy (SRC), which I hope to discuss in detail in a little while. These rankings are rather dull and boring, which is exactly what they should be. Harvard is, as always, number one for all but one of the indicators. Oxford has slipped from joint ninth to tenth place. Warwick has leaped into the top 100 by virtue of a Fields medal. At the foot of the table there are new contenders from France, Korea and Iran.

Since they began in 2003 the Shanghai rankings have been characterised by a  generally stable methodology. In 2012, however, they had to deal with the recruitment of a large and unprecedented number of adjunct faculty by King Abdulaziz University. Previously SRC had simply divided the credit for the Highly Cited Researchers indicator equally between all institutions listed as affiliations. In 2012 and 2013 they wrote to all highly cited researchers with joint affiliations and thus determined the division of credit between primary and secondary affiliations. Then, in 2014 and this year they combined the old Thomson Reuters list, first issued in 2001, and the new one, issued in 2014, and excluded all secondary affiliations in the new list.
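A rough sketch of the difference between the two crediting rules described above; the researchers and institutions are invented, and the treatment of secondary affiliations for researchers on the old list is my assumption rather than SRC's documented practice.

```python
# Each highly cited researcher has a primary affiliation, possibly secondary ones,
# and appears on the old (2001) or new (2014) Thomson Reuters list.
researchers = [
    {"primary": "Univ X", "secondary": ["Univ Y"], "list": "new"},
    {"primary": "Univ Y", "secondary": [],         "list": "old"},
    {"primary": "Univ Z", "secondary": ["Univ X"], "list": "old"},
]

def hici_credit(researchers, rule):
    credit = {}
    for r in researchers:
        affiliations = [r["primary"]] + r["secondary"]
        if rule == "equal_split":
            # Pre-2012 practice: divide credit equally among all listed affiliations.
            targets = affiliations
        elif rule == "exclude_new_secondary":
            # 2014-15 practice as described: secondary affiliations on the new list get nothing.
            targets = [r["primary"]] if r["list"] == "new" else affiliations
        share = 1 / len(targets)
        for a in targets:
            credit[a] = credit.get(a, 0) + share
    return credit

print(hici_credit(researchers, "equal_split"))
print(hici_credit(researchers, "exclude_new_secondary"))
```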

The result was that in 2014 the rankings showed an unusual degree of volatility, although this year things are a lot more stable. My understanding is that Shanghai will move to counting only the new list next year, again without secondary affiliations, so there should be a lot of interesting changes then. It looks as though Stanford, Princeton, the University of Wisconsin -- Madison, and Kyoto University will suffer because of the change, while the University of California Santa Cruz, Rice University, the University of Exeter and the University of Wollongong will benefit.

While SRC has efficiently dealt with the issue of secondary affiliation with regard to its Highly Cited indicator, the issue has now resurfaced in the unusually high scores achieved by King Abdulaziz University for publications, largely because of its adjunct faculty. Expect more discussion over the next year or so. It would seem sensible for SRC to think about a five or ten year period rather than one year for their Publications indicator, and academic publishers, the media and rankers in general may need to give some thought to the proliferation of secondary affiliations.


QS

On July 27 Quacquarelli Symonds (QS) announced that for 18 months they had been thinking about normalising the counting of citations across five broad subject areas. They observed that a typical institution would receive about half of its citations from the life sciences and medicine, over a quarter from the natural sciences but just 1% from the arts and humanities.

In their forthcoming rankings QS will assign a 20% weighting for citations to each of the five subject areas, something that, according to Ben Sowter, Research Director at QS, they have already been doing for the academic opinion survey.
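A toy illustration of what equal 20% weights per subject area can do, compared with pooling all citations; the figures and the "world benchmark" denominators are invented, since QS has not published the exact formula.

```python
# Citations an institution receives in QS's five broad faculty areas, plus an
# illustrative world benchmark per area (all numbers are made up).
areas = ["arts & humanities", "engineering & technology", "life sciences & medicine",
         "natural sciences", "social sciences & management"]

institution_citations = {"arts & humanities": 500, "engineering & technology": 9_000,
                         "life sciences & medicine": 60_000, "natural sciences": 30_000,
                         "social sciences & management": 2_500}
world_benchmark = {"arts & humanities": 400, "engineering & technology": 12_000,
                   "life sciences & medicine": 80_000, "natural sciences": 40_000,
                   "social sciences & management": 3_000}

# Pooled (old-style) score: medicine dominates simply because it generates most citations.
pooled = sum(institution_citations.values()) / sum(world_benchmark.values())

# Area-normalised score: each area contributes 20% regardless of its citation volume,
# so strength in the humanities now counts for as much as strength in medicine.
normalised = sum((institution_citations[a] / world_benchmark[a]) * 0.20 for a in areas)

print(f"pooled: {pooled:.2f}, area-normalised: {normalised:.2f}")
```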

It would seem then that there are likely to be some big rises and big falls this September. I would guess that places strong in humanities, social sciences and engineering like LSE, New York University and Nanyang Technological University may go up and some of the large US state universities and Russian institutions may go down. That's a guess because it is difficult to tell what happens with the academic and employer surveys.

QS have also made an attempt to deal with the issue of hugely cited papers with hundreds, even thousands of "authors" -- contributors would be a better term -- mainly in physics, medicine and genetics. Their approach is to exclude all papers with more than 10 contributing institutions, that is 0.34% of all publications in the database.

This is rather disappointing. Papers with huge numbers of authors and citations obviously do have distorting effects but they have often dealt with fundamental and important issues. To exclude them altogether is to ignore a very significant body of research.

The obvious solution to the problem of multi-contributor papers is fractional counting, dividing the number of citations by the number of contributors or contributing institutions. QS claim that to do so would discourage collaboration, which does not sound very plausible.
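A minimal sketch contrasting the three options, full counting, a QS-style exclusion threshold and fractional counting by contributing institutions; the papers and numbers are invented.

```python
# Each paper: citations received and the number of contributing institutions,
# including one "mega-author" collaboration of the kind discussed here.
papers = [
    {"citations": 3_000, "institutions": 180},   # hugely collaborative paper
    {"citations": 40,    "institutions": 2},
    {"citations": 15,    "institutions": 1},
]

def full_count(papers):
    return sum(p["citations"] for p in papers)

def exclude_threshold(papers, max_institutions=10):
    # QS-style: drop papers with more than `max_institutions` contributing institutions.
    return sum(p["citations"] for p in papers if p["institutions"] <= max_institutions)

def fractional_count(papers):
    # Credit each institution with citations divided by the number of contributors.
    return sum(p["citations"] / p["institutions"] for p in papers)

print(full_count(papers))          # 3055  - the mega-paper dominates
print(exclude_threshold(papers))   # 55    - the mega-paper vanishes entirely
print(fractional_count(papers))    # ~51.7 - the mega-paper still counts, but proportionately
```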

In addition, QS will likely extend the life of  survey responses from three to five years. That could make the rankings more stable by smoothing out annual fluctuations in survey responses and reduce the volatility caused by the proposed changes in the counting of citations.

The shift to a moderate version of field normalisation is helpful as it will reduce the undue privilege given to medical research, without falling into the huge problems that result from using too many categories. It is unfortunate, however, that QS have not taken the plunge into fractional counting. One suspects that technical problems and financial considerations might be as significant as the altruistic desire not to discourage collaboration.

After a resorting in September the QS rankings are likely to become a bit more stable and credible, but their most serious problem, the structure, validity and excessive weighting of the academic survey, has still not been addressed.

THE

Meanwhile, Times Higher Education (THE) has also been grappling with the issue of authorship inflation. Phil Baty has announced that this year 649 papers with over 1,000 authors will be excluded from their calculation of citations because " we consider them to be so freakish that they have the potential to distort the global scientific landscape".

But it is not the papers that do the distorting. It is the methodology. THE and their former data partners Thomson Reuters, like QS, have avoided fractional counting (except for a small experimental African ranking) and so every one of those hundreds or thousands of authors gets full credit for the hundreds or thousands of citations. This has given places like Tokyo Metropolitan University, Scuola Normale Superiore Pisa, Universite Cadi Ayyad in Morocco and Bogazici University in Turkey remarkably high scores for Citations: Research Impact, much higher than their scores for the bundled research indicators.

THE have decided to simply exclude 649 papers, or 0.006% of the total, from their calculations for the world rankings, a much smaller share than the 0.34% excluded by QS. Again, this is a rather crude measure. Many of the "freaks" are major contributions to advanced research and deserve to be acknowledged by the rankings in some way.

THE did use fractional counting in their recent experimental ranking of African universities and Baty indicates that they are considering doing so in the future.

It would be a big step forward for THE if they introduce fractional counting of citations. But they should not stop there. There are other bugs in the citations indicator that ought to be fixed.

First, it does not at present measure what it is supposed to measure. It does not measure a university's overall research impact. At best, it is a measure of the average quality of research papers no matter how few (above a certain threshold) they are.

Second, the "regional modification", which divides the university citation impact score by the square root of the score of the country where the university is located, is another source of distortion. It gives a bonus to universities simply for being located in underperforming countries. THE or TR have justified the modification by suggesting that some universities deserve compensation because they lack funding or networking opportunities. Perhaps they do, but this can still lead to serious anomalies.
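A small worked example of the modification as described here, dividing a university's citation impact by the square root of its country's score; the numbers are invented.

```python
import math

def regionally_modified(university_impact, country_impact):
    """The regional modification as described in this post: divide the university's
    citation impact score by the square root of its country's score."""
    return university_impact / math.sqrt(country_impact)

# Two universities with identical underlying impact (invented numbers):
print(regionally_modified(1.5, country_impact=1.2))   # ~1.37 in a high-impact country
print(regionally_modified(1.5, country_impact=0.36))  # 2.5 in a low-impact country
```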

Thirdly, THE need to consider whether they should assign citations to so many fields since this increases the distortions that can arise when there is a highly cited paper in a normally lowly cited field.

Fourthly, should they assign a thirty per cent weighting to an indicator that may be useful for distinguishing between the likes of MIT and Caltech but may be of little relevance for the universities that are now signing up for the world rankings?

Thursday, November 08, 2007

Preview of the Top One Hundred

Beerkens Blog has the top 100. Here are the top 20.

Rank Name Country
1 HARVARD University United States
2= University of CAMBRIDGE United Kingdom
2= YALE University United States
2= University of OXFORD United Kingdom
5 Imperial College LONDON United Kingdom
6 PRINCETON University United States
7= CALIFORNIA Institute of Technology (Caltech) United States
7= University of CHICAGO United States
9 UCL (University College LONDON) United Kingdom
10 MASSACHUSETTS Institute of Technology (MIT) United States
11 COLUMBIA University United States
12 MCGILL University Canada
13 DUKE University United States
14 University of PENNSYLVANIA United States
15 JOHNS HOPKINS University United States
16 AUSTRALIAN National University Australia
17 University of TOKYO Japan
18 University of HONG KONG Hong Kong
19 STANFORD University United States
20= CORNELL University United States
20= CARNEGIE MELLON University United States

Wednesday, January 06, 2016

Towards a transparent university ranking system


For the last few years global university rankings have been getting more complicated and more "sophisticated".

Data makes its way from branch campuses, research institutes and far-flung faculties and departments and is analysed, decomposed, recomposed, scrutinised for anomalies and outliers and then enters the files of the rankers where it is normalised, standardised, square-rooted, weighted and/or subjected to regional modification. Sometimes what comes out the other end makes sense: Harvard in first place, Chinese high fliers flying higher. Sometimes it stretches academic credulity: Alexandria University in fourth place in the world for research impact, King Abdulaziz University in the world's top ten for mathematics.

The transparency of the various indicators in the global rankings varies. Checking the scores for Nature and Science papers and indexed publications in the Shanghai rankings is easy if you have access to the Web of Knowledge. It is also not difficult to check the numbers of faculty and students on the QS, Times Higher Education (THE) and US News web sites.

On the other hand, getting into the data behind the THE citations is close to impossible. Citations are normalised by field, year of publication and year of citation. Then, until last year the score for each university was adjusted by division by the square root of the citation impact score of the country in which it was located. Now this applies to half the score for the indicator. Reproducing the THE citations score is impossible for almost everybody since it requires calculating the world average citation score for 250 or 300 fields and then the total citation score for every country.
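To make the difficulty concrete, here is a toy version of the kind of calculation involved, with invented papers, world averages and country scores; the real exercise needs world averages for every field-year combination, which is exactly what outsiders cannot obtain.

```python
import math

# Each paper is normalised against the world average citation count for papers
# in the same field and year (all values invented).
papers = [
    {"field": "oncology", "year": 2013, "citations": 25},
    {"field": "philosophy", "year": 2014, "citations": 3},
]
world_average = {("oncology", 2013): 12.0, ("philosophy", 2014): 1.5}

fnci = sum(p["citations"] / world_average[(p["field"], p["year"])] for p in papers) / len(papers)

country_impact = 0.64  # illustrative country-level citation impact
modified = fnci / math.sqrt(country_impact)

# As described in the post, the modification now applies to only half the score.
blended = 0.5 * fnci + 0.5 * modified
print(fnci, modified, blended)
```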

It is now possible to access third party data from sources such as Google, World Intellectual Property Organisation and various social media such as LinkedIn. One promising development is the creation of public citation profiles by Google Scholar.

The Cybermetrics Lab in Spain, publishers of the Webometrics Ranking Web of Universities, has announced the beta version of a ranking based on nearly one million individual profiles in the Google Scholar Citations database. The object is to see whether this data can be included in future editions of the Ranking Web of Universities.

It uses data from the institutional profiles and counts the citations in the top ten public profiles for each institution, excluding the first profile.
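In other words, something along these lines; the citation counts are invented, and the guess about why the first profile is dropped is mine.

```python
# Invented citation counts for an institution's public Google Scholar profiles.
profile_citations = [152_000, 48_000, 31_000, 25_000, 22_000, 19_000,
                     17_500, 16_000, 14_000, 12_500, 11_000, 9_000]

def institutional_score(citations):
    """Drop the top profile (perhaps because it is often an outlier or a shared
    institutional account; that reading is a guess) and sum the next ten."""
    ranked = sorted(citations, reverse=True)
    return sum(ranked[1:11])

print(institutional_score(profile_citations))  # 216,000
```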

The ranking is incomplete since many researchers and institutions have not participated fully. There are, for example, no Russian institutions in the top 600. In addition, there are technical issues such as the duplication of profiles.

The leading university is Harvard which is well ahead of its closest rival, the University of Chicago. English speaking universities are dominant with 17 of the top 20 places going to US institutions and three, Oxford, Cambridge and University College London, going to the UK.

Overall the top twenty are:

  1.   Harvard University
  2.   University of Chicago
  3.   Stanford University
  4.   University of California Berkeley
  5.   Massachusetts Institute of Technology (MIT)
  6.   University of Oxford
  7.   University College London
  8.   University of Cambridge
  9.   Johns Hopkins University
  10.   University of Michigan
  11.   Michigan State University
  12.   Yale University
  13.   University of California San Diego
  14.   UCLA
  15.   Columbia University
  16.   Duke University
  17.   University of Washington
  18.   Princeton University
  19.   Carnegie Mellon University
  20.   Washington University St Louis.

The top universities in selected countries and regions are:

Africa: University of Cape Town, South Africa 244th
Arab Region: King Abdullah University of Science and Technology, Saudi Arabia 148th
Asia and Southeast Asia: National University of Singapore 40th
Australia and Oceania: Australian National University 57th
Canada: University of Toronto 22nd
China: Zhejiang University 85th
France: Université Paris 6 Pierre and Marie Curie 133rd
Germany: Ludwig Maximilians Universität München 194th
Japan: Kyoto University 100th
Latin America: Universidade de São Paulo 164th
Middle East: Hebrew University of Jerusalem 110th
South Asia: Indian Institute of Science Bangalore 420th.

This seems plausible and sensible so it is likely that the method could be extended and improved.

Tuesday, June 25, 2013

What about a Research Influence Ranking?

Keeping up with the current surge of global university rankings is becoming next to impossible. Still there are  a few niches that have remained unoccupied. One might be a ranking of universities according to their ability to spread new knowledge around the world. So it might be a good idea to have a Research Influence Ranking based on the citations indicator in the Times Higher Education -- Thomson Reuters World University Rankings.

Thomson Reuters are the world's leading collectors and analysts of citations data, so such an index ought to provide an invaluable data source for governments, corporations and other stakeholders deciding where to place research funding. Data for 400 universities can be found on the THE iPhone/iPad app.

The top place in the world would be jointly held by Rice University in Texas and Moscow State Engineering Physics Institute, closely followed by MIT and the University of California Santa Cruz.

Then there are the first places in various regions and countries. (MEPhI would be first in Europe and Rice in the US and North America.)

Canada
University of Toronto

Latin America
University of the Andes, Colombia

United Kingdom (and Western Europe)
Royal Holloway London

Africa
University of Cape Town

Middle East
Koc University, Turkey

Asia (and Japan)
Tokyo Metropolitan University

ASEAN
King Mongkut's University of Technology, Thailand

Australia and the Pacific
University of Melbourne

On second thoughts, perhaps not such a good idea.


Saturday, December 16, 2017

Measuring graduate employability; two rankings

Global university rankings are now well into their second decade. Since 2003, when the first Shanghai rankings appeared, there has been a steady growth of global and regional rankings. At the moment most global rankings are of two kinds, those that focus entirely or almost entirely on research and those such as the Russian Round Rankings, Times Higher Education (THE) and Quacquarelli Symonds (QS) that claim to also measure teaching, learning or graduate quality in some way, although even those are biased towards research when you scratch the surface a little.

The ranking industry has become adept at measuring research productivity and quality in various ways. But the assessment of undergraduate teaching and learning is another matter.

Several ranking organisations use faculty student ratio as a proxy for quality of teaching, which in turn is assumed to have some connection with something that happens to students during their programmes. THE also count institutional income, research income and income from industry, again assuming that there is a significant association with academic excellence. Indicators like these are usually based on data supplied by the institutions themselves. For examples of problems here see an article by Alex Usher and a reply by Phil Baty.

An attempt to get at student quality is provided by the CWUR rankings, now based in the UAE, which count alumni who win international awards or who are CEOs of major companies. But obviously this is relevant only for a very small number of universities. A new pilot ranking from Moscow also counts international awards.

The only attempt to measure student quality  by the well known rankers that is relevant to most institutions is the survey of employers in the QS world and regional rankings. There are some obvious difficulties here. QS gets respondents from a variety of channels and this may allow some universities to influence the survey. In recent years some Latin American universities have done much better on this indicator than on any other.

THE now publish a global employability ranking which is conducted by two European firms, Trendence and Emerging. This is based on two surveys of recruiters in Argentina, Australia, Austria, Brazil, Canada, China, Germany, France, India, Israel, Italy, Japan, Mexico, Netherlands, Singapore, Spain, South Africa, South Korea, Turkey, UAE, UK, and USA. There were two panels with a total of over 6,000 respondents.

A global survey that does not include Chile, Sweden, Egypt, Nigeria, Saudi Arabia, Russia, Pakistan, Indonesia, Bangladesh, Poland, Malaysia or Taiwan can hardly claim to be representative of international employers. This limited representation may explain some oddities of the rankings, such as the high places of the American University of Dubai and the National Autonomous University of Mexico.

The first five places in these rankings are quite similar to the THE world rankings: Caltech, Harvard, Columbia, MIT, Cambridge. But there are some significant differences after that and some substantial changes since last year. Here Columbia, 14th in the world rankings, is in third place, up from 12th last year. Boston University is 6th here but 70th in the world rankings. Tokyo Institute of Technology, in 19th place, is in the 251-300 band in the world rankings. CentraleSupelec is 41st but in the world 401-500 group.

These rankings are useful only for a small minority of universities, stakeholders and students. Only 150 schools are ranked and only a small proportion of the world's employers consulted.

QS have also released their global employability rankings with 500 universities. These combine the employer reputation survey used in their world rankings with other indicators: alumni outcomes, based on lists of high achievers; partnership with employers, that is research collaboration noted in the Scopus database; employer-student connections, that is employers actively present on campus; and graduate employment rate. There seems to be a close association, at least at the top, between overall scores, employer reputation and alumni outcomes. Overall the top three are Stanford, UCLA and Harvard. For employer reputation they are Cambridge, Oxford and Harvard, and for alumni outcomes Harvard, Stanford and Oxford.

The other indicators are a different matter. For employer-student connections the top three are Huazhong University of Science and Technology, Arizona State University, and New York University. In fact seven out of the top ten on this measure are Chinese. For graduate employment rate they are Politecnico di Torino, Moscow State Institute of International Relations, and Sungkyunkwan University, and for partnership with employers Stanford, Surrey and Politecnico di Milano. When the front runners in these indicators are so different, one has to wonder about their validity.

There are some very substantial differences in the ranks given to various universities in these rankings. Caltech is first in the Emerging-Trendence rankings and 73rd in QS. Hong Kong University of Science and Technology is 12th in Emerging-Trendence but not ranked at all by QS. The University of Sydney is 4th in QS and 48th in Emerging-Trendence. The American University of Dubai is in QS's 301-500 band but 138th for Emerging-Trendence.

The rankings published by THE could be of some value to those students contemplating careers with the leading companies in the richest countries.

The QS rankings may be more helpful for those students or stakeholders looking at universities outside the very top of the global elite. Even so QS have ranked only a fraction of the world's universities.

It still seems that the way forward in the assessment of graduate outcomes and employability is through standardised testing along the lines of AHELO or the Collegiate Learning Assessment.




Saturday, October 27, 2012

More on MEPhI

Right after putting up the post on Moscow State Engineering Physics Institute and its "achievement" in getting the maximum score for research impact in the latest THE - TR World University Rankings, I found this exchange on Facebook.  See my comments at the end.

  • Valery Adzhiev So, the best university in the world in the "citation" (i.e. "research influence") category is Moscow State Engineering Physics Institute with maximum '100' score. This is remarkable achievement by any standards. At the same time it scored in "research" just 10.6 (out of 100) which is very, very low result. How on earth that can be?
  • Times Higher Education World University Rankings Hi Valery,

    Regarding MEPHI’s high citation impact, there are two causes: Firstly they have a couple of extremely highly cited papers out of a very low volume of papers. The two extremely highly cited papers are skewing what would ordinarily be a very good normalized citation impact to an even higher level.

    We also apply "regional modification" to the Normalized Citation Impact. This is an adjustment that we make to take into account the different citation cultures of each country (because of things like language and research policy). In the case of Russia, because the underlying citation impact of the country is low it means that Russian universities get a bit of a boost for the Normalized Citation Impact.

    MEPHI is right on the boundary for meeting the minimum requirement for the THE World University Rankings, and for this reason was excluded from the rankings in previous years. There is still a big concern with the number of papers being so low and I think we may see MEPHI’s citation impact change considerably over time as the effect of the above mentioned 2 papers go out of the system (although there will probably be new ones come in).

    Hope this helps to explain things.
    THE
  • Valery Adzhiev Thanks for your prompt reply. Unfortunately, the closer look at that case only adds rather awkward questions. "a couple of extremely highly cited papers are actually not "papers": they are biannual volumes titled "The Review of Particle Physics" that ...See More
  • Valery Adzhiev I continue. There are more than 200 authors (in fact, they are "editors") from more than 100 organisation from all over the world, who produce those volumes. Look: just one of them happened to be affiliated with MEPhI - and that rather modest fact (tha...See More
  • Valery Adzhiev Sorry, another addition: I'd just want to repeat that my point is not concerned only with MEPhI - Am talking about your methodology. Look at the "citation score" of some other universities. Royal Holloway, University of London having justt 27.7 in "res...See More
  • Alvin See Great observations, Valery.
  • Times Higher Education World University Rankings Hi Valery,

    Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for
    ...See More
  • Andrei Rostovtsev This is in fact rather philosofical point. There are also a number of very scandalous papers with definitively negative scientific impact, but making a lot of noise around. Those have also high contribution to the citation score, but negative impact t...See More

    It is true that two extremely highly cited publications combined with a low total number of publications skewed the results, but what is equally or perhaps more important is that these citations occur in the year or two after publication, when citations tend to be relatively infrequent compared to later years. The 2010 publication is a biennial review, like the 2008 publication, that will be cited copiously for two years, after which it will no doubt be superseded by the 2012 edition.

    Also, we should note that in the ISI Web of Science, the 2008 publication is classified as "physics, multidisciplinary". Papers listed as multidisciplinary generally get relatively few citations so if the publication was compared to other multidisciplinary papers it would get an even larger weighting. 
    Valery has an excellent point when he points out that these publications have over 100 authors or contributors each (I am not sure whether they are actual researchers or administrators). Why then did all the other contributors not boost their institutions' scores to similar heights? Partly because they were not in Russia and therefore did not get the regional weighting, but also because they were publishing many more papers overall than MEPhI.

    So basically, A. Romaniouk, who contributed 1/173rd of one publication, was considered as having more research impact than hundreds of researchers at Harvard, MIT, Caltech etc. producing hundreds of papers cited hundreds of times. Sorry, but is this a ranking of research quality or a lottery?
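    A toy illustration of the mechanism, with invented numbers: the real indicator is field- and year-normalised rather than a raw citations-per-paper average, but the averaging logic, and hence the vulnerability to one massively cited item over a tiny output, is the same.

```python
# Invented figures: a small institute whose modest output includes a share of one
# hugely cited review, versus a large university with a big body of cited work.
small_institute = {"papers": 220, "citations": 400 + 5_000}   # 5,000 from the single review
large_university = {"papers": 40_000, "citations": 600_000}

def citations_per_paper(inst):
    return inst["citations"] / inst["papers"]

print(round(citations_per_paper(small_institute), 1))   # 24.5
print(round(citations_per_paper(large_university), 1))  # 15.0
# With the regional modification applied on top, the small institute's
# advantage would be stretched even further.
```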

    The worst part of THE's reply is this:

    Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for all to see (and indeed scrutinise, which everyone is entitled to do).

    We welcome feedback, are constantly developing our system, and will definitely take your comments on board.

    The system is not balanced. Citations have a weighting of 30%, much more than any other indicator. Even the research reputation survey has a weighting of only 18%. And to describe as comprehensive an indicator which allows a fraction of one or two publications to surpass massive amounts of original and influential research is really plumbing the depths of absurdity.

    I am just about to finish comparing the scores for research and research impact for the top 400 universities. There is a statistically significant correlation but it is quite modest. When research reputation, volume of publications and research income show such a modest correlation with research impact it is time to ask whether there is a serious problem with this indicator.

    Here is some advice for THE and TR.

    • First, and surely very obvious, if you are going to use field normalisation then calculate the score for discipline groups, natural sciences, social sciences and so on and aggregate the scores. So give MEPhI a 100 for physical or natural sciences if you think they deserve it but not for the arts and humanities.
    • Second, and also obvious, introduce fractional counting, that is dividing the number of citations by the number of authors of the cited paper.
    • Do not count citations to summaries, reviews or compilations of research.
    • Do not count citations of commercial material about computer programs. This would reduce the very high and implausible score for Gottingen which is derived from a single publication.
    • Do not assess research impact with only one indicator. See the Leiden ranking for the many ways of rating research.
    • Consider whether it is appropriate to have a regional weighting. This is after all an international ranking.
    • Reduce the weighting for this indicator.
    • Do not count self-citations. Better yet do  not count citations from researchers at the same university.
    • Strictly enforce your rule about  not including single subject institutions in the general rankings.
    • Increase the threshold number of publications for inclusion in the rankings from two hundred to four hundred.