
Monday, April 18, 2016

Round University Rankings


The latest Round University Rankings have been released by the Russian company, RUR Rankings Agency. These are essentially holistic rankings that attempt to go beyond the measurement of research output and quality. There are twenty indicators, although some of them, such as Teaching Reputation, International Teaching Reputation and Research Reputation, or International Students and International Bachelors, are so similar that the additional information they provide is limited.

Basically these rankings cover much the same ground as the Times Higher Education (THE) World University Rankings. The income from industry indicator is not included but there are an additional eight indicators. The data is taken from Thomson Reuters' Global Institutional Profiles Project (GIPP) which was used by THE for their rankings from 2010 to 2014.

Unlike THE, which lumps its indicators together into groups, the RUR profiles list the scores separately. In addition, the rankings provide data for seven consecutive years, from 2010 to 2016. This offers an unusual opportunity to examine in detail the development of universities over seven years, as measured by 20 indicators. That is not possible with other rankings, which have fewer indicators or have changed their methodology.

It should be noted that participation in the GIPP is voluntary, so the universities included in each edition may differ. In 2015, for example, 100 universities dropped out of the project and 62 joined.

It is, however,  possible to examine a number of claims that have been made about changes in university quality over the last few years. I will  take a look at these in the next few posts.

For the moment, here are the top five in the overall rankings and the dimension rankings.

Overall
1.   Caltech
2.   Harvard
3.   Stanford
4.   MIT
5.   Chicago


Teaching
1.   Caltech
2.   Harvard
3.   Stanford
4.   MIT
5.   Duke

Research
1.   Caltech
2.   Harvard
3.   Stanford
4.   Northwestern University
5.   Erasmus University Rotterdam

International Diversity
1.   EPF Lausanne
2.   Imperial College London
3.   National University of Singapore
4.   University College London
5.   Oxford

Financial Sustainability
1.   Caltech
2.   Harvard
3.   Scuola Normale Superiore Pisa
4.   Pohang University of Science and Technology
5.   Karolinska Institute

Unfortunately these rankings have received little or no recognition outside Russia. Here are some examples of the coverage they have received within Russia.


MIPT entered the top four universities in Russia according to the Round University Ranking

Russian Universities in the lead in terms of growth in the international ranking of Round University Ranking

TSU [Tomsk State University]  has entered the 100 best universities for the quality of teaching

[St Petersburg]

Russian universities to top intl rankings by 2020 – Education Minister Livanov to RT


Sunday, January 10, 2016

Diversity Makes You Brighter ... if You're a Latino Stockpicker in Texas or Chinese in Singapore


Nearly everybody, or at least those who run the western mainstream media, agrees that some things are sacred. Unfortunately, this is not always obvious to the uncredentialled, who from time to time need to be beaten about their empty heads with the "findings" of "studies".

So we find that academic papers often with small or completely inappropriate samples, minimal effect sizes, marginal significance levels, dubious data collection procedures, unreproduced results or implausible assumptions are published in top flight journals, cited all over the Internet or even showcased in the pages of the "quality" or mass market press.

For example, anyone with any sort of mind knows that the environment is the only thing that determines intelligence.

So in 2009 we had an article in the Journal of Neuroscience that supposedly proves that a stimulating environment will make not only its beneficiaries more intelligent but also the children of the experimental subjects.

A headline in the Daily Mail proclaimed that "Mothers who enjoyed a stimulating childhood 'have brainier babies'".

The first sentence of the report claims that "[a] mother's childhood experiences may influence not only her own brain development but also that of her sons and daughters, a study suggests."

Wonderful. This could, of course, be an argument for allowing programs like Head Start to run for another three decades so that their effects would show up in the next generation. Then the next sentence gives the game away.

"Researchers in the US found that a stimulating environment early in life improved the memory of female mice with a genetic learning defect."

Notice that the experiment involved mice, not humans or any other mammal bigger than a ferret, that it improved memory and nothing else, and that the subjects had a genetic learning defect.

Still, that did not stop the MIT Technology Review from reporting Moshe Szyf of McGill University as saying that "[i]f the findings can be conveyed to human, it means that girls' education is important not just to their generation but to the next one".

All of this, if confirmed, would be a serious blow against modern evolutionary theory. The MIT Technology Review got it right when it spoke about a comeback for Lamarckianism. But if there is anything scientists should have learnt over the last few decades, it is that an experiment that appears to overthrow current theory, not to mention common sense and observation, is often flawed in some way. Confronted with evidence in 2011 that neutrinos were travelling faster than light, physicists with CERN reviewed their experimental procedures until they found that the apparent theory-busting observation was caused by a loose fibre optic cable.

If a study had shown that a stimulating environment had a negative effect on the subjects or on the next generation or that it was stimulation for fathers that made the difference, would it have been cited in the Daily Mail or the MIT Technology Review? Would it even have been published in the Journal of Neuroscience? Wouldn't everybody have been looking for the equivalent of a loose cable?

A related idea that has reached the status of unassailable truth is that the famous academic achievement gap, between Asians and Whites on the one hand and African Americans and Hispanics on the other, could be eradicated by some sort of environmental manipulation such as spending money, providing safe spaces or laptops, boosting self-esteem or fine-tuning teaching methods.

A few years ago Science, the apex of scientific publishing, published a paper by Geoffrey L. Cohen, Julio Garcia, Nancy Apfel and Allison Master that claimed a few minutes spent writing an essay affirming students' values (the control group wrote about somebody else's values) would start a process leading to an improvement in their relative academic performance. This applied only to low-achieving African American students.

I suspect that anyone with any sort of experience of secondary school classrooms would be surprised by the claim that such a brief exercise could have such a disproportionate impact.

The authors in their conclusion say:

"Finally, our apparently disproportionate results rested on an obvious precondition: the existence in the school of adequate material, social, and psychological resources and support to permit and sustain positive academic outcomes. Students must also have had the skills to perform significantly better. What appear to be small or brief events in isolation may in reality be the last element required to set in motion a process whose other necessary conditions already lay, not fully realised, in the situation."

In other words, the experiment would not work unless there were "adequate material, social, and psychological resources and support" in the school, and unless students "have had the skills to perform significantly better".

Is it possible that a school with all those resources, support and skills might also be one where students, mentors, teachers or classmates might just somehow leak who was in the experimental group and who was in the control group?

Perhaps the experiment really is valid. If so we can expect to see millions of US secondary school students and perhaps university students writing their self affirmation essays and watch the achievement gap wither away.

In 2012, this study made the top 20 list of studies that PsychFileDrawer users would like to see reproduced, along with studies showing that participants were more likely to give up trying to solve a puzzle if they ate radishes than if they ate cookies, that anxiety-reducing interventions boost exam scores, that music training raises IQ, and, of course, Rosenthal and Jacobson's famous study showing that teacher expectations can change students' IQ.

Geoffrey Cohen has provided a short list of studies that he claims replicate his findings. I suspect that only someone already convinced of the reality of self affirmation would be impressed.

Another variant of the environmental determinism creed is that diversity (racial or perhaps gender, though certainly not intellectual or ideological) is a wonderful thing that enriches the lives of everybody. There are powerful economic motives for universities to believe this, and so we find a succession of dubious studies showcased as though they were the last and definitive word on the topic.

The latest such study is by Sheen S. Levine, David Stark and others and was the basis for an op-ed in the New York Times (NYT).

The background is that in 2003 the US Supreme Court decided that universities could not admit students simply on the basis of race, but could try to recruit more minority students on the grounds that having larger numbers of a minority group would be good for everybody. Now the court is revisiting the issue and asking whether racial preferences can be justified by the benefits they supposedly provide for everyone.

Levine and Stark in their NYT piece claim that they can, and refer to a study that they published with four other authors in the Proceedings of the National Academy of Sciences. Essentially, this involved an experiment simulating stock trading, in which homogenous "markets" in Singapore and Kingsville, Texas (ethnically Chinese and Latino respectively), were less accurate in pricing stocks than those that were ethnically diverse, with participants from minority groups (Indian and Malay in Singapore; non-Hispanic White, Black and Asian in Texas).

They argue that:

"racial and ethnic diversity matter for learning, the core purpose of a university. Increasing diversity is not only a way to let the historically disadvantaged into college, but also to promote sharper thinking for everyone.

Our research provides such evidence. Diversity improves the way people think. By disrupting conformity, racial and ethnic diversity prompts people to scrutinize facts, think more deeply and develop their own opinions. Our findings show that such diversity actually benefits everyone, minorities and majority alike."

From this very specific exercise the authors conclude that diversity is beneficial for American universities, which are surely not comparable to a simulated stock market.

Frankly, if this is the best they can do to justify diversity then it looks as though affirmative action in US education is doomed.

Looking at the original paper also suggests that quite different conclusions could be drawn. It is true that in each country the diverse market was more accurate than the homogenous one (Chinese in Singapore, Latino in Texas) but the homogenous Singapore market was more accurate than the diverse Texas market (see fig. 2) and very much more accurate than the homogenous Texas market. Notice that this difference is obscured by the way the data is presented.

There is a moral case for affirmative action provided that it is limited to the descendants of the enslaved and the dispossessed but it is wasting everybody's time to cherry-pick studies like these to support questionable empirical claims and to stretch their generalisability well beyond reasonable limits.


Wednesday, January 06, 2016

Towards a transparent university ranking system


For the last few years global university rankings have been getting more complicated and more "sophisticated".

Data makes its way from branch campuses, research institutes and far-flung faculties and departments, and is analysed, decomposed, recomposed, scrutinised for anomalies and outliers, and then enters the files of the rankers, where it is normalised, standardised, square-rooted, weighted and/or subjected to regional modification. Sometimes what comes out the other end makes sense: Harvard in first place, Chinese high fliers flying higher. Sometimes it stretches academic credulity: Alexandria University in fourth place in the world for research impact, King Abdulaziz University in the world's top ten for mathematics.

The transparency of the various indicators in the global rankings varies. Checking the scores for Nature and Science papers and indexed publications in the Shanghai rankings is easy if you have access to the Web of Knowledge. It is also not difficult to check the numbers of faculty and students on the QS, Times Higher Education (THE) and US News web sites.

On the other hand, getting into the data behind the THE citations is close to impossible. Citations are normalised by field, year of publication and year of citation. Then, until last year the score for each university was adjusted by division by the square root of the citation impact score of the country in which it was located. Now this applies to half the score for the indicator. Reproducing the THE citations score is impossible for almost everybody since it requires calculating the world average citation score for 250 or 300 fields and then the total citation score for every country.
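To see why, here is a minimal sketch in Python of the arithmetic described above. The paper-level world averages, the country score and the 50/50 blend are invented for illustration only; the real calculation runs over hundreds of field-year cells and every country's output, and THE's exact pipeline is not public in reproducible form.

```python
from statistics import mean

# Hypothetical papers: each carries its citation count and the world average
# citations for its (field, year of publication, year of citation) cell.
papers = [
    {"citations": 12, "world_avg": 4.0},
    {"citations": 3,  "world_avg": 6.0},
    {"citations": 40, "world_avg": 5.0},
]

def citation_impact(papers):
    """Field- and year-normalised citations per paper."""
    return mean(p["citations"] / p["world_avg"] for p in papers)

def regional_modification(university_score, country_score, blend=0.5):
    """Divide the score by the square root of the country's citation impact
    score. Formerly applied to the whole score; per the post, it now applies
    to only half of it (blend=0.5)."""
    adjusted = university_score / country_score ** 0.5
    return blend * adjusted + (1 - blend) * university_score

raw = citation_impact(papers)
print(round(regional_modification(raw, country_score=0.8), 3))
```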

It is now possible to access third party data from sources such as Google, World Intellectual Property Organisation and various social media such as LinkedIn. One promising development is the creation of public citation profiles by Google Scholar.

The Cybermetrics Lab in Spain, publishers of the Webometrics Ranking Web of Universities, has announced the beta version of a ranking based on nearly one million individual profiles in the Google Scholar Citations database. The object is to see whether this data can be included in future editions of the Ranking Web of Universities.

It uses data from the institutional profiles and counts the citations in the top ten public profiles for each institution, excluding the first profile.
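On that description, the calculation itself is simple. Here is a minimal sketch with invented citation counts; the assumption that the excluded first profile is the most-cited one, dropped to damp outliers and duplicate or institutional accounts, is my reading of the method.

```python
def institution_score(profile_citations, top_n=10):
    """Sum the citations of an institution's top `top_n` public Google
    Scholar profiles after discarding the single most-cited profile."""
    ranked = sorted(profile_citations, reverse=True)
    return sum(ranked[1:1 + top_n])

# A hypothetical institution with a dozen public profiles.
profiles = [152000, 48000, 31000, 29000, 15000, 12000,
            9000, 7500, 6200, 4100, 3900, 2500]
print(institution_score(profiles))  # 165700 -- the 152000 outlier is ignored
```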

The ranking is incomplete since many researchers and institutions have not participated fully. There are, for example, no Russian institutions in the top 600. In addition, there are technical issues such as the duplication of profiles.

The leading university is Harvard, which is well ahead of its closest rival, the University of Chicago. English-speaking universities are dominant, with 17 of the top 20 places going to US institutions and three, Oxford, Cambridge and University College London, going to the UK.

Overall the top twenty are:

  1.   Harvard University
  2.   University of Chicago
  3.   Stanford University
  4.   University of California Berkeley
  5.   Massachusetts Institute of Technology (MIT)
  6.   University of Oxford
  7.   University College London
  8.   University of Cambridge
  9.   Johns Hopkins University
  10.   University of Michigan
  11.   Michigan State University
  12.   Yale University
  13.   University of California San Diego
  14.   UCLA
  15.   Columbia University
  16.   Duke University
  17.   University of Washington
  18.   Princeton University
  19.   Carnegie Mellon University
  20.   Washington University St Louis.

The top universities in selected countries and regions are:

Africa: University of Cape Town, South Africa 244th
Arab Region: King Abdullah University of Science and Technology, Saudi Arabia 148th
Asia and Southeast Asia: National University of Singapore 40th
Australia and Oceania: Australian National University 57th
Canada: University of Toronto 22nd
China: Zhejiang University 85th
France: Université Paris 6 Pierre and Marie Curie 133rd
Germany: Ludwig Maximilians Universität München 194th
Japan: Kyoto University 100th
Latin America: Universidade de São Paulo 164th
Middle East: Hebrew University of Jerusalem 110th
South Asia: Indian Institute of Science Bangalore 420th.

This seems plausible and sensible so it is likely that the method could be extended and improved.

Monday, November 16, 2015

Comparing Engineering Rankings

Times Higher Education (THE) have just come out with another subject ranking, this time for Engineering and Technology. Here are the top five.

1.   Stanford
2.   Caltech
3.   MIT
4.   Cambridge
5.   Berkeley

Nanyang Technological University is 20th, Tsinghua University 26th, and Zhejiang University 47th.

These rankings are very different from the US News ranking for Engineering.

There the top five are:

1.   Tsinghua
2.   MIT
3.   Berkeley
4.   Zhejiang
5.   Nanyang Technological University.

Stanford is 8th, Cambridge 35th and Caltech 62nd.

So what could possibly explain such a huge difference?

Basically, the two rankings are measuring rather different things. THE give a third of their weighting to reputation. Supposedly there are two indicators -- postgraduate teaching reputation and research reputation -- but it is likely that they are so closely correlated that they are really measuring the same thing. Another chunk goes to income in three flavors, institutional, research, and industry. Another 30% goes to citations normalised by field and year.

The US News ranking puts more emphasis on measures of quantity rather than quality and of output rather than input, and ignores teaching reputation, international faculty and students, and faculty-student ratio. In these rankings Tsinghua is first for publications and Caltech 165th, while Caltech is 46th for normalised citation impact and Tsinghua 186th.
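To make the contrast concrete, here is a toy illustration with invented figures: a ranking by raw output and a ranking by citations per paper can easily point in opposite directions.

```python
# Invented figures for two stylised universities: one with a very large
# output of modestly cited papers, one with a small but heavily cited output.
universities = {
    "Volume U":  {"papers": 12000, "citations": 60000},   # 5 citations per paper
    "Quality U": {"papers": 1500,  "citations": 30000},   # 20 citations per paper
}

by_output = sorted(universities, key=lambda u: universities[u]["papers"], reverse=True)
by_impact = sorted(universities,
                   key=lambda u: universities[u]["citations"] / universities[u]["papers"],
                   reverse=True)

print("Ranked by publications:       ", by_output)  # Volume U first
print("Ranked by citations per paper:", by_impact)  # Quality U first
```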

On balance, I suspect that it is more likely that there will be a transition from quantity to quality than the other way round so we can expect Tsinghua and Zhejiang to close the gap in the THE rankings if they continue in their present form.





Saturday, September 19, 2015

Who's Interested in the QS World University Rankings?

And here are the first ten results (excluding this blog and the QS page) from a Google search for this year's QS world rankings. Compare with ARWU and RUR. Does anyone notice any patterns?


Canada falls in World University Rankings' 2015 list

UBC places 50th, SFU 225th in QS World University Rankings


Who's Interested in the Shanghai Rankings?

First results from a Google search for responses to the latest edition of the Shanghai world rankings.

Radboud University: 132nd place on ARWU/Shanghai ranking 2015


Monday, August 31, 2015

Update on changes in ranking methodology

Times Higher Education (THE) have been preparing the ground for methodological changes in their world rankings. A recent article by Phil Baty  announced that the new world rankings scheduled for September 30 will not count the citations to 649 papers, mainly in particle physics, with more than 1000 authors.

This is perhaps the best that is technically and/or commercially feasible at this moment but it is far from satisfactory. Some of these publications are dealing with the most basic questions about the nature of physical reality and it is a serious distortion not to include them in the ranking methodology. There have been complaints about this. Pavel Krokovny's comment was noted in a previous post while Mete Yeyisoglu argues that:
"Fractional counting is the ultimate solution. I wish you could have worked it out to use fractional counting for the 2015-16 rankings.
The current interim approach you came up with is objectionable.
Why 1,000 authors? How was the limit set? What about 999 authored-articles?
Although the institution I work for will probably benefit from this interim approach, I think you should have kept the same old methodology until you come up with an ultimate solution.
This year's interim fluctuation will adversely affect the image of university rankings."

Baty provides a reasonable answer to the question why the cut-off point is 1,000 authors.

But there is a fundamental issue developing here that goes beyond ranking procedure. The concept of authorship of a philosophy paper written entirely by a single person, or of a sociological study from a small research team, is very different from that of the huge multi-national, capital- and labour-intensive publications in which the number of collaborating institutions exceeds the number of paragraphs and there are more authors than sentences.

Fractional counting does seem to be the only fair and sensible way forward and it is now apparently on THE's agenda although they have still not committed themselves.

The objection could be raised that while the current THE system gives a huge reward to even the least significant contributing institution, fractional counting would give major research universities insufficient credit for their role in important research projects.

A long-term solution might be to draw a distinction between the contributors to and the authors of the mega papers. For most publications there would be no need to draw such a distinction, but for those with some sort of input from dozens, hundreds or thousands of people it might be feasible to allot half the credit to all those who had anything to do with the project and the other half to those who meet the standard criteria of authorship. There would no doubt be a lot of politicking about who gets the credit, but that would be nothing new.
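A toy version of that split, purely to illustrate the idea (the names and the 50/50 division are arbitrary):

```python
def split_credit(contributors, authors, total_credit=1.0):
    """Share half the credit equally among everyone associated with the
    project, and the other half only among those meeting conventional
    authorship criteria. Illustrative only."""
    share = {person: (total_credit / 2) / len(contributors) for person in contributors}
    for person in authors:
        share[person] = share.get(person, 0) + (total_credit / 2) / len(authors)
    return share

project_contributors = ["A", "B", "C", "D", "E", "F"]  # everyone on the project
paper_authors = ["A", "B"]                             # those who analysed the data and wrote it up

for person, credit in split_credit(project_contributors, paper_authors).items():
    print(person, round(credit, 3))
```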

Duncan Ross, the new Data and Analytics Director at THE, seems to be thinking along these lines.
"In the longer term there are one technical and one structural approach that would be viable.  The technical approach is to use a fractional counting approach (2932 authors? Well you each get 0.034% of the credit).  The structural approach is more of a long term solution: to persuade the academic community to adopt metadata that adequately explains the relationship of individuals to the paper that they are ‘authoring’.  Unfortunately I’m not holding my breath on that one."
The counting of citations to mega papers is not the only problem with the THE citations indicator. Another is the practice of giving a boost to universities in underperforming countries. Another item by Phil Baty quotes this justification from Thomson Reuters, THE's former data partner.

“The concept of the regional modification is to overcome the differences between publication and citation behaviour between different countries and regions. For example some regions will have English as their primary language and all the publications will be in English, this will give them an advantage over a region that publishes some of its papers in other languages (because non-English publications will have a limited audience of readers and therefore a limited ability to be cited). There are also factors to consider such as the size of the research network in that region, the ability of its researchers and academics to network at conferences and the local research, evaluation and funding policies that may influence publishing practice.”

THE now appear to agree that this is indefensible in the long run and hope that a more inclusive academic survey and the shift to Scopus, with broader coverage than the Web of Science, will lead to this adjustment being phased out.

It is a bit odd that TR and THE should have introduced income, in three separate indicators, and international outlook, in another three, as markers of excellence, but then included a regional modification to compensate for limited funding and international contacts.

THE are to be congratulated for having put fractional counting and phasing out the regional modification on their agenda. Let's hope it doesn't take too long.

While we are on the topic, there are some more things about the citation indicator to think about. First, to repeat a couple of points mentioned in the earlier post.

  • Reducing the number of fields or doing away with normalisation by year of citation. The more boxes into which any given citation can be dropped, the greater the chance of statistical anomalies when a cluster of citations meets a low world average for that particular field, year of publication and year of citation (300 fields in Scopus?). A handful of citations to a paper in a cell where the world average is close to zero can produce an enormous normalised score.

  • Reducing the weighting for this indicator. Perhaps citations per paper normalised by field is a useful instrument for comparing the quality of research at MIT, Caltech, Harvard and the like, but it might be of little value when comparing the research performance of Panjab University and IIT Bombay, or Istanbul University and Bogazici.

Some other things THE could think about.

  • Adding a measure of overall research impact, perhaps simply by counting citations. At the very least, stop calling field- and year-normalised, regionally modified citations per paper a measure of research impact. Call it research quality or something like that.

  • Doing something about secondary affiliations. So far this seems to have been a problem mainly for the Highly Cited Researchers indicator in the Shanghai ARWU, but it may not be very long before more universities realise that a few million dollars for adjunct faculty could have a disproportionate impact on publication and citation counts.

  • Also, perhaps THE should consider excluding self-citations (or even citations within the same institution although that would obviously be technically difficult). Self-citation caused a problem in 2010 when Dr El Naschie's diligent citation of himself and a few friends lifted Alexandria University to fourth place in the world for research impact. Something similar might happen again now that THE are using a larger and less selective database.


Tuesday, August 25, 2015

Not fair to call papers freaky

A comment by Pavel Krokovny of Heidelberg University about THE's proposal to exclude papers with 1,000+ authors from their citations indicator in the World University Rankings.

"It is true that all 3k+ authors do not draft the paper together, on the contrary, only a small part of them are involved in this very final step of a giant research work leading to a sound result. It is as well true that making the research performed public and disseminating the knowledge obtained is a crucial step of the whole project. 
But what you probably missed is that this key stage would not be possible at all without a unique setup which was built and operated by profoundly more physicists and engineers than those who processed raw data and wrote a paper. Without that "hidden part of the iceberg" there would be no results at all. And it would be completely wrong to assume that the authors who did the data analysis and wrote the paper should be given the highest credit in the paper. It is very specific for the experimental HEP field that has gone far beyond the situation that was common still in the first half of 20th century when one scientist or a small group of them might produce some interesting results. The "insignificant" right tail in your distribution of papers on number of coauthors contains the hot part of the modern physics with high impact results topped by the discovery of Higgs-boson. And in your next rankings you are going to dishonour those universities that contributed to this discovery."

and


"the point is that frequent fluctuations of the ranking methodology might damage the credibility of the THE. Certainly, I do not imply here large and well-esteemed universities like Harvard or MIT. I believe their high rankings positions not to be affected by nearly any reasonable changes in the methodology. However, the highest attention to the rankings is attracted from numerous ordinary institutions across the world and their potential applicants and employees. In my opinion, these are the most concerned customers of the THE product. As I already pointed out above, it's very questionable whether participation in large HEP experiments (or genome studies) should be considered "unfair" for those institutions."

Sunday, August 23, 2015

Changes in Ranking Methodology

This year and next the international university rankings appear to be set for more volatility with unusually large upward and downward movement, partly as a result of changes to the methodology for counting citations in the QS and THE rankings.

ARWU

The global ranking season kicked off last week with the publication of the latest edition of the Academic Ranking of World Universities from the ShanghaiRanking Consultancy (SRC), which I hope to discuss in detail in a little while. These rankings are rather dull and boring, which is exactly what they should be. Harvard is, as always, number one for all but one of the indicators. Oxford has slipped from joint ninth to tenth place. Warwick has leaped into the top 100 by virtue of a Fields medal. At the foot of the table there are new contenders from France, Korea and Iran.

Since they began in 2003 the Shanghai rankings have been characterised by a  generally stable methodology. In 2012, however, they had to deal with the recruitment of a large and unprecedented number of adjunct faculty by King Abdulaziz University. Previously SRC had simply divided the credit for the Highly Cited Researchers indicator equally between all institutions listed as affiliations. In 2012 and 2013 they wrote to all highly cited researchers with joint affiliations and thus determined the division of credit between primary and secondary affiliations. Then, in 2014 and this year they combined the old Thomson Reuters list, first issued in 2001, and the new one, issued in 2014, and excluded all secondary affiliations in the new list.
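In code, the difference between the two treatments of affiliations looks something like this. It is a sketch only; the assumption that the primary affiliation is the first one listed is mine.

```python
def equal_split(affiliations):
    """Earlier SRC practice as described above: one highly cited researcher's
    credit is divided equally among all listed affiliations."""
    return {inst: 1.0 / len(affiliations) for inst in affiliations}

def primary_only(affiliations):
    """Treatment of the new (2014) list: secondary affiliations are excluded,
    so only the primary affiliation -- assumed here to be listed first --
    receives credit."""
    return {affiliations[0]: 1.0}

affils = ["Home University", "Adjunct Host University"]  # hypothetical researcher
print(equal_split(affils))   # {'Home University': 0.5, 'Adjunct Host University': 0.5}
print(primary_only(affils))  # {'Home University': 1.0}
```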

The result was that in 2014 the rankings showed an unusual degree of volatility, although this year things are a lot more stable. My understanding is that Shanghai will move to counting only the new list next year, again without secondary affiliations, so there should be a lot of interesting changes then. It looks as though Stanford, Princeton, University of Wisconsin -- Madison, and Kyoto University will suffer because of the change, while University of California Santa Cruz, Rice University, University of Exeter and University of Wollongong will benefit.

While SRC has efficiently dealt with the issue of secondary affiliation with regard to its Highly Cited indicator, the issue has now resurfaced in the unusually high scores achieved by King Abdulaziz University for publications, largely because of its adjunct faculty. Expect more discussion over the next year or so. It would seem sensible for SRC to consider a five- or ten-year period rather than one year for their Publications indicator, and academic publishers, the media and rankers in general may need to give some thought to the proliferation of secondary affiliations.


QS

On July 27 Quacquarelli Symonds (QS) announced that for 18 months they had been thinking about normalising the counting of citations across five broad subject areas. They observed that a typical institution would receive about half of its citations from the life sciences and medicine, over a quarter from the natural sciences but just 1% from the arts and humanities.

In their forthcoming rankings QS will assign a 20% weighting for citations to each of the five subject areas, something that, according to Ben Sowter, Research Director at QS, they have already been doing for the academic opinion survey.
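The effect of that change can be sketched as follows. The scaling of each area against a benchmark is an assumption made purely for illustration; QS have not published the exact formula, and the counts below are invented.

```python
FACULTY_AREAS = ["arts_humanities", "engineering_technology",
                 "life_sciences_medicine", "natural_sciences", "social_sciences"]

def raw_total(citations_by_area):
    """Unnormalised total: dominated by medicine and the natural sciences."""
    return sum(citations_by_area[a] for a in FACULTY_AREAS)

def area_normalised(citations_by_area, benchmark_by_area):
    """Each of the five areas contributes 20%, with each area's count scaled
    against a benchmark for that area (the scaling is an assumed detail)."""
    return sum(0.2 * citations_by_area[a] / benchmark_by_area[a] for a in FACULTY_AREAS)

institution = {"arts_humanities": 400, "engineering_technology": 9000,
               "life_sciences_medicine": 25000, "natural_sciences": 14000,
               "social_sciences": 2100}
benchmark = {"arts_humanities": 2000, "engineering_technology": 30000,
             "life_sciences_medicine": 120000, "natural_sciences": 80000,
             "social_sciences": 10000}

print(raw_total(institution))                            # dominated by medicine
print(round(area_normalised(institution, benchmark), 3)) # each area counts equally
```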

It would seem then that there are likely to be some big rises and big falls this September. I would guess that places strong in humanities, social sciences and engineering like LSE, New York University and Nanyang Technological University may go up and some of the large US state universities and Russian institutions may go down. That's a guess because it is difficult to tell what happens with the academic and employer surveys.

QS have also made an attempt to deal with the issue of hugely cited papers with hundreds, even thousands of "authors" -- contributors would be a better term -- mainly in physics, medicine and genetics. Their approach is to exclude all papers with more than 10 contributing institutions, that is 0.34% of all publications in the database.

This is rather disappointing. Papers with huge numbers of authors and citations obviously do have distorting effects but they have often dealt with fundamental and important issues. To exclude them altogether is to ignore a very significant body of research.

The obvious solution to the problem of multi-contributor papers is fractional counting, dividing the number of citations by the number of contributors or contributing institutions. QS claim that to do so would discourage collaboration, which does not sound very plausible.

In addition, QS will likely extend the life of  survey responses from three to five years. That could make the rankings more stable by smoothing out annual fluctuations in survey responses and reduce the volatility caused by the proposed changes in the counting of citations.

The shift to a moderate version of field normalisation is helpful as it will reduce the undue privilege given to medical research, without falling into the huge problems that result from using too many categories. It is unfortunate, however, that QS have not taken the plunge into fractional counting. One suspects that technical problems and financial considerations might be as significant as the altruistic desire not to discourage collaboration.

After a resorting in September the QS rankings are likely to become a bit more stable and credible, but their most serious problem, the structure, validity and excessive weighting of the academic survey, has still not been addressed.

THE

Meanwhile, Times Higher Education (THE) has also been grappling with the issue of authorship inflation. Phil Baty has announced that this year 649 papers with over 1,000 authors will be excluded from their calculation of citations because " we consider them to be so freakish that they have the potential to distort the global scientific landscape".

But it is not the papers that do the distorting. It is the methodology. THE and their former data partner Thomson Reuters, like QS, have avoided fractional counting (except for a small experimental African ranking), and so every one of those hundreds or thousands of authors gets full credit for the hundreds or thousands of citations. This has given places like Tokyo Metropolitan University, Scuola Normale Superiore Pisa, Universite Cadi Ayyad in Morocco and Bogazici University in Turkey remarkably high scores for Citations: Research Impact, much higher than their scores for the bundled research indicators.

THE have decided simply to exclude 649 papers, or 0.006% of the total, from their calculations for the world rankings, a far smaller exclusion than QS's. Again, this is a rather crude measure. Many of the "freaks" are major contributions to advanced research and deserve to be acknowledged by the rankings in some way.

THE did use fractional counting in their recent experimental ranking of African universities and Baty indicates that they are considering doing so in the future.

It would be a big step forward for THE if they introduce fractional counting of citations. But they should not stop there. There are other bugs in the citations indicator that ought to be fixed.

First, it does not at present measure what it is supposed to measure. It does not measure a university's overall research impact. At best, it is a measure of the average quality of research papers no matter how few (above a certain threshold) they are.

Second, the "regional modification", which divides the university citation impact score by the square root of the the score of the country where the university is located, is another source  of distortion. It gives a bonus to universities simply for being located in  underperforming countries. THE or TR have justified the modification by suggesting that some universities deserve compensation because they lack funding or networking opportunities. Perhaps they do, but this can still lead to serious anomalies.

Thirdly, THE need to consider whether they should assign citations to so many fields since this increases the distortions that can arise when there is a highly cited paper in a normally lowly cited field.

Fourthly, should they assign a thirty per cent weighting to an indicator that may be useful for distinguishing between the likes of MIT and Caltech but may be of little relevance for the universities that are now signing up for the world rankings?


Monday, August 03, 2015

The CWUR Rankings 2015

The Center for World University Rankings, based in Jeddah, Saudi Arabia, has produced the latest edition of its global ranking of 1,000 universities.  The Center is headed by Nadim Mahassen, an Assistant Professor at King Abdulaziz University.

The rankings include five indicators that measure various aspects of publication and research: publications in "reputable journals", research papers in "highly influential" journals, citations, h-index and patents.

These indicators are given a combined weighting of 25%.

Another 25% goes to Quality of Education, which is measured by the number of alumni receiving major international awards relative to size (current number of students according to national agencies). This is obviously a  crude measure which fails to distinguish among the great mass of universities that have never won an award.

Similarly, another quarter is assigned to Quality of Faculty, measured by the number of faculty receiving such awards. Again, this indicator is of little or no relevance to all but a few hundred institutions.

The final quarter goes to Alumni Employment, measured by the number of alumni holding CEO positions at top corporations. Once more, this is of relevance to only a limited number of universities.

The Top Ten are:

1.    Harvard
2.    Stanford
3.    MIT
4.    Cambridge
5.    Oxford
6.    Columbia
7.    Berkeley
8.    Chicago
9.    Princeton
10.  Cornell.

The only change from last year is that Cornell has replaced Yale in tenth place.


Countries with Universities in the Top Hundred in 2015 and 2014


Country            2015    2014
US                 55      53
UK                 7       7
Japan              7       8
Switzerland        4       4
France             4       4
Canada             3       3
Israel             3       3
South Korea        2       1
Germany            2       4
Australia          2       2
China              2       2
Netherlands        2       1
Russia             1       1
Taiwan             1       1
Belgium            1       1
Norway             1       0
Sweden             1       2
Singapore          1       1
Denmark            1       1
Italy              0       1


Top Ranked in Region or Country

USA:                                       Harvard
Canada:                                  Toronto
Asia:                                       Tokyo
South Asia:                             IIT Delhi
Southeast Asia :                      National University of Singapore
Europe:                                   Cambridge
Central and Eastern Europe:    Lomonosov Moscow State University
Arab World:                             King Saud University
Middle East:                             Hebrew University of Jerusalem
Latin America:                         Sao Paulo
Africa:                                      University of the Witwatersrand
Caribbean:                               University of Puerto Rico at Mayagüez



Noise Index


In the top 20, the CWUR rankings are more stable than THE and QS but less stable than the Shanghai rankings.

Average position change of universities in the top 20 in 2014: 0.5

Comparison

CWUR 2013-14:            0.9
Shanghai Rankings (ARWU)
2011-12:                        0.15
2012-13:                        0.25
THE WUR  2012-13:      1.2
QS  WUR    2012-13      1.7

With regard to the top 100, the CWUR rankings are more stable this year, with volatility similar to that of the QS and THE rankings, although still considerably greater than that of ARWU.



Average position change of universities in the top 100 in 2014: 4.15

Comparison

CWUR 2013-14:           10.59

Shanghai Rankings (ARWU)

2011-12:                           2.01
2012-13:                           1.66
THE WUR  2012-13:         5.36
QS  WUR    2012-13:       3.97


Monday, May 11, 2015

The Geography of Excellence: the Importance of Weighting


So finally, the 2015 QS subject rankings were published. It seems that the first attempt was postponed when the original methodology produced implausible fluctuations, probably resulting from the volatility that is inevitable when there are a small number of data points -- citations and survey responses -- outside the top 50 for certain subjects.

QS have done some tweaking, some of it aimed at smoothing out the fluctuations in the responses to their academic and employer surveys.

These rankings look a bit different from the World University Rankings. Cambridge has the most top ten placings (31), followed by Oxford and Stanford (29 each), Harvard (28), Berkeley (26) and MIT (16).

But in the world rankings MIT is in first place, Cambridge second, Imperial College London third, Harvard fourth and Oxford and University College London joint fifth.

The subject rankings use two indicators from the world rankings, the academic survey and the employer survey, but not internationalisation, student-faculty ratio or citations per faculty. They add two indicators: citations per paper and the h-index.
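Both added indicators are standard bibliometric measures. Here is a minimal sketch of each, with made-up citation counts:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have been cited at least h times."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for i, cites in enumerate(ranked, start=1):
        if cites >= i:
            h = i
        else:
            break
    return h

def citations_per_paper(citation_counts):
    """Simple average of citations over all papers."""
    return sum(citation_counts) / len(citation_counts)

papers = [48, 33, 20, 12, 9, 7, 4, 2, 1, 0]  # invented citation counts
print(h_index(papers))              # 6
print(citations_per_paper(papers))  # 13.6
```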

The result is that the London colleges do less well in the subject rankings since they do not benefit from their large numbers of international students and faculty. Caltech, Princeton and Yale also do relatively badly, probably because the new rankings do not take account of their low student-faculty ratios.

The lesson of this is that if weighting is not everything, it is definitely very important.

Below is a list of universities ordered by the number of top five placings. There are signs of the Asian advance --  Peking, Hong Kong and the National University of Singapore -- but it is an East Asian advance.

Europe is there too but it is Cold Europe -- Switzerland, Netherlands and Sweden -- not the Mediterranean.


Rank   University                            Country        Top Five Places
1      Harvard                               USA            26
2      Cambridge                             UK             20
3      Oxford                                UK             18
4      Stanford                              USA            17
5=     MIT                                   USA            16
5=     UC Berkeley                           USA            16
7      London School of Economics            UK             7
8=     University College London             UK             3
8=     ETH Zurich                            Switzerland    3
10=    New York University                   USA            2
10=    Yale                                  USA            2
10=    Delft University of Technology        Netherlands    2
10=    National University of Singapore      Singapore      2
10=    UC Los Angeles                        USA            2
10=    UC Davis                              USA            2
10=    Cornell                               USA            2
10=    Wisconsin - Madison                   USA            2
10=    Michigan                              USA            2
10=    Imperial College London               UK             2
20=    Wageningen                            Netherlands    1
20=    University of Southern California     USA            1
20=    Pratt Institute, New York             USA            1
20=    Rhode Island School of Design         USA            1
20=    Parsons: The New School for Design    USA            1
20=    Royal College of Art, London          UK             1
20=    Melbourne                             Australia      1
20=    Texas-Austin                          USA            1
20=    Sciences Po                           France         1
20=    Princeton                             USA            1
20=    Yale                                  USA            1
20=    Chicago                               USA            1
20=    Manchester                            UK             1
20=    University of Pennsylvania            USA            1
20=    Durham                                UK             1
20=    INSEAD                                France         1
20=    London Business School                UK             1
20=    Northwestern                          USA            1
20=    Utrecht                               Netherlands    1
20=    Guelph                                Canada         1
20=    Royal Veterinary College, London      UK             1
20=    UC San Francisco                      USA            1
20=    Johns Hopkins                         USA            1
20=    KU Leuven                             Belgium        1
20=    Gothenburg                            Sweden         1
20=    Hong Kong                             Hong Kong      1
20=    Karolinska Institute                  Sweden         1
20=    Sussex                                UK             1
20=    Carnegie Mellon University            USA            1
20=    Rutgers                               USA            1
20=    Pittsburgh                            USA            1
20=    Peking                                China          1
20=    Purdue                                USA            1
20=    Georgia Institute of Technology       USA            1
20=    Edinburgh                             UK             1

Tuesday, October 07, 2014

The Times Higher Education World University Rankings





Publisher

Times Higher Education



Scope

Global. Data provided for 400 universities. Over 800 ranked.


Top Ten


Place   University
1       California Institute of Technology (Caltech)
2       Harvard University
3       Oxford University
4       Stanford University
5       Cambridge University
6       Massachusetts Institute of Technology (MIT)
7       Princeton University
8       University of California Berkeley
9=      Imperial College London
9=      Yale University



Countries with Universities in the Top Hundred


Country         Number of Universities
USA             45
UK              11
Germany         6
Netherlands     6
Australia       5
Canada          4
Switzerland     3
Sweden          3
South Korea     3
Japan           2
Singapore       2
Hong Kong       2
China           2
France          2
Belgium         2
Italy           1
Turkey          1



Top Ranked in Region


North America                    California Institute of Technology (Caltech)
Africa                           University of Cape Town
Europe                           Oxford University
Latin America                    Universidade de Sao Paulo
Asia                             University of Tokyo
Central and Eastern Europe       Lomonosov Moscow State University
Arab World                       University of Marrakech Cadi Ayyad
Middle East                      Middle East Technical University
Oceania                          University of Melbourne



Noise Index

In the top 20, this year's THE world rankings are less volatile than the previous edition and this year's QS rankings. They are still slightly less stable than the Shanghai rankings.


Ranking                                                    Average Place Change of Universities in the Top 20
THE World Rankings 2013-14                                 0.70
THE World Rankings 2012-2013                               1.20
QS World Rankings 2013-2014                                1.45
ARWU 2013-2014                                             0.65
Webometrics 2013-2014                                      4.25
Center for World University Ranking (Jeddah) 2013-2014     0.90


Looking at the top 100 universities, the  THE rankings are more stable than last year. The average university in the top 100 in 2013 rose or fell 4.34 places. The QS rankings are now more stable than the THE or Shanghai rankings.

Ranking                                                    Average Place Change of Universities in the Top 100
THE World Rankings 2013-2014                               4.34
THE World Rankings 2012-2013                               5.36
QS World Rankings 2013-14                                  3.94
ARWU 2013-2014                                             4.92
Webometrics 2013-2014                                      12.08
Center for World University Ranking (Jeddah) 2013-2014     10.59


Note: universities falling out of the top 100 are treated as though they fell to 101st position.
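The index used in these tables is straightforward to compute. Here is a minimal sketch, under the assumption that it is the mean absolute change in rank, with drop-outs pinned at position 101 as in the note above; the rankings below are invented.

```python
def noise_index(previous, current, top_n=100):
    """Average absolute change in position for universities in the top
    `top_n` of the earlier ranking; a university that falls out of the
    top `top_n` is treated as falling to position top_n + 1."""
    changes = []
    for old_rank, uni in enumerate(previous[:top_n], start=1):
        new_rank = current.index(uni) + 1 if uni in current else top_n + 1
        new_rank = min(new_rank, top_n + 1)
        changes.append(abs(new_rank - old_rank))
    return sum(changes) / len(changes)

last_year = ["Alpha U", "Beta U", "Gamma U"]
this_year = ["Beta U", "Alpha U", "Delta U"]
print(noise_index(last_year, this_year, top_n=3))  # 1.0
```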


Methodology

See here