Sunday, May 08, 2022

Has China Really Stopped Rising? Looking at the QS Subject Rankings

For the last few years the publication of global rankings has led to headlines about the rise of Asia. If these were to be believed we would expect a few Asian universities to be orbiting above the stratosphere by now.

The Asian ascent was always somewhat exaggerated. It was true largely for China and perhaps Southeast Asia and the Gulf States. Japan, however, has been relatively stable or even declining a bit, and India has so far made little progress as far as research or innovation is concerned. Now, it seems that the Chinese research wave may be slowing down. The latest edition of the QS subject rankings suggests that the quality of Chinese research is levelling off and perhaps even declining.

A bit of explanation here. QS publishes rankings for broad subject fields, such as Arts and Humanities, and for narrow subject areas, such as Archaeology. All tables include indicators for H-index, citations per paper, academic review, and employer review, with varying weightings. This year, QS has added a new indicator, International Research Network (IRN), based on the number of international collaborators and their countries or territories, introduced with "broad" -- does that mean not unanimous? -- support from its Advisory Board. Chinese universities do much less well here than on the other indicators.
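
QS does not release its underlying calculations, but a toy example makes the mechanics clear. In the Python sketch below the weights and scores are purely illustrative, not QS's actual ones; it simply shows how a university that is strong on the bibliometric indicators but weak on IRN loses ground the moment the new indicator enters the weighting.

# A minimal sketch (not QS's actual code) of how a weighted composite score
# responds when a new indicator is added. Weights and scores are invented.

indicators_old = {"academic_review": 0.40, "employer_review": 0.20,
                  "citations_per_paper": 0.20, "h_index": 0.20}
indicators_new = {"academic_review": 0.35, "employer_review": 0.20,
                  "citations_per_paper": 0.175, "h_index": 0.175, "irn": 0.10}

# Hypothetical profile: strong on bibliometrics, weak on international collaboration.
scores = {"academic_review": 90, "employer_review": 85,
          "citations_per_paper": 92, "h_index": 94, "irn": 55}

def composite(weights, scores):
    # Weighted sum of indicator scores; weights are assumed to sum to 1.
    return sum(w * scores[k] for k, w in weights.items())

print(composite(indicators_old, scores))  # before IRN was added
print(composite(indicators_new, scores))  # after: the weak IRN score drags the composite down by several points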

With QS, as with the other rankings, we should always be careful when there is any sort of methodological change. The first rule of ranking analysis is that any non-trivial change in rank is likely to be the result of methodological changes.

So let's take a look at the broad field tables. In Arts and Humanities the top Chinese university is Peking University, which fell seven places from 36th to 43rd between 2021 and 2022.

It was the same for other broad areas. In Engineering and Technology, Tsinghua fell from 10th to 14th, and in Natural Sciences from 15th to 23rd (in this table Peking moved slightly ahead of Tsinghua into 21st place). In Social Sciences and Management, Peking went from 21st to 26th.

There was one exception. In Life Sciences and Medicine Peking rose from 62nd to 53rd, although its overall score remained the same at 79.

However, before assuming that this is evidence of Chinese decline, we should note the possible impact of the new indicator, where Chinese institutions, including Peking and Tsinghua, do relatively poorly. In Life Sciences and Medicine every single one of the 22 Chinese universities listed does better for H-Index and Citations than for IRN.

It looks as though the ostensible fall of Chinese universities  is partly or largely due to QS's addition of the IRN metric.

Looking at Citations per Paper, which is a fairly good proxy for research quality, we find that for most subject areas the best Chinese universities have improved since last year. In Engineering and Technology Tsinghua has risen from 89.1 to 89.6. In Life Sciences and Medicine Peking has gone from 79.2 to 80.6, and in Social Sciences and Management from 89.7 to 90.7.

In Natural Sciences, Tsinghua had a citations score of 88.6. That score fell this year, and Tsinghua was surpassed by Peking, which scored 90.1.

If Citations per Paper is considered the arbiter of research excellence, then Chinese universities have been improving over the last year and the apparent decline in the broad subject areas is largely the result of the new indicator. One wonders if the QS management knew this was going to happen.

That is not the end of the discussion. There may well be areas where the Chinese advance is faltering or at least reaching a plateau and this might be revealed by a scrutiny of the narrow subject tables.



Monday, March 28, 2022

Where does reputation come from?

THE announced the latest edition of its reputation rankings last October. The amount of information is quite limited: scores are given for only the top fifty universities. But even that provides a few interesting insights.

First, there is really no point in providing separate data for teaching and research reputation. The correlation between the two for the top fifty is .99. This is unsurprising. THE surveys researchers who have published in Scopus indexed journals and so there is a very obvious halo effect. Respondents have no choice but to refer to their knowledge of research competence when trying to assess teaching performance. If THE are going to improve their current methodology they need to recognise that their reputation surveys are measuring the same thing. Maybe they could try to find another source of respondents for the teaching survey, such as school advisors, students or faculty at predominantly teaching institutions. 

Next, after plugging in a few indicators from other rankings, it is clear that the metrics most closely associated with teaching and research reputation are publications in Nature and Science (Shanghai), highly cited researchers (Shanghai), and papers in highly reputed journals (Leiden).

The correlation with scores in the RUR and QS reputation rankings, citations (THE and QS), and international faculty was modest.

There was no correlation at all with the proportion of papers with female or male authors (Leiden).

So it seems that the best way to acquire a reputation for good teaching and research is to publish papers in the top journals and get lots of citations. That, of course, applies only to this very limited group of institutions.



Sunday, March 20, 2022

What should Rankers Do About the Ukraine Crisis?

Over the last few days there have been calls for the global rankers to boycott or delist Russian universities to protest the Russian invasion of Ukraine. There have also been demands that journals reject submissions from Russian authors and that universities and research bodies stop collaborating with Russian researchers.

So far, four European ranking agencies have announced some sort of sanctions.

U-Multirank has announced that Russian universities will be suspended "until they again share in the core values of the European higher education area."

QS will not promote Russia as a study area and will pause business engagement. It will also redact Russian universities from new rankings.

Webometrics will "limit the value added information" for Russian and Belarusian universities.

Times Higher Education (THE) will stop business activities with Russia but will not remove Russian universities from its rankings. 

The crisis has highlighted a fundamental ambiguity in the nature of global rankings. Are they devices for promoting the business interests of institutions or do they provide relevant and useful information for researchers, students and the public?

Refraining from doing business with Russia until it withdraws from Ukraine is a welcome rebuke to the current government. If, however, rankings contain useful information about Russian scientific and research capabilities then that information should continue to be made available.



Sunday, September 26, 2021

What is a University Really For?

Louise Richardson, Vice-Chancellor of the University of Oxford, has seen fit to enlighten us about the true purpose of a university. It is, it seems, to inculcate appropriate deference to the class of certified experts.

Professor Richardson remarked at the latest Times Higher Education (THE) academic summit that she was embarrassed that "we" had educated the Conservative politician Michael Gove who said, while talking about Brexit, that people had had enough of experts.

So now we know what universities are really about.  Not about critical discussion, cutting-edge research, skepticism, the disinterested pursuit of truth but about teaching respect for experts.

A few years ago I wrote a post suggesting we were now in a world where the expertise of the accredited experts was declining along with public deference. I referred to the failure of political scientists to predict the nomination of Trump, the election of Trump, the rise of Leicester City, the Brexit vote. It looks like respect for experts has continued to decline, not entirely without reason.

Professor Richardson thinks that Gove's disdain for the Brexit experts is cause for embarrassment. While it is early days for the real effects of Brexit to become clear, it is as yet far from obvious that it has been an unmitigated disaster. It is, moreover, a little ironic that the remark was made at the latest THE academic summit where the annual world rankings were announced. Richardson remarked that she was delighted that her university was once again ranked number one.

The irony is that the THE world rankings are probably the least expert of the global rankings although they are apparently the most prestigious at least among those institutions that are known for being prestigious.

Let's have another look at THE's Citations Indicator, which is supposed to measure research quality or impact and accounts for nearly a third of the total weighting. (Regular readers of this blog can skim or skip the next few lines.) Here are the top five from this year's rankings.

1.   University of Cape Coast

2.   Duy Tan University

3.   An Najah National University

4.   Aswan University

5.   Brighton and Sussex Medical School

This is not an academic version of the imaginary football league tables that nine-year-old children used to construct. Nor is it the result of massive cheating by the universities concerned. It is quite simply the outcome of a hopelessly flawed system. THE, or rather its data analysts, appear to be aware of the inadequacies of this indicator, but somehow meaningful reform keeps getting postponed. One day historians will search the THE archives to find the causes of this inability to take very simple and obvious measures to produce a sensible and credible ranking. I suspect that the people in control of THE policy are averse to anything that might distract from the priority of monetising as much data as possible. Nor is there any compelling reason to rush to reform when universities like Oxford are unconcerned about the inadequacies of the current system.

Here are the top five for income from industry which is supposed to have something to do with innovation.

1.   Asia University Taiwan

2.   Istanbul Technical University

3.   Khalifa University

4.   Korea Advanced Institute of Science and Technology (KAIST)

5.   LMU Munich.

This is a bit better. It is not implausible that KAIST or Munich is a world leader for innovation. But in general this indicator is also inadequate for any purpose other than providing fodder for publicity. See a scathing review by Alex Usher.

Would any tutor or examiner at Oxford give any credit to a student who thought that Ghana, Vietnam and Palestine were centres of international research impact? Those universities are doing a remarkable job of teaching in many respects, but that is not what THE is ostensibly giving them credit for.

In addition, the THE world rankings fail to meet satisfactory standards with regard to basic validity. Looking at the indicator scores for the top 200 universities in the most recent world rankings we can see that the correlation between research and teaching is 0.92. In effect these are not two distinct metrics. They are measuring essentially the same thing. A quick look at the methodology suggests that what they are comparing is income (total institutional income for teaching, research income for research), reputation (the opinion surveys for research and teaching) and investment in doctoral programmes.

On the other hand, the citations indicator does not correlate significantly with research or teaching and correlates negatively with industry income.

One can hardly blame THE for wanting to make as much money as possible. But surely we can expect something better from supposedly elite institutions that claim to value intellectual and scientific excellence. If Oxford and its peers wish to restore public confidence in the experts, there is no better way than telling THE that they will not submit data until it produces something a little less embarrassing.




Wednesday, August 25, 2021

THE World University Rankings: Indicator Correlations

I was going to wait until next week to do this, but the publication of the latest edition of the THE world rankings is imminent and there may be a new methodology.

The current THE methodology is based on five indicators or indicator groups: Teaching (5 indicators), Research (3 indicators), Citations, Income from Industry, International Outlook (3 indicators).

Looking at the analysis of 1526 cases (using PSPP), we can see that the correlation between Teaching and Research is very high, .89, and fairly good between those two and Citations. Teaching and Research both include reputation surveys of teaching and research, which have been shown to yield very similar results. Also, Teaching includes Institutional Income and Research includes Research Income, which are likely to be closely related.

The Citations indicator has a moderate correlation with Teaching and Research, as noted, and also with International Outlook.

The correlations between Industry Income and Teaching and Research are moderate and those with Citations and International Outlook are low, .20 and .18 respectively. The Industry Income indicator is close to worthless since the definition of income is apparently interpreted in several different ways and may have little relation to financial reality. International Outlook correlates modestly with the other indicators except for Industry Income.

It seems there is little point in distinguishing between the Teaching and Research indicators since they are both influenced by income, reputation, and large doctoral programmes. The Industry Income indicator has little validity and will probably, with very good reason, be removed from the THE rankings.


CORRELATIONS

CORRELATION
/VARIABLES = teaching research citations industry international weightedtotal
/PRINT = TWOTAIL SIG.
Correlations (Pearson; N = 1526; all coefficients significant at p < .001, two-tailed):

                 teaching  research  citations  industry  international  weightedtotal
teaching             1.00       .89        .51       .45            .38            .83
research              .89      1.00        .59       .53            .54            .90
citations             .51       .59       1.00       .20            .57            .87
industry              .45       .53        .20      1.00            .18            .42
international         .38       .54        .57       .18           1.00            .65
weightedtotal         .83       .90        .87       .42            .65           1.00
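
For anyone who wants to replicate this sort of analysis without PSPP, the same matrix can be produced in a few lines of Python with pandas. The file name below is a placeholder for whatever table of indicator scores one has assembled from the THE site; the column names are the ones used above.

# Rough sketch: reproduce the Pearson correlation matrix from a CSV of
# indicator scores, one row per ranked university. "the_scores.csv" is a
# hypothetical file name.

import pandas as pd

df = pd.read_csv("the_scores.csv",
                 usecols=["teaching", "research", "citations",
                          "industry", "international", "weightedtotal"])

# Pairwise Pearson correlations, the equivalent of PSPP's CORRELATION command.
corr = df.corr(method="pearson").round(2)
print(corr)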


Most people are probably more concerned with distinctions among the world's elite or would-be elite universities. Turning to the top 200 of the THE rankings, the correlation between Teaching and Research, at .90, is again very high, suggesting that these are measuring virtually the same thing.

The Citations indicator has a low correlation with International Outlook, low and insignificant correlations with Teaching and Research, and a negative correlation with Industry Income.

Industry Income has low correlations with Research and Teaching and negative correlations with Citations and International Outlook.

It would seem that THE world rankings are not helpful for evaluating the quality of the global elite. A new methodology will be most welcome.


CORRELATIONS

CORRELATION
/VARIABLES = teaching research citations industry international weightedtotal
/PRINT = TWOTAIL SIG.
Correlations (Pearson; N = 200; two-tailed):

                 teaching  research  citations  industry  international  weightedtotal
teaching             1.00       .90        .02       .23           -.11            .89
research              .90      1.00        .06       .28            .05            .92
citations             .02       .06       1.00      -.30            .22            .39
industry              .23       .28       -.30      1.00           -.10            .17
international        -.11       .05        .22      -.10           1.00            .17
weightedtotal         .89       .92        .39       .17            .17           1.00

Coefficients not significant at the .05 level: teaching-citations (p = .77), teaching-international (p = .11), research-citations (p = .41), research-international (p = .47), and industry-international (p = .15). All other coefficients are significant at p < .05.





Monday, August 23, 2021

Shanghai Rankings: Correlations Between Indicators

This is, I hope, the first of  a series. Maybe THE and QS next week.

If we want to compare  the utility of university rankings one attribute to consider is internal consistency. Here, the correlation between the various indicators can tell us a lot. If the correlation between a pair of indicators is 0.90 or above we can assume that these indicators are essentially measuring the same thing.

On the other hand, if there is no correlation, or one that is low, insignificant, or even negative, we might have doubts about the validity of one or both of the indicators. It is reasonable to expect that if a university scores well for one metric it will do well for others, provided they both represent highly valued attributes. A university producing high-quality research or collecting large numbers of citations should also score well for reputation. If it does not, there might be a methodological problem somewhere.

So, we can assume that if the indicators are valid and are not measuring the same thing the correlation between indicators will probably be somewhere between 0.5 and 0.9.
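
Stated as a crude decision rule, the thresholds I am using look like this; the cut-offs are mine, not any official standard, and the example values are taken from the tables further down.

# Crude reading of an inter-indicator Pearson correlation, following the
# thresholds discussed above (my own rule of thumb, nothing official).

def interpret_correlation(r: float) -> str:
    if r >= 0.90:
        return "effectively measuring the same thing"
    if 0.50 <= r < 0.90:
        return "consistent but distinct indicators"
    if 0.0 <= r < 0.50:
        return "weak relationship: possible validity problem"
    return "negative relationship: probable validity problem"

for pair, r in [("awards vs alumni (top 200)", 0.79),
                ("publications vs pcp (top 200)", 0.12)]:
    print(pair, "->", interpret_correlation(r))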

Let's have a look at the Shanghai ARWU for 2019. The indicator scores were extracted and analysed using PSPP. (It is very difficult to analyse the 2020 edition because of a recent change in presentation.) These rankings have six indicators: alumni winning Nobel Prizes and Fields Medals, faculty winning Nobel Prizes and Fields Medals, highly cited researchers, papers in Nature and Science, publications in the Web of Science, and productivity per capita.

Looking at all 1000 institutions in the Shanghai Rankings, Alumni, Awards, and Nature and Science all correlate well with each other. Highly Cited Researchers correlates well with Nature and Science and Publications but less so with Alumni and Awards. Nature and Science correlates well with all the other indicators.

The Publications indicator does not correlate well with Alumni and Awards. This is to be expected since Publications refers to 2018 while the Alumni and Awards indicators go back several decades.

Overall, the correlations are quite good although there is a noticeable divergence between Publications and Alumni and Awards, which cover very different time periods. 

CORRELATIONS

CORRELATION
/VARIABLES = alumni awards highlycited naturescience publications pcp finaltotal
/PRINT = TWOTAIL NOSIG.
Correlations (Pearson; N = 1000, except pairs involving naturescience, where N = 992; all coefficients significant at p < .001, two-tailed):

                 alumni  awards  highlycited  naturescience  publications   pcp  finaltotal
alumni             1.00     .78          .51            .72           .45   .63         .76
awards              .78    1.00          .57            .75           .44   .67         .82
highlycited         .51     .57         1.00            .79           .72   .64         .87
naturescience       .72     .75          .79           1.00           .69   .73         .93
publications        .45     .44          .72            .69          1.00   .50         .81
pcp                 .63     .67          .64            .73           .50  1.00         .78
finaltotal          .76     .82          .87            .93           .81   .78        1.00



Most observers of ARWU and other global rankings are interested in the top levels, where elite schools and national flagships jostle for dominance. Analysing correlations among indicators for the top 200 in ARWU, there are high correlations between Alumni, Awards, Nature and Science, and Productivity per Capita, ranging from .62 to .79.

There is also a high correlation of .72 between Nature and Science and Highly Cited Researchers. It is, however, noticeable that the correlation between Publications and other indicators is low for Highly Cited Researchers and very low for Productivity per Capita, Alumni and Awards.

It seems that, especially among the top 200 places, there is a big gap opening between the old traditional elite of Oxbridge, the Ivy League and the like who continue to get credit for long dead Nobel laureates and the new rising stars of Asia and Europe who are surging ahead for WOS papers and beginning to produce or recruit superstar researchers.




Correlations (Pearson; N = 200, except pairs involving naturescience, where N = 199; two-tailed):

                 alumni  awards  highlycited  naturescience  publications   pcp  finaltotal
alumni             1.00     .79          .36            .69           .21   .62         .78
awards              .79    1.00          .44            .74           .14   .67         .84
highlycited         .36     .44         1.00            .72           .57   .49         .78
naturescience       .69     .74          .72           1.00           .44   .65         .92
publications        .21     .14          .57            .44          1.00   .12         .55
pcp                 .62     .67          .49            .65           .12  1.00         .72
finaltotal          .78     .84          .78            .92           .55   .72        1.00

The only coefficient not significant at the .05 level is publications-pcp (p = .08); all others are significant at p < .05.


Thursday, August 12, 2021

THE's Caucus Ranking


In Alice in Wonderland there is a "caucus race" in which everyone runs around frantically in different directions and eventually everyone wins a prize. Unfortunately, there are not quite enough sweets to go around as prizes and so poor Alice has to make do with her thimble which she gives to the Dodo who then presents it to her.

It seems that THE has come up with a caucus ranking. In the THE Impact Rankings universities expend a lot of energy, do a lot of amazing, astounding and very different things in very different ways and a lot of them get some sort of prize for something. 

These rankings are another example of the growing complexity of the ranking scene. Global university rankings used to be simple. Shanghai Jiao Tong University started its ARWU rankings in 2003 with just 500 ranked institutions and six indicators. Since then the number of rankings has proliferated and there have been more and more spin-offs -- young university, regional, business school, national and subject rankings, and so on -- with more indicators and increasingly complex and often opaque methodologies. We are getting to the point where a university is incompetent or excessively honest if it cannot find a ranking indicator, perhaps finely sliced by age, size, mission, and/or subject, in which it can scrape into the top five hundred, or at the very least the top thousand, and therefore into the top 3 or 4% in the world.

Some of the recent rankings seem redundant or pointless, going over the same old ground or making granular distinctions that are of little interest. It is no doubt nice to be acclaimed as the best young university for social science in South Asia, and maybe that can be used in advertising, but is it really necessary?

Now we have the third edition of the THE Impact rankings. These, as THE boasts, are the only rankings to measure universities according to their commitment to the UN's Sustainable Development Goals. But that is not very original. Universitas Indonesia's GreenMetric was doing something similar several years ago, although not tied explicitly to the UN goals, with indicators related to energy, infrastructure, climate change, water, waste, transportation, and education.

It seems a little odd that the UN should be accepted as the authority on the achievement of gender equality when its "peacekeeping" forces have repeatedly been accused of rape and sexual assault. Is the UN really the right body to lay down guidelines about health and well-being considering the dubious performance of the WHO during the pandemic crisis?

One also wonders why THE should venture into ranking contributions to sustainability when after a decade it has still failed to come up with a credible citations indicator, which would seem a much easier task. 

It is noticeable that participation in these rankings is very uneven. There are 1,118 universities in the latest edition but only 13 Chinese and only 45 American, of which precisely two are in California, supposedly the homeland of environmental consciousness. The higher education elite of the USA, UK and China are largely absent. On the other hand, Iraq, Egypt, Brazil and Iran are much better represented here than in the research based rankings.

The top of these rankings is dominated by English-speaking universities outside the USA. The overall top twenty contains seven Australian, five British, three Canadian, and one each from Denmark, Ireland, the USA, New Zealand and Italy.

The popularity of the Impact Rankings seems linked to the current problems of many western universities. Public funding has been drying up, academic standards eroding, research output stagnating. Many universities have resorted to importing international, often Chinese, students and faculty to keep up standards, bring in tuition money, fill up postgraduate classes, and do the work of junior researchers.

The international students and researchers have left or are leaving and may not return in significant numbers, although THE "believes" that they will. This is happening as universities trying to reopen face the prospect of unprepared students, dwindling funds, and a lack of interest from employers. Eventually this will impact the position of universities in the global ranking systems. Those universities once dependent on international researchers for their reputation and ranking scores will start to suffer.

It looks as though western universities are losing interest in research and instruction in professional and academic subjects and are reinventing themselves as purveyors of transformative experiences to the children of the affluent and ambitious, guardians of the purity of cultural discourse, or as saviours of the planet.

The Financial Post of Canada has published a caustic comment on the joyful proclamations by Queen's University about its ascent to fifth place in the Impact Rankings. A trustee, John Stackhouse, has claimed that its success there meant that it was fulfilling "the true purpose of a university." The article observes that those "who believe the true purpose of a university is to pursue academic excellence and ensure that students who pass through its doors have the skills to build prosperous lives for themselves as productive members of their community, might differ."  In the THE World University Rankings and others Queen's is doing much less well. 

The methodology of the impact rankings does little to inspire confidence. For each of the indicators there is a weighting of 27% for bibliometric measures, such as the amount of research on hunger, health, water, or clean energy. It is easy to see how this could be gamed. Then there is a variety of data submitted by the institutions. Even if every university administrator is a sea-green incorruptible there are many ways in which such data can be massaged or stretched.
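
To illustrate the point, here is a deliberately simplified model of how such an indicator score might be put together. The 27% bibliometric weight is the figure mentioned above; the assumption that institution-submitted evidence carries the remaining weight, and all the numbers, are mine.

# Purely illustrative sketch of the structure described above: each SDG score
# combines a bibliometric component (27%) with institution-submitted evidence
# (assumed here, for simplicity, to carry the remaining 73%). Numbers invented.

def sdg_score(bibliometric: float, submitted_evidence: float) -> float:
    # Both components on a 0-100 scale.
    return 0.27 * bibliometric + 0.73 * submitted_evidence

# A university that boosts the bibliometric side, for example by tagging more
# papers as hunger- or water-related, moves its score without changing practice.
print(sdg_score(bibliometric=40, submitted_evidence=70))   # baseline
print(sdg_score(bibliometric=90, submitted_evidence=70))   # after working on the papers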

Added to that, THE does not appear to be doing a rigorous validation. Universities are not assessed on the same things, except for the partnership for the goals indicator. The University of Sydney, overall second this year, is ranked for clean water and sanitation, sustainable cities and communities, and life on land. Clean water and sanitation includes supporting water conservation off campus and the reuse of water across the university.

RMIT University, in third place, is ranked for decent work and economic growth, industry innovation and infrastructure, and reduced inequalities. Decent work and economic growth includes expenditure per employee and policies for ending discrimination. So, essentially, THE is trying to figure out whether Sydney is better at reusing water than RMIT is at announcing policies that are supposed to reduce discrimination. Comparing research output and impact across disciplines is, as THE ought to know, far from easy. Comparing performance in using water with discrimination policy would seem close to impossible, especially since THE does not always use objective criteria but merely examples of best practices. Evidence "is evaluated against a set of criteria and decisions are cross-validated where there is uncertainty. Evidence is not required to be exhaustive -- we are looking for examples that demonstrate best practice at the institutions concerned."

But it seems that a substantial number of universities will find these rankings a useful tool in their quest for income and publicity, and there will be more editions, and probably sub-rankings of one sort or another, for years to come.



 

Sunday, June 13, 2021

The Remarkable Revival of Oxford and Cambridge


There is nearly always a theme for the publication of global rankings. Often it is the rise of Asia, or parts of it. For a while it was the malign grasp of Brexit which was crushing the life out of British research or the resilience of American science in the face of the frenzied hostility of the great orange beast. This year it seems that the latest QS world rankings are about the triumph of Oxford and other elite UK institutions and their leapfrogging their US rivals. Around the world, quite a few other places are also showcasing their splendid achievements.

In the recent QS rankings Oxford has moved up from overall fifth to second place and Cambridge from seventh to third while University College London, Imperial College London, and Edinburgh have also advanced. No doubt we will soon hear that this is because of transformative leadership, the strength that diversity brings, working together as a team or a family, although I doubt whether any actual teachers or researchers will get a bonus or a promotion for their contributions to these achievements.

But was it leadership or team spirit that pushed Oxford and Cambridge into the top five? That is very improbable. Whenever there is a big fuss about universities rising or falling significantly in the rankings in a single year it is a safe bet that it is the result of an error, the correction of an error, or a methodological flaw or tweak of some kind.

Anyway, this year's Oxbridge advances had as much to do with leadership, internationalization, or reputation as goodness had to do with Mae West's diamonds. They were entirely due to a remarkable rise for both places in the score for citations per faculty, Oxford from 81.3 to 96 and Cambridge from 69.2 to 92.1. There was no such change for any of the other indicators.

Normally, there are three ways in which a university can rise in QS's citations indicator. One is to increase the number of publications while maintaining the citation rate. Another is to improve the citation rate while keeping output constant. The third is to reduce the number of faculty physically or statistically.
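
As a back-of-the-envelope illustration with invented numbers, all three routes simply move one of the two terms of a ratio.

# Toy illustration of the three routes to a higher citations-per-faculty
# score described above. All figures are invented.

def citations_per_faculty(citations: float, faculty: float) -> float:
    return citations / faculty

print(citations_per_faculty(100_000, 2_000))   # baseline: 50.0
print(citations_per_faculty(120_000, 2_000))   # routes 1 and 2: more citations, same staff: 60.0
print(citations_per_faculty(100_000, 1_600))   # route 3: fewer reported faculty: 62.5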

None of these seem to have happened at Oxford and Cambridge. The number of publications and citations has been increasing but not sufficiently to cause such a big jump. Nor does there appear to have been a drastic reduction of faculty in either place.

In any case, it seems that Oxbridge is not alone in its remarkable progress this year. For citations, ETH Zurich rose from 96.4 to 99.8, the University of Melbourne from 75 to 89.7, the National University of Singapore from 72.9 to 90.6, and Michigan from 58 to 70.5. At the top levels of these rankings nearly everybody is rising, except MIT, which retains the top score of 100, and it is noticeable that the increases get smaller as we approach the top.

It is theoretically possible that this might be the result of a collapse in the raw score of citations front runner MIT, which would raise everybody else's scores if it still remained at the top, but there is no evidence of either a massive collapse in citations or a massive expansion of research and teaching staff.

But then, as we go to the other end of the ranking, we find universities' citations scores falling: University College Cork from 23.4 to 21.8, Universitas Gadjah Mada from 1.7 to 1.5, UCSI University Malaysia from 4.4 to 3.6, and the American University in Cairo from 5.7 to 4.2.

It seems there is a bug in the QS methodology. The indicator scores published by QS are not raw data but standardized scores based on standard deviations from the mean. The mean score is set at fifty and the top score at one hundred. Over the last few years the number of ranked universities has been increasing, and the new ones tend to perform less well than the established ones, especially for citations. In consequence, the mean number of citations per faculty has declined, and universities scoring above the mean will therefore see an increase in their standardized scores, which are derived from the deviation from the mean. If this interpretation is incorrect I am very willing to be corrected.
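
QS does not publish its exact formula, but a simple mean-and-standard-deviation model is enough to reproduce the effect. In the Python sketch below, which uses entirely invented numbers and is not QS's actual method, a university whose raw citations per faculty figure does not change at all sees its scaled score jump once a crowd of weaker newcomers joins the ranking.

# Simplified model (not QS's actual formula) of mean-and-SD-based scaling with
# the mean anchored at 50. Adding many weak newcomers drags the mean down, so
# an unchanged raw value above the mean converts to a higher scaled score.

import statistics

def scaled(raw):
    # Map raw citations-per-faculty values to a 50-centred scale via z-scores,
    # clamped to the 0-100 range.
    mean = statistics.mean(raw)
    sd = statistics.pstdev(raw)
    return [max(0, min(100, 50 + 10 * (x - mean) / sd)) for x in raw]

established = [30, 40, 50, 60, 70, 80, 200]   # invented raw values; 200 is the MIT-like front runner
newcomers = [5, 6, 7, 8, 9, 10] * 20          # many weaker new entrants

before = scaled(established)
after = scaled(established + newcomers)

# The university with a raw value of 80 (index 5) has not changed at all,
# but its scaled score rises sharply once the newcomers are added.
print(round(before[5], 1), round(after[5], 1))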

This has an impact on the relative positions of Oxbridge and the leading American universities. Oxford and Cambridge rely on their scores in the academic and employer surveys and on international faculty and students to stay in the top ten. Compared to Harvard, Stanford and MIT, they do not perform well for quantity or quality of research. So the general inflation of citations scores gives them more of a boost than the US leaders, and their total scores rise.

It is likely that Oxford and Cambridge's moment of glory will be brief, since QS in the next couple of years will have to do some recentering in order to prevent citation indicator scores bunching up in the high nineties. The two universities will fall again, although that will probably not be attributed to a sudden collapse of leadership or failure to work as a team.

It will be interesting to see if any of this year's rising universities will make an announcement that they don't really deserve any praise for their illusory success in the rankings.