Wednesday, August 25, 2021

THE World University Rankings: Indicator Correlations

I was going to wait until next week to do this, but the latest edition of the THE world rankings is about to be published and it may bring a new methodology.

The current THE methodology is based on five indicators or indicator groups: Teaching (5 indicators), Research (3 indicators), Citations, Industry Income, and International Outlook (3 indicators).

Looking at the analysis of 1526 cases (using PSPP), we can see that the correlation between Teaching and Research is very high, .89, and fairly good between those two and Citations. Teaching and Research both include reputation surveys, which have been shown to yield very similar results. Also, Teaching includes Institutional Income and Research includes Research Income, which are likely to be closely related.

The Citations indicator has a moderate correlation with Teaching and Research, as noted, and also with International Outlook.

The correlations between Industry Income and Teaching and Research are moderate and those with Citations and International Outlook are low, .20 and .18 respectively. The Industry Income indicator is close to worthless since the definition of income is apparently interpreted in several different ways and may have little relation to financial reality. International Outlook correlates modestly with the other indicators except for Industry Income.

It seems there is little point in distinguishing between the Teaching and Research indicators since they are both influenced by income, reputation, and large doctoral programmes. The Industry Income indicator has little validity and will probably, with very good reason, be removed from the THE rankings.


CORRELATIONS

CORRELATION
/VARIABLES = teaching research citations industry international weightedtotal
/PRINT = TWOTAIL SIG.

Correlations, all ranked universities (N = 1526; all p < .001)

               teaching  research  citations  industry  international  weightedtotal
teaching          1.00      .89       .51       .45         .38             .83
research           .89     1.00       .59       .53         .54             .90
citations          .51      .59      1.00       .20         .57             .87
industry           .45      .53       .20      1.00         .18             .42
international      .38      .54       .57       .18        1.00             .65
weightedtotal      .83      .90       .87       .42         .65            1.00


Most people are probably more concerned with distinctions among the world's elite or would-be elite universities. Turning to the top 200 of the THE rankings, the correlation between Teaching and Research is again very high, .90, suggesting that these are measuring virtually the same thing.

The Citations indicator has a low correlation with International Outlook, near-zero and insignificant correlations with Teaching and Research, and a negative correlation with Industry Income.

Industry Income has low correlations with Research and Teaching and negative correlations with Citations and International Outlook.

It would seem that THE world rankings are not helpful for evaluating the quality of the global elite. A new methodology will be most welcome.


CORRELATIONS

CORRELATION
/VARIABLES = teaching research citations industry international weightedtotal
/PRINT = TWOTAIL SIG.

Correlations, top 200 (N = 200)

               teaching  research  citations  industry  international  weightedtotal
teaching          1.00      .90       .02       .23        -.11             .89
research           .90     1.00       .06       .28         .05             .92
citations          .02      .06      1.00      -.30         .22             .39
industry           .23      .28      -.30      1.00        -.10             .17
international     -.11      .05       .22      -.10        1.00             .17
weightedtotal      .89      .92       .39       .17         .17            1.00

All p < .01 except teaching-citations (.768), teaching-international (.114), research-citations (.411), research-international (.471), industry-international (.149), industry-weightedtotal (.014), and international-weightedtotal (.017).





Monday, August 23, 2021

Shanghai Rankings: Correlations Between Indicators

This is, I hope, the first of a series. Maybe THE and QS next week.

If we want to compare the utility of university rankings, one attribute to consider is internal consistency. Here, the correlations between the various indicators can tell us a lot. If the correlation between a pair of indicators is 0.90 or above, we can assume that these indicators are essentially measuring the same thing.

On the other hand, if there is no correlation, or one that is low, insignificant, or even negative, we might have doubts about the validity of one or both of the indicators. It is reasonable to expect that if a university scores well for one metric it will do well for others, provided they both represent highly valued attributes. A university producing high-quality research or collecting large numbers of citations should also score well for reputation. If it does not, there might be a methodological problem somewhere.

So, we can assume that if the indicators are valid and are not measuring the same thing the correlation between indicators will probably be somewhere between 0.5 and 0.9.
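As a rough sketch of this rule of thumb, the helper below computes Pearson correlations for every pair of indicators and attaches a verdict to each pair. The thresholds and verdict labels are just the heuristics described above, not anything the ranking agencies publish.

```python
from itertools import combinations
import statistics


def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


def consistency_check(indicators):
    """indicators: dict mapping indicator name -> list of scores,
    one score per university. Returns (r, verdict) for each pair."""
    verdicts = {}
    for (n1, s1), (n2, s2) in combinations(indicators.items(), 2):
        r = pearson(s1, s2)
        if r >= 0.90:
            verdict = "probably measuring the same thing"
        elif r < 0.50:
            verdict = "possible validity problem"
        else:
            verdict = "distinct but consistent"
        verdicts[(n1, n2)] = (round(r, 2), verdict)
    return verdicts
```

Feeding in two indicator columns that track each other almost perfectly returns the "same thing" verdict, which is essentially what happens with THE's Teaching and Research pillars below.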

Let's have a look at the Shanghai ARWU for 2019. The indicator scores were extracted and analysed using PSPP. (It is very difficult to analyse the 2020 edition because of a recent change in presentation.) These rankings have six indicators: alumni winning Nobel Prizes and Fields Medals, faculty winning Nobel Prizes and Fields Medals, papers in Nature and Science, highly cited researchers, publications indexed in the Web of Science, and per capita productivity.

Looking at all 1000 institutions in the Shanghai Rankings, Alumni, Awards, and Nature and Science all correlate well with each other. Highly Cited Researchers correlates well with Nature and Science and Publications but less so with Alumni and Awards. Nature and Science correlates well with all the other indicators.

The Publications indicator does not correlate well with Alumni and Awards. This is to be expected since Publications refers to 2018 while the Alumni and Awards indicators go back several decades.

Overall, the correlations are quite good although there is a noticeable divergence between Publications and Alumni and Awards, which cover very different time periods. 

CORRELATIONS

CORRELATION
/VARIABLES = alumni awards highlycited naturescience publications pcp finaltotal
/PRINT = TWOTAIL NOSIG.

Correlations, all 1000 institutions (N = 1000; 992 for naturescience; all p < .001)

               alumni  awards  highlycited  naturescience  publications   pcp   finaltotal
alumni           1.00     .78      .51           .72            .45        .63      .76
awards            .78    1.00      .57           .75            .44        .67      .82
highlycited       .51     .57     1.00           .79            .72        .64      .87
naturescience     .72     .75      .79          1.00            .69        .73      .93
publications      .45     .44      .72           .69           1.00        .50      .81
pcp               .63     .67      .64           .73            .50       1.00      .78
finaltotal        .76     .82      .87           .93            .81        .78     1.00



Most observers of ARWU and other global rankings are interested in the top levels, where elite schools and national flagships jostle for dominance. Analysing correlations among indicators for the top 200 in ARWU, there are high correlations between Alumni, Awards, Nature and Science, and Productivity per Capita, ranging from .62 to .79.

There is also a high correlation of .72 between Nature and Science and Highly Cited Researchers. It is, however, noticeable that the correlations between Publications and the other indicators are weaker: low for Highly Cited Researchers (.57) and very low for Productivity per Capita (.12), Alumni (.21), and Awards (.14).

It seems that, especially among the top 200 places, there is a big gap opening between the old traditional elite of Oxbridge, the Ivy League and the like who continue to get credit for long dead Nobel laureates and the new rising stars of Asia and Europe who are surging ahead for WOS papers and beginning to produce or recruit superstar researchers.




Correlations, top 200 (N = 200; 199 for naturescience)

               alumni  awards  highlycited  naturescience  publications   pcp   finaltotal
alumni           1.00     .79      .36           .69            .21        .62      .78
awards            .79    1.00      .44           .74            .14        .67      .84
highlycited       .36     .44     1.00           .72            .57        .49      .78
naturescience     .69     .74      .72          1.00            .44        .65      .92
publications      .21     .14      .57           .44           1.00        .12      .55
pcp               .62     .67      .49           .65            .12       1.00      .72
finaltotal        .78     .84      .78           .92            .55        .72     1.00

All p < .001 except alumni-publications (.003), awards-publications (.044), and pcp-publications (.083).


Thursday, August 12, 2021

THE's Caucus Ranking


In Alice in Wonderland there is a "caucus race" in which everyone runs around frantically in different directions and eventually everyone wins a prize. Unfortunately, there are not quite enough sweets to go around as prizes and so poor Alice has to make do with her thimble which she gives to the Dodo who then presents it to her.

It seems that THE has come up with a caucus ranking. In the THE Impact Rankings universities expend a lot of energy, do a lot of amazing, astounding and very different things in very different ways and a lot of them get some sort of prize for something. 

These rankings are another example of the growing complexity of the ranking scene. Global university rankings used to be simple. Shanghai Jiao Tong University started its ARWU rankings in 2003 with just 500 ranked institutions and six indicators. Since then the number of rankings has proliferated and there have been more and more spin-offs (young university, regional, business school, national, and subject rankings, and so on) with more indicators and increasingly complex and often opaque methodologies. We are getting to the point where a university is incompetent or excessively honest if it cannot find a ranking indicator, perhaps finely sliced by age, size, mission, and/or subject, in which it can scrape into the top five hundred, or at the very least the top thousand, and therefore into the top 3 or 4% in the world.

Some of the recent rankings seem redundant or pointless, going over the same old ground or making granular distinctions that are of little interest. It is no doubt nice to be acclaimed as the best young university for social science in South Asia, and maybe that can be used in advertising, but is it really necessary?

Now we have the third edition of the THE Impact rankings. These, as THE boasts, are the only rankings to measure universities according to their commitment to the UN's Sustainable Development Goals. But that is not very original. Universitas Indonesia's GreenMetric was doing something similar several years ago although not tied explicitly to the UN goals. They have indicators related to energy, infrastructure, climate change, water, waste, transportation, and education.

It seems a little odd that the UN should be accepted as the authority on the achievement of gender equality when its "peacekeeping" forces have repeatedly been accused of rape and sexual assault. Is the UN really the right body to lay down guidelines about health and well-being considering the dubious performance of the WHO during the pandemic crisis?

One also wonders why THE should venture into ranking contributions to sustainability when after a decade it has still failed to come up with a credible citations indicator, which would seem a much easier task. 

It is noticeable that participation in these rankings is very uneven. There are 1,118 universities in the latest edition but only 13 Chinese and only 45 American, of which precisely two are in California, supposedly the homeland of environmental consciousness. The higher education elite of the USA, UK and China are largely absent. On the other hand, Iraq, Egypt, Brazil and Iran are much better represented here than in the research based rankings.

The top of these rankings is dominated by English-speaking universities outside the USA. The overall top twenty contains seven Australian, five British, three Canadian, and one each from Denmark, Ireland, the USA, New Zealand and Italy.

The popularity of the Impact Rankings seems linked to the current problems of many western universities. Public funding has been drying up, academic standards eroding, research output stagnating. Many universities have resorted to importing international, often Chinese, students and faculty to keep up standards, bring in tuition money, fill up postgraduate classes, and do the work of junior researchers.

The international students and researchers have left or are leaving and may not return in significant numbers, although THE "believes" that they will. This is happening as universities trying to reopen face the prospect of unprepared students, dwindling funds, and a lack of interest from employers. Eventually this will impact the position of universities in the global ranking systems. Those universities once dependent on international researchers for their reputation and ranking scores will start to suffer.

It looks as though western universities are losing interest in research and instruction in professional and academic subjects and are reinventing themselves as purveyors of transformative experiences to the children of the affluent and ambitious, guardians of the purity of cultural discourse, or saviours of the planet.

The Financial Post of Canada has published a caustic comment on the joyful proclamations by Queen's University about its ascent to fifth place in the Impact Rankings. A trustee, John Stackhouse, has claimed that its success there meant that it was fulfilling "the true purpose of a university." The article observes that those "who believe the true purpose of a university is to pursue academic excellence and ensure that students who pass through its doors have the skills to build prosperous lives for themselves as productive members of their community, might differ."  In the THE World University Rankings and others Queen's is doing much less well. 

The methodology of the impact rankings does little to inspire confidence. For each of the indicators there is a weighting of 27% for bibliometric measures, such as the amount of research on hunger, health, water, or clean energy. It is easy to see how this could be gamed. Then there is a variety of data submitted by the institutions. Even if every university administrator is a sea-green incorruptible there are many ways in which such data can be massaged or stretched.

Added to that, THE does not appear to be doing rigorous validation. Universities are not assessed on the same things, except for the partnership for the goals indicator. The University of Sydney, overall second this year, is ranked for clean water and sanitation, sustainable cities and communities, and life on land. Clean water and sanitation includes supporting water conservation off campus and the reuse of water across the university.

RMIT University, in third place, is ranked for decent work and economic growth, industry innovation and infrastructure, and reduced inequalities. Decent work and economic growth includes expenditure per employee and policies for ending discrimination. So, essentially, THE is trying to figure out whether Sydney is better at reusing water than RMIT is at announcing policies that are supposed to reduce discrimination. Comparing research output and impact across disciplines is, as THE ought to know, far from easy. Comparing performance in using water with discrimination policy would seem close to impossible, especially since THE does not always use objective criteria but merely examples of best practices. Evidence "is evaluated against a set of criteria and decisions are cross-validated where there is uncertainty. Evidence is not required to be exhaustive -- we are looking for examples that demonstrate best practice at the institutions concerned."

But it seems that a substantial number of universities will find these rankings a useful tool in their quest for income and publicity, and there will be more editions, and probably sub-rankings of one sort or another, for years to come.




Sunday, June 13, 2021

The Remarkable Revival of Oxford and Cambridge


There is nearly always a theme for the publication of global rankings. Often it is the rise of Asia, or parts of it. For a while it was the malign grasp of Brexit which was crushing the life out of British research or the resilience of American science in the face of the frenzied hostility of the great orange beast. This year it seems that the latest QS world rankings are about the triumph of Oxford and other elite UK institutions and their leapfrogging their US rivals. Around the world, quite a few other places are also showcasing their splendid achievements.

In the recent QS rankings Oxford has moved up from overall fifth to second place and Cambridge from seventh to third while University College London, Imperial College London, and Edinburgh have also advanced. No doubt we will soon hear that this is because of transformative leadership, the strength that diversity brings, working together as a team or a family, although I doubt whether any actual teachers or researchers will get a bonus or a promotion for their contributions to these achievements.

But was it leadership or team spirit that pushed Oxford and Cambridge into the top five? That is very improbable. Whenever there is a big fuss about universities rising or falling significantly in the rankings in a single year it is a safe bet that it is the result of an error, the correction of an error, or a methodological flaw or tweak of some kind.

Anyway, this year's Oxbridge advances had as much to do with leadership, internationalization, or reputation as goodness had to do with Mae West's diamonds. They were entirely due to a remarkable rise for both places in the score for citations per faculty: Oxford from 81.3 to 96, and Cambridge from 69.2 to 92.1. There was no such change for any of the other indicators.

Normally, there are three ways in which a university can rise in QS's citations indicator. One is to increase the number of publications while maintaining the citation rate. Another is to improve the citation rate while keeping output constant. The third is to reduce the number of faculty physically or statistically.

None of these seem to have happened at Oxford and Cambridge. The number of publications and citations has been increasing but not sufficiently to cause such a big jump. Nor does there appear to have been a drastic reduction of faculty in either place.
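Since the indicator is, at root, total citations divided by staff numbers, each of the three routes can be shown with invented figures. All the numbers below are hypothetical, purely to illustrate the arithmetic.

```python
def citations_per_faculty(papers, cites_per_paper, faculty):
    # QS's citations indicator is essentially total citations
    # divided by the number of faculty.
    return papers * cites_per_paper / faculty

# Hypothetical baseline university.
base = citations_per_faculty(10_000, 10, 2_000)          # 50.0

# Route 1: more papers at the same citation rate.
more_output = citations_per_faculty(12_000, 10, 2_000)   # 60.0

# Route 2: better-cited papers at constant output.
better_rate = citations_per_faculty(10_000, 12, 2_000)   # 60.0

# Route 3: same papers and citations, fewer reported staff.
fewer_staff = citations_per_faculty(10_000, 10, 1_667)   # ~60.0
```

Any of the three moves produces the same headline improvement, which is why a jump in this indicator on its own says nothing about which lever, if any, was pulled.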

In any case, it seems that Oxbridge is not alone in its remarkable progress this year. For citations, ETH Zurich rose from 96.4 to 99.8, the University of Melbourne from 75 to 89.7, the National University of Singapore from 72.9 to 90.6, and Michigan from 58 to 70.5. At the top levels of these rankings nearly everybody is rising except MIT, which already has the top score of 100, and it is noticeable that the nearer a university is to the top, the smaller the increase.

It is theoretically possible that this might be the result of a collapse in the raw citations score of front runner MIT, which would raise everybody else's scores if it still remained at the top, but there is no evidence of either a massive collapse in citations or a massive expansion of research and teaching staff.

But at the other end of the ranking we find universities' citations scores falling: University College Cork from 23.4 to 21.8, Universitas Gadjah Mada from 1.7 to 1.5, UCSI University Malaysia from 4.4 to 3.6, and the American University in Cairo from 5.7 to 4.2.

It seems there is a bug in the QS methodology. The indicator scores published by QS are not raw data but standardized scores based on standard deviations from the mean. The mean score is set at fifty and the top score at one hundred. Over the last few years the number of ranked universities has been increasing, and the new ones tend to perform less well than the established ones, especially for citations. In consequence, the mean number of citations per faculty has declined, and therefore universities scoring above the mean will see an increase in their standardized scores, which are derived from the standard deviation from the mean. If this interpretation is incorrect I'm very willing to be corrected.
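The suspected mechanism can be demonstrated with toy numbers. The scaling below (cohort mean mapped to 50, top performer to 100) is an assumption about QS's procedure, not its published formula, but it shows the effect: adding weak newcomers drags the mean down and inflates the displayed score of an unchanged above-average university.

```python
import statistics


def scaled_scores(raw):
    # Standardize to z-scores, then rescale so that the cohort
    # mean maps to 50 and the top performer maps to 100.
    # (Assumed scaling, not QS's published formula.)
    mean = statistics.mean(raw)
    sd = statistics.pstdev(raw)
    z = [(x - mean) / sd for x in raw]
    top = max(z)
    return [50 + 50 * zi / top for zi in z]


# A small cohort; watch the university with raw score 80.
old_cohort = [100, 80, 60, 40, 20]
new_cohort = old_cohort + [5, 5, 5, 5, 5]   # weak newcomers join

before = scaled_scores(old_cohort)[1]   # 75.0
after = scaled_scores(new_cohort)[1]    # roughly 85
```

The university with a raw score of 80 has done nothing at all, yet its displayed score jumps by about ten points simply because the mean fell.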

This has an impact on the relative positions of Oxbridge and the leading American universities. Oxford and Cambridge rely on their scores in the academic and employer surveys and the international faculty and international student indicators to stay in the top ten. Compared to Harvard, Stanford, and MIT they do not perform well for quantity or quality of research. So the general inflation of citations scores gives them more of a boost than the US leaders, and their total scores rise.

It is likely that Oxford and Cambridge's moment of glory will be brief, since QS will have to do some recentering in the next couple of years to prevent citation indicator scores bunching up in the high nineties. The two universities will fall again, although that will probably not be attributed to a sudden collapse of leadership or a failure to work as a team.

It will be interesting to see if any of this year's rising universities will make an announcement that they don't really deserve any praise for their illusory success in the rankings.



Sunday, June 06, 2021

The Decline of American Research



Looking through the data of the latest SCImago Journal and Country Rank is a sobering experience. If you just look at the aggregate data for the quarter century from 1996 to 2020, there is nothing very surprising. But an examination of the data year by year shows a clear and frightening picture of scientific decline in the United States.

Over the whole of the quarter century the United States has produced 11,986,435 citable documents, followed by China with 7,229,532, and the UK with 3,347,117.

Quantity, of course, is not everything when comparing research capability. We also need to look at quality. SCImago also supplies data on citations, which are admittedly an imperfect measure of quality: they can easily be gamed with self-citations, mutual citations, and so on, and they often measure what is fashionable rather than what contributes to public welfare or advances fundamental science. At the moment, however, if collected and analysed carefully and competently, they are perhaps the least unreliable and subjective metric available for describing the quality of research.

Looking at the number of citations per paper, we have to set a threshold; otherwise the top country for quality would be Anguilla, followed by the Federated States of Micronesia and Tokelau. So a threshold of 50,000 papers over the 25 years was applied.
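The threshold step is simple to express. In the sketch below, all figures except the US document count quoted later are invented placeholders; the point is only the filtering logic.

```python
# (citable documents 1996-2020, citations per document)
# All figures except the US document count are hypothetical.
countries = {
    "Anguilla": (150, 60.0),
    "Switzerland": (600_000, 33.0),
    "Netherlands": (900_000, 31.0),
    "USA": (11_986_435, 28.0),
}

THRESHOLD = 50_000  # minimum papers to qualify for the quality ranking


def quality_ranking(countries, threshold=THRESHOLD):
    # Drop countries below the output threshold, then rank the
    # survivors by citations per document, best first.
    eligible = {name: cpd for name, (docs, cpd) in countries.items()
                if docs >= threshold}
    return sorted(eligible, key=eligible.get, reverse=True)
```

Tiny territories with a handful of highly cited papers drop out, so the quality ranking is led by countries with substantial output.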

Top place for citations per paper in 1996-2020 goes to Switzerland, followed by the Netherlands, Denmark, and Sweden, with the USA in fifth place and the UK in tenth. Apart from the US, it looks like the leaders in research are Cold Europe, the countries around the North and Baltic seas, plus Switzerland.

China is well down the list in 51st place.

Things look very different when we look at the data year by year. In 1996 the USA was well ahead of everybody for the output of citable documents with 350,258, followed by Japan with 89,430. Then came the UK, Germany, France, and Russia. China was a long way behind in ninth place with 30,856 documents.

Fast forward to 2020. Looking at output, China has now overtaken the US for citable documents and India has overtaken Germany and Japan. That is something that has been anticipated for quite some time.

That's quantity. Bear in mind that China and India have bigger populations than the USA so the output per capita is a lot less. For the moment.

Looking at citations per paper, the papers the United States published in 1996 had collected 42.14 citations each, or about 1.62 per year over the quarter century. By 2020 this had shrunk to 1.22 per year.

It is possible that the annual score will pick up in later years as the US recovers from the lockdown and the virus before its competitors. It is equally possible that those papers may get fewer citations as time passes.

But Switzerland, which was slightly ahead of the US in 1996, is in 2020 well in front, having improved its annual count of citations per paper from 1.65 to 1.68. Then there is a cluster of other countries that have overtaken the US -- Australia, the Netherlands, the UK.

And now China has also overtaken the US for quality with 1.23 citations per paper per year.

It looks as though the current pandemic will just accelerate what was happening anyway. Unless something drastic happens we can look forward to China steadily attaining and maintaining hegemony over the natural sciences.


Wednesday, March 31, 2021

Energetic and Proactive Leadership or a Defective Indicator?

It is now normal for university administrators to use global rankings to market their institutions, reward themselves and, all too often, justify demands for more public funding. Unfortunately, some ranking agencies are more than happy to go along with this.

One example of the misuse of rankings is from July of last year, when the Vice-Chancellor of the University of the West Indies, Sir Hilary Beckles, proclaimed a triple first in the 2020 Times Higher Education (THE) rankings.

The original target was to be in the top 3% of ranked universities by the end of the current strategic planning cycle. The Vice-Chancellor was very happy that, as a result of the work of "an energetic and proactive leadership team of campus principals and others", the university was in the top 1% of the THE Latin America and Caribbean rankings and the top 1% of golden age universities, and was the only ranked university in the Caribbean.

Another article has just appeared lauding the achievements of the university. The Vice-Chancellor has asserted that "(t)he Times Higher Education informed us that what we have achieved is quite spectacular. That many universities had taken 30 years to achieve what we have achieved in a mere three years."

If this is an accurate report of what THE said then the magazine is being very irresponsible. Universities are complex structures, and cannot be turned around with just a few million dollars or a few dozen highly cited researchers. Without drastic restructuring or amalgamation, such a remarkable change is almost invariably the result of methodological changes or methodological defects. The latter is the case here.

UWI's performance might appear quite impressive, but it seems that we have another case of THE seeing things that nobody else can. THE is not the only global university ranking. In fact, it is in some important respects not a good one. Let's take a look at some other rankings. UWI is not ranked in the Leiden Ranking, the Shanghai Academic Ranking of World Universities, the US News Best Global Universities, or the QS World University Rankings.

It does make an appearance in the University Ranking by Academic Performance (URAP) published by Middle East Technical University, where it is ranked 1622nd, and it is 1938th in the Center for World University Rankings.

But in the THE World University Rankings the university is in the top 600 and it is 18th in Latin America and the Caribbean. This is because of a single defective indicator.

UWI has an apparently healthy 81.1 in the THE world rankings citations indicator, supposedly a measure of research influence or impact, which at first sight seems odd since it has a very low score, 10.4, for research. In the Latin American rankings, where competition is less severe, that becomes 93.6 for citations and 75.8 for research. How could a university do so brilliantly for research influence when it has a poor reputation for research, doesn't publish many papers, and has little research income?

What has happened is that UWI has benefitted from THE's peculiar citations indicator, which does not use fractional counting for papers with fewer than a hundred authors, plus hyper-normalisation, plus a country bonus for being located in a low-performing country, resulting in a few multi-author, multi-cited papers pushing universities into positions that they could never achieve anywhere else.
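The effect of skipping fractional counting is easy to see with a toy portfolio. Suppose a university with a handful of lightly cited papers also has its name on one 500-author paper with 3,000 citations. All the numbers are invented, and this sketch leaves out THE's field normalisation and country bonus; it only isolates the counting choice.

```python
# Each paper: (citations, number of contributing institutions).
papers = [(2, 1), (1, 1), (0, 1), (3000, 500)]


def citations_per_paper(papers, fractional=False):
    # Under full counting every institution on a paper gets all of
    # its citations; under fractional counting the credit is
    # divided among the contributors.
    cite_credit = paper_credit = 0.0
    for cites, n_inst in papers:
        share = 1.0 / n_inst if fractional else 1.0
        cite_credit += cites * share
        paper_credit += share
    return cite_credit / paper_credit


full = citations_per_paper(papers)                   # mega-paper dominates
frac = citations_per_paper(papers, fractional=True)  # a modest record
```

Under full counting the portfolio looks spectacular (hundreds of citations per paper); under fractional counting it looks like what it is, a small output with one shared mega-paper.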

UWI has recently contributed to a few papers with a large number of contributors and many citations in genetics and medicine in top journals including Lancet, Science, Nature, and the International Journal of Surgery. Five of these papers are linked with the Global Burden of Disease Study.

UWI is to be congratulated for having a number of affiliated scientists who are taking part in cutting-edge projects. Nonetheless, these have an impact here only because of the technical defects of the THE rankings. In the last few years a succession of unlikely places -- Anglia Ruskin, Peradeniya, Reykjavik, Aswan, Brighton and Sussex Medical School, Durban University of Technology -- have risen to the top of the citations charts in the THE rankings.

They have done so nowhere else. The exalted status of UWI in the THE WUR and its various offshoots is due to an eccentric methodology. If THE reforms that methodology, something that they have been talking about for a long time, or abandons the world rankings, the university will once again be consigned to the outer darkness of the unranked.



Sunday, March 14, 2021

The sensational fall of Panjab University and the sensational coming rise*

*If present methodology continues

The Hindustan Times has a report on the fall of Panjab University (PU) in the latest THE Emerging Economies Rankings. PU has fallen from 150th in 2019 to 201-250 this year. As expected, rivals and the media have launched a chorus of jeers against the maladministration of a once thriving institution.

But that isn't the half of it. In 2014, the first year of these rankings, PU was ranked 14th in the emerging world and first in India, ahead even of the highly regarded IITs. Since then the decline has been unrelenting: PU was 39th in 2015, 130th in 2018, and 166th in 2020.

Since 2014 PU's scores for Research, Teaching and Industry Income have risen. That for International Outlook has fallen but that did not have much of an effect since it accounts for only 10 per cent of the total weighting.

What really hurt PU was the Citations indicator. In 2014 PU received a score of 84.7 for citations largely because of its participation in the mega-papers radiating from the Large Hadron Collider project, which have thousands of contributors and thousands of citations. But then THE stopped counting such papers and consequently in 2016 PU's citation score fell to 41 and its overall place to 121st. It has been downhill nearly every year since then.

The Director of PU's Internal Quality Assurance Cell said that the university has improved for research and teaching but needed to do better for international visibility and that proactive steps had been taken. The former Vice-Chancellor said that other universities were improving but PU was stagnating and was not even trying to overcome its weaknesses.

In fact, PU's decline had little or nothing to do with international visibility or the policy failures of the administration, just as its remarkable success in 2014 had nothing to do with working as a team, the strength brought by diversity, dynamic transformational leadership, or anything else plucked from the bureaucrats' big bag of clichés.

PU bet the farm on citations of particle physics mega-papers and for a couple of years that worked well. But then THE changed the rules of the game and stopped counting them, although after another year such papers were given a reduced amount of credit. PU, along with the Middle East Technical University and some other Turkish, Korean and French institutions that had overinvested in the CERN projects, tumbled down the THE rankings.

But PU may be about to make a comeback. Readers of this blog will probably guess what is coming next.

When THE stopped favouring papers with thousands of contributors they abolished the physics privilege that elevated places like Tokyo Metropolitan University, Federico Santa Maria Technical University, and Colorado School of Mines to the top of the research impact charts and replaced it with medical research privilege.

In 2018 PU started to publish papers that were part of the Gates-backed Global Burden of Disease Study (GBDS). Those papers have hundreds, but not thousands, of authors and so they are able to slip through the mega-paper filter. The GBDS has helped Anglia Ruskin University, Brighton and Sussex Medical School, Aswan University, and Kurdistan University of Medical Sciences rise to the top of the research impact metric.
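The filter described above can be pictured with a toy author-count cut-off. The threshold and all the paper data below are illustrative assumptions for the sketch, not THE's actual rules or figures:

```python
# Sketch of an author-count filter of the kind described in the post.
# Papers above the threshold are excluded from citation credit entirely;
# GBDS-style papers with "hundreds, but not thousands" of authors pass.
MEGA_PAPER_THRESHOLD = 1000  # assumed cut-off, for illustration only

papers = [
    {"title": "LHC collaboration paper", "authors": 2900, "citations": 1200},
    {"title": "Global Burden of Disease paper", "authors": 700, "citations": 900},
    {"title": "Ordinary paper", "authors": 5, "citations": 12},
]

# Keep only papers at or below the author threshold.
counted = [p for p in papers if p["authors"] <= MEGA_PAPER_THRESHOLD]

# Citation credit after filtering: the mega-paper contributes nothing.
citation_credit = sum(p["citations"] for p in counted)
```

On these made-up numbers, excluding the one mega-paper wipes out most of the citation credit, which is the shape of the shock PU experienced in 2016.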

So far there have not been enough citations to make a difference for PU. But as those citations start to accumulate, provided of course that their impact is not diluted by too many other publications, PU will begin to rise again.

Assuming, of course, that THE's methodology does not change.

Thursday, January 07, 2021

An Indisputable Ranking Scorecard? Not Really.

The University of New South Wales (UNSW) has produced an aggregate ranking of global universities, known as ARTU. This is based on the "Big Three" rankers, QS, Times Higher Education (THE) and the Shanghai ARWU. The scores that are given are not an average but an aggregate of the three ranks, which is then inverted. Not surprisingly, Australian universities do well and the University of Melbourne is the best in Australia.

Nicholas Fisk, Deputy Vice Chancellor of Research, hopes that this ranking will become "the international scoreboard, like the ATP tennis rankings" and "the indisputable scoreboard for where people fit in on the academic rankings."

This is not a new idea. I had a go at producing an aggregate ranking a few years ago, called the Global Ranking of Academic Performance, or GRAPE. It was originally going to be the Comparative Ranking of Academic Performance: maybe I was right the first time. It was justifiably criticised by Ben Sowter of QS. I think, though, that it was right to note that some of the rankings of the time underrated the top Japanese universities and overrated British and Australian schools.

The ARTU is another example of the emergence of a cartel, or near cartel, of the three global rankings that are apparently considered the only ones worthy of attention by academic administrators and the official media.

There are in fact a lot more, and these three are not even the best three rankings, far from it. A pilot study, Rating the Rankers, conducted by the International Network of Research Management Societies (INORMS), has found that on four significant dimensions, transparency, governance, measuring what matters, and rigour, the performance of six well known rankings is variable and that of the big three is generally unimpressive. That of THE is especially deficient.

Seriously, should we consider as indisputable a ranking that includes indicators that proclaim Anglia Ruskin University a world leader for research impact and Anadolu University tops for innovation, another that counts long-dead winners of Nobel Prizes and Fields Medals, and another that gives disproportionate weight to a survey with more respondents from Australia than from China?

There does seem to be a new mood of ranking skepticism emerging in many parts of the international research community. Rating the Rankers has been announced in an article in Nature. The critical analysis of rankings will, I hope, do more to create fair and valid systems of comparative assessment than simply adding up a bunch of flawed and opaque indicators.

Tuesday, November 17, 2020

Indian University Performance to be Judged by Rankings

I have commented on Indian responses to the rankings before and many times on problems with the better known rankings so I apologize for repeating myself. 


The influence of global university rankings continues to expand. There seem to be few areas of higher education or research where they are not consulted or used for appointments, promotion, admissions, project evaluation, grant approval, assessment, publicity and so on.

The latest example is from India. The Indian Express reports that the Minister of Education has announced that the progress of "institutions of eminence" (IoEs) will be charted using the "renowned" QS and THE rankings. Apparently, "an incentive mechanism will be developed for those institutes which are performing well." That is definitely not a good idea: it will reward behaviour that leads to improved ranking performance, not to improved output or quality.

Recently, some of the leading Indian Institutes of Technology (IITs), four of which are on the list of IoEs, announced that they would be boycotting the THE rankings.  I am not sure whether this means that there is now a split within the higher education sector in India or whether the IITs are rethinking their opposition to the rankings.

There is nothing wrong with evaluating and comparing universities, research centers, researchers, or departments. Indeed it would seem very helpful if a country is going to maintain an effective higher education system.  But it is questionable whether these rankings are the best way or even a good way of doing it. Research might be evaluated by panels of peer researchers, provided these are unbiased and fair, by international experts, surveys, or by bibliometric and scientometric indicators. The quality of teaching and learning is more problematic but national rankings around the world have used several measures that, although not very satisfactory, might provide a rough and imperfect assessment.

There is now a broad range of international rankings covering publications, citations, innovation, web presence, and other metrics. The IREG Inventory of International Rankings identified 17 global rankings in addition to regional and specialist ones and there are now more. If the Indian government wanted to use a ranking to measure research output and quality then it would probably be better to refer to the Leiden Ranking produced by the CWTS at Leiden University, or other straightforward research-based rankings such as URAP, published by the Middle East Technical University in Ankara,  the Shanghai Rankings, or the National Taiwan University Rankings, which have a generally stable and transparent methodology. Another possibility is the Scimago Institution Rankings which include indicators measuring web activity and altmetrics. Round University Rankings uses several metrics that might have a relationship to teaching quality.

It is, however, debatable whether the THE rankings are useful for evaluating Indian universities. There are several serious problems, which I have been talking about since 2011. I will discuss just three of them.  

The THE world rankings lack transparency. Eleven of its 13 indicators are bundled into three super-indicators, so that it is impossible to figure out exactly what is doing what. If a university, for example, gets an improved score for Teaching: The Learning Environment, this could be because of an improved score for teaching reputation, an increase in the number of staff or a reduction in the number of students (both improve the staff-student ratio), an increase in the number of doctorates awarded, a reduction in the number of bachelor degrees, a decrease in the number of academic staff (which improves doctorates awarded per staff member), an increase in institutional income, or a combination of two or more of these.
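The opacity complaint can be made concrete with a toy weighted pillar score. The component weights below follow THE's published Teaching pillar breakdown as I understand it, but should be read as illustrative assumptions, as should all the component scores:

```python
# Sketch: why a bundled pillar score hides the underlying movements.
# Weights are assumed (approximate THE Teaching pillar shares out of 30).
weights = {
    "teaching_reputation": 15.0,
    "staff_student_ratio": 4.5,
    "doctorate_bachelor_ratio": 2.25,
    "doctorates_per_staff": 6.0,
    "institutional_income": 2.25,
}

def teaching_pillar(scores):
    """Weighted average of component scores (each on a 0-100 scale)."""
    total_weight = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total_weight

profile_a = {
    "teaching_reputation": 70,
    "staff_student_ratio": 60,
    "doctorate_bachelor_ratio": 50,
    "doctorates_per_staff": 40,
    "institutional_income": 55,
}

# Same pillar score, very different profile: +3 reputation points
# (weight 15) exactly offsets -10 on staff-student ratio (weight 4.5),
# since 3 * 15 == 10 * 4.5. An outside observer sees only the total.
profile_b = dict(profile_a, teaching_reputation=73, staff_student_ratio=50)
```

Two institutions, or one institution in two successive years, can report identical pillar scores for entirely different reasons, which is exactly why the published super-indicator tells students so little.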

THE does make disaggregated data available to universities but that is of little use for students or other stakeholders.

Another problem is the face validity of the two stand-alone indicators. Take a look at the citations indicator results for 2020-2021, which are supposed to measure research impact or research quality. These are the top universities for 2020-2021: Anglia Ruskin University, Babol Noshirvani University of Technology, Brighton and Sussex Medical School, Cankaya University, the Indian Institute of Technology Ropar, and Kurdistan University of Medical Sciences.

Similarly with the industry income indicator, which is presented as a measure of innovation. At the top of this indicator are Anadolu University, Asia University Taiwan, University of Freiburg, Istanbul Technical University, Khalifa University, LMU Munich, the Korea Advanced Institute of Science and Technology, and Makerere University. The German and Korean universities seem plausible but one wonders about the others.

THE has not discovered a brilliant new method of finding diamonds in the rough. It is just using a flawed and eccentric methodology, one that it has repeatedly claimed it would reform but somehow has never quite got round to doing.

Third, the THE rankings include indicators that dramatically favour high-status western universities with money, prestige and large numbers of postgraduate students. There are three measures of income, two reputation surveys accounting for a third of the total weighting, and two measures counting doctoral students.

The QS rankings are somewhat better, but there are issues here as well. There is only one indicator for research, citations per faculty, and only one directly related to teaching quality, the employer reputation indicator, with a ten per cent weighting.

The QS rankings are heavily overweighted towards research reputation, which has a 40% weighting and is hugely biased towards certain countries: there are more respondents from Malaysia than from China and more from Kazakhstan than from India.

Using either of these rankings opens the way to attempts to manipulate the system. It is possible to get a high score in the THE rankings by recruiting somebody involved in the Global Burden of Disease Study. Doing well in the QS rankings might be influenced by signing up for a reputation management program.

It seems, however, that there is a new mood of scepticism about rankings in academia. One sign is the Rating the Rankings project by the Research Evaluation Working Group of the International Network of Research Management Societies. This is a rating, definitely not a ranking, of six rankings by an international team of expert reviewers who evaluated them according to four criteria: good governance, transparency, measuring what matters, and rigour.

The results are interesting. No ranking is perfect but it seems that the famous brands are more likely to fall short of the criteria.

The Indian government and others would be wise to take note of the analysis and criticism that is available before committing themselves to using rankings for the assessment of research or higher education.


Saturday, July 25, 2020

How will the COVID-19 crisis affect the global rankings?


My article has just been published in University World News. 

You can read it there and comment here.