Sunday, June 13, 2021

The Remarkable Revival of Oxford and Cambridge

There is nearly always a theme for the publication of global rankings. Often it is the rise of Asia, or parts of it. For a while it was the malign grasp of Brexit which was crushing the life out of British research or the resilience of American science in the face of the frenzied hostility of the great orange beast. This year it seems that the latest QS world rankings are about the triumph of Oxford and other elite UK institutions and their leapfrogging their US rivals. Around the world, quite a few other places are also showcasing their splendid achievements.

In the recent QS rankings Oxford has moved up from overall fifth to second place and Cambridge from seventh to third while University College London, Imperial College London, and Edinburgh have also advanced. No doubt we will soon hear that this is because of transformative leadership, the strength that diversity brings, working together as a team or a family, although I doubt whether any actual teachers or researchers will get a bonus or a promotion for their contributions to these achievements.

But was it leadership or team spirit that pushed Oxford and Cambridge into the top five? That is very improbable. Whenever there is a big fuss about universities rising or falling significantly in the rankings in a single year it is a safe bet that it is the result of an error, the correction of an error, or a methodological flaw or tweak of some kind.

Anyway, this year's Oxbridge advances had as much to do with leadership, internationalization, or reputation as goodness had with Mae West's diamonds. It was entirely due to a remarkable rise for both places in the score for citations per faculty, Oxford from 81.3 to 96, and Cambridge from 69.2 to 92.1. There was no such change for any of the other indicators.

Normally, there are three ways in which a university can rise in QS's citations indicator. One is to increase the number of publications while maintaining the citation rate. Another is to improve the citation rate while keeping output constant. The third is to reduce the number of faculty physically or statistically.
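To make the arithmetic concrete, here is a toy sketch with entirely invented numbers (nothing here comes from QS data). The first two levers, more papers at a constant citation rate or better-cited papers at constant output, both raise the numerator; the third shrinks the denominator.

```python
# Toy arithmetic (all numbers invented) for the levers behind a
# citations-per-faculty indicator.

def citations_per_faculty(total_citations, faculty):
    return total_citations / faculty

baseline = citations_per_faculty(50_000, 2_000)        # 25.0
more_citations = citations_per_faculty(60_000, 2_000)  # 30.0, bigger numerator
fewer_staff = citations_per_faculty(50_000, 1_600)     # 31.25, smaller denominator

print(baseline, more_citations, fewer_staff)
```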

None of these seems to have happened at Oxford and Cambridge. The number of publications and citations has been increasing but not sufficiently to cause such a big jump. Nor does there appear to have been a drastic reduction of faculty in either place.

In any case it seems that Oxbridge is not alone in its remarkable progress this year. For citations, ETH Zurich rose from 96.4 to 99.8, the University of Melbourne from 75 to 89.7, the National University of Singapore from 72.9 to 90.6, and Michigan from 58 to 70.5. At the top levels of these rankings nearly everybody is rising except MIT, which has the top score of 100, although it is noticeable that the increases get smaller as we get near the top.

It is theoretically possible that this might be the result of a collapse of the raw scores of citations front runner MIT which would raise everybody else's scores if it still remained at the top but there is no evidence of either a massive collapse in citations or a massive expansion of research and teaching staff.

But then as we go to the other end of the ranking we find universities' citations scores falling: University College Cork from 23.4 to 21.8, Universitas Gadjah Mada from 1.7 to 1.5, UCSI University Malaysia from 4.4 to 3.6, and the American University in Cairo from 5.7 to 4.2.

It seems there is a bug in the QS methodology. The indicator scores published by QS are not raw data but standardized scores based on standard deviations from the mean. The mean score is set at fifty and the top score at one hundred. Over the last few years the number of ranked universities has been increasing, and the new ones tend to perform less well than the established ones, especially for citations. In consequence, the mean number of citations per faculty has declined, and universities scoring above the mean therefore see their standardized scores, which are derived from the standard deviation from the mean, increase. If this interpretation is incorrect I'm very willing to be corrected.
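If that interpretation is right, the effect can be simulated in a few lines. The sketch below uses plain z-scores rescaled around a mean of 50; QS's actual scaling, which also pins the top score at 100, is more involved, and all the raw values here are invented.

```python
# Rough simulation of the scaling effect: adding new, low-scoring
# universities drags the mean down, so an unchanged raw value above
# the mean translates into a higher standardized score.

from statistics import mean, pstdev

def standardize(raw_values):
    m, sd = mean(raw_values), pstdev(raw_values)
    return [50 + 10 * (v - m) / sd for v in raw_values]

old_cohort = [10, 20, 30, 40, 80]        # raw citations per faculty (invented)
new_cohort = old_cohort + [2, 3, 4, 5]   # newly ranked, weaker entrants

incumbent_raw = 80                       # unchanged raw performance
old_score = standardize(old_cohort)[old_cohort.index(incumbent_raw)]
new_score = standardize(new_cohort)[new_cohort.index(incumbent_raw)]
print(round(old_score, 1), round(new_score, 1))  # the second is higher
```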

This has an impact on the relative positions of Oxbridge and the leading American universities. Oxford and Cambridge rely on their scores in the academic and employer surveys and on international faculty and students to stay in the top ten. Compared to Harvard, Stanford, and MIT they do not perform well for quantity or quality of research. So the general inflation of citations scores gives them more of a boost than the US leaders, and so their total scores rise.

It is likely that Oxford and Cambridge's moment of glory will be brief, since QS will have to do some recentering in the next couple of years to prevent citation indicator scores from bunching up in the high nineties. The two universities will then fall again, although that will probably not be attributed to a sudden collapse of leadership or a failure to work as a team.

It will be interesting to see if any of this year's rising universities will make an announcement that they don't really deserve any praise for their illusory success in the rankings.



Sunday, June 06, 2021

The Decline of American Research



Looking through the data of the latest SCImago Journal and Country Rank is a sobering experience. If you just look at the data for the quarter century from 1996 to 2020 then there is nothing very surprising. But an examination of the data year by year shows a clear and frightening picture of scientific decline in the United States.

Over the whole of the quarter century the United States has produced 11,986,435 citable documents, followed by China with 7,229,532, and the UK with 3,347,117.

Quantity, of course, is not everything when comparing research capability. We also need to look at quality. SJCR also supplies data on citations which are admittedly an imperfect assessment of quality: they can be easily gamed with self-citations, mutual citations, and so on. They often measure what is fashionable, not that which contributes to public welfare or advances in fundamental science. They are at the moment, however, if collected and analysed carefully and competently, perhaps the least unreliable and subjective metric available for describing the quality of research.

Looking at the number of citations per paper, we have to set a threshold; otherwise the top country for quality would be Anguilla, followed by the Federated States of Micronesia and Tokelau. So we wrote in a 50,000-paper threshold over the 25 years.
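The filtering step itself is straightforward. Here is a sketch with invented figures (the real SJCR tables are of course far larger): drop countries below the 50,000-document threshold before ranking by citations per document, so that tiny outputs cannot dominate.

```python
# Threshold-then-rank sketch. All figures below are invented for
# illustration, not taken from the SJCR data.

countries = {
    # country: (citable_documents, citations)
    "Anguilla": (40, 1_200),          # tiny output, huge per-paper rate
    "Switzerland": (600_000, 18_000_000),
    "USA": (11_986_435, 300_000_000),
}

THRESHOLD = 50_000
eligible = {c: v for c, v in countries.items() if v[0] >= THRESHOLD}
ranked = sorted(eligible,
                key=lambda c: eligible[c][1] / eligible[c][0],
                reverse=True)
print(ranked)   # Anguilla is excluded despite 30 citations per document
```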

Top place for citations in 1996-2020 goes to Switzerland, followed by the Netherlands, Denmark, and Sweden, with the USA in fifth place and the UK in tenth. Apart from the US, it looks like the leaders in research are Cold Europe: the countries around the North and Baltic seas plus Switzerland.

China is well down the list in 51st place.

Things look very different when we look at the data year by year. In 1996 the USA was well ahead of everybody for the output of citable documents with 350,258, followed by Japan with 89,430. Then came the UK, Germany, France, and Russia. China was a long way behind in ninth place with 30,856 documents.

Fast forward to 2020. Looking at output, China has now overtaken the US for citable documents and India has overtaken Germany and Japan. That is something that has been anticipated for quite some time.

That's quantity. Bear in mind that China and India have bigger populations than the USA so the output per capita is a lot less. For the moment.

Looking at citations per paper published in 1996, the United States had 42.14 citations per paper, or about 1.62 per year, over the quarter century. By 2020 this had shrunk to 1.22 per year.
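For what it's worth, the 1.62 figure looks like total citations per paper divided by the number of years the 1996 papers have had to accumulate them. Assuming a 26-year window (1996 through 2021) reproduces it, though the exact window is my assumption rather than anything stated by SCImago.

```python
# Per-year citation rate: total citations per paper divided by the
# assumed accumulation window (26 years is a guess that fits the
# quoted 1.62 for US papers published in 1996).

def citations_per_paper_per_year(cites_per_paper, years_elapsed):
    return cites_per_paper / years_elapsed

us_1996 = citations_per_paper_per_year(42.14, 26)
print(round(us_1996, 2))   # 1.62
```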

It is possible that the annual score will pick up in later years as the US recovers from the lockdown and the virus before its competitors. It is equally possible that those papers may get fewer citations as time passes.

But Switzerland, which was slightly ahead of the US in 1996, is in 2020 well in front, having improved its annual count of citations per paper from 1.65 to 1.68. Then there is a cluster of other countries that have overtaken the US: Australia, the Netherlands, the UK.

And now China has also overtaken the US for quality with 1.23 citations per paper per year.

It looks as though the current pandemic will just accelerate what was happening anyway. Unless something drastic happens we can look forward to China steadily attaining and maintaining hegemony over the natural sciences.


Wednesday, March 31, 2021

Energetic and Proactive Leadership or a Defective Indicator?

It is now normal for university administrators to use global rankings to market their institutions, reward themselves and, all too often, justify demands for more public funding. Unfortunately, some ranking agencies are more than happy to go along with this.

One example of the misuse of rankings is from July of last year when the Vice-Chancellor of the University of the West Indies, Sir Hilary Beckles, proclaimed a triple first in the 2020 Times Higher Education (THE) rankings.

The original target was to be in the top 3% of ranked universities by the end of the current strategic planning cycle. The Vice-Chancellor was very happy that, as a result of the work of "an energetic and proactive leadership team of campus principals and others", the university was in the top 1% of the THE Latin America and Caribbean rankings and the top 1% of golden age universities, and was the only ranked university in the Caribbean.

Another article has just appeared lauding the achievements of the university. The Vice-Chancellor has asserted that "(t)he Times Higher Education informed us that what we have achieved is quite spectacular. That many universities had taken 30 years to achieve what we have achieved in a mere three years."

If this is an accurate report of what THE said then the magazine is being very irresponsible. Universities are complex structures, and cannot be turned around with just a few million dollars or a few dozen highly cited researchers. Without drastic restructuring or amalgamation, such a remarkable change is almost invariably the result of methodological changes or methodological defects. The latter is the case here.

UWI's performance might appear quite impressive but it seems that we have another case of THE seeing things that nobody else can. THE is not the only global university ranking. In fact, it is in some important respects not a good one. Let's take a look at some other rankings. UWI is not ranked in the Leiden Ranking, the Shanghai Academic Ranking of World Universities, the US News Best Global Universities, or the QS World University Rankings.

It does make an appearance in the University Ranking by Academic Performance (URAP), published by Middle East Technical University, where it is ranked 1622nd, and it is 1938th in the Center for World University Rankings.

But in the THE World University Rankings the university is in the top 600 and it is 18th in Latin America and the Caribbean. This is because of a single defective indicator.

UWI has an apparently healthy 81.1 in the THE world rankings citations indicator, supposedly a measure of research influence or impact, which at first sight seems odd since it has a very low score, 10.4, for research. In the Latin American rankings, where competition is less severe, that becomes 93.6 for citations and 75.8 for research. How could a university do so brilliantly for research influence when it has a poor reputation for research, doesn't publish many papers, and has little research income?

What has happened is that UWI has benefitted from THE's peculiar citations indicator, which does not use fractional counting for papers with fewer than a thousand authors, plus hyper-normalisation, plus a country bonus for being located in a low-performing country, resulting in a few multi-author, multi-cited papers pushing universities into positions that they could never achieve anywhere else.
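A toy comparison shows why the counting method matters so much. Under full counting every contributing institution gets all of a paper's citations; under fractional counting the citations are divided among the contributors. The figures below are invented for illustration.

```python
# Full versus fractional counting for a single mega-paper. A 300-institution
# paper with 3,000 citations gives each institution 3,000 citations under
# full counting but only 10 under fractional counting.

def credited_citations(citations, n_institutions, fractional):
    return citations / n_institutions if fractional else citations

mega = credited_citations(3_000, 300, fractional=False)  # full credit: 3000
frac = credited_citations(3_000, 300, fractional=True)   # fractional: 10.0
print(mega, frac)
```

A small university contributing to a few such papers therefore gains an enormous citation count under full counting that would almost vanish under fractional counting.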

UWI has recently contributed to a few papers with a large number of contributors and many citations in genetics and medicine in top journals including Lancet, Science, Nature, and the International Journal of Surgery. Five of these papers are linked with the Global Burden of Disease Study.

UWI is to be congratulated for having a number of affiliated scientists who are taking part in cutting-edge projects. Nonetheless, it is true that these have an impact only because of the technical defects of the THE rankings. In the last few years a succession of unlikely places -- Anglia Ruskin, Peradeniya, Reykjavik, Aswan, Brighton and Sussex Medical School, Durban University of Technology -- have risen to the top of the citations charts in the THE rankings.

They have done so nowhere else. The exalted status of UWI in the THE WUR and its various offshoots is due to an eccentric methodology. If THE reforms that methodology, something that they have been talking about for a long time, or abandons the world rankings, the university will once again be consigned to the outer darkness of the unranked.



Sunday, March 14, 2021

The sensational fall of Panjab University and the sensational coming rise*

*If present methodology continues

The Hindustan Times has a report on the fall of Panjab University (PU) in the latest THE Emerging Economies Rankings. PU has fallen from 150th in 2019 to 201-250 this year. As expected, rivals and the media have launched a chorus of jeers against the maladministration of a once thriving institution.

But that isn't half of it. In 2014, the first year of these rankings, PU was ranked 14th in the emerging world and first in India ahead even of the highly regarded IITs. Since then the decline has been unrelenting. PU was 39th in 2015, 130th in 2018, 166th in 2020.

Since 2014 PU's scores for Research, Teaching and Industry Income have risen. That for International Outlook has fallen but that did not have much of an effect since it accounts for only 10 per cent of the total weighting.

What really hurt PU was the Citations indicator. In 2014 PU received a score of 84.7 for citations largely because of its participation in the mega-papers radiating from the Large Hadron Collider project, which have thousands of contributors and thousands of citations. But then THE stopped counting such papers and consequently in 2016 PU's citation score fell to 41 and its overall place to 121st. It has been downhill nearly every year since then.

The Director of PU's Internal Quality Assurance Cell said that the university has improved for research and teaching but needed to do better for international visibility and that proactive steps had been taken. The former Vice-Chancellor said that other universities were improving but PU was stagnating and was not even trying to overcome its weaknesses.

In fact, PU's decline had nothing or very little to do with international visibility or the policy failures of the administration, just as its remarkable success in 2014 had nothing to do with working as a team, the strength brought by diversity, dynamic transformational leadership, or anything else plucked from the bureaucrats' big bag of clichés.

PU bet the farm on citations of particle physics mega-papers and for a couple of years that worked well. But then THE changed the rules of the game and stopped counting them, although after another year such papers received a reduced amount of credit. PU, along with the Middle East Technical University and some other Turkish, Korean and French institutions that had overinvested in the CERN projects, tumbled down the THE rankings.

But PU may be about to make a comeback. Readers of this blog will probably guess what is coming next.

When THE stopped favouring papers with thousands of contributors, they abolished the physics privilege that elevated places like Tokyo Metropolitan University, Federico Santa Maria Technical University, and the Colorado School of Mines to the top of the research impact charts, and replaced it with medical research privilege.

In 2018 PU started to publish papers that were part of the Gates-backed Global Burden of Disease Study (GBDS). Those papers have hundreds, but not thousands, of authors and so they are able to slip through the mega paper filter. The GBDS has helped Anglia Ruskin University, Brighton and Sussex Medical School, Aswan University, and Kurdistan University of Medical Sciences rise to the top of the research impact metric.

So far there have not been enough citations to make a difference for PU. But as those citations start to accumulate, providing of course that their impact is not diluted by too many other publications, PU will begin to rise again.

Assuming of course that THE's methodology does not change.







Thursday, January 07, 2021

An Indisputable Ranking Scorecard? Not Really.

The University of New South Wales (UNSW) has produced an aggregate ranking of global universities, known as ARTU. This is based on the "Big Three" rankers: QS, Times Higher Education (THE), and the Shanghai ARWU. The scores given are not an average but an aggregate of ranks, which is then inverted. Not surprisingly, Australian universities do well and the University of Melbourne is the best in Australia.
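The aggregation described is trivial to sketch: sum each university's ranks across the three tables and order by the total, lowest first. The names and ranks below are invented, and this ignores how ARTU handles ties or universities missing from one table.

```python
# Minimal rank-aggregation sketch: lower total rank = better combined
# position. All names and ranks are invented.

ranks = {
    # university: (QS rank, THE rank, ARWU rank)
    "Uni A": (3, 5, 4),    # total 12
    "Uni B": (1, 8, 6),    # total 15
    "Uni C": (9, 2, 3),    # total 14
}

totals = {u: sum(r) for u, r in ranks.items()}
artu_order = sorted(totals, key=totals.get)
print(artu_order)   # ['Uni A', 'Uni C', 'Uni B']
```

The obvious weakness is that the aggregate inherits every flaw of its three inputs, which is the point made below.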

Nicholas Fisk, Deputy Vice Chancellor of Research, hopes that this ranking will become "the international scoreboard, like the ATP tennis rankings" and "the indisputable scoreboard for where people fit in on the academic rankings."

This is not a new idea. I had a go at producing an aggregate ranking a few years ago, called the Global Ranking of Academic Performance, or GRAPE. It was going to be the Comparative Ranking of Academic Performance: maybe I was right the first time. It was justifiably criticised by Ben Sowter of QS. I think, though, that it was quite right to note that some of the rankings of the time underrated the top Japanese universities and overrated British and Australian schools.

The ARTU is another example of the emergence of a cartel, or near cartel, of the three global rankings that are apparently considered the only ones worthy of attention by academic administrators and the official media.

There are in fact a lot more and these three are not even the best three rankings, far from it. A pilot study, Rating the Rankers, conducted by the International Network of Research Management Systems (INORMS), has found that on four significant dimensions, transparency, governance, measuring what matters, and rigour, the performance of six well known rankings is variable and that of the big three is generally unimpressive. That of THE is especially deficient.

Seriously, should we consider as indisputable a ranking that includes indicators proclaiming Anglia Ruskin University a world leader for research impact and Anadolu University tops for innovation, another that counts long-dead winners of Nobel and Fields awards, and another that gives disproportionate weight to a survey with more respondents from Australia than from China?

There does seem to be a new mood of ranking skepticism emerging in many parts of the international research community. Rating the Rankers has been announced in an article in Nature. The critical analysis of rankings will, I hope, do more to create fair and valid systems of comparative assessment than simply adding up a bunch of flawed and opaque indicators.

Tuesday, November 17, 2020

Indian University Performance to be Judged by Rankings

I have commented on Indian responses to the rankings before and many times on problems with the better known rankings so I apologize for repeating myself. 


The influence of global university rankings continues to expand. There seem to be few areas of higher education or research where they are not consulted or used for appointments, promotion, admissions, project evaluation, grant approval, assessment, publicity and so on.

The latest example is from India. The Indian Express reports that the Minister of Education has announced that the progress of "institutions of eminence" (IoEs) will be charted using the "renowned" QS and THE rankings. Apparently, "an incentive mechanism will be developed for those institutes which are performing well." That is definitely not a good idea: it will reward behaviour that leads to improved ranking performance, not to improved output or quality.

Recently, some of the leading Indian Institutes of Technology (IITs), four of which are on the list of IoEs, announced that they would be boycotting the THE rankings. I am not sure whether this means that there is now a split within the higher education sector in India or whether the IITs are rethinking their opposition to the rankings.

There is nothing wrong with evaluating and comparing universities, research centers, researchers, or departments. Indeed it would seem very helpful if a country is going to maintain an effective higher education system.  But it is questionable whether these rankings are the best way or even a good way of doing it. Research might be evaluated by panels of peer researchers, provided these are unbiased and fair, by international experts, surveys, or by bibliometric and scientometric indicators. The quality of teaching and learning is more problematic but national rankings around the world have used several measures that, although not very satisfactory, might provide a rough and imperfect assessment.

There is now a broad range of international rankings covering publications, citations, innovation, web presence, and other metrics. The IREG Inventory of International Rankings identified 17 global rankings in addition to regional and specialist ones and there are now more. If the Indian government wanted to use a ranking to measure research output and quality then it would probably be better to refer to the Leiden Ranking produced by the CWTS at Leiden University, or other straightforward research-based rankings such as URAP, published by the Middle East Technical University in Ankara, the Shanghai Rankings, or the National Taiwan University Rankings, which have a generally stable and transparent methodology. Another possibility is the Scimago Institution Rankings which include indicators measuring web activity and altmetrics. Round University Rankings uses several metrics that might have a relationship to teaching quality.

It is, however, debatable whether the THE rankings are useful for evaluating Indian universities. There are several serious problems, which I have been talking about since 2011. I will discuss just three of them.  

The THE world rankings lack transparency. Eleven of their 13 indicators are bundled into three super-indicators, so it is impossible to figure out exactly what is doing what. If a university, for example, gets an improved score for Teaching: The Learning Environment, this could be because of an improved score for teaching reputation, an increase in the number of staff, a reduction in the number of students, an increase in the number of doctorates awarded, a reduction in the number of bachelor degrees, a decrease in the number of academic staff, an increase in institutional income, or a combination of two or more of these.

THE does make disaggregated data available to universities but that is of little use for students or other stakeholders.

Another problem is the face validity of the two stand-alone indicators. Take a look at the citations indicator results for 2020-2021, which are supposed to measure research impact or research quality. These are the top universities for 2020-2021: Anglia Ruskin University, Babol Noshirvani University of Technology, Brighton and Sussex Medical School, Cankaya University, the Indian Institute of Technology Ropar, and the Kurdistan University of Medical Sciences.

Similarly with the industry income indicator, which is presented as a measure of innovation. At the top of this indicator are Anadolu University, Asia University Taiwan, the University of Freiburg, Istanbul Technical University, Khalifa University, LMU Munich, the Korea Advanced Institute of Science and Technology, and Makerere University. The German and Korean universities seem plausible but one wonders about the others.

THE has not discovered a brilliant new method of finding diamonds in the rough. It is just using a flawed and eccentric methodology, one that it has repeatedly claimed that it will reform but somehow has never quite got round to doing so.

Third, the THE rankings include indicators that dramatically favour high-status western universities with money, prestige, and large numbers of postgraduate students. There are three measures of income, two reputation surveys accounting for a third of the total weighting, and two measures counting doctoral students.

The QS rankings are somewhat better but there are issues here as well. There is only one indicator for research, citations per faculty, and only one that is directly related to teaching quality, that is the employer reputation indicator with a ten per cent weighting.

The QS rankings are heavily overweight on research reputation, which has a 40% weighting and is hugely biased to certain countries. There are more respondents from Malaysia than from China and more from Kazakhstan than from India.

Using either of these rankings opens the way to attempts to manipulate the system. It is possible to get a high score in the THE rankings by recruiting somebody involved in the Global Burden of Disease Study. Doing well in the QS rankings might be influenced by signing up for a reputation management program.

It seems, however, that there is a new mood of scepticism about rankings in academia. One sign is the Rating the Rankings project by the Research Evaluation Working group of the International Network of Research Management Systems. This is a rating, definitely not a ranking, of six rankings by an international team of expert reviewers who did an evaluation according to four criteria: good governance, transparency, measure what matters, and rigour.

The results are interesting. No ranking is perfect but it seems that the famous brands are more likely to fall short of the criteria.

The Indian government and others would be wise to take note of the analysis and criticism that is available before committing themselves to using rankings for the assessment of research or higher education.





Saturday, July 25, 2020

How will the COVID-19 crisis affect the global rankings?


My article has just been published in University World News. 

You can read it there and comment here.

Monday, June 08, 2020

The Great Acceleration

A major impact of the current virus crisis is that trends that were developing gradually have suddenly gathered new momentum. In many places around the world we can see the introduction of something like a universal basic income, the curtailment of carbon consuming transport, moves to abolish prisons and the police, and the continued rise of online education, online shopping, online almost anything.

Western universities have been facing an economic crisis for some time. Costs have soared and the supply of competent students has been drying up. For a while, the gap was filled with international, mainly Chinese, students but even before the virus that supply was dwindling. It seems likely that as universities open up, wholly or partly, there will be a lot fewer Chinese students and not many from other places. Meanwhile, local students and their parents will wonder whether there's any point in going deep into debt to pay for online or hybrid courses that will lead nowhere.

Organisations, like organisms, struggle to grow or to survive and universities need students to bring in revenue. If capable students are going to stay away then universities will have to recruit less capable students, close shop, merge, or become some other kind of institution. 

Another factor is the desperate need to calibrate the demographic composition of the student body, administration and faculty with that of the region or country or the world. Standardised testing has long been a problem here. Tests like the SAT, ACT, GRE, LSAT, and GMAT are good predictors of academic ability and cognitive skills but they invariably give better scores to Whites and East Asians than to African Americans, Hispanics and Native Americans.

For many years there have been repeated demands that American universities abandon objective testing for admission and placement. One element in these demands was the observation that there was a correlation between family income and test scores. This was attributed to the ability of rich white parents to provide expensive test preparation courses for their children.

There was an element of hypocrisy in these claims. If test prep courses were the cause of racial differences, why not just make them a required part of the high school curriculum, or pay the test centre fees and travel costs of low-income students? The failure to propose such measures suggests that everyone knows this is not really the cause of racial or social achievement gaps.

The University of California (UC), once the exemplar of public tertiary education, has now decided to remove the SAT and ACT from the admissions process. The implications of this are large. Almost certainly many other public and perhaps some private universities will follow suit. The UC Board of Regents has announced a plan to phase out standardised testing for undergraduate students by 2025.

In 2021 and 2022 UC will be test-optional. Students can submit test scores if they wish and UC campuses may use them as they see fit.

In 2023 and 2024 UC will be test-blind. Test scores will not be used for admissions although they might be used for course placement and scholarships. If I understand it correctly test scores could still be used for the admission of out of state and international students.

In 2025 the SAT/ACT will be phased out altogether and supposedly replaced by a new test "that more closely aligns with what we expect incoming students to know to demonstrate their preparedness for UC."

I would like to emulate Wayne Rooney and declare that I will never make a prediction but I cannot resist this one: there will never be a standardised test that will reconcile the need to predict academic ability with the demand for zero or minimal disparate racial impact.

UC will most probably end up with a complicated and holistic system of admission that will try to combine a semblance of selectivity with a mix of metrics relating to subjective and marginal traits like grit, response to adversity, social awareness and so on, and which produces an acceptable mix of groups.

It is very likely that the academic competence and cognitive skills of undergraduates and postgraduates at UC campuses will go into sharp decline. No doubt there will be compensations. Students will be grittier, more diverse, more aware. Whether that is sufficient to balance the decline in cognitive ability remains to be seen.

Meanwhile, in China and India, growing political authoritarianism and centralisation may lead to some decline in academic rigour, but compared to the US and Europe that still seems fairly limited.

Thursday, May 07, 2020

Observations on the Indian Ranking Boycott


Seven Indian Institutes of Technology (IITs) -- Delhi, Bombay, Guwahati, Kanpur, Kharagpur, Madras, and Roorkee -- have announced that they will be boycotting this year's Times Higher Education (THE) World University Rankings. The move has been coming for some time. Indian universities have not performed well in most rankings but they have done especially badly in THE's.

Take a look at the latest THE world rankings and the performance of three elite institutions. IIT Delhi (IITD) and IIT Bombay (IITB) are in the 401-500 band, and the Indian Institute of Science (IISc) is in the 301-350 band.

It is noticeable that these three all do much better in the QS world rankings where IIT Delhi is 182nd, IIT Bombay 152nd, and IISc 184th. That no doubt explains why these Institutes are boycotting THE but still engaging with QS.

It should be pointed out that with regard to research THE probably treats the Institutes better than they deserve. The Shanghai rankings, which are concerned only with research, have IITD and IITB in the 701-800 band, and IISc at 401-500. In the US News Best Global Universities IITD is 654th, IITB 513th, and IISc 530th.

The dissatisfaction with THE is understandable. Indeed it might be surprising that the IITs have taken so long to take action. They complain about transparency and the parameters. They have a point, in fact several points. The THE rankings are uniquely opaque: they combine eleven indicators into three clusters so it is impossible for a reader to figure out exactly why a university is doing so well or so badly for teaching or research. THE's income and international metrics, three of each, also work against Indian universities.

It is, however, noticeable that a few Indian universities have done surprisingly well in the THE world rankings: IIT Ropar and IIT Indore are in the top 400 and IIT Gandhinagar in the top 600 thanks to high scores for citations. IIT Ropar is credited with a score of 100, making it fourth in the world behind those giants of research impact: Aswan University, Brandeis University, and Brighton and Sussex Medical School. 

Regular readers of this blog will know what is coming next. IIT Ropar has contributed to 15 papers related to the multi-author and hugely cited Global Burden of Disease Study (GBDS), which is slightly less than 1.5% of its total papers over the relevant period but well over 40% of citations. 

It would be relatively simple for the mutinous seven to recruit one or two researchers involved in the GBDS and in a few years -- assuming the current methodology or something like it continues -- they too would be getting near "perfect" scores for citations and heading for top three hundred spots.

They may, however, have judged that the THE methodology is going to be changed sooner or later -- now looking like a little bit later -- or that aiming for the QS reputation surveys is more cost effective. Or perhaps they were simply unaware of exactly how to get a good score in the THE rankings.

It is sad that the Indian debate over ranking has largely been limited to comparisons between THE and QS. There are other rankings that are technically better in some ways and are certainly better suited to Indian circumstances. The Round University Ranking, which has 20 indicators and a balanced weighting, has IISc in 62nd place with extremely good scores for financial sustainability and doctoral students.

The boycott is long overdue. If it leads to a more critical and sceptical approach to ranking then it may do everybody a lot of good.




Sunday, April 19, 2020

THE's WUR 3.0 is on the way

Alert to readers. Some of this post covers ground I have been over before. See here, here and here. I plead guilty to self-plagiarism.

Times Higher Education (THE) is talking about a 3.0 version of its World University Rankings to be announced at this year's academic summit in Toronto and implemented in 2021, a timetable that may not survive the current virus crisis. I will discuss what is wrong with the rankings, what THE could do, and what it might do.

The magazine has achieved an enviable position in the university rankings industry. Global rankings produced by reliable university researchers with sensible methodologies, such as the CWTS Leiden Ranking, University Ranking by Academic Performance (Middle East Technical University) and the National Taiwan University Rankings, are largely ignored by the media, celebrities and university administrators. In contrast, THE is almost always counted among the Big Four rankings (the others being QS, US News, and Shanghai Ranking), the Big Three, or the Big Two, and is sometimes the only global ranking discussed.

The exalted status of THE is remarkable considering that it has many defects. It seems that the prestigious name -- there are still people who think that it is the Times newspaper or part of it -- and skillful public relations campaigns replete with events, workshops, gala dinners and networking lunches have eroded the common sense and critical capacity of the education media and the administrators of the Ivy League, the Russell Group and their imitators.

There are few things more indicative of the inadequacy of the current leadership of Western higher education than their toleration of a ranking that puts Aswan University top of the world for research impact by virtue of its participation in the Gates-funded Global Burden of Disease Study and Anadolu University top for innovation because it reported its income from private online courses as research income from industry. Would they really accept that sort of thing from a master's thesis candidate? It is true that the "Sokal squared" hoax has shown that the capacity for critical thought has been seriously attenuated in the humanities and social sciences but one would expect better from philosophers, physicists and engineers.

The THE world and regional rankings are distinctively flawed in several ways. First, a substantial amount of their data comes directly from institutions. Even if universities are 100% honest and transparent the probability that data will flow smoothly and accurately from branch campuses, research centres and far flung campuses through the committees tasked with data submission and on to the THE team is not very high.

THE has implemented an audit by PricewaterhouseCoopers (PwC) but that seems to be about "testing the key controls to capture and handle data, and a full reperformance of the calculation of the rankings" and does not extend to checking the validity of the data before it enters the mysterious machinery of the rankings. PwC states that this is a "limited assurance engagement."

Second, THE is unique among the well-known rankings in bundling eleven of its 13 indicators in three groups with composite scores. That drastically reduces the utility of the rankings since it is impossible to figure out whether, for example, an improvement for research results from an increase in the number of published papers, an increase in research income, a decline in the number of research and academic staff, a better score for research reputation, or some combination of these. Individual universities can gain access to more detailed information but that is not necessarily helpful to students or other stakeholders.

Third, the THE rankings give a substantial weighting to various input metrics. One of these is income, which is measured by three separate indicators: total institutional income, research income, and research income from industry. Of the other world rankings only the Russian Round University Rankings do this.

There is of course some relationship between funding and productivity but it is far from absolute and universal. The Universitas 21 system rankings, for example, show that countries like Malaysia and Saudi Arabia have substantial resources but so far have achieved only a  modest scientific output while Ireland has done very well in maintaining output despite a limited and declining resource base.    

The established universities of the world seem to be quite happy with these income indicators which, whatever happens, are greatly to their advantage. If their overall score goes down this can be plausibly attributed to a decline in funding that can be used to demand money from national resources. At a time when austerity has threatened the well being of many vulnerable groups, with more suffering to come in the next few months, it is arguable that universities are not those most deserving of state funding. 

Fourth, another problem arises from THE counting doctoral students in two indicators. It is difficult to see how the number of doctoral students or degrees can in itself add to the quality of undergraduate or master's teaching, and this could act to the detriment of liberal arts colleges like Williams or Harvey Mudd which have an impressive record of producing employable graduates.

These indicators may also have the perverse consequence of  forcing people who would benefit from a master's or post graduate diploma course into doctoral programs with high rates of non-completion. 

Fifthly, the two stand alone indicators are very problematic. The industry income indicator purports to represent universities' contributions to innovation. An article by Alex Usher found that the indicator appeared to be based on very dubious data. See here for a reply by Phil Baty that is almost entirely tangential to the criticism. Even if the data were accurate it is a big stretch to claim that this is a valid measure of a university's contribution to innovation.

The citations indicator which is supposed to measure research impact, influence or quality is a disaster. Or it should be: the defects of this metric seem to have passed unnoticed everywhere it matters.

The original sin of the citations indicator goes back to the early days of the THE rankings after that unpleasant divorce from QS. THE used data from the ISI database, as it was then known, and in return agreed to give prominence to an indicator closely modelled on the InCites platform, then a big-selling product.

The indicator is assigned a weighting of 30%, which is much higher than that given to publications and higher than the weighting given to citations by QS, Shanghai, US News or RUR. In fact this understates the weighting. THE has a regional modification or country bonus that divides the impact score of a university by the square root of the impact score of the country where it is located. The effect of this is that the scores of universities in the top country will remain unchanged but everybody else will get an increase, a big one for low scoring countries, a smaller one for those scoring higher. Previously the bonus applied to the whole of the indicator but now it is 50%. Basically this means that universities are rewarded for being in a low scoring country.
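The arithmetic of the country bonus can be sketched as follows. This is a hypothetical reconstruction from the description above; THE does not publish the exact calculation, and the function name and the scale (country scores normalised so the top country is 1.0) are my assumptions:

```python
import math

def country_bonus(university_score, country_score, bonus_share=0.5):
    """Hypothetical sketch of THE's regional modification: half of the
    citation score is divided by the square root of the country's score
    (normalised so the top country scores 1.0); the other half is left
    unmodified. Low-scoring countries therefore get the biggest uplift."""
    adjusted = university_score / math.sqrt(country_score)
    return bonus_share * adjusted + (1 - bonus_share) * university_score

# A university in the top country is unchanged; the same raw score in a
# weak country is inflated considerably.
country_bonus(40, 1.00)  # 40.0
country_bonus(40, 0.25)  # 60.0
```

Under the pre-2015 scheme (bonus_share=1.0) the second university would score 80 rather than 60, which is why halving the bonus hit some institutions so hard.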

The reason originally given for this was that some countries lack the networking and funds to nurture citation rich research. Apparently, such a problem has no relevance to international indicators. This was in fact probably an ad hoc way of getting round the massive gap between the world's elite and other universities with regard to citations, much bigger than most other metrics. 

The effect of this was to give a big advantage to mediocre universities surrounded by low achieving peers. Combined with other defects it has produced big distortions in the indicator.

This indicator is overnormalised. Citation scores are based not on a simple count of citations but rather on a comparison with the world average of citations according to year of publication, type of publication, and academic field, over three hundred of them. A few years ago someone told THE that absolute counting of citations was a mortal sin and that seems to have become holy scripture. There is clearly a need to take account of disciplinary variations, such as the relative scarcity of citations in literary studies and philosophy and their proliferation in medical research and physics, but the finer the analysis gets the more chance there is that outliers will exert a disproportionate effect on the impact score.
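A toy calculation shows how fine-grained normalisation magnifies outliers. The function below illustrates the general mechanics only, not THE's actual code, and the numbers are invented:

```python
def field_normalised_score(papers):
    """papers: list of (citations, world_average_for_that_cell) pairs,
    one per paper. Each paper's citations are divided by the world
    average for its field/year/type cell and the ratios are averaged --
    a simplified version of field normalisation."""
    return sum(cites / avg for cites, avg in papers) / len(papers)

# Ten papers cited exactly at the world average give a score of 1.0 ...
ordinary = [(10, 10)] * 10
# ... but one well-cited paper in a sparsely cited cell (average 2)
# drags the whole institution's mean up to about 23.6.
inflated = ordinary + [(500, 2)]
```

The narrower the cells, the lower some world averages become, and the easier it is for a single paper to dominate an institution's score.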

Perhaps the biggest problem with the THE rankings is the failure to use fractional counting of citations. There is an increasing problem with papers with scores, hundreds, occasionally thousands of "authors", in particle physics, medicine and genetics. Such papers often attract thousands of citations partly because of their scientific importance, partly because many of their authors will find opportunities to cite themselves.

The result is that until 2014-15 a university with a modest contribution to a project like the Large Hadron Collider could get a massive score for citations, especially if its overall output of papers was not high and especially if it was located in a country where citations were generally low.

The 2014-15 THE world rankings included among the world's leaders for citations Tokyo Metropolitan University, Federico Santa Maria Technical University, Florida Institute of Technology and Bogazici University.

Then THE introduced some reforms. Papers with over a thousand authors were excluded from the citation count, the country bonus was halved, and the source of bibliometric data was switched from ISI to Scopus. This was disastrous for those universities that had over-invested in physics, especially in Turkey, South Korea and France.

The next year THE started counting the mega-papers again but introduced a modified form of fractional counting. Papers with a thousand-plus authors were counted according to each institution's contribution to the paper, with a minimum of five per cent.
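That rule can be sketched as follows. The thousand-author threshold and five per cent floor are as reported above; the function itself is a hypothetical reconstruction:

```python
def paper_credit(total_authors, institution_authors, threshold=1000, floor=0.05):
    """Sketch of THE's modified fractional counting as described above.
    Papers below the author threshold are credited in full to every
    listed institution; mega-papers are credited proportionally, but
    never below a 5% floor."""
    if total_authors < threshold:
        return 1.0  # full credit, no fractionalisation
    return max(institution_authors / total_authors, floor)

paper_credit(2000, 3)  # 0.05 -> the floor applies
paper_credit(800, 3)   # 1.0  -> under the threshold, full credit
```

Note the loophole the post goes on to describe: a highly cited 900-author paper still delivers full credit to all 900 institutions.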

The effect of these changes was to replace physics privilege with medicine privilege. Fractional counting did not apply to papers with hundreds of authors but fewer than a thousand, and so a new batch of improbable universities started getting near perfect scores for citations and began to break into the top five hundred or thousand in the world. Last year these included Aswan University, the Indian Institute of Technology Ropar, the University of Peradeniya, Anglia Ruskin University, the University of Reykjavik, and the University of Environmental and Occupational Health Japan.

They did so because of participation in the Global Burden of Disease Study combined with a modest overall output of papers and/or the good fortune to be located in a country with a low impact score.

There is something else about the indicator that should be noted. THE includes self-citations and on a couple of occasions has said that this does not make any significant difference. Perhaps not in the aggregate, but there have been occasions when self-citers have in fact made a large difference to the scores of specific universities. In 2009 Alexandria University broke into the top 200 world universities by virtue of a self-citer and a few friends. In 2017 Veltech University was the third best university in India and the best in Asia for citations all because of exactly one self-citing author. In 2018 the university had for some reason completely disappeared from the Asian rankings.

So here are some fairly obvious things that THE ought to do:
  • change the structure of the rankings to give more prominence to publications and less to citations
  • remove the income indicators or reduce their weighting
  • replace the income from industry indicator with a count of patents preferably those accepted rather than filed
  • in general, where possible replace self-submitted with third party data
  • if postgraduate students are to be counted then count master's as well as doctoral students
  • get rid of the country bonus which exaggerates the scores of mediocre or sub-mediocre institutions because they are in the poorly performing countries
  • adopt a moderate form of normalisation with a dozen or a score of fields rather than the present 300+ 
  • use full-scale fractional counting 
  • do not count self-citations; even better, do not count intra-institutional citations
  • do not count secondary affiliations, although that is something that is more the responsibility of publishers
  • introduce two or more measures of citations.

 But what will THE actually do?

Duncan Ross, THE data director, has published a few articles setting out some talking points (here, here, here,  here).
He suggests that in the citations indicator THE should take the 75th percentile as the benchmark rather than the mean when calculating field impact scores. If I understand it correctly this would reduce the extreme salience of outliers in this metric.
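A quick illustration, with invented numbers, of why a percentile benchmark is more robust than the mean:

```python
import statistics

# Citation counts for a hypothetical field/year cell: mostly modest
# papers plus one mega-cited outlier.
cell = [0, 1, 1, 2, 3, 4, 5, 8, 12, 2000]

mean_benchmark = statistics.mean(cell)              # 203.6, dragged up by the outlier
p75_benchmark = statistics.quantiles(cell, n=4)[2]  # 9.0, barely affected by it
```

Against a mean benchmark of 203.6 almost every paper in the cell looks dismal while one outlier looks stellar; against the 75th percentile, ordinary papers are judged on a scale that actually describes the field.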

It seems that a number of new citation measures are being considered, with the proportion of most cited publications apparently getting the most favourable consideration. Unfortunately it seems that they are not going any further with fractional counting, supposedly because it would discourage collaboration.

Ross  mentions changing the weighting of the indicators but does not seem enthusiastic about this. He also discusses the importance of measuring cross-disciplinary research.

THE is also considering supplementing the doctoral student measures with the proportion of doctoral students who eventually graduate. They are thinking about replacing institutional income with "a more precise measure," perhaps spending on teaching and teaching-related activities. That would probably not be a good idea. I can think of all sorts of ways in which institutions could massage the data so that in the end it would be as questionable as the current industry income indicator.

It seems likely that patents will replace income from industry as the proxy for innovation.

So it appears that there will be some progress in reforming the THE world rankings. Whether it will be enough remains to be seen.


Tuesday, April 14, 2020

Who's doing research on Covid-19?

This is crude and simple. I searched for "Covid-19" in the Article title, Abstract, and Keywords fields.

The first and oldest item of 1,500 to appear was "The New Coronavirus, the Current King of China." by S A Plotkin in the Journal of the Pediatric Infectious Diseases Society. I wonder if there will be a sequel, "The Current King of the World."

The top universities for number of publications are:

1.   Tongji Medical College
2.   Huazhong University of Science and Technology
3.   Chinese Academy of Medical Sciences and Peking Union Medical College
4.   "School of Medicine"
5.   London School of Hygiene and Tropical Medicine
6.   Wuhan University
7.   Chinese Academy of Sciences
8.   Capital Medical University
9.   Fudan University
10. National University of Singapore.

The top funding agencies are:

1.   National Natural Sciences Foundation of China
2.   National Basic Research Program of China
3.   National Institutes of Health
4.   Fundamental Research Funds for the Central Universities 
5.   Wellcome Trust
6.   Chinese Academy of Sciences
7.   Canadian Institutes of Health Research
8.   National Science Foundation
9.   Agence Nationale de la Recherche
10.  Chinese Academy of Medical Research


Tuesday, March 10, 2020

University of California Riverside: Is it America's Fastest Rising University?




Not really.

It seems that one major function of rankings is to cover up the decline or stagnation of major western universities. An example is the University of California Riverside (UCR), a leading public institution. 

Recently there was a tweet and a web page extolling UCR as "America's fastest rising university" on the strength of its ascent in four different rankings: Forbes 80 places in two years, US News 33 places in two years, Times Higher Education (THE) 83 places in two years and the Center for World University Rankings (CWUR) 41 places in one year.

This surprised me a bit because I was under the impression that the University of California system had been declining in the research-based rankings and that UCR had not done so well in the THE world rankings. So I had a quick look. In 2019-20 UCR was in the 251-300 band in the THE world rankings. Two years before that it  was 198th and in 2010-11 it was 117th. I have little trust in THE but that is evidence of a serious decline by any standards. 

But it seems that the tweeter was thinking about the THE/Wall Street Journal US Teaching Rankings. In 2019-20 Riverside was 189th there, in 2018-19 212th, in 2017-18 272nd, and in 2016-17 it was 364th.

That is definitely a substantial rise and it is rather impressive considering that this rise occurred while UCR was falling in THE's world rankings. Most of the rise occurred in the outcomes "pillar" and probably was a result of the introduction in 2018 of a new indicator that measured student debt.

UCR's rise had nothing to do with resources, environment or engagement. It is not possible to disentangle the components within THE's four pillars but it is a very plausible hypothesis that a good part of UCR's success is the result of a methodological change that introduced an element where this university was especially strong.

Another ranking where UCR does well is the Center for World University Rankings, now published in the UAE. Last year UCR was 204th, in 2018-19 245th, and in 2017-18 218th.

The fall and rise of UCR over two years follows the weighting given to research. When the weighting was 25% UCR's position was 218th. When the research weighting was increased to 70% UCR fell to 245th. When it fell to 50% UCR rose to 204th. So, here again UCR's rank is dependent on methodological tweaking. It does well when the weighting for research is reduced and less well when it is increased.

I assume that the US News (USN) rankings refer to America's Best Colleges, where UCR does very well. It was 91st last year among national universities and 1st for social mobility. 

The rise of UCR in these rankings is also the result of methodological changes. In 2019 USN began to shift away from academic excellence to social mobility, which basically means admitting and graduating  larger numbers of recognised and protected groups. The acceptance rate criterion has been scrapped and metrics related to social mobility such as the graduation rates of low income students have been introduced. UCR has not necessarily improved: what has happened is that the ranking has put more weight on things that it is good at and less on those where it performs less well.

The Forbes ranking also refers to changes announced in 2017 that "better align this list with what FORBES values most: entrepreneurship, success, impact and the consumer experience." It is  likely that these had a favourable impact on UCR's performance here as well.

When it comes to international research based rankings over several years the story is a different one of steady decline. Starting with total publications in the publications indicator of the CWTS Leiden Ranking, UCR went from 270th in 2006-09 to 392nd in 2014-17. Much of this was due to the general decline of American universities but even within the US group there was a decline from 88th to 93rd.

The decline is starker if we look at the most rigorous expression of quality, the proportion of publications in the top 1% of journals. In the same period UCR fell from 12th to 130th worldwide and 11th to 59th in the USA.

Turning to the Scimago Institution Rankings which include patents and altmetrics, Riverside fell from 151st in 2011 to 228th in 2019. Among US institutions it fell from  70th  in 2011 to 85th in 2019.

That is the situation with regard to research based rankings. Moving on to the rankings that include things related to teaching, UCR fell from 271st in the QS world rankings in 2017 to 454th in 2020. In the Round University Rankings it fell from 197th in 2010 to 231st in 2016 and then stopped participating from 2017 onward.

It seems fairly clear that UCR has been declining in indicators relating to research, especially research of the highest quality. It also performs poorly in those rankings that combine teaching with research and internationalisation metrics. Exactly why must wait for another post but I strongly suspect that the underlying reason is the declining ability of incoming students and the retreat from meritocracy in graduate school admissions and faculty appointments and promotions.


Friday, February 28, 2020

Polish Universities in International Rankings

My short article on Polish universities and international rankings has just been published by Forum Akademickie. The article in Polish can be accessed here. Translation and editing by Piotr Kieracinski. The full journal issue is here.




Here is the English version.



Richard Holmes

Polish Universities and International Rankings

A Brief History of International University Rankings

After a false start with the Asiaweek rankings of 1999-2000, international university rankings took off in 2003 with the Academic Ranking of World Universities (ARWU), published by Shanghai Jiao Tong University and then by the Shanghai Ranking Consultancy.

In 2004 two new rankings appeared: the Ranking Web of Universities, better known as Webometrics, which originally measured only web activity, and the Times Higher Education Supplement (THES) – Quacquarelli Symonds (QS) World University Rankings, which emphasised research and also included faculty resources and internationalisation indicators.

Since then the number of rankings, metrics and data points has increased prompting ranking consultant Daniel Guhr to talk about “vast data lakes”. Rankings have become more complex and sophisticated and often use statistical techniques such as standardisation and field normalisation.

In addition to global rankings, specialist rankings of regions, subjects, and business schools have appeared. International rankings continue to have a bias towards research but some try to find a way of capturing data that might be relevant to teaching and learning or to university third missions such as sustainability, gender equity and open access. They have also become significant in shaping national higher education policy and institutional strategy.

Although the media usually talk about the big four rankings or sometimes the big three or big two, there are now many more. The IREG Inventory of International Rankings includes 17 global rankings in addition to regional and specialised rankings and various spin offs. Since the publication of the inventory more global rankings have appeared and no doubt there are more to come.

Media Perceptions of Rankings

It is unfortunate that the media and public perception of global rankings has little relation to reality. The Times Higher Education (THE) World University Rankings are by far the most prestigious but they have serious defects. They lack transparency with eleven indicators bundled into three broad groups. They rely on subjective surveys and questionable data submitted by institutions. They are unbalanced with a 30% weighting going towards a citation indicator that can be influenced by a handful of papers in a multi-author international project and that has elevated a succession of little-known places to world research leadership. These include the University of Reykjavik, Babol Noshirvani University of Technology, Aswan University, and Anglia Ruskin University.

The problems of the THE world rankings are illustrated by looking at the fate of a leading Polish university in recent editions. In the 2014-15 rankings the University of Warsaw was ranked 301-350 but in 2015-16 it fell to 501-600. This was entirely the result of a dramatic fall in the score for citations and that was entirely the result of a methodological change. In 2015 THE stopped counting citations to papers with over a thousand “authors”. This badly affected the University of Warsaw which, along with Warsaw University of Technology, had been contributing to the Large Hadron Collider project, a producer of many such papers. The University of Warsaw’s decline in the rankings had nothing to do with any defect. It was simply the result of THE’s tweaking.

Although they receive little attention from the media there are now several global rankings published by universities and research councils that include more universities, cover a broader range of indicators and are technically as good as or better than the Big Four. These include the National Taiwan University Rankings, University Ranking by Academic Performance published by Middle East Technical University, the Scimago Institution Rankings and CWTS Leiden Ranking.

Polish Universities in Global Rankings

Turning to the current position of Polish universities in international rankings there is a great deal of variation. There are 14 in the THE rankings with 4 in the top 1000, but 410 in the Webometrics rankings of which 10 are in the top 1000. The ranking with the best representation of Polish universities is Scimago with 54 in the top 1000.

Of the “big four” rankings -- THE, QS, Shanghai, US News -- the best for analysing the current standing of the International Visibility Project (IntVP) universities is the US News Best Global Universities (BGU). THE and QS are unbalanced with too much emphasis on a single indicator, citations and academic survey respectively. The Shanghai rankings include Nobel and Fields awards some of which are several decades old. It should be noted that BGU is an entirely research based ranking.

The list below indicates the world rank of Polish universities in the latest US News BGU:

University of Warsaw 286
Jagiellonian University 343
Warsaw University of Technology 611
AGH University of Science and Technology 635
Adam Mickiewicz University 799
University of Wroclaw 833
Medical University of Wroclaw 926
Wroclaw University of Science and Technology 961
Nicholas Copernicus University 984
Medical University of Gdansk 995
Medical University of Warsaw 1033
University of Silesia 1082
University of Gdansk 1096
University of Lodz 1119
Gdansk University of Technology 1148
Poznan University of Technology 1148
Lodz University of Technology 1194
Lodz Medical University 1203
Warsaw University of Life Sciences 1221
Pomeranian Medical University 1303  
Poznan University of Medical Sciences 1312
Silesian University of Technology 1351
Krakow University of Technology   1363
University of Warmia 1363
Medical University Silesia    1399
Medical University of Lublin 1414
University of Rzeszow 1430
Technical University Czestochowa 1445
Poznan University of Life Sciences 1457
Wroclaw University of Life and Environmental Sciences 1465
Agricultural University of Lublin. No overall rank. 214 for agriculture
Medical University of Bialystok. No overall rank. 680 for clinical medicine.

One thing that emerges from this list is that the Polish university system suffers from a serious handicap in this ranking and in others due to the existence of independent specialist universities of technology, business and medicine. Consolidation of small specialist institutions could bring about significant improvements as has recently happened in France.

There is also some variation in the rank of the best performing Polish universities. In most rankings the top scoring Polish university is in the 300s or 400s. There are, however, some exceptions. The default indicator in the Leiden Ranking, total publications, has Jagiellonian University at 247, GreenMetric has Adam Mickiewicz University at 160 and the Moscow Three Missions University Rankings puts the University of Warsaw at 113. On the other hand, no Polish university gets higher than the 600-800 band in the THE world rankings.

The various rankings have very different methodologies and indicators. THE for example includes income in three indicators. The QS rankings give a combined weighting of 50% to reputation surveys. Scimago counts patents and the Center for World University Rankings (CWUR), now based in the Arab Gulf, the achievements of alumni. It would be a good idea to look carefully at the content and format of all the rankings before using them for evaluation or benchmarking.

Polish Universities: Strengths and Weaknesses

Poland has certain advantages with regard to international rankings. It has an excellent secondary school system as shown by above average performance in PISA and other international standardised tests. Current data indicates that it has adequate teaching resources, shown by statistics for staff student ratio. It has cultural and economic links to the East, with the Anglosphere and within the EU that are likely in the future to produce fruitful research partnerships and networks.

On the other hand, the evidence of current rankings is that Polish universities are relatively underfunded and that doctoral education is still relatively limited. Their international reputation for research is not very high, although their regional reputation is better.

One exception to the limited international visibility of Polish universities is a recent British film, Last Passenger, in which a hijacked train is saved by a few heroes, one of whom has an engineering degree from Gdansk University of Technology.

Poland and the Rankings

It would be unwise for Poland, or indeed any country, to focus on a single ranking. Some rankings have changed their methodology and will probably continue to do so, which might lead to unexpected rises or falls. THE has announced that there will be a new 3.0 version of the world rankings towards the end of this year.

Any university or university system wishing to engage with the rankings should be aware that they often favour certain types of institutions. The Shanghai rankings, for example, privilege medical research and totally ignore the arts and humanities. Scimago includes data about patents, which gives technological universities an advantage. THE's current methodology gives a massive privilege to participants in multi-contributor projects. QS uses several channels to obtain respondents for its academic and employer surveys, one of which is a list of potential respondents provided by the universities themselves; universities are, of course, very likely to nominate those who will support them.

Here, then, are some guidelines for Polish universities as they seek to establish and extend their international presence.

First, improving international visibility will take time. Quick fixes such as recruiting highly cited adjunct faculty or taking part in high profile projects may be counterproductive, especially if there is an unannounced change in methodology.

Second, before launching a campaign to rise in the global rankings, some universities might consider regional or specialist rankings first, such as the THE Europe Teaching Rankings, the QS Graduate Employability Rankings, the Indonesian GreenMetric rankings, or business school rankings.

Third, universities should also consider the cost of taking part in the rankings. US News, QS and THE require universities to submit data, and this can be time-consuming, especially for THE, which asks for data in ten subjects. Many universities seem to need three or four staff dedicated to rankings.

Fourth, it would be wise to monitor all international rankings for data that can be used for internal evaluation or publicity.

Fifth, universities should match their profiles, strengths and weaknesses with the methodology of specific rankings. Universities with strengths in medical research might perform well in the Shanghai rankings.

Sixth, it is not a good idea to focus exclusively on any single ranking or to make any one ranking the standard of excellence.
