Sunday, June 06, 2021

The Decline of American Research



Looking through the data of the latest SCImago Journal and Country Rank is a sobering experience. If you just look at the data for the quarter century from 1996 to 2020 then there is nothing very surprising. But an examination of the data year by year shows a clear and frightening picture of scientific decline in the United States.

Over the whole of the quarter century the United States has produced 11,986,435 citable documents, followed by China with 7,229,532, and the UK with 3,347,117.

Quantity, of course, is not everything when comparing research capability. We also need to look at quality. SJCR also supplies data on citations, which are admittedly an imperfect measure of quality: they can easily be gamed with self-citations, mutual citations, and so on, and they often measure what is fashionable rather than what contributes to public welfare or advances fundamental science. For the moment, however, if collected and analysed carefully and competently, they are perhaps the least unreliable and least subjective metric available for describing the quality of research.

Looking at the number of citations per paper, we have to set a threshold; otherwise the top country for quality would be Anguilla, followed by the Federated States of Micronesia and Tokelau. So we applied a 50,000-paper threshold over the 25 years.
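For anyone who wants to replicate this, the filtering step is simple. Here is a minimal Python sketch; the file name and column labels are my assumptions, not necessarily the headers of the actual SJR export.

```python
import pandas as pd

# Hypothetical sketch: filter a SCImago country export before ranking by
# citations per document. File name and column names are assumed.
df = pd.read_csv("scimago_1996_2020.csv")

MIN_DOCS = 50_000  # threshold to exclude micro-producers like Anguilla
eligible = df[df["Citable documents"] >= MIN_DOCS].copy()

eligible["Citations per document"] = (
    eligible["Citations"] / eligible["Citable documents"]
)
print(eligible.sort_values("Citations per document", ascending=False).head(10))
```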

Top place for citations per paper in 1996-2020 goes to Switzerland, followed by the Netherlands, Denmark, and Sweden, with the USA in fifth place and the UK in tenth. Apart from the US, it looks like the leaders in research are Cold Europe: the countries around the North and Baltic seas, plus Switzerland.

China is well down the list in 51st place.

Things look very different when we look at the data year by year. In 1996 the USA was well ahead of everybody for the output of citable documents with 350,258, followed by Japan with 89,430. Then came the UK, Germany, France, and Russia. China was a long way behind in ninth place with 30,856 documents.

Fast forward to 2020. Looking at output, China has now overtaken the US for citable documents and India has overtaken Germany and Japan. That is something that has been anticipated for quite some time.

That's quantity. Bear in mind that China and India have bigger populations than the USA so the output per capita is a lot less. For the moment.

Looking at citations per paper: American papers published in 1996 had received an average of 42.14 citations each, or about 1.62 per year since publication. For papers published in 2020 the annual figure had shrunk to 1.22.
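To be clear about the arithmetic: the per-year figure is simply total citations per paper divided by the years elapsed since publication. A quick sketch; treating 2021 as the citation-count date is my assumption, but it reproduces the 1.62 figure.

```python
# Citations per paper per year = citations per paper / years since publication.
us_1996_cites_per_paper = 42.14
years_of_exposure = 2021 - 1996 + 1   # 26 years, assuming counts through 2021
print(round(us_1996_cites_per_paper / years_of_exposure, 2))  # 1.62
```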

It is possible that the annual score will pick up in later years as the US recovers from the lockdown and the virus before its competitors. It is equally possible that those papers may get fewer citations as time passes.

But Switzerland, which was slightly ahead of the US in 1996, is in 2020 well in front, having improved its annual count of citations per paper from 1.65 to 1.68. Then there is a cluster of other countries that have overtaken the US -- Australia, the Netherlands, the UK.

And now China has also overtaken the US for quality with 1.23 citations per paper per year.

It looks as though the current pandemic will just accelerate what was happening anyway. Unless something drastic happens we can look forward to China steadily attaining and maintaining hegemony over the natural sciences.


Wednesday, March 31, 2021

Energetic and Proactive Leadership or a Defective Indicator?

It is now normal for university administrators to use global rankings to market their institutions, reward themselves and, all too often, justify demands for more public funding. Unfortunately, some ranking agencies are more than happy to go along with this.

One example of the misuse of rankings is from July of last year, when the Vice-Chancellor of the University of the West Indies, Sir Hilary Beckles, proclaimed a triple first in the 2020 Times Higher Education (THE) rankings.

The original target was to be in the top 3% of ranked universities by the end of the current strategic planning cycle. The Vice-Chancellor was very happy that, as a result of the work of "an energetic and proactive leadership team of campus principals and others", the university was in the top 1% of the THE Latin America and Caribbean rankings and the top 1% of golden age universities, and was the only ranked university in the Caribbean.

Another article has just appeared lauding the achievements of the university. The Vice-Chancellor has asserted that "(t)he Times Higher Education informed us that what we have achieved is quite spectacular. That many universities had taken 30 years to achieve what we have achieved in a mere three years."

If this is an accurate report of what THE said then the magazine is being very irresponsible. Universities are complex structures, and cannot be turned around with just a few million dollars or a few dozen highly cited researchers. Without drastic restructuring or amalgamation, such a remarkable change is almost invariably the result of methodological changes or methodological defects. The latter is the case here.

UWI's performance might appear quite impressive, but it seems that we have another case of THE seeing things that nobody else can. THE is not the only global university ranking; in fact, it is in some important respects not a good one. Let's take a look at some other rankings. UWI is not ranked in the Leiden Ranking, the Shanghai Academic Ranking of World Universities, the US News Best Global Universities, or the QS World University Rankings.

It does make an appearance in the University Ranking by Academic Performance (URAP), published by Middle East Technical University, where it is ranked 1622nd, and it is 1938th in the Center for World University Rankings.

But in the THE World University Rankings the university is in the top 600 and it is 18th in Latin America and the Caribbean. This is because of a single defective indicator.

UWI has an apparently healthy 81.1 in the THE world rankings citations indicator, supposedly a measure of research influence or impact, which at first sight seems odd since it has a very low score, 10.4, for research. In the Latin American rankings, where competition is less severe, that becomes 93.6 for citations and 75.8 for research. How could a university do so brilliantly for research influence when it has a poor reputation for research, doesn't publish many papers, and has little research income?

What has happened is that UWI has benefitted from THE's peculiar citations indicator, which does not use fractional counting for papers with fewer than a thousand authors, plus hyper-normalisation, plus a country bonus for being located in a low-performing country. The result is that a few multi-author, multi-cited papers push universities into positions that they could never achieve anywhere else.
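A toy illustration of why the counting rule matters. This is my sketch of the mechanism as described above, not THE's actual code, and all the numbers are invented: under full counting, one hyper-cited paper with hundreds of authors transforms a small university's average, while under fractional counting it would barely register.

```python
# Invented numbers: a small university with 500 ordinary papers plus one
# Global Burden of Disease-style mega-paper that stays under the assumed
# ~1,000-author threshold for fractional counting.
ordinary_papers = 500
ordinary_cites = 5_000           # 10 citations per paper

mega_authors = 800
mega_cites = 20_000

full = (ordinary_cites + mega_cites) / (ordinary_papers + 1)
fractional = (ordinary_cites + mega_cites / mega_authors) / (ordinary_papers + 1)

print(f"full counting:       {full:.1f} citations per paper")   # ~49.9
print(f"fractional counting: {fractional:.1f}")                 # ~10.0
```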

UWI has recently contributed to a few papers with a large number of contributors and many citations in genetics and medicine in top journals including Lancet, Science, Nature, and the International Journal of Surgery. Five of these papers are linked with the Global Burden of Disease Study.

UWI is to be congratulated for having a number of affiliated scientists who are taking part in cutting-edge projects. Nonetheless, those projects have such an impact here only because of the technical defects of the THE rankings. In the last few years a succession of unlikely places -- Anglia Ruskin, Peradeniya, Reykjavik, Aswan, Brighton and Sussex Medical School, Durban University of Technology -- have risen to the top of the citations charts in the THE rankings.

They have done so nowhere else. The exalted status of UWI in the THE WUR and its various offshoots is due to an eccentric methodology. If THE reforms that methodology, something that they have been talking about for a long time, or abandons the world rankings, the university will once again be consigned to the outer darkness of the unranked.



Sunday, March 14, 2021

The sensational fall of Panjab University and the sensational coming rise*

*If present methodology continues

The Hindustan Times has a report on the fall of Panjab University (PU) in the latest THE Emerging Economies Rankings. PU has fallen from 150th in 2019 to 201-250 this year. As expected, rivals and the media have launched a chorus of jeers against the maladministration of a once thriving institution.

But that isn't half of it. In 2014, the first year of these rankings, PU was ranked 14th in the emerging world and first in India ahead even of the highly regarded IITs. Since then the decline has been unrelenting. PU was 39th in 2015, 130th in 2018, 166th in 2020.

Since 2014 PU's scores for Research, Teaching and Industry Income have risen. Its score for International Outlook has fallen, but that did not have much of an effect since the indicator accounts for only 10 per cent of the total weighting.

What really hurt PU was the Citations indicator. In 2014 PU received a score of 84.7 for citations largely because of its participation in the mega-papers radiating from the Large Hadron Collider project, which have thousands of contributors and thousands of citations. But then THE stopped counting such papers and consequently in 2016 PU's citation score fell to 41 and its overall place to 121st. It has been downhill nearly every year since then.

The Director of PU's Internal Quality Assurance Cell said that the university has improved for research and teaching but needed to do better for international visibility and that proactive steps had been taken. The former Vice-Chancellor said that other universities were improving but PU was stagnating and was not even trying to overcome its weaknesses.

In fact, PU's decline had little or nothing to do with international visibility or the policy failures of the administration, just as its remarkable success in 2014 had nothing to do with working as a team, the strength brought by diversity, dynamic transformational leadership, or anything else plucked from the bureaucrats' big bag of clichés.

PU bet the farm on citations of particle physics mega-papers and for a couple of years that worked well. But then THE changed the rules of the game and stopped counting such papers, although a year later they were readmitted with a much-reduced weight. PU, along with Middle East Technical University and some other Turkish, Korean and French institutions that had overinvested in the CERN projects, tumbled down the THE rankings.

But PU may be about to make a comeback. Readers of this blog will probably guess what is coming next.

When THE stopped favouring papers with thousands of contributors, it abolished the physics privilege that had elevated places like Tokyo Metropolitan University, Federico Santa Maria Technical University, and Colorado School of Mines to the top of the research impact charts, and replaced it with a medical research privilege.

In 2018 PU started to publish papers that were part of the Gates-backed Global Burden of Disease Study (GBDS). Those papers have hundreds, but not thousands, of authors and so they are able to slip through the mega paper filter. The GBDS has helped Anglia Ruskin University, Brighton and Sussex Medical School, Aswan University, and Kurdistan University of Medical Sciences rise to the top of the research impact metric.

So far there have not been enough citations to make a difference for PU. But as those citations start to accumulate, providing of course that their impact is not diluted by too many other publications, PU will begin to rise again.

Assuming, of course, that THE's methodology does not change.


Thursday, January 07, 2021

An Indisputable Ranking Scorecard? Not Really.

The University of New South Wales (UNSW) has produced an aggregate ranking of global universities, known as ARTU. This is based on the "Big Three" rankers: QS, Times Higher Education (THE) and the Shanghai ARWU. The scores given are not an average but an aggregate of ranks, which is then inverted. Not surprisingly, Australian universities do well and the University of Melbourne is the best in Australia.
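The construction, as described, is simple enough to sketch. What follows is my reconstruction from that description, not UNSW's published code, and the universities and ranks are invented.

```python
# ARTU-style aggregate (reconstruction): sum each university's rank across
# the three rankings, then order so the lowest total comes first.
ranks = {
    "University A": {"THE": 30, "QS": 40, "ARWU": 50},
    "University B": {"THE": 60, "QS": 20, "ARWU": 35},
}
totals = {name: sum(r.values()) for name, r in ranks.items()}
ordered = sorted(totals.items(), key=lambda kv: kv[1])
for position, (name, total) in enumerate(ordered, start=1):
    print(position, name, total)  # University B (115) beats University A (120)
```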

Nicholas Fisk, Deputy Vice Chancellor of Research, hopes that this ranking will become "the international scoreboard, like the ATP tennis rankings" and "the indisputable scoreboard for where people fit in on the academic rankings."

This is not a new idea. I had a go at producing an aggregate ranking a few years ago, called the Global Ranking of Academic Performance, or GRAPE. It was going to be the Comparative Ranking of Academic Performance: maybe I was right the first time. It was justifiably criticised by Ben Sowter of QS. I think, though, that it was quite right to note that some of the rankings of the time underrated the top Japanese universities and overrated British and Australian schools.

The ARTU is another example of the emergence of a cartel, or near cartel, of the three global rankings that are apparently considered the only ones worthy of attention by academic administrators and the official media.

There are in fact many more, and these three are not even the best three rankings, far from it. A pilot study, Rating the Rankers, conducted by the International Network of Research Management Societies (INORMS), has found that on four significant dimensions -- transparency, governance, measuring what matters, and rigour -- the performance of six well-known rankings is variable and that of the big three generally unimpressive. That of THE is especially deficient.

Seriously, should we consider as indisputable a ranking that includes indicators proclaiming Anglia Ruskin University a world leader for research impact and Anadolu University tops for innovation, another that counts long-dead winners of Nobel and Fields awards, and another that gives disproportionate weight to a survey with more respondents from Australia than from China?

There does seem to be a new mood of ranking skepticism emerging in many parts of the international research community. Rating the Rankers has been announced in an article in Nature. The critical analysis of rankings will, I hope, do more to create fair and valid systems of comparative assessment than simply adding up a bunch of flawed and opaque indicators.

Tuesday, November 17, 2020

Indian University Performance to be Judged by Rankings

I have commented on Indian responses to the rankings before and many times on problems with the better known rankings so I apologize for repeating myself. 


The influence of global university rankings continues to expand. There seem to be few areas of higher education or research where they are not consulted or used for appointments, promotion, admissions, project evaluation, grant approval, assessment, publicity and so on.

The latest example is from India. The Indian Express reports that the Minister of Education has announced that the progress of "institutions of eminence" [IoEs] will be charted using the "renowned" QS and THE rankings. Apparently, "an incentive mechanism will be developed for those institutes which are performing well." That is definitely not a good idea: it will reward behaviour that leads to improved ranking performance, not to improved output or quality.

Recently, some of the leading Indian Institutes of Technology (IITs), four of which are on the list of IoEs, announced that they would be boycotting the THE rankings.  I am not sure whether this means that there is now a split within the higher education sector in India or whether the IITs are rethinking their opposition to the rankings.

There is nothing wrong with evaluating and comparing universities, research centers, researchers, or departments. Indeed it would seem very helpful if a country is going to maintain an effective higher education system.  But it is questionable whether these rankings are the best way or even a good way of doing it. Research might be evaluated by panels of peer researchers, provided these are unbiased and fair, by international experts, surveys, or by bibliometric and scientometric indicators. The quality of teaching and learning is more problematic but national rankings around the world have used several measures that, although not very satisfactory, might provide a rough and imperfect assessment.

There is now a broad range of international rankings covering publications, citations, innovation, web presence, and other metrics. The IREG Inventory of International Rankings identified 17 global rankings in addition to regional and specialist ones, and there are now more. If the Indian government wanted to use a ranking to measure research output and quality then it would probably be better to refer to the Leiden Ranking, produced by the CWTS at Leiden University, or other straightforward research-based rankings with a generally stable and transparent methodology, such as URAP, published by the Middle East Technical University in Ankara, the Shanghai Rankings, or the National Taiwan University Rankings. Another possibility is the Scimago Institution Rankings, which include indicators measuring web activity and altmetrics. Round University Rankings uses several metrics that might have a relationship to teaching quality.

It is, however, debatable whether the THE rankings are useful for evaluating Indian universities. There are several serious problems, which I have been talking about since 2011. I will discuss just three of them.  

The THE world rankings lack transparency. Eleven of its 13 indicators are bundled in three super-indicators so that it is impossible to figure out exactly what is doing what. If a university, for example, gets an improved score for Teaching: The Learning Environment this could be because of an improved score for teaching reputation, an increase in the number of staff, a reduction in the number of students, an increase in the number of doctorates awarded, a reduction in the number of bachelor degrees, a decrease in the number of academic staff, an increase in institutional income, or a combination of two or more of these.
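To see why this bundling makes the scores unreadable, consider a toy version of such a super-indicator. The component weights below are illustrative assumptions, not THE's published figures; the point is that two quite different underlying changes can produce exactly the same published score.

```python
# Toy super-indicator: a weighted sum of five components published as a
# single number. The weights are illustrative assumptions.
weights = {
    "reputation": 0.50,
    "staff_per_student": 0.15,
    "doctorates_per_bachelor": 0.075,
    "doctorates_per_staff": 0.20,
    "income_per_staff": 0.075,
}

def pillar(scores: dict) -> float:
    """Published score: components are visible only as this weighted sum."""
    return sum(weights[k] * scores[k] for k in weights)

base = {"reputation": 40, "staff_per_student": 50,
        "doctorates_per_bachelor": 60, "doctorates_per_staff": 30,
        "income_per_staff": 45}

# Two different underlying stories, one identical published score:
better_reputation = {**base, "reputation": 44}
better_staffing = {**base, "staff_per_student": 50 + 4 * 0.50 / 0.15}

print(round(pillar(better_reputation), 3))  # 43.375
print(round(pillar(better_staffing), 3))    # 43.375 -- indistinguishable
```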

THE does make disaggregated data available to universities but that is of little use for students or other stakeholders.

Another problem is the face validity of the two stand-alone indicators. Take a look at the citations indicator results for 2020-2021, which are supposed to measure research impact or research quality. The top universities are: Anglia Ruskin University, Babol Noshirvani University of Technology, Brighton and Sussex Medical School, Cankaya University, the Indian Institute of Technology Ropar, and Kurdistan University of Medical Sciences.

Similarly with the industry income indicator, which is presented as a measure of innovation. At the top of this indicator are Anadolu University, Asia University Taiwan, the University of Freiburg, Istanbul Technical University, Khalifa University, LMU Munich, the Korea Advanced Institute of Science and Technology, and Makerere University. The German and Korean universities seem plausible, but one wonders about the others.

THE has not discovered a brilliant new method of finding diamonds in the rough. It is just using a flawed and eccentric methodology, one that it has repeatedly claimed that it will reform but somehow has never quite got round to doing so.

Third, the THE rankings include indicators that dramatically favour high-status western universities with money, prestige, and large numbers of postgraduate students. There are three measures of income, two reputation surveys accounting for a third of the total weighting, and two measures counting doctoral students.

The QS rankings are somewhat better, but there are issues here as well. There is only one indicator for research, citations per faculty, and only one that is directly related to teaching quality: the employer reputation indicator, with a ten per cent weighting.

The QS rankings are heavily overweight on research reputation, which has a 40% weighting and is hugely biased towards certain countries. There are more respondents from Malaysia than from China and more from Kazakhstan than from India.

Using either of these rankings opens the way to attempts to manipulate the system. It is possible to get a high score in the THE rankings by recruiting somebody involved in the Global Burden of Disease Study. Doing well in the QS rankings might be influenced by signing up for a reputation management program.

It seems, however, that there is a new mood of scepticism about rankings in academia. One sign is the Rating the Rankers project by the Research Evaluation Working Group of the International Network of Research Management Societies (INORMS). This is a rating, definitely not a ranking, of six rankings by an international team of expert reviewers who evaluated them according to four criteria: good governance, transparency, measuring what matters, and rigour.

The results are interesting. No ranking is perfect, but it seems that the famous brands are more likely to fall short of the criteria.

The Indian government and others would be wise to take note of the analysis and criticism that is available before committing themselves to using rankings for the assessment of research or higher education.


Saturday, July 25, 2020

How will the COVID-19 crisis affect the global rankings?


My article has just been published in University World News. 

You can read it there and comment here.

Monday, June 08, 2020

The Great Acceleration

A major impact of the current virus crisis is that trends that were developing gradually have suddenly gathered new momentum. In many places around the world we can see the introduction of something like a universal basic income, the curtailment of carbon consuming transport, moves to abolish prisons and the police, and the continued rise of online education, online shopping, online almost anything.

Western universities have been facing an economic crisis for some time. Costs have soared and the supply of competent students has been drying up. For a while, the gap was filled with international, mainly Chinese, students but even before the virus that supply was dwindling. It seems likely that as universities open up, wholly or partly, there will be a lot fewer Chinese students and not many from other places. Meanwhile, local students and their parents will wonder whether there's any point in going deep into debt to pay for online or hybrid courses that will lead nowhere.

Organisations, like organisms, struggle to grow or to survive and universities need students to bring in revenue. If capable students are going to stay away then universities will have to recruit less capable students, close shop, merge, or become some other kind of institution. 

Another factor is the desperate need to calibrate the demographic composition of the student body, administration and faculty with that of the region or country or the world. Standardised testing has long been a problem here. Tests like the SAT, ACT, GRE, LSAT, and GMAT are good predictors of academic ability and cognitive skills but they invariably give better scores to Whites and East Asians than to African Americans, Hispanics and Native Americans.

For many years there have been repeated demands that American universities abandon objective testing for admission and placement. One element in these demands was the observation that there was a correlation between family income and test scores. This was attributed to the ability of rich white parents to provide expensive test preparation courses for their children.

There was an element of hypocrisy in these claims. If test prep courses were the cause of racial differences, why not just make them a required part of the high school curriculum, or pay the test centre fees and travel costs of low-income students? The failure to propose such measures suggests that everyone knows that test preparation is not really the cause of racial or social differences in achievement.

The University of California (UC), once the exemplar of public tertiary education, has now decided to remove the SAT and ACT from the admissions process. The implications of this are large. Almost certainly many other public and perhaps some private universities will follow suit. The UC Board of Regents has announced a plan to phase out standardised testing for undergraduate students by 2025.

In 2021 and 2022 UC will be test-optional. Students can submit test scores if they wish and UC campuses may use them as they see fit.

In 2023 and 2024 UC will be test-blind. Test scores will not be used for admissions although they might be used for course placement and scholarships. If I understand it correctly test scores could still be used for the admission of out of state and international students.

In 2025 the SAT/ACT will be phased out altogether and supposedly replaced by a new test "that more closely aligns with what we expect incoming students to know to demonstrate their preparedness for UC."

I would like to emulate Wayne Rooney and declare that I will never make a prediction but I cannot resist this one: there will never be a standardised test that will reconcile the need to predict academic ability with the demand for zero or minimal disparate racial impact.

UC will most probably end up with a complicated and holistic system of admission that tries to combine a semblance of selectivity with a mix of metrics relating to subjective and marginal traits like grit, response to adversity, social awareness and so on, and that produces an acceptable mix of groups.

It is very likely that the academic competence and cognitive skills of undergraduates and postgraduates at UC campuses will go into sharp decline. No doubt there will be compensations: students will be grittier, more diverse, more aware. Whether that is sufficient to balance the decline in cognitive ability remains to be seen.

Meanwhile, in China and India, growing political authoritarianism and centralisation may lead to some decline in academic rigour, but compared with the US and Europe that decline still seems fairly limited.

Thursday, May 07, 2020

Observations on the Indian Ranking Boycott


Seven Indian Institutes of Technology (IITs) -- Delhi, Bombay, Guwahati, Kanpur, Kharagpur, Madras, and Roorkee -- have announced that they will be boycotting this year's Times Higher Education (THE) World University Rankings. The move has been coming for some time. Indian universities have not performed well in most rankings but they have done especially badly in THE's.

Take a look at the latest THE world rankings and the performance of three elite institutions. IIT Delhi (IITD) and IIT Bombay (IITB) are in the 401-500 band, and the Indian Institute of Science (IISc) is 301-350.

It is noticeable that these three all do much better in the QS world rankings, where IIT Delhi is 182nd, IIT Bombay 152nd, and IISc 184th. That no doubt explains why these Institutes are boycotting THE but still engaging with QS.

It should be pointed out that, with regard to research, THE probably treats the Institutes better than they deserve. The Shanghai rankings, which are concerned only with research, have IITD and IITB in the 701-800 band, and IISc at 401-500. In the US News Best Global Universities, IITD is 654th, IITB 513th, and IISc 530th.

The dissatisfaction with THE is understandable. Indeed it might be surprising that the IITs have taken so long to take action. They complain about transparency and the parameters. They have a point, in fact several points. The THE rankings are uniquely opaque: they combine eleven indicators into three clusters so it is impossible for a reader to figure out exactly why a university is doing so well or so badly for teaching or research. THE's income and international metrics, three of each, also work against Indian universities.

It is, however, noticeable that a few Indian universities have done surprisingly well in the THE world rankings: IIT Ropar and IIT Indore are in the top 400 and IIT Gandhinagar in the top 600, thanks to high scores for citations. IIT Ropar is credited with a score of 100, making it fourth in the world behind those giants of research impact: Aswan University, Brandeis University, and Brighton and Sussex Medical School.

Regular readers of this blog will know what is coming next. IIT Ropar has contributed to 15 papers related to the multi-author and hugely cited Global Burden of Disease Study (GBDS). That is slightly less than 1.5% of its total papers over the relevant period but well over 40% of its citations.
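The arithmetic is worth spelling out, because the leverage is so extreme. In the sketch below the absolute citation counts are invented; only the two proportions reported above are real.

```python
# Back-of-envelope check: 15 GBDS-linked papers at just under 1.5% of
# output but over 40% of citations. Citation totals are hypothetical.
total_papers = 1_050          # implies 15 papers ~ 1.4% of output
gbds_papers = 15
gbds_cites = 8_000            # invented
other_cites = 11_000          # invented

print(f"{gbds_papers / total_papers:.1%} of papers")                  # 1.4%
print(f"{gbds_cites / (gbds_cites + other_cites):.1%} of citations")  # 42.1%
```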

It would be relatively simple for the mutinous seven to recruit one or two researchers involved in the GBDS and in a few years -- assuming the current methodology or something like it continues -- they too would be getting near "perfect" scores for citations and heading for top three hundred spots.

They may, however, have judged that the THE methodology is going to be changed sooner or later -- now looking like a little bit later -- or that aiming for the QS reputation surveys is more cost effective. Or perhaps they were simply unaware of exactly how to get a good score in the THE rankings.

It is sad that the Indian debate over ranking has largely been limited to comparisons between THE and QS. There are other rankings that are technically better in some ways and certainly better suited to Indian circumstances. The Round University Ranking, which has 20 indicators and a balanced weighting, has IISc in 62nd place, with extremely good scores for financial sustainability and doctoral students.

The boycott is long overdue. If it leads to a more critical and sceptical approach to ranking then it may do everybody a lot of good.