Saturday, February 25, 2023

Global Trends in Innovation: Evidence from SCImago

We are now approaching the third decade of global university rankings. They have had a mixed impact. The original Shanghai rankings published in 2003 were a salutary shock for universities in continental Europe and contributed to a wave of restructuring and excellence initiatives. On the other hand, rankings with unstable and unreliable methodologies are of little use to anyone except for the public relations departments of wealthy Western universities. 

In contrast, the SCImago Institutions Rankings, published by a Spanish research organisation and covering thousands of universities, hospitals, research institutes, companies, and other organisations, can be quite informative, especially the Innovation and Societal Rankings.

The Innovation Rankings, which are based on three measures of patent citations and applications, included 4019 organisations of various kinds in 2009. The top spot was held by the Centre National de la Recherche Scientifique in France, followed by Harvard, the National Institutes of Health in the USA, Stanford, and MIT.

Altogether the top 20 in 2009 consisted of ten universities, nine American plus the University of Tokyo, and ten non-university organisations: three American, two German, two French, two multinational, and the Chinese Academy of Sciences (CAS) in 14th place.

Fast forward to 2022 and we now have 8084 institutions. First place now goes to CAS, followed by the State Grid Corporation of China, DeepMind Technologies, a British AI firm, the Chinese Ministry of Education, and Samsung.

Now the top twenty includes exactly two universities, Tsinghua in 14th place and Harvard in 20th. The rest are companies, health organisations, and government agencies. Across the top twenty, the nationality assigned by SCImago is Multinational for eight, the USA for six, China for four, and the UK and South Korea for one each.

What about those high flying US universities of 2009? Stanford has fallen from 4th place to 67th, the University of Michigan from 13th to 249th, the University of Washington from 16th to 234th.

The relative -- and probably now absolute as well -- decline of American academic research has been well documented. It seems that the situation is even more dire for the innovative capability of US universities. But the technological torch is passing not only to Chinese universities and research centres but also to US and Multinational corporations.



Saturday, February 04, 2023

Aggregate Ranking from BlueSky Thinking

 

In recent years there have been attempts to construct rankings that combine several global rankings. The University of New South Wales has produced an aggregate ranking based on the “Big Three” rankings, the Shanghai ARWU, Times Higher Education (THE), and QS. AppliedHE of Singapore has produced a ranking that combines these three plus Leiden Ranking and Webometrics.

The latest aggregate ranking is from BlueSky Thinking, a website devoted to research and insights in higher education. This aggregates the big three rankings plus the Best Global Universities published by US News.

There are some noticeable differences between the rankings. The University of California Berkeley is fourth in the US News rankings but 27th in QS. The National University of Singapore is 11th in QS but 71st in ARWU.

The top of the aggregate ranking is unremarkable. Harvard leads followed by Stanford, MIT, Cambridge, and Oxford.

There have been some significant changes over the last five years, with universities in Mainland China, Hong Kong, France, Germany, Saudi Arabia, and Australia recording significant improvements, while a number of US institutions, including Ohio State University, Boston University and the University of Minnesota, have fallen.

 

Saturday, January 28, 2023

Another Sustainability Ranking

 

People and Planet is a British student network concerned with environmental and social justice. It has just published a league table that claims to measure the environmental and ethical performance of UK universities.

The top five universities are Cardiff Metropolitan, Bedfordshire, Manchester Metropolitan, Reading, and University of the Arts London. It is interesting that this league table shows almost no overlap with the other rankings that claim to assess commitment or contributions to sustainability.

Here, in order, are the top six British universities in the latest UI GreenMetric ranking: Nottingham, Nottingham Trent, Warwick, Glasgow Caledonian, Loughborough, Teesside.

The top five British universities in the THE Impact Rankings are Newcastle, Manchester, Glasgow, Leicester, King's College London. For the QS Sustainability rankings we have: Edinburgh, Glasgow, Oxford, Newcastle, Cambridge.

There is some similarity between the QS Sustainability and the THE Impact Rankings, because both give prominence to research on environmental topics. But even this is quite modest compared to the much greater overlap between conventional research-based rankings such as Leiden, Shanghai, or URAP (Middle East Technical University).

This surely raises serious questions about the trend towards sustainability-based rankings. If the rankers produce league tables that show such a modest degree of correlation, then we have to ask whether there is any point at all to the exercise.



 



Friday, January 20, 2023

Implications of the new QS methodology

QS have announced that the world rankings due to appear in 2023 will have a new methodology. This is likely to produce significant changes in the scores and ranks of some universities even if there are no significant changes in the underlying data. 

There will no doubt be headlines galore about how dynamic leadership and teamwork have transformed institutions or how those miserable Scrooges in government have been crushing higher education by withholding needed funds.

The first change is that the weighting of the academic survey will be reduced from 40% to 30%. This is quite sensible: 40% is far too high for any one indicator. It remains, however, the largest single indicator, and one that tends to favour the old elite or those universities that can afford expensive marketing consultants, at the expense of emerging institutions. The employer survey weight will go up from 10% to 15%.

Next, the weighting of faculty student ratio has been cut from 20% to 10%. Again this is not a bad idea. This metric is quite easy to manipulate and has only a modest relationship to teaching quality, for which it is sometimes supposed to be a proxy.

What has not changed is the citations per faculty indicator. This is unfortunate since rankers can get very different results by tweaking the methodology just a bit. It would have been a big improvement if QS had used several different metrics for citations and/or publications, something that Times Higher Education has just got round to doing.

Then there are three new indicators: international research network, graduate employability, and sustainability.

This means that international indicators will now account for a 15% weighting, adding a further bias towards English-speaking universities, or those in small countries adjoining larger neighbours with similar languages and cultures, and working against China and India.

The introduction of a sustainability metric is questionable. It requires a considerable amount of institutional data collecting and this will tend to favour schools with the resources and ambitions to jump through the rankers' hoops.

On the surface, it seems that these changes will be a modest improvement. However, I suspect that one effect of the changes will be a spurious boost for the scores and ranks of the elite Western and English-speaking universities that can mobilise partners and alumni for the surveys, nurture their global networks, and collect the data required to compete in the rankings.


Monday, November 07, 2022

The QS Sustainability Rankings

 


Do we need to measure social and environmental impact?

Sunday, October 23, 2022

Australia and the THE World Rankings

 

The Latest Rankings

The latest Times Higher Education (THE) world rankings have just been announced at a summit in New York. Around the world political leaders, mass media, and academics have been proclaiming their delight about their universities rising in the rankings. Australian universities are especially fascinated by them, sometimes to the point of unhealthy obsession.

Study Australia reports that "Australia shines again." Insider Guides finds it "particularly exciting" that six Australian universities in the top 200 have climbed the charts. Monash University is celebrating how it has "skyrocketed" 13 places, further proof of its world-leading status.

It is unfortunate that Australian media and administrators are so concerned with these rankings. They are not the only global rankings and certainly not the most reliable, although they are apparently approved by universities in the traditional elite and their imitators. They are not totally without value, but they do need a lot of deconstruction to yield any sort of meaningful insight.

Transparency

One problem with the THE rankings, to be painfully repetitive, is that they are far from transparent. Three of their five current “pillars” consist of more than one indicator so we cannot be sure exactly what is contributing to a rise or fall. If, for example, a university suddenly improves for THE’s teaching pillar that might be because its income has increased, or the number of faculty has increased, or the number of students has decreased, or it has awarded more doctorates or fewer bachelor’s degrees, or it has got more votes in THE’s reputation survey, or a combination of two or more of these.

THE's citations indicator, which purportedly measures research impact or research quality, stands alone but it is also extremely opaque. To calculate a university's score for citations you have to work out the number of citations in 8,000 "boxes" (300-plus fields multiplied by five years of publication multiplied by five types of documents) and compare them to the world average. Add them up and then apply the country bonus, the square root of the national impact score, to half of the university's score. Then calculate Z scores. For practical purposes this indicator is a black box into which masses of data disappear, are chopped up, shuffled around, processed, reconstituted, and then turned into numbers and ranks that are, to say the least, somewhat counter-intuitive.

This indicator, which accounts for a 30% weighting, has produced some remarkable results over the last decade, with a succession of improbable institutions soaring into the upper reaches of this metric. This year’s tables are no exception. The world leader is Arak University of Medical Sciences, Iran, followed by Cankaya University, Turkey, Duy Tan University, Vietnam, Golestan University of Medical Sciences, Iran, and Jimma University, Ethiopia. Another two Iranian medical universities are in the top 25. They may not last long. Over the last few years quite a lot of universities have appeared briefly at the top and then in a few years slumped to a much lower position.

One of the more interesting things about the current success of the THE rankings is the apparent suspension of critical thought among the superlatively credentialed and accredited leaders of the academic world. One wonders how those professors, rectors and deans who gather at the various summits, seminars, webinars, and masterclasses would react to a graduate student who wrote a research paper that claimed that Arak University of Medical Sciences leads the world for “research quality”, Istanbul Technical University for “knowledge transfer”, or Macau University of Science and Technology for “international outlook”.

Volatility

Not only do the rankings lack transparency they are also extremely volatile. The top fifty list, or even the top one hundred, is reasonably stable but after that THE has seen some quite remarkable and puzzling ascents and descents. There have been methodological changes and there is a big one coming next year but that alone does not explain why there should be such dramatic changes. One cause of instability in the rankings is the citations indicator which is constructed so that one or a few researchers, often those working on the Gates-funded Global Burden of Disease Study (GBDS), can have a massively disproportionate impact.

Another possible cause of volatility is that the number of ranked institutions is not fixed. If the rankings expand, new universities will usually be at the lower end of the scale. The effect is that the mean score for each indicator is lowered, and this changes the final score of every institution, since the standardised scores that appear in the published tables are based on means and standard deviations.

There may be other reasons for the volatility of this year’s rankings. Fluctuating exchange rates may have affected reported income data, international students’ numbers may have fallen or even recovered. Some universities might have performed better in the surveys of teaching or research.

 

Australian universities rising and falling

Some Australian universities appear to have been quite mobile this year. In some cases, this has a lot to do with the citation indicator. Two years ago, Bond University was ranked in the 501-600 band and 26th in Australia. Now it is tenth in Australia and in the world top 300, driven largely by a remarkable rise in the citations score from 56.4 to 99.7. A lot of that seems to have come from a small number of publications relating to the 2020 PRISMA statement, which amassed a massive number of citations in 2021 and 2022.

Another example is Australian Catholic University. In 2018 it was in the world 501-600 band and this year it is in band 251-300. This is mainly due to an improvement in its citations score from 59.5 to 98.5, the result of a series of articles between 2017 and 2020 related to the multi-author and massively cited GBDS.

The problem with relying on citations to get ahead in the THE rankings is that if the researchers who have been racking up the citations move on or retire, the scores will eventually decline as their papers pass outside the period for counting publications. This might have happened with the University of Canberra, which has benefitted from GBDS papers published between 2015 and 2018. This year, however, the 2015 and 2016 papers no longer count, and the result is that Canberra's citation score has fallen from 98.6 to 92.6 and its world rank from 170th to the 251-300 band. A university might even start falling just because its peers have started racking up scores of 99 plus for citations.

This is similar to the trajectory of quite a few international universities that have risen and fallen in the wake of a few highly cited papers such as Babol Noshirvani University of Technology, Iran, the Indian Institute of Technology Ropar, the University of Occupational and Environmental Health, Japan, Durban University of Technology, South Africa, and Nova Southeastern University, USA.

Citations have a lot to do with Australia's success in the THE rankings. All the Australian universities in the world rankings have a higher score for citations than for research, which is measured by publications, reputation, and research income, and six have citation scores in the 90s. Compare that with Japan, where the highest citation score is 82.8 and leading universities do better for research than for citations. If THE had taken some of the weight from citations and given it to research, Australian universities might be in a different position.

Are the THE rankings any use?

Over the long term the THE rankings might have some value in charting the general health of an institution or a national system. Should a university fall steadily across several indicators despite changes in methodology and despite proclaimed excellence initiatives, then that might be a sign of systemic decline.

The success of Australian universities in the THE rankings might represent genuine progress but it is necessary to identify exactly why they are rising and how sustainable that progress is.

The rankings certainly should not be used to punish or reward researchers and teachers for "success" or "failure" in the rankings, to allocate funds, or to attract talented faculty or wealthy students.

Other rankings

The THE rankings are not the only game in town or in the world. In fact, for most purposes there are several rankings that are no worse and probably a lot better than THE. It would be a good idea for Australian universities, students, and stakeholders to shop around a bit.

For a serious analysis of research quantity and quality there are straightforward research rankings produced by universities or research centres, such as the Shanghai Ranking, the CWTS Leiden Ranking, University Ranking by Academic Performance, or the National Taiwan University ranking. They can be a bit boring since they do not change very much from year to year, but they are at least reasonably competent technically and they rely on data that is fairly objective and transparent.

For prospective graduate and professional students, the research-based rankings might be helpful since the quality of research is likely to have an effect, even if an unpredictable one, on the quality of postgraduate and professional instruction.

For undergraduate students there is not really much that is directly relevant to their needs. The QS employability rankings, the employer opinion survey in the QS world rankings, the Emerging/Trendence employability rankings, and the student quality section of the Center for World University Rankings tables, now based in the Emirates, can all provide some useful insights.

Next year?

It seems that THE has finally steeled itself to introduce a set of changes. The precise effect is unclear except that the world rankings look to be getting even more complex and even more burdensome for the underpaid drones toiling away to collect, process and transmit the data THE requires of its “customers”. It is not clear exactly how this will affect Australian universities.

No doubt Australian deans and rectors will be wondering what lies ahead of them in the 2024 rankings coming next year. But not to worry. THE is offering “bespoke” shadow rankings that will tell them how they would have done if the new methodology had been applied this year. 

Sunday, August 21, 2022

California in the Shanghai Rankings

Global rankings are often misleading and uninformative, especially those that have eccentric methodologies or are subject to systematic gaming. But if their indicators are objective and reliable over several years, they can tell us something about shifts in the international distribution of research excellence.

I would like to look at 20 years of the Shanghai Rankings from the first edition in 2003 to the most recent, published this week. The first thing that anyone notices is of course the remarkable rise of China -- not Asia in general -- and the relative decline of the USA. These rankings can also be used to find regional trends within nations. Take a look at California universities. In 2003 California was the research star of the US with six universities in the world top twenty. Two decades later that number has fallen to five with the University of California (UC) San Diego falling from 14th to 21st place.

That is symptomatic of a broader trend. UC Santa Barbara has fallen from 25th to 57th, the University of Southern California from 40th to 55th, and UC Riverside from 88th to the 201-300 band. 

American universities in nearly all the states have been falling and have, for the most part, been replaced by Chinese institutions. But even within the USA California has been drifting downwards. Caltech has gone from 3rd to 7th, UC San Francisco, a medical school, from 11th to 15th, and UC Davis from 27th to the 40-54 band.

This is not universal. Stanford is still second in the USA in 2022 while UC Los Angeles (UCLA) has risen from 13th to 11th.

But overall California is falling. Of the thirteen universities in the top 500 in 2003, nine had fallen in the US table by 2022, two (UC Santa Cruz and UCLA) rose, and two remained at the same rank. The decline is especially apparent in the Publications metric, which is based on recent articles in the Web of Science.

Recent events in California, including learning loss during the pandemic, the abandonment of standardised testing, and the imposition of political loyalty tests for faculty, suggest that the decline is not going to be halted or reversed any time soon.

Tuesday, July 19, 2022

What's the Matter with Harvard?

When the first global ranking was published by Shanghai Jiao Tong University back in 2003, the top place was taken by Harvard. It was the same for the rankings that followed in 2004, Webometrics and the THES-QS World University Rankings. Indeed, at that time any international ranking that did not put Harvard at the top would have been regarded as faulty.

Is Harvard Declining?

But since then Harvard has been dethroned by a few rankings. Now MIT leads in the QS world rankings, while Oxford is first in THE's and the Chinese Academy of Sciences in the Nature Index. Recently Caltech deposed Harvard at the top of the Round University Rankings, now published in Georgia.

It is difficult to get excited about Oxford leading Harvard in the THE rankings. A table that purports to show Macau University of Science and Technology as the world's most international university, Asia University Taiwan as the most innovative, and An Najah National University as the best for research impact need not be taken too seriously.

Losing out to MIT in the QS world rankings probably does not mean very much either. Harvard is at a serious disadvantage here for international students and international faculty.

Harvard and Leiden Ranking

On the other hand, the performance of Harvard in the CWTS Leiden Ranking, which is generally respected in the global research community, might tell us that something is going on. Take a look at the total number of publications for the period 2017-20 (using the default settings and parameters). There we can see Harvard at the top with 35,050 publications, followed by Zhejiang and Shanghai Jiao Tong Universities.

But it is rather different for publications in the broad subject fields. Harvard is still in the lead for Biomedical Sciences and for Social Sciences and Humanities. For Mathematics and Computer Science, however, the top twenty consists entirely of Mainland Chinese universities. The best non-Mainland institution is Nanyang Technological University in Singapore. Harvard is 128th.

You could argue that this is just a matter of quantity rather than quality. So, let's turn to another Leiden indicator, the percentage of publications in the top 10% most cited for Mathematics and Computer Science. Even here China is in the lead, although somewhat precariously. Changsha University of Science and Technology tops the table and Harvard is in fifth place.

The pattern for Physical Sciences and Engineering is similar. The top 19 for publications are Chinese, with the University of Tokyo in 20th place. However, for those in the top 10% Harvard still leads. It seems then that Harvard is still ahead for upmarket publications in physics and engineering, but a growing and substantial amount of research is done by China, a few other parts of Asia, and perhaps some American outposts of scientific excellence such as MIT and Caltech.

The Rise of China

The trend seems clear. China is heading towards industrial and scientific hegemony, and eventually Peking, Tsinghua, Fudan, Zhejiang, and a few others will, if nothing changes, surpass the Ivy League, the Group of Eight, and Oxbridge, although it will take longer for the more expensive and demanding fields of research. Perhaps the opportunity will be lost in the next few years if there is another proletarian cultural revolution in China or if Western universities change course.

What Happened to Harvard's Money?

It is standard to claim that the success or failure of universities depends on the amount of money they receive. The latest edition of the annual Nature Index tables was accompanied by headlines proclaiming that China's recent success in high-impact research was the result of a long-term investment program.

Money surely had a lot to do with it, but there needs to be a bit of caution here. The higher education establishment has a clear vested interest in getting as much money from the public purse as it can and is inclined to claim that any decline in the rankings is a result of hostility to higher education.

Tracing the causes of Harvard's decline, we should consult the latest edition of the Round University Rankings, which provides ranks for 20 indicators. In 2021 Harvard was first but this year it was second, replaced by Caltech. So what happened? Looking more closely we see that in 2021 Harvard was 2nd for financial sustainability and in 2022 it was 357th. That suggests a catastrophic financial collapse. So maybe there has been a financial disaster over at Harvard and the media simply have not noticed bankrupt professors jumping out of their offices, Nobel laureates hawking their medals, or mendicant students wandering the streets with tin cups.

Zooming in a bit, it seems that, if the data is accurate, there has been a terrible collapse in Harvard's financial fortunes. For institutional income per academic staff Harvard's rank has gone from 21st to 891st.

Exiting sarcasm mode for a moment, it is of course impossible that there has actually been such a catastrophic fall in income. I suspect that what we have here is something similar to what happened to Trinity College Dublin a few years ago, when someone forgot the last six zeros when filling out the form for the THE world rankings.

So let me borrow a flick knife from my good friend Occam and propose that what happened to Harvard in the Round University Rankings was simply that somebody left off the zeros at the end of the institutional income number when submitting data to Clarivate Analytics, who do the statistics for RUR. I expect next year the error will be corrected, perhaps without anybody admitting that anything was wrong.

So, there was no substantial reason why Harvard lost ground to Caltech in the Round Rankings this year. Still it does say something that such a mistake could occur and that nobody in the administration noticed or had the honesty to say anything. That is perhaps symptomatic of deeper problems within American academia. We can then expect the relative decline of Harvard and the rise of Chinese universities and a few others in Asia to continue.





Saturday, June 18, 2022

Is China really quitting the international rankings?

For some time, there have been signs that some of the leading higher education powers are disenchanted with global rankings, at least those based in the UK. Russia has wound up Project 5-100, which aimed to get five universities into the top 100 of selected rankings, and several of the highly regarded Indian Institutes of Technology have withdrawn from the THE world rankings. This seems to be part of a general withdrawal from global, or Western, standards and practices in higher education and research, the latest example of which is Russia leaving the Bologna process.

Recently University World News reported that three Chinese universities, Nanjing, Renmin University of China, and Lanzhou, would not participate in "all international rankings", which appears to mean the THE and QS rankings.

It is typical of the biases of the ranking world that it seems to be assumed that abandoning the QS and THE world rankings is equivalent to leaving international rankings altogether.  

In itself, the reported withdrawal by the three universities means little. None of them were in the world top 100. But it does seem that China is becoming more sceptical of the pretensions of the Western rankers. Most Chinese universities, for example, have ignored the THE Impact Rankings, although Fudan University did make an appearance in the most recent edition, getting first place for clean and affordable energy.

China may also have noticed that proposed changes by QS and THE could work to its disadvantage. QS says that next year it will introduce a new indicator into the world rankings, International Research Network, where Chinese institutions do not do very well. THE is considering a variety of changes whose impact is still not clear, perhaps not even to THE's data team, and which may have an unsettling effect on Asian universities.

It seems that the world's universities are beginning to diverge in several important ways, not just with regard to rankings. China, for example, is deemphasising publications in international journals. US and European institutions are increasingly concerned with social and political matters that are of limited interest in other parts of the world.

It seems that some countries are adopting a pragmatic approach to rankings, making use of them when convenient and ignoring them if necessary. One sign of this approach recently came from Shanghai, where the city is opening the hukou, a document that regulates access to education, health insurance, and housing, to graduates of universities at the top of one of four world rankings: Shanghai, QS, THE, and the US News Best Global Universities. The hukou will be available to those with degrees from universities in the top fifty who are in full-time employment, and after six months of employment for those with degrees from universities ranked 51-100.

This is part of an effort to restart the city's economy after recent lockdowns. It would be unsurprising if other Chinese cities and other countries adopted similar policies.


Sunday, May 08, 2022

Has China Really Stopped Rising? Looking at the QS Subject Rankings

For the last few years the publication of global rankings has led to headlines about the rise of Asia. If these were to be believed we would expect a few Asian universities to be orbiting above the stratosphere by now.

The Asian ascent was always somewhat exaggerated. It was true largely for China and perhaps Southeast Asia and the Gulf States. Japan, however, has been relatively stable or even declining a bit, and India so far has made little progress as far as research or innovation is concerned. Now, it seems that the Chinese research wave may be slowing down. The latest edition of the QS subject rankings suggests that the quality of Chinese research is levelling off and perhaps even declining.

A bit of explanation here. QS publishes rankings for broad subject fields such as Arts and Humanities and for narrow subject areas such as Archaeology. All tables include indicators for H-index, citations per paper, academic review, and employer review, with varying weightings. This year, QS has added a new indicator, International Research Network (IRN), with "broad" -- does that mean not unanimous? -- support from its Advisory Board, which is based on the number of international collaborators and their countries or territories. Chinese universities do much less well here than on the other indicators.

With QS, as with the other rankings, we should always be careful when there is any sort of methodological change. The first rule of ranking analysis is that any non-trivial change in rank is likely to be the result of methodological changes.

So let's take a look at the broad field tables. In Arts and Humanities the top Chinese university is Peking University, which fell seven places from 36th to 43rd between 2021 and 2022.

It was the same for other broad areas. In Engineering and Technology, Tsinghua fell from 10th to 14th, and in Natural Sciences from 15th to 23rd. (In this table Peking moved slightly ahead of Tsinghua into 21st place.) In Social Sciences and Management Peking went from 21st to 26th.

There was one exception. In Life Sciences and Medicine Peking rose from 62nd to 53rd, although its overall score remained the same at 79.

However, before assuming that this is evidence of Chinese decline we should note the possible impact of the new indicator, where Chinese institutions, including Peking and Tsinghua, do relatively poorly. In Life Sciences and Medicine every single one of the 22 Chinese universities listed does better for H-index and Citations per Paper than for IRN.

It looks as though the ostensible fall of Chinese universities is partly or largely due to QS's addition of the IRN metric.

Looking at Citations per Paper, which is a fairly good proxy for research quality, we find that for most subject areas the best Chinese universities have improved since last year. In Engineering and Technology Tsinghua has risen from 89.1 to 89.6. In Life Sciences and Medicine Peking has gone from 79.2 to 80.6 and in Social Sciences and Management from 89.7 to 90.7.

In Natural Sciences, Tsinghua had a citations score of 88.6. It fell this year but was surpassed by Peking with a score of 90.1.

If Citations per Paper are considered the arbiter of research excellence, then Chinese universities have been improving over the last year and the apparent decline in the broad subject areas is largely the result of the new indicator. One wonders if the QS management knew this was going to happen.

That is not the end of the discussion. There may well be areas where the Chinese advance is faltering or at least reaching a plateau and this might be revealed by a scrutiny of the narrow subject tables.



Monday, March 28, 2022

Where does reputation come from?

THE announced the latest edition of its reputation rankings last October. The amount of information is quite limited: scores are given for only the top fifty universities. But even that provides a few interesting insights.

First, there is really no point in providing separate data for teaching and research reputation. The correlation between the two for the top fifty is .99. This is unsurprising. THE surveys researchers who have published in Scopus indexed journals and so there is a very obvious halo effect. Respondents have no choice but to refer to their knowledge of research competence when trying to assess teaching performance. If THE are going to improve their current methodology they need to recognise that their reputation surveys are measuring the same thing. Maybe they could try to find another source of respondents for the teaching survey, such as school advisors, students or faculty at predominantly teaching institutions. 

Next, after plugging in a few indicators from other rankings, it is clear that the metrics most closely associated with teaching and research reputation are publications in Nature and Science (Shanghai), highly cited researchers (Shanghai), and papers in highly reputed journals (Leiden).

The correlation with scores in the RUR and QS reputation rankings, citations (THE and QS), and international faculty was modest.

There was no correlation at all with the proportion of papers with female or male authors (Leiden).

So it seems that the best way to acquire a reputation for good teaching and research is publish papers in the top journals and get lots of citations. That, of course, applies only to this very limited group of institutions.



Sunday, March 20, 2022

What should Rankers Do About the Ukraine Crisis?

Over the last few days there have been calls for the global rankers to boycott or delist Russian universities to protest the Russian invasion of Ukraine. There have also been demands that journals should reject submissions from Russian authors and universities and research bodies stop collaborating with Russian authors.

So far, four European ranking agencies have announced some sort of sanctions.

U-Multirank has announced that Russian universities will be suspended "until they again share in the core values of the European higher education area."

QS will not promote Russia as a study area and will pause business engagement. It will also redact Russian universities from new rankings.

Webometrics will "limit the value added information" for Russian and Belarusian universities.

Times Higher Education (THE) will stop business activities with Russia but will not remove Russian universities from its rankings. 

The crisis has highlighted a fundamental ambiguity in the nature of global rankings. Are they devices for promoting the business interests of institutions or do they provide relevant and useful information for researchers, students and the public?

Refraining from doing business with Russia until it withdraws from Ukraine is a welcome rebuke to the current government. If, however, rankings contain useful information about Russian scientific and research capabilities then that information should continue to be made available.



Sunday, September 26, 2021

What is a University Really For?

Louise Richardson, Vice-Chancellor of the University of Oxford, has seen fit to enlighten us about the true purpose of a university. It is, it seems, to inculcate appropriate deference to the class of certified experts.

Professor Richardson remarked at the latest Times Higher Education (THE) academic summit that she was embarrassed that "we" had educated the Conservative politician Michael Gove who said, while talking about Brexit, that people had had enough of experts.

So now we know what universities are really about. Not critical discussion, cutting-edge research, skepticism, or the disinterested pursuit of truth, but teaching respect for experts.

A few years ago I wrote a post suggesting we were now in a world where the expertise of the accredited experts was declining along with public deference. I referred to the failure of political scientists to predict the nomination of Trump, the election of Trump, the rise of Leicester City, the Brexit vote. It looks like respect for experts has continued to decline, not entirely without reason.

Professor Richardson thinks that Gove's disdain for the Brexit experts is cause for embarrassment. While it is still early days for the real effects of Brexit to become clear, it is as yet far from obvious that it has been an unmitigated disaster. It is, moreover, a little ironic that the remark was made at the latest THE academic summit, where the annual world rankings were announced. Richardson remarked that she was delighted that her university was once again ranked number one.

The irony is that the THE world rankings are probably the least expert of the global rankings although they are apparently the most prestigious at least among those institutions that are known for being prestigious.

Let's have another look at THE's Citations indicator, which is supposed to measure research quality or impact and accounts for nearly a third of the total weighting. (Regular readers of this blog can skim or skip the next few lines.) Here are the top five from this year's rankings.

1.   University of Cape Coast

2.   Duy Tan University

3.   An Najah National University

4.   Aswan University

5.   Brighton and Sussex Medical School.

This is not an academic version of the imaginary football league tables that nine-year-old children used to construct. Nor is it the result of massive cheating by the universities concerned. It is quite simply the outcome of a hopelessly flawed system. THE, or rather its data analysts, appear to be aware of the inadequacies of this indicator, but somehow meaningful reform keeps getting postponed. One day historians will search the THE archives to find the causes of this inability to take very simple and obvious measures to produce a sensible and credible ranking. I suspect that the people in control of THE policy are averse to anything that might involve any distraction from the priority of monetising as much data as possible. Nor is there any compelling reason for a rush to reform when universities like Oxford are unconcerned about the inadequacies of the current system.

Here are the top five for income from industry which is supposed to have something to do with innovation.

1.   Asia University Taiwan

2.   Istanbul Technical University

3.   Khalifa University

4.   Korean Advanced Institute of Science and Technology (KAIST)

5.   LMU Munich.

This is a bit better. It is not implausible that KAIST or Munich is a world leader for innovation. But in general, this indicator is also inadequate for any purpose other than providing fodder for publicity. See a scathing review by Alex Usher.

Would any tutor or examiner at Oxford give any credit to a student who thought that Ghana, Vietnam, and Palestine were centres of international research impact? These universities are doing a remarkable job of teaching in many respects, but that is not what THE is ostensibly giving them credit for.

In addition, the THE world rankings fail to meet satisfactory standards with regard to basic validity. Looking at the indicator scores for the top 200 universities in the most recent world rankings we can see that the correlation between research and teaching is 0.92. In effect these are not two distinct metrics. They are measuring essentially the same thing. A quick look at the methodology suggests that what they are comparing is income (total institutional income for teaching, research income for research), reputation (the opinion surveys for research and teaching) and investment in doctoral programmes.

On the other hand, the citations indicator does not correlate significantly with research or teaching and correlates negatively with industry income.

One can hardly blame THE for wanting to make as much money as possible. But surely we can expect something better from supposedly elite institutions that claim to value intellectual and scientific excellence. If Oxford and its peers wish to restore public confidence in the experts, there is no better way than telling THE that they will not submit data until it produces something a little less embarrassing.




Wednesday, August 25, 2021

THE World University Rankings: Indicator Correlations

I was going to wait until next week to do this but the publication of the latest edition of the THE world rankings is coming and there may be a new methodology.

The current THE methodology is based on five indicators or indicator groups: Teaching (5 indicators), Research (3 indicators), Citations, Income from Industry, International Outlook (3 indicators).

Looking at the analysis of 1526 cases (using PSPP), we can see that the correlation between Teaching and Research is very high, .89, and fairly good between those two and Citations. Teaching and Research both include surveys of teaching and research, which have been shown to yield very similar results. Also, Teaching includes Institutional Income and Research Income, which are likely to be closely related.

The Citations indicator has a moderate correlation with Teaching and Research, as noted, and also with International Outlook.

The correlations between Industry Income and Teaching and Research are moderate and those with Citations and International Outlook are low, .20 and .18 respectively. The Industry Income indicator is close to worthless since the definition of income is apparently interpreted in several different ways and may have little relation to financial reality. International Outlook correlates modestly with the other indicators except for Industry Income.

It seems there is little point in distinguishing between the Teaching and Research indicators since they are both influenced by income, reputation, and large doctoral programmes. The Industry Income indicator has little validity and will probably, with very good reason, be removed from the THE rankings.


CORRELATIONS

CORRELATION
/VARIABLES = teaching research citations industry international weightedtotal
/PRINT = TWOTAIL SIG.

Pearson correlations, all 1526 universities (every coefficient significant at p < .001):

                teaching  research  citations  industry  international  weightedtotal
teaching          1.00       .89       .51        .45        .38             .83
research           .89      1.00       .59        .53        .54             .90
citations          .51       .59      1.00        .20        .57             .87
industry           .45       .53       .20       1.00        .18             .42
international      .38       .54       .57        .18       1.00             .65
weightedtotal      .83       .90       .87        .42        .65            1.00


Most people are probably more concerned with distinctions among the world's elite or would-be elite universities. Turning to the top 200 of the THE rankings, the correlation between Teaching and Research is again very high, suggesting that these are measuring virtually the same thing.

The Citations indicator has a low correlation with International Outlook, a low and insignificant correlation with Teaching and Research, and a negative correlation with Industry Income.

Industry Income has low correlations with Research and Teaching and negative ones with Citations and International Outlook.

It would seem that THE world rankings are not helpful for evaluating the quality of the global elite. A new methodology will be most welcome.


CORRELATIONS

CORRELATION
/VARIABLES = teaching research citations industry international weightedtotal
/PRINT = TWOTAIL SIG.

Pearson correlations, top 200 universities (N = 200):

                teaching  research  citations  industry  international  weightedtotal
teaching          1.00       .90       .02        .23       -.11             .89
research           .90      1.00       .06        .28        .05             .92
citations          .02       .06      1.00       -.30        .22             .39
industry           .23       .28      -.30       1.00       -.10             .17
international     -.11       .05       .22       -.10       1.00             .17
weightedtotal      .89       .92       .39        .17        .17            1.00

Not significant at p < .05: teaching-citations, teaching-international, research-citations, research-international, and industry-international. All other coefficients are significant.





Monday, August 23, 2021

Shanghai Rankings: Correlations Between Indicators

This is, I hope, the first of a series. Maybe THE and QS next week.

If we want to compare the utility of university rankings, one attribute to consider is internal consistency. Here, the correlation between the various indicators can tell us a lot. If the correlation between a pair of indicators is 0.90 or above, we can assume that these indicators are essentially measuring the same thing.

On the other hand, if there is no correlation, or one that is low, insignificant, or even negative, we might have doubts about the validity of one or both of the indicators. It is reasonable to expect that if a university scores well for one metric it will do well for others, provided they both represent highly valued attributes. A university producing high quality research or collecting large numbers of citations should also score well for reputation. If it does not, there might be a methodological problem somewhere.

So, we can assume that if the indicators are valid and are not measuring the same thing, the correlation between indicators will probably be somewhere between 0.5 and 0.9.

Let's have a look at the Shanghai ARWU for 2019. The indicator scores were extracted and analysed using PSPP. (It is very difficult to analyse the 2020 edition because of a recent change in presentation.) These rankings have six indicators: alumni winning Nobel and Fields awards, staff winning those awards, papers in Nature and Science, highly cited researchers, publications in the Web of Science, and per capita productivity.

Looking at all 1000 institutions in the Shanghai Rankings, Alumni, Awards, and Nature and Science all correlate well with each other. Highly Cited Researchers correlates well with Nature and Science and Publications but less so with Alumni and Awards. Nature and Science correlates well with all the other indicators.

The Publications indicator does not correlate well with Alumni and Awards. This is to be expected since Publications refers to 2018 while the Alumni and Awards indicators go back several decades.

Overall, the correlations are quite good although there is a noticeable divergence between Publications and Alumni and Awards, which cover very different time periods. 

CORRELATIONS

CORRELATION
/VARIABLES = alumni awards highlycited naturescience publications pcp finaltotal
/PRINT = TWOTAIL NOSIG.

Pearson correlations, all 1000 universities (N = 1000, or 992 for pairs involving naturescience; every coefficient significant at p < .001):

               alumni  awards  highlycited  naturescience  publications   pcp  finaltotal
alumni           1.00     .78        .51          .72            .45      .63        .76
awards            .78    1.00        .57          .75            .44      .67        .82
highlycited       .51     .57       1.00          .79            .72      .64        .87
naturescience     .72     .75        .79         1.00            .69      .73        .93
publications      .45     .44        .72          .69           1.00      .50        .81
pcp               .63     .67        .64          .73            .50     1.00        .78
finaltotal        .76     .82        .87          .93            .81      .78       1.00



Most observers of ARWU and other global rankings are interested in the top levels, where elite schools and national flagships jostle for dominance. Analysing correlations among indicators for the top 200 in ARWU, there are high correlations between Alumni, Awards, Nature and Science, and Productivity per Capita, ranging from .62 to .79.

There is also a high correlation of .72 between Nature and Science and Highly Cited Researchers. It is, however, noticeable that the correlation between Publications and the other indicators is modest for Highly Cited Researchers and very low for Productivity per Capita, Alumni, and Awards.

It seems that, especially among the top 200 places, there is a big gap opening between the old traditional elite of Oxbridge, the Ivy League and the like who continue to get credit for long dead Nobel laureates and the new rising stars of Asia and Europe who are surging ahead for WOS papers and beginning to produce or recruit superstar researchers.




Pearson correlations, top 200 universities (N = 200, or 199 for pairs involving naturescience):

               alumni  awards  highlycited  naturescience  publications   pcp  finaltotal
alumni           1.00     .79        .36          .69            .21      .62        .78
awards            .79    1.00        .44          .74            .14      .67        .84
highlycited       .36     .44       1.00          .72            .57      .49        .78
naturescience     .69     .74        .72         1.00            .44      .65        .92
publications      .21     .14        .57          .44           1.00      .12        .55
pcp               .62     .67        .49          .65            .12     1.00        .72
finaltotal        .78     .84        .78          .92            .55      .72       1.00

All coefficients are significant at p < .05 except publications-pcp (p = .08).