Sunday, April 19, 2020

THE's WUR 3.0 is on the way

Alert to readers. Some of this post covers ground I have been over before. See here, here and here. I plead guilty to self-plagiarism.

Times Higher Education (THE) is talking about a 3.0 version of its World University Rankings to be announced at this year's academic summit in Toronto and implemented in 2021, a timetable that may not survive the current virus crisis. I will discuss what is wrong with the rankings, what THE could do, and what it might do.

The magazine has achieved an enviable position in the university rankings industry. Global rankings produced by reliable university researchers with sensible methodologies, such as the CWTS Leiden Ranking, University Ranking by Academic Performance (Middle East Technical University), and the National Taiwan University Rankings, are largely ignored by the media, celebrities and university administrators. In contrast, THE is almost always counted among the Big Four rankings (the others being QS, US News, and the Shanghai Ranking), the Big Three, or the Big Two, and is sometimes the only global ranking that is discussed.

The exalted status of THE is remarkable considering that it has many defects. It seems that the prestigious name -- there are still people who think that it is the Times newspaper or part of it -- and skillful public relations campaigns replete with events, workshops, gala dinners and networking lunches have eroded the common sense and critical capacity of the education media and the administrators of the Ivy League, the Russell Group and their imitators.

There are few things more indicative of the inadequacy of the current leadership of Western higher education than their toleration of a ranking that puts Aswan University top of the world for research impact by virtue of its participation in the Gates-funded Global Burden of Disease Study, and Anadolu University top for innovation because it reported its income from private online courses as research income from industry. Would they really accept that sort of thing from a master's thesis candidate? It is true that the "Sokal squared" hoax has shown that the capacity for critical thought has been seriously attenuated in the humanities and social sciences, but one would expect better from philosophers, physicists and engineers.

The THE world and regional rankings are distinctively flawed in several ways. First, a substantial amount of their data comes directly from institutions. Even if universities are 100% honest and transparent, the probability that data will flow smoothly and accurately from branch campuses, research centres and far-flung campuses through the committees tasked with data submission and on to the THE team is not very high.

THE has commissioned an audit by PricewaterhouseCoopers (PwC), but that seems to be about "testing the key controls to capture and handle data, and a full reperformance of the calculation of the rankings" and does not extend to checking the validity of the data before it enters the mysterious machinery of the rankings. PwC state that this is a "limited assurance engagement."

Second, THE is unique among the well-known rankings in bundling eleven of its thirteen indicators into three groups with composite scores. That drastically reduces the utility of the rankings, since it is impossible to figure out whether, for example, an improvement in research results from an increase in the number of published papers, an increase in research income, a decline in the number of research and academic staff, a better score for research reputation, or some combination of these. Individual universities can gain access to more detailed information but that is not necessarily helpful to students or other stakeholders.
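To see why bundling matters, here is a toy composite score; the component names and weights are invented for illustration and are not THE's actual ones.

```python
# Toy "research pillar": a weighted sum of already normalised
# component scores (names and weights invented for illustration).
def research_pillar(papers, income, reputation, staff):
    return 0.4 * reputation + 0.3 * papers + 0.2 * income + 0.1 * staff

# Two very different component profiles produce one identical
# published score, so an outside observer cannot tell what changed:
print(research_pillar(papers=80, income=40, reputation=50, staff=60))  # 58.0
print(research_pillar(papers=60, income=70, reputation=50, staff=60))  # 58.0
```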

Third, the THE rankings give a substantial weighting to various input metrics. One of these is income, which is measured by three separate indicators: total institutional income, research income, and research income from industry. Of the other world rankings, only the Russian Round University Rankings do this.

There is of course some relationship between funding and productivity, but it is far from absolute and universal. The Universitas 21 system rankings, for example, show that countries like Malaysia and Saudi Arabia have substantial resources but have so far achieved only a modest scientific output, while Ireland has done very well in maintaining output despite a limited and declining resource base.

The established universities of the world seem to be quite happy with these income indicators, which, whatever happens, are greatly to their advantage. If their overall score goes down, this can be plausibly attributed to a decline in funding and then used to demand money from national governments. At a time when austerity has threatened the well-being of many vulnerable groups, with more suffering to come in the next few months, it is arguable that universities are not those most deserving of state funding.

Fourth, another problem arises from THE counting doctoral students in two indicators. It is difficult to see how the number of doctoral students or degrees can in itself add to the quality of undergraduate or master's teaching, and this could act to the detriment of liberal arts colleges like Williams or Harvey Mudd, which have an impressive record of producing employable graduates.

These indicators may also have the perverse consequence of forcing people who would benefit from a master's or postgraduate diploma course into doctoral programs with high rates of non-completion.

Fifth, the two stand-alone indicators are very problematic. The industry income indicator purports to represent universities' contribution to innovation. An article by Alex Usher found that the indicator appeared to be based on very dubious data. See here for a reply by Phil Baty that is almost entirely tangential to the criticism. Even if the data were accurate, it is a big stretch to claim that this is a valid measure of a university's contribution to innovation.

The citations indicator, which is supposed to measure research impact, influence or quality, is a disaster. Or it should be: the defects of this metric seem to have passed unnoticed everywhere it matters.

The original sin of the citations indicator goes back to the early days of the THE rankings after the unpleasant divorce from QS. THE used data from the ISI database, as it was then known, and in return agreed to give prominence to an indicator that was almost the same as one offered by the InCites platform, then a big-selling product.

The indicator is assigned a weighting of 30%, which is much higher than that given to publications and higher than the weighting given to citations by QS, Shanghai, US News or RUR. In fact this understates its influence. THE applies a regional modification, or country bonus, that divides the impact score of a university by the square root of the impact score of the country where it is located. The effect is that the scores of universities in the top country remain unchanged while everybody else gets an increase, a big one for low-scoring countries and a smaller one for those scoring higher. Previously the bonus applied to the whole of the indicator but now it applies to half. Basically this means that universities are rewarded for being in a low-scoring country.
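As a rough sketch of the arithmetic, assuming scores run from 0 to 1 with the top country averaging 1 (the numbers are invented, not THE's):

```python
from math import sqrt

def blended_citation_score(univ_score, country_score):
    # Half the indicator is the raw field-normalised score; the other
    # half is the raw score divided by the square root of the country
    # average. Since country_score <= 1, the bonus can only help, and
    # it helps most where the country average is lowest.
    return 0.5 * univ_score + 0.5 * (univ_score / sqrt(country_score))

print(blended_citation_score(0.40, 1.00))  # top country: 0.40, unchanged
print(blended_citation_score(0.40, 0.25))  # low-scoring country: 0.60
```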

The reason originally given for this was that some countries lack the networking and funds to nurture citation-rich research. Apparently, such a problem has no relevance to the international indicators. This was in fact probably an ad hoc way of getting round the massive gap between the world's elite and other universities with regard to citations, a gap much bigger than for most other metrics.

The effect of this was to give a big advantage to mediocre universities surrounded by low-achieving peers. Combined with other defects, it has produced big distortions in the indicator.

This indicator is also over-normalised. Citation scores are based not on a simple count of citations but on a comparison with the world average of citations according to year of publication, type of publication, and academic field, of which there are over three hundred. A few years ago someone told THE that the absolute counting of citations was a mortal sin, and that seems to have become holy scripture. There is clearly a need to take account of disciplinary variations, such as the relative scarcity of citations in literary studies and philosophy and their proliferation in medical research and physics, but the finer the analysis gets, the more chance there is that outliers will exert a disproportionate effect on the impact score.
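A crude sketch of the mechanism, with invented numbers, shows how a single outlier can swamp a narrow normalisation cell:

```python
# Each paper's citations are compared with the world average for its
# cell (field x year x document type). With 300+ fields some cells are
# small, so one extreme paper can dominate a university's average.
def normalised_impact(citation_counts, world_average):
    scores = [c / world_average for c in citation_counts]
    return sum(scores) / len(scores)

# Three papers in a cell where the world average is 2 citations:
print(normalised_impact([0, 1, 200], 2.0))  # 33.5 times the world average
```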

Perhaps the biggest problem with the THE rankings is the failure to use full fractional counting of citations. There is a growing number of papers with scores, hundreds, occasionally thousands of "authors", in particle physics, medicine and genetics. Such papers often attract thousands of citations, partly because of their scientific importance and partly because many of their authors will find opportunities to cite themselves.

The result was that until 2014-15 a university with a modest contribution to a project like the Large Hadron Collider could get a massive score for citations, especially if its overall output of papers was not high and especially if it was located in a country where citations were generally low.

The 2014-15 THE world rankings included among the world's leaders for citations Tokyo Metropolitan University, Federico Santa Maria Technical University, Florida Institute of Technology and Bogazici University.

Then THE introduced some reforms. Papers with over a thousand authors were excluded from the citation count, the country bonus was halved, and the source of bibliometric data was switched from ISI to Scopus. This was disastrous for those universities that had over-invested in physics, especially in Turkey, South Korea and France.

The next year THE started counting the mega-papers again but introduced a modified form of fractional counting. Papers with a thousand-plus authors were counted according to each university's contribution to the paper, with a minimum of five per cent.
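A minimal sketch of the rule as I understand it, assuming a university's contribution is measured by its share of a paper's authors:

```python
def citation_credit(univ_authors, total_authors):
    # Fractional counting kicks in only above 1,000 authors, with a
    # floor of five per cent; a paper with, say, 800 authors still
    # gives every participating university full credit.
    if total_authors <= 1000:
        return 1.0
    return max(univ_authors / total_authors, 0.05)

print(citation_credit(2, 2500))  # mega-paper: floored at 0.05
print(citation_credit(2, 800))   # just under the threshold: 1.0
```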

The effect of these changes was to replace physics privilege with medicine privilege. Fractional counting did not apply to papers with hundreds of authors but fewer than a thousand, and so a new batch of improbable universities started getting near-perfect scores for citations and began to break into the top five hundred or thousand in the world. Last year these included Aswan University, the Indian Institute of Technology Ropar, the University of Peradeniya, Anglia Ruskin University, the University of Reykjavik, and the University of Occupational and Environmental Health, Japan.

They did so because of participation in the Global Burden of Disease Study combined with a modest overall output of papers and/or the good fortune to be located in a country with a low impact score.

There is something else about the indicator that should be noted. THE includes self-citations and on a couple of occasions has said that this does not make any significant difference. Perhaps not in the aggregate, but there have been occasions when self-citers have made a large difference to the scores of specific universities. In 2010 Alexandria University broke into the top 200 world universities by virtue of a self-citer and a few friends. In 2017 Veltech University was the third best university in India and the best in Asia for citations, all because of exactly one self-citing author. In 2018 the university had, for some reason, completely disappeared from the Asian rankings.

So here are some fairly obvious things that THE ought to do:
  • change the structure of the rankings to give more prominence to publications and less to citations
  • remove the income indicators or reduce their weighting
  • replace the income from industry indicator with a count of patents, preferably those granted rather than filed
  • in general, where possible, replace self-submitted data with third-party data
  • if postgraduate students are to be counted then count master's as well as doctoral students
  • get rid of the country bonus, which exaggerates the scores of mediocre or sub-mediocre institutions because they are in poorly performing countries
  • adopt a moderate form of normalisation with a dozen or a score of fields rather than the present 300+ 
  • use full-scale fractional counting 
  • do not count self-citations, or better still, do not count intra-institutional citations
  • do not count secondary affiliations, although that is something that is more the responsibility of publishers
  • introduce two or more measures of citations.

But what will THE actually do?

Duncan Ross, THE's data director, has published a few articles setting out some talking points (here, here, here, here).

He suggests that in the citations indicator THE should take the 75th percentile as the benchmark, rather than the mean, when calculating field impact scores. If I understand it correctly, this would reduce the extreme salience of outliers in this metric.
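A toy example, with invented citation counts, suggests why a percentile benchmark would be more robust than the mean:

```python
import statistics

# One field-year cell containing a single extreme outlier:
cites = [0, 1, 1, 2, 2, 3, 3, 4, 500]

mean_benchmark = statistics.mean(cites)              # about 57.3
q75_benchmark = statistics.quantiles(cites, n=4)[2]  # 3.5

# A typical paper with 3 citations looks negligible against the mean
# but respectable against the 75th percentile:
print(3 / mean_benchmark)  # about 0.05
print(3 / q75_benchmark)   # about 0.86
```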

It seems that a number of new citation measures are being considered, with the proportion of most-cited publications apparently getting the most favourable consideration. Unfortunately it seems that THE is not going any further with fractional counting, supposedly because it would discourage collaboration.

Ross mentions changing the weighting of the indicators but does not seem enthusiastic about this. He also discusses the importance of measuring cross-disciplinary research.

THE is also considering supplementing the doctoral student measures with the proportion of doctoral students who eventually graduate. They are thinking about replacing institutional income with "a more precise measure," perhaps spending on teaching and teaching-related activities. That would probably not be a good idea. I can think of all sorts of ways in which institutions could massage the data so that in the end it would be as questionable as the current industry income indicator.

It seems likely that patents will replace income from industry as the proxy for innovation.

So it appears that there will be some progress in reforming the THE world rankings. Whether it will be enough remains to be seen.


Tuesday, April 14, 2020

Who's doing research on Covid-19?

This is crude and simple. I typed in "Covid-19" with the fields Article title, Abstract, Keywords.
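For anyone wanting to reproduce the search: those three fields correspond to Scopus's combined title-abstract-keyword field (I am assuming Scopus was the database used). A sketch using the third-party pybliometrics package, which needs a configured Scopus API key:

```python
# Hypothetical reproduction of the search via the Scopus API.
from pybliometrics.scopus import ScopusSearch

s = ScopusSearch('TITLE-ABS-KEY("covid-19")')
print(s.get_results_size())  # number of matching documents
```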

The first and oldest item of 1,500 to appear was "The New Coronavirus, the Current King of China." by S A Plotkin in the Journal of the Pediatric Infectious Diseases Society. I wonder if there will be a sequel, "The Current King of the World."

The top universities for number of publications are:

1.   Tongji Medical College
2.   Huazhong University of Science and Technology
3.   Chinese Academy of Medical Sciences and Peking Union Medical College
4.   "School of Medicine"
5.   London School of Hygiene and Tropical Medicine
6.   Wuhan University
7.   Chinese Academy of Sciences
8.   Capital Medical University
9.   Fudan University
10. National University of Singapore.

The top funding agencies are:

1.   National Natural Sciences Foundation of China
2.   National Basic Research Program of China
3.   National Institutes of Health
4.   Fundamental Research Funds for the Central Universities 
5.   Wellcome Trust
6.   Chinese Academy of Sciences
7.   Canadian Institutes of Health Research
8.   National Science Foundation
9.   Agence Nationale de la Recherche
10.  Chinese Academy of Medical Research


Tuesday, March 10, 2020

University of California Riverside: Is it America's Fastest Rising University?




Not really.

It seems that one major function of rankings is to cover up the decline or stagnation of major western universities. An example is the University of California Riverside (UCR), a leading public institution. 

Recently there was a tweet and a web page extolling UCR as "America's fastest rising university" on the strength of its ascent in four different rankings: Forbes 80 places in two years, US News 33 places in two years, Times Higher Education (THE) 83 places in two years and the Center for World University Rankings (CWUR) 41 places in one year.

This surprised me a bit because I was under the impression that the University of California system had been declining in the research-based rankings and that UCR had not done so well in the THE world rankings. So I had a quick look. In 2019-20 UCR was in the 251-300 band in the THE world rankings. Two years before that it was 198th, and in 2010-11 it was 117th. I have little trust in THE, but that is evidence of a serious decline by any standards.

But it seems that the tweeter was thinking about the THE/Wall Street Journal US Teaching Rankings. In 2019-20 Riverside was 189th there, in 2018-19 212th, in 2017-18 272nd, and in 2016-17 it was 364th.

That is definitely a substantial rise, and it is rather impressive considering that it occurred while UCR was falling in THE's world rankings. Most of the rise occurred in the outcomes "pillar" and was probably a result of the introduction in 2018 of a new indicator measuring student debt.

UCR's rise had nothing to do with resources, environment or engagement. It is not possible to disentangle the components within THE's four pillars but it is a very plausible hypothesis that a good part of UCR's success is the result of a methodological change that introduced an element where this university was especially strong.

Another ranking where UCR does well is the Center for World University Rankings (CWUR), now published in the UAE. Last year UCR was 204th, in 2018-19 245th, and in 2017-18 218th.

The fall and rise of UCR over two years follows the weighting given to research. When the weighting was 25%, UCR's position was 218th. When the research weighting was increased to 70%, UCR fell to 245th. When it was cut to 50%, UCR rose to 204th. So here again UCR's rank is dependent on methodological tweaking: it does well when the weighting for research is reduced and less well when it is increased.
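A toy calculation with invented component scores shows the mechanism; a university's composite falls as the research weighting rises (its rank, of course, also depends on how everyone else's scores move):

```python
# Invented normalised scores for one research-weak, otherwise strong
# university (not UCR's actual data):
research, other = 0.45, 0.80

for w in (0.25, 0.70, 0.50):  # CWUR's successive research weightings
    total = w * research + (1 - w) * other
    print(f"research weight {w:.0%}: composite {total}")
# roughly 0.71 at 25%, 0.56 at 70%, 0.62 at 50%
```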

I assume that the US News (USN) rankings refer to America's Best Colleges, where UCR does very well. It was 91st last year among national universities and 1st for social mobility. 

The rise of UCR in these rankings is also the result of methodological changes. In 2019 USN began to shift away from academic excellence towards social mobility, which basically means admitting and graduating larger numbers of recognised and protected groups. The acceptance rate criterion has been scrapped, and metrics related to social mobility, such as the graduation rates of low-income students, have been introduced. UCR has not necessarily improved: what has happened is that the ranking has put more weight on things that it is good at and less on those where it performs less well.

The Forbes ranking also made changes, announced in 2017, intended to "better align this list with what FORBES values most: entrepreneurship, success, impact and the consumer experience." It is likely that these had a favourable impact on UCR's performance here as well.

When it comes to international research-based rankings over several years, the story is a different one of steady decline. Starting with the total publications indicator of the CWTS Leiden Ranking, UCR went from 270th in 2006-09 to 392nd in 2014-17. Much of this was due to the general decline of American universities, but even within the US group there was a fall from 88th to 93rd.

The decline is starker if we look at the most rigorous measure of quality, the proportion of publications among the top 1% most cited. In the same period UCR fell from 12th to 130th worldwide and from 11th to 59th in the USA.

Turning to the Scimago Institution Rankings, which include patents and altmetrics, Riverside fell from 151st in 2011 to 228th in 2019. Among US institutions it fell from 70th in 2011 to 85th in 2019.

That is the situation with regard to research-based rankings. Moving on to the rankings that include things related to teaching, UCR fell from 271st in the QS world rankings in 2017 to 454th in 2020. In the Round University Rankings it fell from 197th in 2010 to 231st in 2016 and then stopped participating from 2017 onward.

It seems fairly clear that UCR has been declining in indicators relating to research, especially research of the highest quality. It also performs poorly in those rankings that combine teaching with research and internationalisation metrics. Exactly why must wait for another post, but I strongly suspect that the underlying reason is the declining ability of incoming students and the retreat from meritocracy in graduate school admissions and faculty appointments and promotions.


Friday, February 28, 2020

Polish Universities in International Rankings

My short article on Polish universities and international rankings has just been published by Forum Akademickie. The article in Polish can be accessed here. Translation and editing by Piotr Kieracinski. The full journal issue is here.




Here is the English version.



Richard Holmes

Polish Universities and International Rankings

A Brief History of International University Rankings

After a false start with the Asiaweek rankings of 1999-2000, international university rankings took off in 2003 with the Academic Ranking of World Universities (ARWU), published by Shanghai Jiao Tong University and then by the Shanghai Ranking Consultancy.

In 2004 two new rankings appeared: the Ranking Web of Universities, better known as Webometrics, which originally measured only web activity, and the Times Higher Education Supplement (THES) – Quacquarelli Symonds (QS) World University Rankings, which emphasised research and also included faculty resources and internationalisation indicators.

Since then the number of rankings, metrics and data points has increased, prompting ranking consultant Daniel Guhr to talk about “vast data lakes”. Rankings have become more complex and sophisticated and often use statistical techniques such as standardisation and field normalisation.

In addition to global rankings, specialist rankings of regions, subjects, and business schools have appeared. International rankings continue to have a bias towards research but some try to find a way of capturing data that might be relevant to teaching and learning or to university third missions such as sustainability, gender equity and open access. They have also become significant in shaping national higher education policy and institutional strategy.

Although the media usually talk about the big four rankings or sometimes the big three or big two, there are now many more. The IREG Inventory of International Rankings includes 17 global rankings in addition to regional and specialised rankings and various spin-offs. Since the publication of the inventory more global rankings have appeared and no doubt there are more to come.

Media Perceptions of Rankings

It is unfortunate that the media and public perception of global rankings has little relation to reality. The Times Higher Education (THE) World University Rankings are by far the most prestigious but they have serious defects. They lack transparency with eleven indicators bundled into three broad groups. They rely on subjective surveys and questionable data submitted by institutions. They are unbalanced with a 30% weighting going towards a citation indicator that can be influenced by a handful of papers in a multi-author international project and that has elevated a succession of little-known places to world research leadership. These include the University of Reykjavik, Babol Noshirvani University of Technology, Aswan University, and Anglia Ruskin University.

The problems of the THE world rankings are illustrated by the fate of a leading Polish university in recent editions. In the 2014-15 rankings the University of Warsaw was ranked 301-350, but in 2015-16 it fell to 501-600. This was entirely the result of a dramatic fall in the score for citations, and that in turn was entirely the result of a methodological change. In 2015 THE stopped counting citations to papers with over a thousand “authors”. This badly affected the University of Warsaw, which, along with Warsaw University of Technology, had been contributing to the Large Hadron Collider project, a producer of many such papers. The University of Warsaw’s decline in the rankings had nothing to do with any failing of its own. It was simply the result of THE’s tweaking.

Although they receive little attention from the media there are now several global rankings published by universities and research councils that include more universities, cover a broader range of indicators and are technically as good as or better than the Big Four. These include the National Taiwan University Rankings, University Ranking by Academic Performance published by Middle East Technical University, the Scimago Institution Rankings and CWTS Leiden Ranking.

Polish Universities in Global Rankings

Turning to the current position of Polish universities in international rankings there is a great deal of variation. There are 14 in the THE rankings with 4 in the top 1000, but 410 in the Webometrics rankings of which 10 are in the top 1000. The ranking with the best representation of Polish universities is Scimago with 54 in the top 1000.

Of the “big four” rankings -- THE, QS, Shanghai, US News -- the best for analysing the current standing of the International Visibility Project (IntVP) universities is the US News Best Global Universities (BGU). THE and QS are unbalanced, with too much emphasis on a single indicator, citations and the academic survey respectively. The Shanghai rankings include Nobel and Fields awards, some of which are several decades old. It should be noted that BGU is an entirely research-based ranking.

The list below indicates the world rank of Polish universities in the latest US News BGU:

University of Warsaw 286
Jagiellonian University 343
Warsaw University of Technology 611
AGH University of Science and Technology 635
Adam Mickiewicz University 799
University of Wroclaw 833
Medical University of Wroclaw 926
Wroclaw University of Science and Technology 961
Nicolaus Copernicus University 984
Medical University of Gdansk 995
Medical University of Warsaw 1033
University of Silesia 1082
University of Gdansk 1096
University of Lodz 1119
Gdansk University of Technology 1148
Poznan University of Technology 1148
Lodz University of Technology 1194
Lodz Medical University 1203
Warsaw University of Life Sciences 1221
Pomeranian Medical University 1303  
Poznan University of Medical Sciences 1312
Silesian University of Technology 1351
Krakow University of Technology   1363
University of Warmia and Mazury 1363
Medical University of Silesia 1399
Medical University of Lublin 1414
University of Rzeszow 1430
Technical University Czestochowa 1445
Poznan University of Life Sciences 1457
Wroclaw University of Life and Environmental Sciences 1465
Agricultural University of Lublin. No overall rank. 214 for agriculture
Medical University of Bialystok. No overall rank. 680 for clinical medicine.

One thing that emerges from this list is that the Polish university system suffers from a serious handicap in this ranking and in others due to the existence of independent specialist universities of technology, business and medicine. Consolidation of small specialist institutions could bring about significant improvements as has recently happened in France.

There is also some variation in the rank of the best-performing Polish universities. In most rankings the top-scoring Polish university is in the 300s or 400s. There are, however, some exceptions. The default indicator in the Leiden Ranking, total publications, has Jagiellonian University at 247, GreenMetric has Adam Mickiewicz University at 160, and the Moscow Three Missions University Ranking puts the University of Warsaw at 113. On the other hand, no Polish university gets higher than the 600-800 band in the THE world rankings.

The various rankings have very different methodologies and indicators. THE, for example, includes income in three indicators. The QS rankings give a combined weighting of 50% to reputation surveys. Scimago counts patents, and the Center for World University Rankings (CWUR), now based in the Arab Gulf, counts the achievements of alumni. It would be a good idea to look carefully at the content and format of all the rankings before using them for evaluation or benchmarking.

Polish Universities: Strengths and Weaknesses

Poland has certain advantages with regard to international rankings. It has an excellent secondary school system, as shown by above-average performance in PISA and other international standardised tests. Current data indicates that it has adequate teaching resources, as shown by statistics for staff-student ratio. It has cultural and economic links to the East, with the Anglosphere and within the EU that are likely in the future to produce fruitful research partnerships and networks.

On the other hand, the evidence of current rankings is that Polish universities are relatively underfunded and that doctoral education is still relatively limited. Their international reputation for research is not very high, although their regional reputation is better.

One exception to the limited international visibility of Polish universities is a recent British film, Last Passenger, in which a hijacked train is saved by a few heroes, one of whom has an engineering degree from Gdansk University of Technology.

Poland and the Rankings

It would be unwise for Poland, or indeed any country, to focus on a single ranking. Some rankings have changed their methodology and will probably continue to do so and this might lead to unexpected rises or falls. THE has announced that there will be a new 3.0 version of the world rankings towards the end of this year.

Any university or university system wishing to engage with the rankings should be aware that they often favour certain types of institution. The Shanghai rankings, for example, privilege medical research and totally ignore the arts and humanities. Scimago includes data about patents, which gives technological universities an advantage. THE’s current methodology gives a massive privilege to participants in multi-contributor projects. QS uses several channels to obtain respondents for its academic and employer surveys, one of which is a list of potential respondents provided by the universities themselves, who are naturally inclined to nominate those likely to support them.

Finally, here are some guidelines for Polish universities as they seek to establish and extend their international presence.

First, improving international visibility will take time. Quick fixes such as recruiting highly cited adjunct faculty or taking part in high-profile projects may be counterproductive, especially if there is an unannounced change in methodology.

Second, before launching a campaign to rise in the global rankings some universities might consider regional or specialist rankings first, such as the THE Europe Teaching Rankings, the QS graduate employability ranking, the Indonesian GreenMetric rankings or business school rankings.

Third, universities should also consider the cost of taking part in the rankings. US News, QS and THE require universities to submit data, and this can be time-consuming, especially for THE, which asks for data in ten subjects. Many universities seem to need three or four staff dedicated to rankings.

Fourth, it would be wise to monitor all international rankings for data that can be used for internal evaluation or publicity.

Fifth, universities should match their profiles, strengths and weaknesses with the methodology of specific rankings. Universities with strengths in medical research might perform well in the Shanghai rankings.

Sixth, it is not a good idea to focus exclusively on any single ranking or to make any one ranking the standard of excellence.






