Saturday, September 28, 2019

My previous post has just been republished by University World News. Comments here are welcome.
Thursday, September 26, 2019
Trinity College Dublin: Time to forget about THE?
Global rankings, especially THE's, have been very useful to British universities, at least to those sitting at the apex of the system. If a Russell Group university falls in the rankings then it is the fault of impending Brexit and/or the terrible austerity inflicted on the nation's research prowess. If it rises then this is cause for congratulation but with a hint of foreboding. How can they keep advancing with Ebenezer Scrooge controlling the treasury and the bitter winds of Brexit howling at the door? The universities have reciprocated by not inquiring about how THE constructs its rankings, particularly the citations and industry income indicators.
Across the Irish Sea, universities have for the most part also been loyal to THE. Trinity College Dublin (TCD) has continued to submit data to THE and also to QS and has passively accepted the results of the rankings even when they show the institution going down and down. The steady decline is usually blamed on the meanness of the Irish government and its failure to provide sufficient funds.
I have dealt with Trinity's misfortunes here, here, here, and here. In 2015 TCD fell seven places in the QS world rankings and 22 in THE's. In contrast it had been rising in the Shanghai ARWU rankings since 2004 and in the Round University Rankings (RUR) since 2010, although everybody pretended not to notice this.
This year history repeats itself. TCD has fallen in the THE world rankings from 120th place to 164th. Again this is supposedly the fault of the Irish state's failure to provide enough money.
But we get a very different picture when we look at the Shanghai Rankings. TCD has risen from 167th place to 154th, getting close to the 101-150 band. Leaving aside the Nobel and Fields Awards, Trinity has gained 6.6 points for highly cited researchers, 1.4 for publications, and 1.4 for productivity per capita. It has, however, fallen 0.6 for papers in Nature and Science.
Looking at RUR, TCD has risen from 75th to 57th for research and from 35th to 29th for international diversity. It has fallen slightly for financial sustainability, from 191st to 197th, and more sharply for teaching, from 275th to 335th, mainly because of a fall in the number of academic staff.
It seems perverse for TCD to keep on about its decline in the THE rankings when it can point to a steady rise in the Shanghai rankings, which are not perfect but are certainly more stable, consistent and realistic than THE's.
Does TCD really want to be judged by rankings that apparently think that Anadolu University is best for Innovation, Luxembourg for International Orientation and Aswan for research impact measured by Citations?
But if TCD really insist on sticking with THE then I suggest that they recruit a few researchers taking part in the Global Burden of Disease Study, funded by the Bill and Melinda Gates Foundation.*
They should also think about amalgamating with the Royal College of Surgeons.
*assuming no methodological change
Tuesday, September 17, 2019
Going Up and Up Down Under: the Case of the University of Canberra
It is a fact almost universally ignored that when a university suddenly rises or falls many places in the global rankings the cause is not transformative leadership, inclusive excellence, teamwork, or strategic planning but nearly always a defect or a change in the rankers' methodology.
Let's take a look at the fortunes of the University of Canberra (UC), which the THE world rankings now place in the world's top 200 universities and Australia's top ten. This is a remarkable achievement since the university did not appear in these rankings until 2015-16, when it was placed in the 500-600 band with very modest scores of 18.4 for teaching, 19.3 for research, 29.8 for citations, which is supposed to measure research impact, 36.2 for industry income, and 54.6 for international outlook.
Just four years later the indicator scores are 25.2 for teaching, 31.1 for research, 99.2 for citations, 38.6 for industry income, and 86.9 for international outlook.
The increase in the overall score over four years, calculated with different weightings for the indicators, was composed of 20.8 points for citations and 6.3 for the other four indicators combined. Without those 20.8 points Canberra would be in the 601-800 band.
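As a rough check, here is where that 20.8 comes from, assuming the 30% weighting that THE gives the citations indicator (the weighting is taken from THE's published methodology, not from this post's figures):

```python
# Rough check of the citations contribution to UC's overall THE score,
# assuming the 30% weighting THE gives the citations indicator.
citations_2016 = 29.8          # UC's citations score in 2015-16
citations_2020 = 99.2          # UC's citations score four years later
CITATIONS_WEIGHT = 0.30        # assumed from THE's published methodology

contribution = CITATIONS_WEIGHT * (citations_2020 - citations_2016)
print(round(contribution, 1))  # 20.8
```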
I will look at where that massive citation score came from in a moment.
It seems that the Australian media is reporting on this superficially impressive performance with little or no scepticism and without noting how different it is from the other global rankings.
The university has issued a statement quoting vice-chancellor Professor Deep Saini as saying that the "result confirms the steady strengthening of the quality at the University of Canberra, thanks to the outstanding work of our research, teaching and professional staff" and that the "increase in citation impact is indicative of the quality of research undertaken at the university, coupled with a rapid growth in influence and reach, and has positioned the university as amongst the best in the world."
The Canberra Times reports that the vice-chancellor has said that part of the improvement was the result of a talent acquisition campaign while noting that many faculty were complaining about pressure and excessive workloads.
Leigh Sullivan, DVC for research and innovation, has a piece in the Campus Morning Mail that hints at reservations about UC's apparent success, which is "a direct result of its Research Foundation Plan (2013-2017)" and of "a strong emphasis on providing strategic support for research excellence in a few select research areas where UC has strong capability." He notes that even when the citation scores of research stars are excluded there has still been a significant increase in citations, and warns that what goes up can go down and that performance can be affected by changes in the ranking methodology.
The website riotact quotes the vice-chancellor on the improvement in research quality, as evidenced by the citation score, and as calling for more funding for universities: the "government has to really think and look hard at how well we support our universities. That's not to say it badly supports us, it's that the university sector deserves to be on the radar of our government as a major national asset."
The impressive ascent of UC is unique to THE. No serious ranking puts it in the top 200 or anywhere near. In the current Shanghai Rankings it is in the 601-700 band and has been falling for the last two years. In Webometrics it is 730th in the world and 947th for Excellence, that is publications in the 10% most cited in 25 disciplines. In University Ranking by Academic Performance it is 899th and in the CWUR Rankings it doesn't even make the top 1,000.
Round University Ranking and Leiden Ranking do not rank UC at all.
Apart from THE, UC does best in the QS rankings, where it is 484th in the world and 26th in Australia.
So how could UC perform so brilliantly in THE rankings when nobody else has recognised that brilliance? What does THE know that nobody else does? Actually, it does not perform brilliantly in the THE rankings, just in the citations indicator which is supposed to measure research influence or research impact.
This year UC has a score of 99.2 which puts it in the top twenty for citations just behind Nova Southeastern University in Florida and Cankaya University in Turkey and ahead of Harvard, Princeton and Oxford. The top university this year is Aswan University in Egypt replacing Babol Noshirvani University of Technology in Iran.
No, THE is not copying the interesting methodology of the Fortunate 500. This is the result of an absurd methodology that THE is unable or unwilling for some reason to change.
THE has a self-inflicted problem with a small number of papers that have hundreds or thousands of "authors" and collect thousands of citations. Some of these are from the CERN project, and THE has dealt with them by using a modified form of fractional counting for papers with more than a thousand authors. That has removed the privilege of institutions that contribute to CERN projects but has replaced it with the privilege of those that contribute to the Global Burden of Disease Study (GBDS), whose papers tend to have hundreds but not thousands of contributors and sometimes receive over a thousand citations. As a result, places like Tokyo Metropolitan University, National Research University MEPhI and Royal Holloway London have been replaced as citation superstars by St George's, London, Brighton and Sussex Medical School, and Oregon Health and Science University.
It would be a simple matter to apply fractional counting to all papers, dividing the number of citations by the number of authors. After all, Leiden Ranking and the Nature Index manage to do it, but THE for some reason has chosen not to follow suit.
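To make the difference concrete, here is a minimal sketch of the three counting schemes discussed above, using hypothetical papers; the 1,000-author threshold follows the description in the previous paragraph rather than THE's exact formula.

```python
# Three ways of crediting citations to one university, using hypothetical papers.
# Each tuple: (citations, total_authors, authors_at_this_university).
papers = [
    (40, 5, 1),       # an ordinary paper
    (2500, 800, 2),   # a GBDS-style paper: hundreds of authors, under the threshold
    (5000, 3000, 1),  # a CERN-style paper: thousands of authors, over the threshold
]

def whole_counting(papers):
    """Every citation credited in full to every contributing institution."""
    return sum(cites for cites, _, _ in papers)

def modified_counting(papers, threshold=1000):
    """Full credit unless a paper has more than `threshold` authors, in which
    case citations are shared among the authors (the approach described above)."""
    return sum(cites if authors <= threshold else cites * ours / authors
               for cites, authors, ours in papers)

def fractional_counting(papers):
    """Citations divided among all authors for every paper."""
    return sum(cites * ours / authors for cites, authors, ours in papers)

print(whole_counting(papers))       # 7540
print(modified_counting(papers))    # about 2541.7
print(fractional_counting(papers))  # about 15.9
```

On whole counting the two mega-author papers dominate the total; on full fractional counting they barely register.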
The problem is compounded by counting self-citations, by hyper-normalisation so that the chances of hitting the jackpot with an unusually highly cited paper are increased, and by the country bonus that boosts the scores for universities by virtue of their location in low scoring countries.
And so to UC's apparent success this year. This is entirely the result of its citations score, which is in turn entirely a product of THE's methodology.
Between 2014 and 2018 UC had 3,825 articles in the Scopus database, of which 27 were linked to the GBDS, which is funded by the Bill and Melinda Gates Foundation. Those 27 articles, each with hundreds of contributors, have received 18,431 citations, all of which are credited in full to UC and its contributors. The total number of citations is 53,929, so those 27 articles accounted for over a third of UC's citations. Their impact might be even greater if they were cited disproportionately soon after publication.
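A quick check of that share, using the figures above:

```python
# Share of UC's 2014-2018 citations coming from the 27 GBDS-linked articles.
gbds_citations = 18_431    # citations to the 27 GBDS-linked articles
total_citations = 53_929   # citations to all 3,825 UC articles
gbds_papers, total_papers = 27, 3_825

print(f"{gbds_citations / total_citations:.1%} of citations "
      f"from {gbds_papers / total_papers:.1%} of papers")
# 34.2% of citations from 0.7% of papers
```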
UC has of course improved its citation performance even without those articles but it is clear that they have made an outsize contribution. UC is not alone here. Many universities in the top 100 for citations in the THE world rankings owe their status to the GBDS: Anglia Ruskin, Reykjavik, Aswan, Indian Institute of Technology Ropar, the University of Peradeniya, Desarrollo, Pontifical Javeriana and so on.
There is absolutely nothing wrong with the GBDS nor with UC encouraging researchers to take part. The problem lies with THE and its reluctance to repair an indicator that produces serious distortions and is an embarrassment to those universities who apparently look to the THE rankings to validate their status.
Monday, September 16, 2019
What should universities do about organised cheating?
Every so often the world of higher education is swept by a big panic about systemic and widespread cheating. The latest instance is concern about contract cheating or essay mills that provide bespoke essays or papers for students.
It seems that the Australian government will introduce legislation to penalise the supply or advertising of cheating services to students. There are already laws in several American states and there have been calls for the UK to follow suit.
There is perhaps a bit of hypocrisy here. If universities in Europe, Australia and North America admit more and more students who lack the cognitive or language skills to do the required work, and if they choose to use assessment methods that are vulnerable to deception and dishonesty, such as unsupervised essays and group projects, then cheating is close to inevitable.
On the supply side there appear to be large numbers of people around the world, without decent academic jobs or jobs of any sort, who are capable of producing academic work of a high standard, sometimes worth an A grade or a first. The Internet has made it possible for lazy or incompetent students to link up with competent writers.
The Daily Mail has reported that Kenya hosts a medium sized industry with students and academics slaving away to churn out essays for British and American students. This is no doubt a hugely exploitative business but consider the consequences of shutting down the essay mills. Many educated Kenyans are going to suffer financially. Many students will drop out, resort to other forms of cheating, or will demand more support and counselling and transitional or foundation programmes.
If universities are serious about the scourge of essay mills they need to work on both the supply and the demand side. They might start by inviting the essay writers in Kenya to apply for scholarships for undergraduate or postgraduate courses or for posts in EAP departments.
On the demand side the solution seems to be simple. Stop admitting students because they show leadership ability, have overcome adversity, will make the department look like Britain, America or the world, or will help craft an interesting class, and start admitting them because they have demonstrated an ability to do the necessary work.
https://www.studyinternational.com/news/australia-essay-mills-contract-cheating-penalty-law/
Saturday, September 14, 2019
Are UK universities facing a terrible catastrophe?
A repeated theme of mainstream media reporting on university rankings (nearly always QS or THE) is that Brexit has inflicted, is inflicting, or is surely going to inflict great damage on British education and the universities because they will not get any research grants from the European Union or be able to network with their continental peers.
The latest of these dire warnings can be found in a recent edition of the Guardian, which is the voice of the British progressive establishment. Marja Makarow claims that Swiss science was forced "into exile" after the 2014 referendum on immigration controls. Following this, Switzerland supposedly entered a period of isolation without access to Horizon 2020 or grants from the European Research Council and with a declining reputation and a loss of international collaboration and networks. This will happen to British research and universities if Brexit is allowed to happen.
But has Swiss research suffered? A quick tour of some relevant rankings suggests that it has not. The European Research Ranking which measures research funding and networking in Europe has two Swiss universities in the top ten. The Universitas 21 systems rankings put Switzerland in third place for output, up from sixth in 2013, and first for connectivity.
The Leiden Ranking shows that EPF Lausanne and ETH Zurich have fallen for total publications between 2011-14 and 2014-17 but both have risen for publications in the top 10% of journals, a measure of research quality.
The Round University Rankings show that EPF and ETH have both improved for research since 2013 and both have improved their world research reputation.
So it looks as though Switzerland has not really suffered very much, if at all. Perhaps Brexit, if it ever happens, will turn out to be something less than the cataclysm that is feared, or hoped for.
Saturday, September 07, 2019
Finer and finer rankings prove anything you want
If you take a single metric from a single ranking and do a bit of slicing by country, region, subject, field and/or age there is a good chance that you can prove almost anything, for example that the University of the Philippines is a world beater for medical research. Here is another example from the Financial Times.
An article by John O'Hagan, Emeritus Professor at Trinity College Dublin, claims that German universities are doing well for research impact in the QS economics world rankings. Supposedly, "no German university appears in the top 50 economics departments in the world using the overall QS rankings. However, when just research impact is used, the picture changes dramatically, with three German universities, Bonn, Mannheim and Munich, in the top 50, all above Cambridge and Oxford on this ranking."
This is a response to Frederick Studemann's claim that German universities are about to move up the rankings. O'Hagan is saying that this is already happening.
I am not sure what this is about. I had a look at the most recent QS economics rankings and found that in fact Mannheim is in the top fifty overall for that subject. The QS subject rankings do not have a research impact indicator. They have academic reputation, citations per paper, and h-index, which might be considered proxies for research impact, but none of these places all three universities in the top fifty. Two of the three universities are in the top fifty for academic research reputation, one for citations per paper and two for h-index.
So it seems that the article isn't referring to the QS economics subject ranking. Maybe it is the overall ranking that professor O'Hagan is thinking of? There are no German universities in the overall top fifty there but there are also none in the citations per faculty indicator.
I will assume that the article is based on an actual ranking somewhere, maybe an earlier edition of the QS subject rankings or the THE world rankings or from one of the many spin-offs.
But it seems a stretch to talk about German universities moving up the rankings just because they did well in one metric in one of the 40 plus international rankings in one year.
Saturday, August 24, 2019
Seven modest suggestions for Times Higher
The latest fabulous dynamic exciting trusted prestigious sophisticated etc etc Times Higher Education (THE) world academic summit is coming.
The most interesting, or at least the most amusing, event will probably be the revelation of the citations indicator which supposedly measures research impact. Over the last few years this metric has discovered a series of unexpected world-class research universities: Alexandria University, Tokyo Metropolitan University, Anglia Ruskin University, the University of Reykjavik, St. George's London, Babol Noshirvani University of Technology, Brighton and Sussex Medical School. THE once called this their flagship indicator but oddly enough they don't seem to have got round to releasing it as a standalone ranking.
But looking at the big picture, THE doesn't appear to have suffered much, if at all, from the absurdity of the indicator. The great and the good of the academic world continue to swarm to THE summits where they bask in the glow of the charts and tables that confirm their superiority.
THE have hinted that this summit will see big reforms to the rankings, especially the citations indicator. That would certainly improve their credibility, although the rankings might become less interesting.
I have discussed THE's citation problems here, here, here, and here. So, for one last time I hope, here are the main flaws, and we will see whether THE will fix them.
1. A 30% weighting for any single indicator is far too high. It would be much better to reduce it to 10 or 20%.
2. Using only one method to measure citations is not a good idea. Take a look at the Leiden Ranking and play around with the settings and parameters. You will see that you can get very different results with just a bit of tweaking. It is necessary to use a variety of metrics to get a broad picture of research quality, impact and influence.
3. THE have a regional modification or country bonus that divides the research impact score of universities by the square root of the scores of the country where they are located. The effect of this is to increase the score of every university except those in the top-ranking country, with the increase being greater for those in countries with worse research records. This applies to half of the indicator and is supposed to compensate for some researchers lacking access to international networks. For some reason this was never a problem for the publications, income or international indicators. Removing the bonus would do a lot to make the metric more credible (see the sketch after this list).
4. The indicator is over-normalised. Impact scores are benchmarked against the world average for over three hundred fields plus year of publication. The more fields there are, the greater the chance that a university can benefit from an anomalous paper that receives an unusually high number of citations. It would help if THE reduced the number of fields, although that seems unlikely.
5. Unless a paper has over a thousand authors THE treat every single contributor as receiving every single citation. Above that number they use fractional counting. The result is that the THE rankings privilege medical institutions such as St George's and the Brighton and Sussex Medical School that take part in multi-author projects such as the Global Burden of Disease study. All round fractional counting would seem the obvious answer although it might add a bit to costs.
6. Self-citation has become an issue recently. THE have said several times that it doesn't make very much difference. That may be true but there have been occasions when a single serial self citer can make a university like Alexandria or Veltech soar into the research stratosphere and that could happen again.
7. A lot of researchers are adding additional affiliations to their names when they publish. Those secondary, tertiary, and sometimes further affiliations are counted by rankers as though they were primary affiliations. It would make sense to count only primary affiliations, as ARWU does with highly cited researchers.
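To illustrate point 3, here is a minimal sketch of the country bonus applied to half of the citations indicator, with hypothetical scores on a 0-1 scale; the 50/50 blend and the square-root division follow the description above, not THE's exact published formula.

```python
from math import sqrt

def blended_citation_score(university_score: float, country_score: float) -> float:
    """Half raw score, half raw score divided by the square root of the country's
    average score, as described in point 3 (a sketch, not THE's exact formula)."""
    adjusted = min(university_score / sqrt(country_score), 1.0)  # cap at 1.0 (an assumption)
    return 0.5 * university_score + 0.5 * adjusted

# Identical raw citation performance, very different countries (hypothetical values).
print(blended_citation_score(0.40, 0.25))  # low-scoring country: 0.60
print(blended_citation_score(0.40, 1.00))  # top-scoring country: 0.40
```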
Friday, August 23, 2019
Rankings are everywhere
This is from an Indian website. The QS World University Rankings are being used to sell real estate in Australia.
SYDNEY: Leading Australian developer Crown Group is developing six new residential apartment developments near six of the world's top 200 university cities, according to the new QS World University Rankings 2020.
Tuesday, August 13, 2019
University of the Philippines beats Oxford, Cambridge, Yale, Harvard, Tsinghua, Peking etc etc
Rankings can do some good sometimes. They can also do a lot of harm and that harm is multiplied when they are sliced more and more thinly to produce rankings by age, by size, by mission, by region, by indicator, by subject. When this happens minor defects in the overall rankings are amplified.
That would not be so bad if universities, political leaders and the media were to treat the tables and the graphs with a healthy scepticism. Unfortunately, they treat the rankings, especially THE, with obsequious deference as long as they are provided with occasional bits of publicity fodder.
Recently, the Philippines media have proclaimed that the University of the Philippines (UP) has beaten Harvard, Oxford and Stanford for health research citations. It was seventh in the THE Clinical, Pre-clinical and Health category behind Tokyo Metropolitan University, Auckland University of Technology, Metropolitan Autonomous University Mexico, Jordan University of Science and Technology, University of Canberra and Anglia Ruskin University.
The Inquirer is very helpful and provides an explanation from the Philippine Council for Health Research and Development that citation scores “indicate the number of times a research has been cited in other research outputs” and that the score "serves as an indicator of the impact or influence of a research project which other researchers use as reference from which they can build on succeeding breakthroughs or innovations.”
Fair enough, but how can UP, which has a miserable score of 13.4 for research in the same subject ranking, have such a massive research influence? How can it have an extremely low output of papers, a poor reputation for research, and very little funding and still be a world beater for research impact?
It is in fact nothing to do with UP, nothing to do with everyone working as a team, decisive leadership or recruiting international talent.
It is the result of a bizarre and ludicrous methodology. First, THE does not use fractional counting for papers with fewer than a thousand authors. UP, along with many other universities, has taken part in the Global Burden of Disease project funded by the Bill and Melinda Gates Foundation. This has produced a succession of papers, many of them in the Lancet, with hundreds of contributing institutions and researchers, whose names are all listed as authors, and hundreds or thousands of citations. As long as the number of authors does not reach 1,000, each author is counted as though he or she were the recipient of all the citations. So UP gets the credit for a massive number of citations, which is divided by a relatively small number of papers.
Why not just use fractional counting, dividing the citations among the contributors or the institutions, as Leiden Ranking does? Probably because it might add a little bit to costs, and perhaps because THE doesn't like to admit it made a mistake.
Then we have the country bonus or regional modification, applied to half the indicator, which increases the score for universities in countries with low impact.
The result of all this is that UP, surrounded by low scoring universities, not producing very much research but with a role in a citation rich mega project, gets a score for this indicator that puts it ahead of the Ivy League, the Group of Eight and the leading universities of East Asia.
If nobody took this seriously, then no great harm would be done. Unfortunately it seems that large numbers of academics, bureaucrats and journalists do take the THE rankings very seriously or pretend to do so in public.
And so committee addicts get bonuses and promotions, talented researchers spend their days in unending ranking-inspired transformational seminars, funds go to the mediocre and the sub mediocre, students and stakeholders base their careers on misleading data, and the problems of higher education are covered up or ignored.
Wednesday, August 07, 2019
The Decline of the University of California
Rankings have been justly criticised. Still, they do have their uses. They can identify which institutions and systems are ailing and which are in good health.
California was once the incarnation of the American dream with high wages, cheap housing, free or cheap education right through to college or even law, medical or graduate school. The University of California (UC) system was hailed as the peak of modern public research university education.
Now the state seems to have entered a period of decay. On any measure of literacy or education California falls near the bottom of the USA. So how has this affected the public university system?
The question is now more urgent since UC Berkeley has just admitted to sending false data to the US News America's Best Colleges. Apparently the university had inflated the funds donated by alumni, a metric that accounts for five per cent of those rankings. Berkeley is now cast into the dark regions of higher education full of unranked places. It will be interesting to see what happens to applications and donations over the next couple of years.
Questions about the performance of universities are often answered by reference to the Big Two international rankings, QS and Times Higher Education (THE). That might not be a good idea. These rankings are unbalanced, with indicators that carry an excessive weighting: THE's citations indicator and QS's academic survey. They are sometimes volatile and can produce results that can be charitably described as unusual and counter-intuitive. They also rely on surveys and on data submitted directly by institutions, neither of which is very reliable. Both, however, suggest that UC has been slowly and steadily declining, since 2011 in the case of THE and since 2017 in the case of QS.
To get an accurate picture of research quality it would be better to look at the Leiden Ranking, which has a consistent and transparent methodology. This too shows that there has been a steady decline in the relative global performance of the UC system.
First, compare performance for publications, the default indicator, in 2006-9 and 2014-17. Every single campus of UC, except for Merced which is not ranked, has fallen: UCLA from 6th to 23rd, Berkeley from 23rd to 43rd, San Diego from 25th to 37th, Davis from 26th to 54th, San Francisco from 39th to 63rd, Irvine from 88th to 146th, Santa Barbara from 168th to 28th, Riverside from 270th to 391st, Santa Cruz from 436th to 620th.
If we want to talk about quality we might look at another indicator, the number of papers in the top 1%.
Again we have a pattern of decline, although not so steep: the smallest fall is San Francisco's, from 15th to 16th, and the largest Riverside's, from 84th to 252nd. There is one case of an improvement: San Diego goes from 25th to 19th.
So far we have just looked at research. There is no global ranking that makes any serious attempt to assess teaching and learning. There are some that have indicators that might have something to do with student ability or graduate quality. The most useful of these are the Russia-based Round University Rankings, which use data from Clarivate Analytics and include 20 indicators in four groups: teaching, research, international diversity and financial sustainability.
Berkeley was ranked 52nd overall in the world and 32nd in the US in 2011. It was 197th for teaching, 5th for research, 133rd for international diversity, and 8th for financial sustainability.
In 2019 it was still 52nd overall in the world and 27th in the USA. It had risen for teaching to 168th and for international diversity to 97th but had fallen for research to 20th and to 38th for financial sustainability.
Going into detail we see that Berkeley is improving for numbers of academic staff per student and world teaching reputation but falling for everything related to research.
Berkeley is the best of the UC campuses. All of the others except for the unranked Merced have fallen and all except for Davis have fallen for research.
It seems that UC is approaching the edge of the cliff. It is likely that the fall will get faster as Chinese students stop coming. It is difficult to see how the financial situation can get better in the foreseeable future or where the next generation of college-ready students is going to come from. No doubt there are more headlines and scandals to come.
Saturday, July 13, 2019
Singapore and the Rankings Again
As far as rankings are concerned, Singapore has been a great success story, at least for the Big Two (THE and QS).
In the latest QS world rankings the two major Singaporean universities, the National University of Singapore (NUS) and Nanyang Technological University (NTU), have done extremely well, both of them reaching the eleventh spot. Predictably, the mainstream local media have praised the universities and quoted QS spokespersons and university representatives about how the results are due to hiring talented faculty and the superiority of the national secondary education system.
There is some scepticism in Singapore about the rankings. The finance magazine Dollars and Sense has just published an article by Sim Kang Heong that questions the latest performance by Singapore's universities, including the relatively poor showing by Singapore Management University.
The author is aware of the existence of other rankings but names only two (ARWU and THE) and then presents a list of indicators from all the QS rankings including regional and specialist tables as though they were part of the World University Rankings.
The piece argues that "it doesn't take someone with a PhD to see some of the glaring biases and flaws in the current way QS does its global university rankings."
It is helpful that someone is taking a sceptical view of Singapore's QS ranking performance but disappointing that there is no specific reference to how NUS and NTU fared in other rankings.
Last February I published a post showing the ranks of these two institutions in the global rankings. Here they are again:
THE: 23rd and 51st
Shanghai ARWU: 85th and 96th
RUR: 50th and 73rd
Leiden (publications): 34th and 66th
These are definitely top 100 universities by any standard. Clearly though, the QS rankings rate them much more highly than anybody else.
Thursday, July 11, 2019
India and the QS rankings
The impact of university rankings is mixed. They have, for example, often had very negative consequences for faculty, especially junior, who in many places have been coerced into attending pointless seminars and workshops and churning out unread papers or book chapters in order to reach arbitrary and unrealistic targets or performance indicators.
But sometimes they have their uses. They have shown the weakness of several university systems. In particular, the global rankings have demonstrated convincingly that Indian higher education consists of a few islands of excellence in a sea of sub-mediocrity. The contrast with China, where many universities are now counted as world class, is stark and it is unlikely that it can be fixed with a few waves of the policy wand or by spraying cash around.
The response of academic and political leaders is not encouraging. There have been moves to give universities more autonomy, to increase funding, to engage with the rankings. But there is little sign that India is ready to acknowledge the underlying problems of the absence of a serious research culture or a secondary school system that seems unable to prepare students for tertiary education.
Indian educational and political leaders have lately become very concerned about the international standing of the country's universities. Unfortunately, their understanding of how the rankings actually work seems limited. This is not unusual. The qualities needed to climb the slippery ladder of academic politics are not those of a successful researcher or someone able to analyse the opportunities and the flaws of global rankings.
Recently there was a meeting of the Indian Minister for Human Resource Development (HRD) with the heads of the Indian Institutes of Technology Bombay and Delhi and the Indian Institute of Science Bangalore.
According to a local media report, officials have said that the reputation indicators in the QS international rankings contribute to Indian universities' poor ranking performance as they are "an area where the Indian universities lose out the maximum number of marks - due to the absence of Indian representation at QS panel."
The IIT Bombay director is quoted as saying "there are not enough participants in the UK or the US to rate Indian universities."
This shows ignorance of QS's methodology. QS now collects responses through several channels, including lists submitted by universities and a facility where individual researchers and employers can sign up to join the survey. In 2019, out of 83,877 academic survey responses collected over five years, 2.6% (roughly 2,200 responses) were from academics with an Indian affiliation, which is less than Russia, South Korea, Australia or Malaysia but more than China or Germany. This does not include responses from Indian academics at British, North American or Australian institutions. A similar proportion of responses to the QS employer survey were from India.
If there are not enough Indian participants in the QS survey then this might well be the fault of Indian universities themselves. QS allows universities to nominate up to 400 potential survey participants. I do not know if they have taken full advantage of this or whether those nominated have actually voted for Indian institutions.
It is possible that India could do better in the rankings by increasing its participation in the QS surveys to the level of Malaysia, but it is totally inaccurate to suggest that there are no Indians in the current QS surveys.
If Indian universities are going to rise in the rankings then they need to start by understanding how they actually work and creating informed and realistic strategies.
But sometimes they have their uses. They have shown the weakness of several university systems. In particular, the global rankings have demonstrated convincingly that Indian higher education consists of a few islands of excellence in a sea of sub-mediocrity. The contrast with China, where many universities are now counted as world class, is stark and it is unlikely that it can be fixed with a few waves of the policy wand or by spraying cash around.
The response of academic and political leaders is not encouraging. There have been moves to give universities more autonomy, to increase funding, to engage with the rankings. But there is little sign that India is ready to acknowledge the underlying problems of the absence of a serious research culture or a secondary school system that seems unable to prepare students for tertiary education.
Indian educational and political leaders have lately become very concerned about the international standing of the country's universities. Unfortunately, their understanding of how the rankings actually work seems limited. This is not unusual. The qualities needed to climb the slippery ladder of academic politics are not those of a successful researcher or someone able to analyse the opportunities and the flaws of global rankings.
Recently there was a meeting of the Indian minister for Human Resource Development (HRD) plus the heads of the Indian Institutes of Technology Bombay and Delhi and Indian Institute of Science Bangalore.
According to a local media report, officials have said that the reputation indicators in the QS international rankings contribute to Indian universities poor ranking performance as they are "an area where the Indian universities lose out the maximum number of marks - due to the absence of Indian representation at QS panel."
The IIT Bombay director is quoted as saying "there are not enough participants in the UK or the US to rate Indian universities."
This shows ignorance of QS's methodology. QS now collects response from several channels including lists submitted by universities and a facility where individual researchers and employers can sign up to join the survey. In 2019 out of 83,877 academic survey responses collected over five years, 2.6% were from academics with an Indian affiliation, which is less than Russia, South Korea, Australia or Malaysia but more than China or Germany. This does not include responses from Indian academics at British, North American or Australian institutions. A similar proportion of responses to the QS employer survey were from India.
If there are not enough Indian participants in the QS survey then this might well be the fault of Indian universities themselves. QS allows universities to nominate up to 400 potential survey participants. I do not know if they have taken full advantage of this or whether those nominated have actually voted for Indian institutions.
It is possible that India could do better in the rankings by increasing its participation in the QS surveys to the level of Malaysia, but it is totally inaccurate to suggest that there are no Indians in the current QS surveys.
If Indian universities are going to rise in the rankings then they need to start by understanding how they actually work and creating informed and realistic strategies.
Thursday, July 04, 2019
Comparing National Rankings: USA and China
America's Best Colleges
The US News America's Best Colleges (ABC) is very much the Grand Old Man of university rankings. Its chief data analyst has been described as the most powerful man in America although that is perhaps a bit exaggerated. These rankings have had a major role in defining excellence in American higher education and they may have contributed to US intellectual and scientific dominance in the last two decades of the twentieth century.
But they are changing. This year's edition has introduced two new measures of "social mobility", namely the number of Pell Grant (low income) students and the comparative performance of those students. There is suspicion that this is an attempt to reward universities for the recruitment and graduation of certain favoured groups, including African Americans and Hispanics, and perhaps recent immigrants from the Global South. Income is used as a proxy for race since current affirmative action policies at Harvard and other places are under legal attack.
It should be noted that success is defined as graduation within a six-year period, and that is something that can easily be achieved by extra tuition, lots of collaborative projects, credit for classroom discussions and for effort and persistence, holding instructors responsible for student failure, innovative methods of assessment, contextualised grading and so on.
The new ABC has given the Pell Grant metrics a 5% weighting and has also increased the weighting for graduation rate performance, which looks at actual student outcomes compared to those predicted from their social and academic attributes, from 7.5% to 8%. So now a total of 13% in effect goes to social engineering. A good chunk of the ranking, then, is based on the dubious proposition that universities can and should reduce or eliminate the achievement gap between various groups.
To make room for these metrics the acceptance rate indicator has been eliminated, and the weightings for standardised test scores, high school rank, counsellor reviews and six year graduation rate have been reduced.
Getting rid of the acceptance rate metric is probably not a bad idea since it had the unfortunate effect of encouraging universities to maximise the number of rejected applications, which produced income for the universities but imposed a financial burden on applicants.
The rankings now assign nearly a one-third weighting to student quality: 22% for graduation and retention rates and 10% for standardised test scores and high school rank.
It seems that US News is moving from ranking universities by the academic ability of their students to ranking based on the number and relative success of low income and "minority" students.
The latest ranking shows the effect of these changes. The very top is little changed but further down there are significant shifts. William and Mary is down. Howard University, a predominantly African American institution, is up as are the campuses of the University of California system.
ABC also has another 30% for resources (faculty 20% and financial 10%), 20% for reputation (15% peer and 5% high school counsellors), and 5% for alumni donations.
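Putting those figures together, here is a minimal sketch of the ABC weighting scheme as I read it from the numbers above; the labels and groupings are mine and approximate, not US News's official categories.

```python
# Approximate US News ABC weights as described above (my reading; illustrative only).
abc_weights = {
    "Pell Grant (social mobility) metrics": 5,
    "graduation rate performance": 8,
    "graduation and retention rates": 22,
    "standardised test scores and high school rank": 10,
    "faculty resources": 20,
    "financial resources": 10,
    "reputation (peer and high school counsellors)": 20,
    "alumni donations": 5,
}

# The components listed above account for the whole ranking.
print(sum(abc_weights.values()))  # 100
```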
Shanghai Best Chinese University Rankings
The Shanghai Best Chinese University Ranking (BCUR) is a recent initiative although ShanghaiRanking has been doing global rankings since 2003. They are quite different from the US News rankings.
For student outcomes Shanghai assigns a weighting of 10% to graduate employment and does not bother with graduation rates. As noted, ABC gives 22% for student outcomes (six year graduation rate and first year retention rate).
Shanghai gives a 30% weighting for the dreaded Gaokao, the national university entrance exam, compared to 10% for high school class rank and SAT/ACT scores in ABC.
With regard to inputs, Shanghai allocates just 5% for alumni donations, compared to 30% in the ABC for class size, faculty salary, faculty highest degrees, student faculty ratio, full time faculty and financial resources.
That 5% is the only thing in Shanghai that might be relevant to reputation while ABC has a full 20% for reputation among peers and counsellors.
Shanghai also has a 40% allocation for research, 10% for "social service", which comprises research income from industry and income from technology transfer, and 5% for international students. ABC has no equivalent to these, although it publishes separate rankings of postgraduate programmes.
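For comparison, a similar sketch of the BCUR weights as described above; again the groupings are approximate and based only on the figures quoted here.

```python
# Approximate Shanghai BCUR weights as described above (illustrative only).
bcur_weights = {
    "Gaokao entrance scores": 30,
    "graduate employment": 10,
    "alumni donations": 5,
    "research": 40,
    "social service (industry and technology transfer income)": 10,
    "international students": 5,
}

# Student ability and research dominate; reputation is almost absent.
print(sum(bcur_weights.values()))  # 100
```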
To compare the two, ABC is heavy on inputs, student graduation and retention, reputation, and social engineering. Probably the last of these will become more important over the next few years.
BCUR, in contrast, emphasises student ability as measured by a famously rigorous entrance exam, student employment, research, links with industry, and internationalisation.
It seems that in the coming years excellence in higher education will be defined very differently. An elite US university will be one that is well endowed with money and human resources, makes sure that most of its students graduate one way or another, ensures that the ethnic and gender composition of the faculty and student body matches that of America or the world, and has a good reputation among peers and the media.
An elite Chinese university will be one that produces employed and employable graduates, admits students with high levels of academic skills, has close ties with industry, and has a faculty that produces a high volume of excellent research.
Sunday, June 30, 2019
The Influence of rankings revisited
Rankings are everywhere. Like a cleverly constructed virus they are all over the place and are almost impossible to delete. They are used for immigration policy, advertising, promotion, and recruitment. Here is the latest example.
A tweet from Eduardo Urias noted by Stephen Curry reported that an advertisement for an assistant professorship at Maastricht University included the requirement that candidates "should clearly state the (THE, QS, of FT business school) ranking of the university of their highest degree."
The sentence has since been removed, but one wonders why the relevant committee at Maastricht could not be trusted to look up the university ranks by themselves, and why they should ask about those specific rankings, which might not be the most relevant or accurate. Maastricht is a very good university, especially for the social sciences (I knew that anyway, and I checked with Leiden Ranking), so why should it need to take rankings into account instead of looking at the applicants' graduate school records and publications?
Even though that sentence was removed, this one remains.
"Maastricht University is currently ranked fifth in the top of Young Universities under 50 years."
Monday, June 17, 2019
Are Malaysian Universities Going Backwards?
International university rankings have become very popular in Malaysia, perhaps obsessively so. There is also a lot of commentary in the media, usually not very well informed.
Are Malaysian universities going backwards?
Murray Hunter writing in Eurasia Review thinks so. His claim is supported entirely by their relatively poor performance in the Times Higher Education (THE) world and Asian university rankings.
(By the way, Hunter refers to "THES", but the name was changed to Times Higher Education several years ago.)
Hunter apparently is one of those who are unaware of the variety and complexity of the current international university ranking scene. The IREG international inventory lists 45 rankings and is already in need of updating. Many of these cover more institutions than THE, some are much more technically competent, and some include more indicators.
THE's is not the only ranking available and it is not very helpful for any institution seeking to make genuine improvements. It bundles eleven indicators into groups so that it is very difficult to work out exactly what contributed to a deterioration or an improvement. The two metrics that stand alone have produced some amusing but questionable results: Babol Noshirvani University of Technology first for research impact, Anadolu University first for industry income.
It really is no disgrace to do badly in these rankings.
Hunter's article is a mirror image of the excitement in the Malaysian media about the rise of Malaysian universities in the QS rankings, a rise that seems to be largely the result of massive Malaysian participation in the QS academic survey, an indicator with a disproportionate 40% weighting.
Malaysian universities have been celebrating their rise in the QS world rankings for some time. That is perhaps a bit more reasonable than getting excited about the THE rankings but still not very helpful.
We need to use a broad range of rankings. For a start, take a look at the Leiden Ranking for quantity and quality of research. For total publications Universiti Malaya (UM) has risen from 509th place in 2006-09 to 112th in 2014-17.
For the percentage of publications in the top 1% of journals, the most selective indicator, its rank has risen from 824th in 2006-2009 to 221st in 2014-17.
Turning to the Moscow based Round University Rankings for a more general assessment, we find that UM has risen from 269th in 2016 to 156th in 2019 (76th for teaching).
Malaysian universities, at least the best known ones, are making significant and substantial progress in stable and reliable global rankings.
At the end of the article Hunter says that "(t)he fact that Universiti Tunku Abdul Rahman (UTAR) has run into second best Malaysian University in less than 20 years of existence as a university is telling about the plight of Malaysian public universities."
Actually, it says nothing except that THE has a flawed methodology for counting citations. UTAR's performance in the THE rankings is the result of one talented researcher working for the Global Burden of Disease project, limited research output, a bonus for location in a country with a modest impact score and a refusal to use fractional counting.
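A minimal sketch, with entirely hypothetical numbers, of why one mega-cited paper plus a small overall output can dominate an average-based citations score when citations are not split among the hundreds of contributing authors:

```python
# Hypothetical illustration of an average citation-impact score with and without
# fractional counting. Scores are field-normalised, with 1.0 = world average.
ordinary_papers = [0.8] * 99        # 99 routine papers, slightly below world average
mega_paper_full = 60.0              # one mega-project paper, full credit for all citations
mega_paper_fractional = 60.0 / 500  # the same paper if credit were split among ~500 authors

full_counting = (sum(ordinary_papers) + mega_paper_full) / 100
fractional_counting = (sum(ordinary_papers) + mega_paper_fractional) / 100

print(f"average impact, full counting:       {full_counting:.2f}")        # about 1.39
print(f"average impact, fractional counting: {fractional_counting:.2f}")  # about 0.79
```

With full counting the single mega-project paper lifts the whole institution well above the world average; with fractional counting it would barely register.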
Tuesday, May 14, 2019
Bangladeshi Universities Should Forget about Their Websites
Bangladesh has a lot of universities but none of them have succeeded in getting into the list of 417 universities included in the THE Asian University Rankings. The country's performance is worse than anywhere else in South Asia. Even Nepal and Sri Lanka have managed one university each.
Once again, it seems that the media and university administrators have been persuaded that there is only one set of international rankings, those produced by Times Higher Education (THE), and that the many others, which are documented in the IREG Inventory of International Rankings, do not exist.
The response of university bureaucrats shows a lack of awareness of current university rankings and their methodologies. Dhaka University's head claimed, in an interview with the Dhaka Tribune, that if the university had provided the necessary information on its website it would be in a "prestigious position". He apparently went on to say that the problem was that the website was not up to date and that a dean has been assigned to discuss the matter.
THE does not use data from university websites. It collects and processes information submitted by institutions, bibliometric data from the Scopus database and responses to surveys. It makes no difference to THE or other rankers whether the website was updated yesterday or a decade ago.
The Vice Chancellor of Shahjalal University of Science and Technology spoke about research papers not being published on websites or noted in annual reports. Again, this makes no difference to THE or anyone else.
He was, however, correct to note that bureaucratic restrictions on the admission of foreign students would reduce the scores in those rankings that count international students as an indicator.
Universities in Bangladesh need to do some background research into the current ranking scene before they attempt to get ranked. They should be aware of the rapidly growing number of rankings. THE is not the only international ranking and it is probably unsuitable for universities in countries like Bangladesh that do not have very much income or established reputations and are unable to participate in citation-rich global projects.
They should look at rankings with a more appropriate methodology. Dhaka University, for example, is currently ranked 504th among universities in the Scimago Institutions Rankings, which include patents, altmetrics, and web size as well as research.
Bangladeshi universities should first review the current rankings, make a note of their procedures and requirements, and also consider the resources available to collect and submit data.
It would probably be a good idea to focus on Scimago and the research-focussed URAP rankings. If universities want to try for a ranking that covers research plus teaching and requires institutions to submit data, then it would be better to contact the Global Institutional Profiles Project to get into the Round University Rankings, or to approach QS with the objective of leveraging their local reputations with academics and employers.
Saturday, April 13, 2019
Do we really need a global impact ranking?
Sixteen years ago there was just one international university ranking, the Shanghai Academic Ranking of World Universities (ARWU). Since then rankings have proliferated. We have world rankings, regional rankings, subject rankings, business school rankings, young university rankings, employability rankings, systems rankings, and best student cities.
As if this wasn't enough, there is now a "global impact" ranking published by Times Higher Education (THE). This was announced with a big dose of breathless hyperbole as though it was something revolutionary and unprecedented. Not quite. Before THE's ranking there was the GreenMetric ranking published by Universitas Indonesia. This measured universities' contribution to sustainability through indicators like water, waste, transportation, and education and research.
THE was doing something more specific and perhaps more ambitious, measuring adherence to the Sustainable Development Goals proclaimed by the UN. Universities could submit data on eleven of the seventeen goals, and a minimum of four were counted for the overall rankings, with one, partnership for the goals, being mandatory.
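As a rough illustration of how an overall score along these lines could be assembled, here is a minimal sketch that combines the mandatory partnership goal with a university's best three other SDG scores. The 22%/26% weights and the example scores are my own illustrative assumptions, not figures taken from THE or from this post.

```python
# Hypothetical illustration of combining SDG scores into an overall impact score.
# The 22%/26% weights and the example scores are assumptions for illustration only.
def overall_impact(sdg17_score, other_sdg_scores):
    best_three = sorted(other_sdg_scores, reverse=True)[:3]
    return 0.22 * sdg17_score + sum(0.26 * score for score in best_three)

# A university submitting SDG 17 plus five other goals (scores out of 100, made up).
print(round(overall_impact(70, [80, 65, 90, 55, 72]), 1))  # 78.3
```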
The two rankings have attracted different respondents so perhaps they are complementary rather than competitive. The GreenMetric rankings include 66 universities from Indonesia, 18 from Malaysia and 61 from the USA compared to 7, 9 and 31 in the THE impact rankings. On the other hand, the THE rankings have a lot more universities from Australia and the UK. It is noticeable that China is almost entirely absent from both (2 universities in GreenMetric and 3 in THE's).
But is there really any point in a global impact ranking? Some universities in the West seem to be doing a fairly decent job of producing research in the natural sciences although no doubt much of it is mediocre or worse and there is also a lot of politically correct nonsense being produced in the humanities and social sciences. They have been far less successful in teaching undergraduates and providing them with the skills required by employers and professional and graduate schools. It is surely debatable whether universities should be concerned about the UN sustainable development goals before they have figured out how to fulfil their teaching mission.
Similarly, rankers have become quite adept at measuring and comparing research output and quality. There are several technically competent rankings which look at research from different viewpoints. There is the Shanghai ARWU, which counts long dead Nobel and Fields laureates, the National Taiwan University ranking, which counts publications over an eleven-year period, Scimago, which includes patents, URAP, with 2,500 institutions, and the US News Best Global Universities, which includes books and conferences.
The THE world ranking is probably the least useful of the research-dominant rankings. It gives a 30% weighting to research, which is assessed by three indicators: reputation, publications per staff and research income per staff. An improvement in the score for research could result from an improved reputation for research, a reduction in the number of academic staff, an increase in the number of publications, an increase in research funding, or a combination of some or all of these. Students and stakeholders who want to know exactly why the research prowess of a university is rising or falling will not find THE very helpful.
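To see why this bundling matters, here is a minimal sketch with made-up numbers showing how a university can "improve" its publications-per-staff figure simply by reporting fewer academic staff:

```python
# Hypothetical illustration: per-staff ratios reward shrinking the denominator.
publications = 3_000

staff_before = 1_500
staff_after = 1_200   # fewer reported academic staff, no change in research activity

print(publications / staff_before)  # 2.0 publications per staff member
print(publications / staff_after)   # 2.5, a 25% "improvement" with no extra research
```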
The THE world and regional rankings also have a citations indicator derived from normalised citation impact. Citations are benchmarked against documents in 300+ fields, five document types and five years of publications. Further, citations to documents with fewer than a thousand authors are not fractionalised. Further again, self-citations are allowed. And again, there is a regional modification or country bonus applied to half of the indicator, dividing a university's impact score by the square root of the score of the country in which it is located. This means that every university except those in the country with the highest score goes up, some a bit and some a lot.
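The regional modification itself is easy to state: a university's citation impact score is divided by the square root of its country's score. A minimal sketch with made-up scores, assuming for illustration that country scores are scaled so that the strongest country sits at 1.0:

```python
import math

# Hypothetical illustration of the country bonus described above.
# Country scores are assumed to be scaled so that the strongest country is 1.0.
def country_adjusted(university_score, country_score):
    return university_score / math.sqrt(country_score)

print(country_adjusted(50, 1.00))  # 50.0 for the strongest country, no boost
print(country_adjusted(50, 0.49))  # about 71.4, a large boost from a modest national average
```

Since, as noted above, the adjustment is applied to only half of the indicator, the published score is in effect a blend of the adjusted and unadjusted values, but the direction of the effect is the same.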
The result of all this is a bit of a mess. Over the last few years we have seen institutions rise to glory at the top of the citations indicator that should never have been there, usually because they have succeeded in combining a small number of publications with participation in a mega-project with hundreds of authors and affiliated universities and thousands of citations. Top universities for research impact in the 2018-19 world rankings include Babol Noshirvani University of Technology, the University of Reykjavik, the Brighton and Sussex Medical School and Anglia Ruskin University.
There is something disturbing about university leaders queuing up to bask in the approval of an organisation that seems to think that Babol Noshirvani University of Technology has a greater research influence than anywhere else in the world. The idea that a ranking organisation that cannot publish a plausible list of influential research universities should have the nerve to start talking about measuring global impact is quite surprising.
Most rankers have done better at evaluating research than THE. At least they have not produced indicators as ridiculous as the normalised citations indicator. Teaching, especially undergraduate teaching, is another matter. Attempts to capture the quality of university teaching have been far from successful. Rankers have tried to measure inputs such as income or faculty resources or have conducted surveys but these are at best very indirect indicators. It seems strange that they should now turn their attention to various third missions.
Of course, research and teaching are not the only thing that universities do. But until international ranking organisations have worked out how to effectively compare universities for the quality of learning and teaching or graduate employability it seems premature to start trying to measure anything else.
It is likely though that many universities will welcome the latest THE initiative. Many Western universities faced with declining standards and funding and competition from the East will welcome the opportunity to find something where they can get high scores that will help with branding and promotion.
Where is the real educational capital of the world?
Here is another example of how rankings, especially those produced by Times Higher Education (THE), are used to mislead the public.
The London Post has announced that London is the Higher Educational Capital of the World for 2019. Support for this claim is provided by four London universities appearing in the top 40 of the THE World University Rankings which, unsurprisingly, have been welcomed by London Mayor Sadiq Khan.
In addition, THE has Oxford and Cambridge as first and second in the world in their overall rankings and QS has declared London to be the Best Student City.
THE is not the only global ranking. There are now several others and none of them have Oxford in first place. Most of them give the top spot to Harvard, although in the QS world rankings it is MIT and in the GreenMetric rankings Wageningen.
Also, if we look at the number of universities in the top 50 of the Shanghai rankings we cannot see London as the undisputed HE capital of the world. Using this simple criterion it would be New York with three: Columbia, New York University and Rockefeller.
Then come Boston, Paris, Chicago and London with two each.
Saturday, April 06, 2019
Resources alone may not be enough
Universitas 21 has just published its annual ranking of higher education systems. There are four criteria, each containing several metrics: resources, connectivity, environment and output.
The ranking has received a reasonable amount of media coverage although not as much as THE or QS.
A comparison of the ranks for the Resources indicator, comprising five measures of expenditure, and for Output, which includes research, citations, performance on rankings, graduation rates and enrolments, produces some interesting insights. There are countries such as Denmark and Switzerland that do well for both. China, Israel and some European countries seem to be very good at getting a high output from the resources available. There are others, including Turkey, Brazil, Saudi Arabia, and Malaysia, that appear to have adequate or more than adequate resources but whose rank for output is not so high.
These are of course limited indicators and it could perhaps just be a matter of time before the resources produce the desired results. The time for panic or celebration may not have arrived yet. Even so, it does seem that some countries or cultures are able to make better use of their resources than others.
The table below orders countries according to the difference between their ranks for resources and for output. Ireland is 20 places higher for output than it is for resources. India is seven places lower.
The relatively poor performance for Singapore is surprising given that country's reputation for all round excellence. Possibly there is a point where expenditure on higher education runs into diminishing or even negative returns.
Difference between rank for resources and rank for output (positive = higher rank for output):

China +20
Ireland +20
Russia +18
Greece +16
Hungary +14
Italy +14
UK +11
Israel +10
Slovenia +10
South Korea +10
Australia +8
USA +8
Spain +7
Taiwan +4
Bulgaria +3
Germany +3
Iran +3
Netherlands +3
Japan +3
Czech Republic +2
Belgium +1
Croatia +1
Romania +1
Thailand +1
Finland 0
France 0
Indonesia 0
New Zealand 0
Canada -1
Denmark -1
Portugal -1
Argentina -2
Norway -2
Poland -2
South Africa -2
Switzerland -2
Hong Kong -4
Sweden -5
Ukraine -5
India -7
Chile -9
Singapore -9
Austria -11
Mexico -13
Serbia -13
Slovakia -14
Turkey -14
Brazil -16
Saudi Arabia -25
Malaysia -28
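The metric behind the table above is simple: subtract a country's output rank from its resources rank, so a positive number means the system does better for output than its spending would suggest. A minimal sketch with hypothetical ranks chosen only to reproduce three of the differences above (the real U21 ranks are not given in this post):

```python
# Hypothetical ranks (made up) illustrating the resources-versus-output comparison.
ranks = {                     # (resources rank, output rank)
    "Ireland":      (29, 9),
    "Denmark":      (3, 4),
    "Saudi Arabia": (10, 35),
}

for country, (resources_rank, output_rank) in ranks.items():
    difference = resources_rank - output_rank   # positive = better for output than for resources
    print(f"{country}: {difference:+d}")        # Ireland: +20, Denmark: -1, Saudi Arabia: -25
```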