Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Saturday, August 24, 2019
Seven modest suggestions for Times Higher
The latest fabulous dynamic exciting trusted prestigious sophisticated etc etc Times Higher Education (THE) world academic summit is coming.
The most interesting, or at least the most amusing, event will probably be the revelation of the citations indicator which supposedly measures research impact. Over the last few years this metric has discovered a series of unexpected world-class research universities: Alexandria University, Tokyo Metropolitan University, Anglia Ruskin University, the University of Reykjavik, St. George's London, Babol Noshirvani University of Technology, Brighton and Sussex Medical School. THE once called this their flagship indicator but oddly enough they don't seem to have got round to releasing it as a standalone ranking.
But looking at the big picture, THE doesn't appear to have suffered much, if at all, from the absurdity of the indicator. The great and the good of the academic world continue to swarm to THE summits where they bask in the glow of the charts and tables that confirm their superiority.
THE have hinted that this summit will see big reforms to the rankings, especially the citations indicator. That would certainly improve the rankings' credibility, although it might make them less interesting.
I have discussed THE's citation problems here, here, here, and here. So, for one last time, I hope, here are the main flaws, and we will see whether THE fixes them.
1. A 30% weighting for any single indicator is far too high. It would be much better to reduce it to 10 or 20%.
2. Using only one method to measure citations is not a good idea. Take a look at the Leiden Ranking and play around with the settings and parameters. You will see that you can get very different results with just a bit of tweaking. It is necessary to use a variety of metrics to get a broad picture of research quality, impact and influence.
3. THE have a regional modification or country bonus that divides a university's research impact score by the square root of the score of the country where it is located. The effect is to increase the score of every university except those in the top-ranking country, with the increase being greater for those in countries with weaker research records (a toy sketch follows this list). This applies to half of the indicator and is supposed to compensate for some researchers lacking access to international networks. For some reason this was never a problem for the publications, income or international indicators. Removing the bonus would do a lot to make the metric more credible.
4. The indicator is over-normalized. Impact scores are benchmarked to the world average for more than three hundred fields plus year of publication. The more fields there are, the greater the chance that a university can benefit from an anomalous paper that receives an unusually high number of citations (again, see the sketches after this list). It would help if THE reduced the number of fields, although that seems unlikely.
5. Unless a paper has more than a thousand authors, THE treat every single contributor as receiving every single citation; above that number they use fractional counting. The result is that the THE rankings privilege medical institutions such as St George's and the Brighton and Sussex Medical School that take part in multi-author projects such as the Global Burden of Disease study (a third sketch after the list shows the difference). Fractional counting across the board would seem the obvious answer, although it might add a bit to costs.
6. Self-citation has become an issue recently. THE have said several times that it doesn't make very much difference. That may be true, but there have been occasions when a single serial self-citer has made a university like Alexandria or Veltech soar into the research stratosphere, and that could happen again.
7. A lot of researchers are adding additional affiliations to their names when they publish. Those secondary, tertiary and sometimes further affiliations are counted by rankers as though they were primary affiliations. It would make sense to count only primary affiliations, as ARWU does with highly cited researchers.
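To make the country bonus in point 3 concrete, here is a minimal sketch of how a square-root adjustment of this kind plays out. THE do not publish the precise formula, so the scaling of country scores against the top country and the 50:50 blend are my assumptions, and the numbers are invented.

```python
import math

def regional_modification(raw_score, country_score, top_country_score):
    """Toy version of the country bonus: half of a university's citation score
    is left alone, the other half is divided by the square root of its country's
    score, expressed here as a fraction of the top-ranking country's score."""
    relative = country_score / top_country_score       # 1.0 for the top country
    boosted = raw_score / math.sqrt(relative)           # bigger boost for weaker countries
    return 0.5 * raw_score + 0.5 * boosted               # the bonus applies to half the indicator

# Hypothetical universities with the same raw citation impact but different national contexts.
print(regional_modification(raw_score=50, country_score=100, top_country_score=100))  # 50.0
print(regional_modification(raw_score=50, country_score=25, top_country_score=100))   # 75.0
print(regional_modification(raw_score=50, country_score=4, top_country_score=100))    # 150.0
```

With the same raw impact, a university in a country scoring a quarter of the leader's average is lifted by half, and one in a country scoring four per cent of it is tripled.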
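Point 4, the over-normalization, can be illustrated with an equally artificial example. The field averages, citation counts and paper list below are invented; the only point is the arithmetic by which one outlier in a small, lightly cited field swamps everything else a university has published.

```python
# Citations are benchmarked to the world average for each narrow field
# (and, in the real indicator, for each year of publication as well).
world_average = {
    "clinical medicine": 12.0,   # invented average citations per paper
    "obscure subfield": 2.0,     # a small field where papers are rarely cited
}

# A small university's output: (field, citations received) -- all hypothetical.
papers = [
    ("clinical medicine", 10),
    ("clinical medicine", 8),
    ("clinical medicine", 6),
    ("obscure subfield", 500),   # one anomalous, heavily cited paper
]

normalized = [cites / world_average[field] for field, cites in papers]
print([round(n, 1) for n in normalized])            # [0.8, 0.7, 0.5, 250.0]
print(round(sum(normalized) / len(normalized), 1))  # 63.0 -- the outlier dominates
```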
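And point 5, the counting rule, is the simplest of all to sketch. The paper below is hypothetical, but it has roughly the shape of a Global Burden of Disease publication: hundreds of contributing authors and thousands of citations.

```python
def the_style_credit(citations, n_authors):
    """Counting rule as described in the post: papers with fewer than 1,000
    authors are counted in full for every contributor; above that threshold
    fractional counting kicks in."""
    if n_authors < 1000:
        return citations                  # each contributor is credited with every citation
    return citations / n_authors          # fractional counting only for mega-author papers

def leiden_style_credit(citations, n_authors):
    """Fractional counting throughout, as in the Leiden Ranking."""
    return citations / n_authors

# A hypothetical consortium paper: 900 authors, 3,000 citations.
print(the_style_credit(3000, 900))               # 3000 -- just under the threshold
print(round(leiden_style_credit(3000, 900), 1))  # 3.3
```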
Friday, August 23, 2019
Rankings are everywhere
This is from an Indian website. The QS World University Rankings are being used to sell real estate in Australia.
SYDNEY: Leading Australian developer Crown Group is developing six new residential apartment developments near six of the world's top 200 university cities, according to the new QS World University Rankings 2020.
Tuesday, August 13, 2019
University of the Philippines beats Oxford, Cambridge, Yale, Harvard, Tsinghua, Peking etc etc
Rankings can do some good sometimes. They can also do a lot of harm and that harm is multiplied when they are sliced more and more thinly to produce rankings by age, by size, by mission, by region, by indicator, by subject. When this happens minor defects in the overall rankings are amplified.
That would not be so bad if universities, political leaders and the media were to treat the tables and the graphs with a healthy scepticism. Unfortunately, they treat the rankings, especially THE, with obsequious deference as long as they are provided with occasional bits of publicity fodder.
Recently, the Philippine media have proclaimed that the University of the Philippines (UP) has beaten Harvard, Oxford and Stanford for health research citations. It was seventh in the THE Clinical, Pre-clinical and Health subject ranking, behind Tokyo Metropolitan University, Auckland University of Technology, Metropolitan Autonomous University Mexico, Jordan University of Science and Technology, the University of Canberra and Anglia Ruskin University.
The Inquirer is very helpful and provides an explanation from the Philippine Council for Health Research and Development that citation scores “indicate the number of times a research has been cited in other research outputs” and that the score "serves as an indicator of the impact or influence of a research project which other researchers use as reference from which they can build on succeeding breakthroughs or innovations.”
Fair enough, but how can UP, which has a miserable score of 13.4 for research in the same subject ranking, have such a massive research influence? How can it have an extremely low output of papers, a poor reputation for research, and very little funding and still be a world beater for research impact?
It is in fact nothing to do with UP, nothing to do with everyone working as a team, decisive leadership or recruiting international talent.
It is the result of a bizarre and ludicrous methodology. First, THE does not use fractional counting for papers with fewer than a thousand authors. UP, along with many other universities, has taken part in the Global Burden of Disease project funded by the Bill and Melinda Gates Foundation. This has produced a succession of papers, many of them in the Lancet, with hundreds of contributing institutions and researchers, whose names are all listed as authors, and hundreds or thousands of citations each. As long as the number of authors does not reach 1,000, each author is counted as though he or she were the recipient of all the citations. So UP gets the credit for a massive number of citations, which is then divided by a relatively small number of papers.
Why not just use fractional counting, dividing the citations among the contributors or the institutions, as the Leiden Ranking does? Probably because it might add a little to costs, and perhaps because THE doesn't like to admit it made a mistake.
Then we have the country bonus or regional modification, applied to half the indicator, which increases the score for universities in countries with low impact.
The result of all this is that UP, surrounded by low-scoring universities and not producing very much research, but with a role in a citation-rich mega-project, gets a score for this indicator that puts it ahead of the Ivy League, the Group of Eight and the leading universities of East Asia.
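A toy calculation shows how the pieces fit together. The numbers are invented and the indicator is flattened here to simple citations per paper, ignoring THE's field normalization, but the arithmetic is the point: full counting of one consortium paper, a small denominator, and the country bonus are enough to leapfrog a much bigger research producer.

```python
import math

# Invented figures: a small university with one heavily cited consortium paper
# counted in full, versus a large university with a strong but ordinary record.
small_papers, small_citations = 300, 6000 + 900    # one GBD-style paper (6,000 cites)
                                                   # plus modest citations elsewhere
big_papers, big_citations = 20000, 400000          # 20 citations per paper on average

small_cpp = small_citations / small_papers         # 23.0 citations per paper
big_cpp = big_citations / big_papers               # 20.0 citations per paper

# Country bonus on half the score, assuming (hypothetically) that the small
# university's country averages 10% of the top country's citation impact.
small_with_bonus = 0.5 * small_cpp + 0.5 * small_cpp / math.sqrt(0.10)

print(round(small_cpp, 1), round(big_cpp, 1), round(small_with_bonus, 1))  # 23.0 20.0 47.9
```

On these invented figures the small university already edges ahead on raw citations per paper, and the country bonus then more than doubles its score while leaving the big producer untouched.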
If nobody took this seriously, then no great harm would be done. Unfortunately it seems that large numbers of academics, bureaucrats and journalists do take the THE rankings very seriously or pretend to do so in public.
And so committee addicts get bonuses and promotions, talented researchers spend their days in unending ranking-inspired transformational seminars, funds go to the mediocre and the sub-mediocre, students and stakeholders base their careers on misleading data, and the problems of higher education are covered up or ignored.
Wednesday, August 07, 2019
The Decline of the University of California
Rankings have been justly criticised. Still, they do have their uses. They can identify which institutions and systems are ailing and which are in good health.
California was once the incarnation of the American dream with high wages, cheap housing, free or cheap education right through to college or even law, medical or graduate school. The University of California (UC) system was hailed as the peak of modern public research university education.
Now the state seems to have entered a period of decay. On any measure of literacy or education, California falls near the bottom of the USA. So how has this affected the public university system?
The question is now more urgent since UC Berkeley has just admitted to sending false data to the US News America's Best Colleges rankings. Apparently the university had inflated the figures for funds donated by alumni, a metric that accounts for five per cent of those rankings. Berkeley is now cast into the dark regions of higher education, full of unranked places. It will be interesting to see what happens to applications and donations over the next couple of years.
Questions about the performance of universities are often answered by reference to the Big Two international rankings, QS and Times Higher Education (THE). That might not be a good idea. These rankings are unbalanced, with indicators that carry an excessive weighting: THE's citations indicator and QS's academic survey. They are sometimes volatile and can produce results that can be charitably described as unusual and counter-intuitive. They also rely on surveys and on data submitted directly by institutions, neither of which is very reliable. Both, however, suggest that UC has been slowly and steadily declining since 2011 (THE) and 2017 (QS).
To get an accurate picture of research quality it would be better to look at the Leiden Ranking, which has a consistent and transparent methodology. This too shows a steady decline in the relative global performance of the UC system.
First, compare performance on publications, the default indicator, in 2006-2009 and 2014-2017. Every single campus of UC, except for Merced, which is not ranked, has fallen: UCLA from 6th to 23rd, Berkeley from 23rd to 43rd, San Diego from 25th to 37th, Davis from 26th to 54th, San Francisco from 39th to 63rd, Irvine from 88th to 146th, Santa Barbara from 168th to 28th, Riverside from 270th to 391st, and Santa Cruz from 436th to 620th.
If we want to talk about quality we might look at another indicator, the number of papers in the top 1% most frequently cited.
Again we have a pattern of decline, although not so steep: the smallest fall is San Francisco's, from 15th to 16th, and the largest is Riverside's, from 84th to 252nd. And there is one case of improvement: San Diego goes from 25th to 19th.
So far we have just looked at research. There is no global ranking that makes any serious attempt to assess teaching and learning. There are some that have indicators that might have something to do with student ability or graduate quality. The most useful of these are the Russia-based Round University Rankings, which use data from Clarivate Analytics and include 20 indicators in four groups: teaching, research, international diversity and financial sustainability.
Berkeley was ranked 52nd overall in the world and 32nd in the US in 2011. It was 197th for teaching, 5th for research, 133rd for international diversity, and 8th for financial sustainability.
In 2019 it was still 52nd overall in the world and 27th in the USA. It had risen to 168th for teaching and to 97th for international diversity but had fallen to 20th for research and to 38th for financial sustainability.
Going into detail we see that Berkeley is improving for numbers of academic staff per student and world teaching reputation but falling for everything related to research.
Berkeley is the best of the UC campuses. All of the others, except for the unranked Merced, have fallen overall, and all except Davis have fallen for research.
It seems that UC is approaching the edge of the cliff. It is likely that the fall will get faster as Chinese students stop coming. It is difficult to see how the financial situation can get better in the foreseeable future or where the next generation of college-ready students is going to come from. No doubt there are more headlines and scandals to come.