Saturday, September 14, 2019

Are UK universities facing a terrible catastrophe?

A repeated theme of mainstream media reporting on university rankings (nearly always QS or THE) is that Brexit has inflicted, is inflicting, or is surely going to inflict great damage on British education and the universities because they will not get any research grants from the European Union or be able to network with their continental peers.

The latest of these dire warnings can be found in a recent edition of the Guardian, the voice of the British progressive establishment. Marja Makarow claims that Swiss science was forced "into exile" after the 2014 referendum on immigration controls. Following this, Switzerland supposedly entered a period of isolation without access to Horizon 2020 or grants from the European Research Council, with a declining reputation and a loss of international collaboration and networks. The same, the argument goes, will happen to British research and universities if Brexit is allowed to happen.

But has Swiss research suffered? A quick tour of some relevant rankings suggests that it has not. The European Research Ranking, which measures research funding and networking in Europe, has two Swiss universities in the top ten. The Universitas 21 systems rankings put Switzerland in third place for output, up from sixth in 2013, and first for connectivity.

The Leiden Ranking shows that EPF Lausanne and ETH Zurich both fell for total publications between 2011-14 and 2014-17, but both rose for the proportion of publications in the top 10% most cited, a measure of research quality.

The Round University Rankings show that EPF and ETH have both improved for research since 2013 and both have improved their world research reputation.

So it looks as though Switzerland has not really suffered very much, if at all. Perhaps Brexit, if it ever happens, will turn out to be something less than the cataclysm that is feared, or hoped for.




Saturday, September 07, 2019

Finer and finer rankings prove anything you want

If you take a single metric from a single ranking and do a bit of slicing by country, region, subject, field and/or age there is a good chance that you can prove almost anything, for example that the University of the Philippines is a world beater for medical research. Here is another example from the Financial Times.

An article by John O'Hagan, Emeritus Professor at Trinity College Dublin, claims that German universities are doing well for research impact in the QS economics world rankings. Supposedly, "no German university appears in the top 50 economics departments in the world using the overall QS rankings. However, when just research impact is used, the picture changes dramatically, with three German universities, Bonn, Mannheim and Munich, in the top 50, all above Cambridge and Oxford on this ranking."

This is a response to Frederick Studemann's claim that German universities are about to move up the rankings. O'Hagan is saying that this is already happening.

I am not sure what this is about. I had a look at the most recent QS economics rankings and found that in fact Mannheim is in the top fifty overall for that subject. The QS subject rankings do not have a research impact indicator. They have academic reputation, citations per paper, and h-index, which might be considered proxies for research impact, but for none of these are the three universities in the top fifty. Two of the three universities are in the top fifty for academic research reputation, one for citations per paper and two for h-index.

So it seems that the article isn't referring to the QS economics subject ranking. Maybe it is the overall ranking that Professor O'Hagan is thinking of? There are no German universities in the overall top fifty there, but there are also none in the top fifty for the citations per faculty indicator.

I will assume that the article is based on an actual ranking somewhere, maybe an earlier edition of the QS subject rankings or the THE world rankings or from one of the many spin-offs. 

But it seems a stretch to talk about German universities moving up the rankings just because they did well in one metric in one of the 40 plus international rankings in one year.


Saturday, August 24, 2019

Seven modest suggestions for Times Higher


The latest fabulous dynamic exciting trusted prestigious sophisticated etc etc Times Higher Education (THE) world academic summit is coming. 

The most interesting, or at least the most amusing, event will probably be the revelation of the citations indicator which supposedly measures research impact. Over the last few years this metric has discovered a series of unexpected world-class research universities: Alexandria University, Tokyo Metropolitan University, Anglia Ruskin University, the University of Reykjavik, St. George's London, Babol Noshirvani University of Technology, Brighton and Sussex Medical School. THE once called this their flagship indicator but oddly enough they don't seem to have got round to releasing it as a standalone ranking.

But looking at the big picture, THE doesn't appear to have suffered much, if at all, from the absurdity of the indicator. The great and the good of the academic world continue to swarm to THE summits where they bask in the glow of the charts and tables that confirm their superiority.

THE have hinted that this summit will see big reforms to the rankings, especially the citations indicator. That would certainly improve their credibility, although the rankings might become less interesting.

I have discussed THE's citation problems here, here, here, and here. So, for one last time, I hope, here are the main flaws; we will see whether THE fixes them.

1. A 30% weighting for any single indicator is far too high. It would be much better to reduce it to 10 or 20%.

2. Using only one method to measure citations is not a good idea. Take a look at the Leiden Ranking and play around with the settings and parameters. You will see that you can get very different results with just a bit of tweaking. It is necessary to use a variety of metrics to get a broad picture of research quality, impact and influence.

3. THE have a regional modification or country bonus that divides a university's research impact score by the square root of the score of the country in which it is located. The effect of this is to increase the score of every university except those in the top-ranking country, with the increase being greater for those in countries with weaker research records. This applies to half of the indicator and is supposed to compensate for some researchers lacking access to international networks. For some reason this was never a problem for the publications, income or international indicators. Removing the bonus would do a lot to make the metric more credible (a small numerical sketch of how the bonus works follows after this list).


4. The indicator is over-normalised. Impact scores are benchmarked against the world average for over three hundred fields plus year of publication. The more fields, the greater the chance that a university can benefit from an anomalous paper that receives an unusually high number of citations. It would help if THE reduced the number of fields, although that seems unlikely.

5. Unless a paper has over a thousand authors, THE treat every single contributor as receiving every single citation. Above that number they use fractional counting. The result is that the THE rankings privilege medical institutions such as St George's and the Brighton and Sussex Medical School that take part in multi-author projects such as the Global Burden of Disease study. All-round fractional counting would seem the obvious answer, although it might add a bit to costs.

6. Self-citation has become an issue recently. THE have said several times that it doesn't make very much difference. That may be true, but there have been occasions when a single serial self-citer has made a university like Alexandria or Veltech soar into the research stratosphere, and that could happen again.

7. A lot of researchers are adding extra affiliations to their names when they publish. Those secondary, tertiary, and sometimes further affiliations are counted by rankers as though they were primary affiliations. It would make sense to count only primary affiliations, as ARWU does with highly cited researchers.
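
Since suggestion 3 turns on a simple piece of arithmetic, here is a minimal numerical sketch of the country bonus. THE does not publish the exact inputs to its regional modification, so the function name, the numbers, and the assumption that impact scores are normalised to a world average of 1.0 are all illustrative; the point is only that dividing by the square root of a low national score inflates a university's result.

```python
from math import sqrt

def blended_citation_score(university_impact, country_avg_impact):
    """Toy version of a 'country bonus' of the kind described in point 3.
    Assumes impact scores are field-normalised so the world average is 1.0.
    Half the indicator uses the raw impact; the other half divides it by
    the square root of the national average, boosting universities in
    low-scoring countries. This is a sketch, not THE's actual formula."""
    adjusted = university_impact / sqrt(country_avg_impact)
    return 0.5 * university_impact + 0.5 * adjusted

# The same university looks much stronger in a weak national system:
print(blended_citation_score(1.2, 0.36))  # 0.6 + 0.5 * (1.2 / 0.6) = 1.6
print(blended_citation_score(1.2, 1.00))  # 1.2, no bonus
```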




Friday, August 23, 2019

Rankings are everywhere

This is from an Indian website. The QS World University Rankings are being used to sell real estate in Australia.

SYDNEY: Leading Australian developer Crown Group is developing six new residential apartment developments near six of the world's top 200 university cities, according to the new QS World University Rankings 2020.

Tuesday, August 13, 2019

University of the Philippines beats Oxford, Cambridge, Yale, Harvard, Tsinghua, Peking etc etc

Rankings can do some good sometimes. They can also do a lot of harm and that harm is multiplied when they are sliced more and more thinly to produce rankings by age, by size, by mission, by region, by indicator, by subject. When this happens minor defects in the overall rankings are amplified.

That would not be so bad if universities, political leaders and the media were to treat the tables and graphs with a healthy scepticism. Unfortunately, they treat the rankings, especially THE's, with obsequious deference as long as they are provided with occasional bits of publicity fodder.

Recently, the Philippine media have proclaimed that the University of the Philippines (UP) has beaten Harvard, Oxford and Stanford for health research citations. It was seventh in the THE Clinical, Pre-clinical and Health category, behind Tokyo Metropolitan University, Auckland University of Technology, Metropolitan Autonomous University Mexico, Jordan University of Science and Technology, the University of Canberra and Anglia Ruskin University.

The Inquirer is very helpful and provides an explanation from the Philippine Council for Health Research and Development that citation scores “indicate the number of times a research has been cited in other research outputs” and that the score "serves as an indicator of the impact or influence of a research project which other researchers use as reference from which they can build on succeeding breakthroughs or innovations.” 

Fair enough, but how can UP, which has a miserable score of 13.4 for research in the same subject ranking, have such a massive research influence? How can it have an extremely low output of papers, a poor reputation for research, and very little funding and still be a world beater for research impact?

It is in fact nothing to do with UP: nothing to do with everyone working as a team, decisive leadership or recruiting international talent.

It is the result of a bizarre and ludicrous methodology. First, THE does not use fractional counting for papers with fewer than a thousand authors. UP, along with many other universities, has taken part in the Global Burden of Disease project funded by the Bill and Melinda Gates Foundation. This has produced a succession of papers, many of them in the Lancet, with hundreds of contributing institutions and researchers, whose names are all listed as authors, and with hundreds or thousands of citations each. As long as the number of authors does not reach 1,000, each author is counted as though he or she were the recipient of all the citations. So UP gets the credit for a massive number of citations, which is then divided by a relatively small number of papers.

Why not just use fractional counting, dividing the citations among the contributors or the institutions, as the Leiden Ranking does? Probably because it might add a little to costs, or perhaps because THE doesn't like to admit it made a mistake.
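
For readers unsure what the difference amounts to, here is a minimal sketch contrasting the two counting methods. It assumes, for simplicity, that credit is split equally among contributing institutions; the names and numbers are illustrative, not taken from THE or Leiden.

```python
def full_counting(citations, n_institutions):
    """Every contributing institution is credited with all of the citations,
    as described above for papers with fewer than 1,000 authors."""
    return [citations] * n_institutions

def fractional_counting(citations, n_institutions):
    """Citations are divided equally among the contributing institutions,
    a simplified version of the Leiden approach."""
    return [citations / n_institutions] * n_institutions

# A Global Burden of Disease-style paper: 3,000 citations, 300 institutions.
print(full_counting(3000, 300)[0])        # each institution is credited with 3000
print(fractional_counting(3000, 300)[0])  # each institution is credited with 10.0
```

With full counting, a university that contributes to a handful of such papers while publishing little else ends up with an enormous citations-per-paper figure.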

Then we have the country bonus or regional modification, applied to half the indicator, which increases the score for universities in countries with low impact.

The result of all this is that UP, surrounded by low scoring universities, not producing very much research but with a role in a citation rich mega project, gets a score for this indicator that puts it ahead of the Ivy League, the Group of Eight and the leading universities of East Asia.

If nobody took this seriously, then no great harm would be done. Unfortunately it seems that large numbers of academics, bureaucrats and journalists do take the THE rankings very seriously or pretend to do so in public. 

And so committee addicts get bonuses and promotions, talented researchers spend their days in unending ranking-inspired transformational seminars, funds go to the mediocre and the sub-mediocre, students and stakeholders base their careers on misleading data, and the problems of higher education are covered up or ignored.



Wednesday, August 07, 2019

The Decline of the University of California


Rankings have been justly criticised. Still, they do have their uses. They can identify which institutions and systems are ailing and which are in good health.

California was once the incarnation of the American dream with high wages, cheap housing, free or cheap education right through to college or even law, medical or graduate school. The University of California (UC) system was hailed as the peak of modern public research university education.

Now the state seems to have entered a period of decay. On any measure of literacy or education California falls near the bottom of the USA. So how has this affected the public university system?

The question is now more urgent since UC Berkeley has just admitted to sending false data to the US News America's Best Colleges. Apparently the university had inflated the funds donated by alumni, a metric that accounts for five per cent of those rankings. Berkeley is now cast into the dark regions of higher education full of unranked places. It will be interesting to see what happens to applications and donations over the next couple of years. 

Questions about the performance of universities are often answered by reference to the Big Two international rankings, QS and Times Higher Education (THE). That might not be a good idea. These rankings are unbalanced, with indicators that carry an excessive weighting: THE's citations indicator and QS's academic survey. They are sometimes volatile and can produce results that can be charitably described as unusual and counter-intuitive. They also rely on surveys and on data submitted directly by institutions, neither of which is very reliable. Both, however, suggest that UC has been slowly and steadily declining since 2011 (THE) and 2017 (QS).

To get an accurate picture of research quality it would be better to look at the Leiden Ranking, which has a consistent and transparent methodology. This too shows that there has been a steady decline in the relative global performance of the UC system.

First, compare performance for publications, the default indicator, in 2006-9 and 2014-17. Every single campus of UC, except for Merced which is not ranked, has fallen: UCLA from 6th to 23rd, Berkeley from 23rd to 43rd, San Diego from 25th to 37th, Davis from 26th to 54th, San Francisco from 39th to 63rd, Irvine from 88th to 146th, Santa Barbara from 168th to 28th, Riverside from 270th to 391st, and Santa Cruz from 436th to 620th.

If we want to talk about quality we might look at another indicator, the number of papers in the top 1%.

Again we have a pattern of decline, although not so steep. The smallest fall is San Francisco's, from 15th to 16th; the largest is Riverside's, from 84th to 252nd. And there is one case of an improvement: San Diego goes from 25th to 19th.

So far we have just looked at research. There is no global ranking that makes any serious attempt to assess teaching and learning. There are some that have indicators that might have something to do with student ability or graduate quality. The most useful of these are the Russia-based Round University Rankings, which use data from Clarivate Analytics and include 20 indicators in four groups: teaching, research, international diversity and financial sustainability.

Berkeley was ranked 52nd overall in the world and 32nd in the US in 2011. It was 197th for teaching, 5th for research, 133rd for  international diversity, and 8th for financial sustainability.

In 2019 it was still 52nd overall in the world and 27th in the USA. It had risen to 168th for teaching and 97th for international diversity, but had fallen to 20th for research and 38th for financial sustainability.

Going into detail we see that Berkeley is improving for numbers of academic staff per student and world teaching reputation but falling for everything related to research.

Berkeley is the best of the UC campuses. All of the others, except for the unranked Merced, have fallen overall, and all except Davis have fallen for research.

It seems that UC is approaching the edge of the cliff. It is likely that the fall will get faster as Chinese students stop coming. It is difficult to see how the financial situation can get better in the foreseeable future, or where the next generation of college-ready students is going to come from. No doubt there are more headlines and scandals to come.




Saturday, July 13, 2019

Singapore and the Rankings Again

As far as rankings are concerned, Singapore has been a great success story, at least for the Big Two (THE and QS).

In the latest QS world rankings the two major Singaporean universities, the National University of Singapore (NUS) and Nanyang Technological University (NTU), have done extremely well, both of them reaching eleventh place. Predictably, the mainstream local media have praised the universities, quoting QS spokespersons and university representatives on how the results are due to hiring talented faculty and the superiority of the national secondary education system.

There is some scepticism in Singapore about the rankings. The finance magazine Dollars and Sense has just published an article by Sim Kang Heong that questions the latest performance by Singapore's universities, including the relatively poor showing by Singapore Management University.

The author is aware of the existence of other rankings but names only two (ARWU and THE), and then presents a list of indicators drawn from all the QS rankings, including regional and specialist tables, as though they were part of the World University Rankings.

The piece argues that "it doesn't take someone with a PhD to see some of the glaring biases and flaws in the current way QS does its global university rankings."

It is helpful that someone is taking a sceptical view of Singapore's QS ranking performance but disappointing that there is no specific reference to how NUS and NTU fared in other rankings.

Last February I published a post showing the ranks of these two institutions in the global rankings. Here they are again:

THE:   23rd and 51st
Shanghai ARWU:  85th and 96th
RUR:  50th  and 73rd
Leiden (publications): 34th and 66th

These are definitely top 100 universities by any standard. Clearly though, the QS rankings rate them much more highly than anybody else. 







Thursday, July 11, 2019

India and the QS rankings

The impact of university rankings is mixed. They have, for example, often had very negative consequences for faculty, especially junior, who in many places have been coerced into attending pointless seminars and workshops and churning out unread papers or book chapters in order to reach arbitrary and unrealistic targets or performance indicators.

But sometimes they have their uses. They have shown the weakness of several university systems. In particular, the global rankings have demonstrated convincingly that Indian higher education consists of a few islands of excellence in a sea of sub-mediocrity. The contrast with China, where many universities are now counted as world class, is stark and it is unlikely that it can be fixed with a few waves of the policy wand or by spraying cash around.

The response of academic and political leaders is not encouraging. There have been moves to give universities more autonomy, to increase funding, to engage with the rankings. But there is little sign that India is ready to acknowledge the underlying problems of the absence of a serious research culture or a secondary school system that seems unable to prepare students for tertiary education.

Indian educational and political leaders have lately become very concerned about the international standing of the country's universities. Unfortunately, their understanding of how the rankings actually work seems limited. This is not unusual. The qualities needed to climb the slippery ladder of academic politics are not those of a successful researcher or someone able to analyse the opportunities and the flaws of global rankings. 

Recently there was a meeting of the Indian Minister for Human Resource Development (HRD) and the heads of the Indian Institutes of Technology Bombay and Delhi and the Indian Institute of Science Bangalore.

According to a local media report, officials have said that the reputation indicators in the QS international rankings contribute to Indian universities' poor ranking performance as they are "an area where the Indian universities lose out the maximum number of marks - due to the absence of Indian representation at QS panel."
The IIT Bombay director is quoted as saying "there are not enough participants in the UK or the US to rate Indian universities." 

This shows ignorance of QS's methodology. QS now collects responses through several channels, including lists submitted by universities and a facility where individual researchers and employers can sign up to join the survey. In 2019, out of 83,877 academic survey responses collected over five years, 2.6% were from academics with an Indian affiliation, which is less than Russia, South Korea, Australia or Malaysia but more than China or Germany. This does not include responses from Indian academics at British, North American or Australian institutions. A similar proportion of responses to the QS employer survey were from India.

If there are not enough Indian participants in the QS survey then this might well be the fault of Indian universities themselves. QS allows universities to nominate up to 400 potential survey participants. I do not know if they have taken full advantage of this or whether those nominated have actually voted for Indian institutions. 

It is possible that India could do better in the rankings by increasing its participation in the QS surveys to the level of Malaysia, but it is totally inaccurate to suggest that there are no Indians in the current QS surveys.

If Indian universities are going to rise in the rankings then they need to start by understanding how the rankings actually work and by creating informed and realistic strategies.


Thursday, July 04, 2019

Comparing National Rankings: USA and China


America's Best Colleges
The US News America's Best Colleges (ABC) is very much the Grand Old Man of university rankings. Its chief data analyst has been described as the most powerful man in America although that is perhaps a bit exaggerated. These rankings have had a major role in defining excellence in American higher education and they may have contributed to US intellectual and scientific dominance in the last two decades of the twentieth century.

But they are changing. This year's edition has introduced two new measures of "social mobility", namely the number of Pell Grant (low-income) students and the comparative performance of those students. There is a suspicion that this is an attempt to reward universities for the recruitment and graduation of certain favoured groups, including African Americans and Hispanics, and perhaps recent immigrants from the Global South. Income is used as a proxy for race since current affirmative action policies at Harvard and other places are under legal attack.

It should be noted that success is defined as graduation within a six year period and that is something that can be easily achieved by extra tuition, lots of collaborative projects, credit for classroom discussions and effort and persistence, holding instructors responsible for student failure, innovative methods of assessment, contextualised grading and so on.

The new ABC has given the Pell Grant metrics a 5% weighting and has also increased the weighting for graduation rate performance, which compares actual student outcomes with those predicted from their social and academic attributes, from 7.5% to 8%. So now a total of 13% in effect goes to social engineering. A good chunk of the rankings, then, is based on the dubious proposition that universities can and should reduce or eliminate the achievement gap between various groups.

To make room for these metrics the acceptance rate indicator has been eliminated, and the weightings for standardised test scores, high school rank, counsellor reviews and six year graduation rate have been reduced.

Getting rid of the acceptance rate metric is probably not a bad idea since it had the unfortunate effect of encouraging universities to maximise the number of rejected applications, which produced income for the universities but imposed a financial burden on applicants.

The rankings now assign nearly a one-third weighting to student quality: 22% for graduation and retention rates and 10% for standardised tests and high school rank.

It seems that US News is moving from ranking universities by the academic ability of their students to ranking based on the number and relative success of low income and "minority" students.

The latest ranking shows the effect of these changes. The very top is little changed but further down there are significant shifts. William and Mary is down. Howard University, a predominantly African American institution, is up as are the campuses of the University of California system.

ABC also has another 30% for resources (faculty 20% and financial 10%), 20% for reputation (15% peer and 5% high school counsellors), and 5% for alumni donations.

Shanghai Best Chinese University Rankings

The Shanghai Best Chinese University Ranking (BCUR) is a recent initiative although ShanghaiRanking has been doing global rankings since 2003. They are quite different from the US News rankings.

For student outcomes Shanghai assigns a weighting of 10% to graduate employment and does not bother with graduation rates. As noted, ABC gives 22% for student outcomes (six year graduation rate and first year retention rate). 


Shanghai gives a 30% weighting for the dreaded Gaokao, the national university entrance exam, compared to 10% for high school class rank and SAT/ACT scores in ABC.

With regard to inputs, Shanghai allocates just 5% for alumni donations, compared to 30% in the ABC for class size, faculty salary, faculty highest degrees, student-faculty ratio, full-time faculty and financial resources.

That 5% is the only thing in Shanghai that might be relevant to reputation, while ABC has a full 20% for reputation among peers and counsellors.

Shanghai also has a 40% allocation for research, 10% for "social service", which comprises research income from industry and income from technology transfer, and 5% for international students. ABC has no equivalent to these, although US News publishes separate rankings for postgraduate programmes.

To compare the two, ABC is heavy on inputs, student graduation and retention, reputation, and social engineering; the last of these will probably become more important over the next few years.

BCUR, in contrast, emphasises student ability as measured by a famously rigorous entrance exam, student employment, research, links with industry, and internationalisation.

It seems that in the coming years excellence in higher education will be defined very differently. An elite US university will be one that is well endowed with money and human resources, makes sure that most of its students graduate one way or another, ensures that the ethnic and gender composition of the faculty and student body matches that of America or the world, and has a good reputation among peers and the media.

An elite Chinese university will be one that produces employed and employable graduates, admits students with high levels of academic skills, has close ties with industry, and has a faculty that produces a high volume of excellent research.


Sunday, June 30, 2019

The Influence of rankings revisited

Rankings are everywhere. Like a cleverly constructed virus they are all over the place and are almost impossible to delete. They are used for immigration policy, advertising, promotion, and recruitment. Here is the latest example.

A tweet from Eduardo Urias, noted by Stephen Curry, reported that an advertisement for an assistant professorship at Maastricht University included the requirement that candidates "should clearly state the (THE, QS, or FT business school) ranking of the university of their highest degree."

The sentence has since been removed, but one wonders why the relevant committee at Maastricht could not be trusted to look up the university's rank for themselves, and why they should ask about those specific rankings, which might not be the most relevant or accurate. Maastricht is a very good university, especially for the social sciences (I knew that anyway, and I checked with the Leiden Ranking), so why should it need to take rankings into account instead of looking at applicants' graduate school records and publications?

Even though that sentence was removed, this one remains:

"Maastricht University is currently ranked fifth in the top of Young Universities under 50 years."