Tuesday, September 17, 2019

Seven Modest Suggestions for Times Higher Education


My post of the 24th of August has been republished in Arabia Higher Education.


Going Up and Up Down Under: the Case of the University of Canberra

It is a fact almost universally ignored that when a university suddenly rises or falls many places in the global rankings the cause is not transformative leadership, inclusive excellence, teamwork, or strategic planning but nearly always a defect in, or a change to, the rankers' methodology.

Let's take a look at the fortunes of the University of Canberra (UC), which the THE world rankings now place in the world's top 200 universities and Australia's top ten. This is a remarkable ascent, since the university did not appear in these rankings until 2015-16, when it was placed in the 501-600 band with very modest scores of 18.4 for teaching, 19.3 for research, 29.8 for citations (the indicator that is supposed to measure research impact), 36.2 for industry income, and 54.6 for international outlook.

Just four years later the indicator scores are 25.2 for teaching, 31.1 for research, 99.2 for citations, 38.6 for industry income, and 86.9 for international outlook.

The increase in the overall score over those four years, calculated by applying THE's weightings to the indicators, comprised 20.8 points from citations and 6.3 points from the other four indicators combined. Without those 20.8 points Canberra would be in the 601-800 band.
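As a rough check, here is a minimal sketch of the weighting arithmetic using THE's published weightings (30% each for teaching, research and citations, 7.5% for international outlook, 2.5% for industry income). THE standardises scores before combining them, so the published overall change will not match a simple weighted sum exactly, but the dominance of citations is clear:

```python
# THE's published indicator weightings: teaching, research and
# citations 30% each; international outlook 7.5%; industry income 2.5%
weights = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "international_outlook": 0.075,
    "industry_income": 0.025,
}

scores_2016 = {"teaching": 18.4, "research": 19.3, "citations": 29.8,
               "international_outlook": 54.6, "industry_income": 36.2}
scores_2020 = {"teaching": 25.2, "research": 31.1, "citations": 99.2,
               "international_outlook": 86.9, "industry_income": 38.6}

# Weighted contribution of each indicator to the change in the overall score
for k, w in weights.items():
    print(f"{k}: {(scores_2020[k] - scores_2016[k]) * w:+.1f}")
# citations: (99.2 - 29.8) * 0.30 = +20.8 -- the bulk of the rise
```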

I will look at where that massive citation score came from in a moment. 

It seems that the Australian media are reporting this superficially impressive performance with little or no scepticism, and without noting how different it is from UC's standing in the other global rankings.

The university has issued a statement quoting vice-chancellor Professor Deep Saini as saying that the "result confirms the steady strengthening of the quality at the University of Canberra, thanks to the outstanding work of our research, teaching and professional staff" and that the "increase in citation impact is indicative of the quality of research undertaken at the university, coupled with a rapid growth in influence and reach, and has positioned the university as amongst the best in the world."

The Canberra Times reports the vice-chancellor as saying that part of the improvement was the result of a talent acquisition campaign, while noting that many faculty have been complaining about pressure and excessive workloads.

Leigh Sullivan, DVC for research and innovation, has a piece in the Campus Morning Mail that hints at reservations about UC's apparent success, which he describes as "a direct result of its Research Foundation Plan (2013-2017)" and of "a strong emphasis on providing strategic support for research excellence in a few select research areas where UC has strong capability." He notes that there has still been a significant increase in citations even when the citation scores of research stars are excluded, and warns that what goes up can come down and that performance can be affected by changes in ranking methodology.

The website riotact quotes the vice-chancellor as citing the citation score as evidence of improved research quality and as calling for more funding for universities: the "government has to really think and look hard at how well we support our universities. That's not to say it badly supports us, it's that the university sector deserves to be on the radar of our government as a major national asset."

The impressive ascent of UC is unique to THE. No serious ranking puts it in the top 200 or anywhere near. In the current Shanghai Rankings it is in the 601-700 band and has been falling for the last two years. In Webometrics it is 730th in the world and 947th for Excellence, that is, publications among the 10% most cited in 25 disciplines. In University Ranking by Academic Performance it is 899th, and in the CWUR Rankings it doesn't even make the top 1,000.

Round University Ranking and Leiden Ranking do not rank UC at all.

Apart from THE, UC does best in the QS rankings, where it is 484th in the world and 26th in Australia.

So how could UC perform so brilliantly in the THE rankings when nobody else has recognised that brilliance? What does THE know that nobody else does? Actually, UC does not perform brilliantly across the THE rankings, just in the citations indicator, which is supposed to measure research influence or impact.

This year UC has a score of 99.2, which puts it in the top twenty for citations, just behind Nova Southeastern University in Florida and Cankaya University in Turkey, and ahead of Harvard, Princeton and Oxford. The top university for citations this year is Aswan University in Egypt, replacing Babol Noshirvani University of Technology in Iran.

No, THE is not copying the interesting methodology of the Fortunate 500. This is the result of an absurd methodology that THE is, for some reason, unable or unwilling to change.

THE has a self-inflicted problem with a small number of papers that have hundreds or thousands of "authors" and collect thousands of citations. Some of these come from the CERN project, and THE has dealt with them by using a modified form of fractional counting for papers with more than a thousand authors. That has removed the privilege of institutions that contribute to CERN projects but has replaced it with the privilege of those that contribute to the Global Burden of Disease Study (GBDS), whose papers tend to have hundreds, but not thousands, of contributors and sometimes receive over a thousand citations. As a result, places like Tokyo Metropolitan University, National Research University MEPhI and Royal Holloway London have been replaced as citation superstars by St George's London, Brighton and Sussex Medical School, and Oregon Health and Science University.

It would be a simple matter to apply fractional counting to all papers, dividing the number of citations by the number of authors. After all, Leiden Ranking and the Nature Index manage to do it, but THE has for some reason chosen not to follow.
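A minimal sketch of the difference between the counting schemes, using entirely made-up papers (the numbers are illustrative only; the 1,000-author threshold is the one THE has described):

```python
# Illustrative only: compare full counting, THE's modified scheme
# (fractional counting applied only to papers with >1000 authors),
# and full fractional counting (citations divided by author count).

papers = [
    # (authors on the paper, citations received) -- hypothetical numbers
    (5, 40),       # a typical paper
    (700, 1500),   # a GBDS-style paper: hundreds of authors, heavily cited
    (2900, 2000),  # a CERN-style paper: thousands of authors
]

def full_counting(papers):
    # every co-author's institution gets full credit for every citation
    return sum(cites for _, cites in papers)

def the_modified(papers):
    # fractional counting only above a 1000-author threshold
    return sum(cites / authors if authors > 1000 else cites
               for authors, cites in papers)

def fractional(papers):
    # every paper's citations divided by its number of authors
    return sum(cites / authors for authors, cites in papers)

print(full_counting(papers))   # 3540 -- mega-papers dominate
print(the_modified(papers))    # ~1540.7 -- CERN tamed, GBDS untouched
print(fractional(papers))      # ~10.8 -- per-author credit throughout
```

Under full fractional counting a 700-author paper contributes only a sliver of credit to each institution; under THE's current scheme it still contributes every citation in full.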

The problem is compounded by the counting of self-citations, by hyper-normalisation, which increases the chances of hitting the jackpot with an unusually highly cited paper, and by the country bonus, which boosts the scores of universities simply by virtue of their location in low-scoring countries.

And so to UC's apparent success this year. This is entirely the result of its citation score, which is entirely dependent on THE's methodology.

Between 2014 and 2018 UC had 3,825 articles in the Scopus database, of which 27 were linked to the GBDS, which is funded by the Bill and Melinda Gates Foundation. Those 27 articles, each with hundreds of contributors, have received 18,431 citations, all of which are credited in full to UC, as they are to every other contributor. UC's total number of citations is 53,929, so those 27 articles accounted for over a third of the total. Their impact might be even greater if they were cited disproportionately soon after publication.
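The share is easy to check from the Scopus counts above:

```python
gbds_citations = 18_431    # citations to the 27 GBDS-linked articles
total_citations = 53_929   # all citations to UC's 3,825 articles, 2014-2018

print(f"{gbds_citations / total_citations:.1%} of citations "
      f"from {27 / 3_825:.1%} of papers")
# -> 34.2% of citations from 0.7% of papers
```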

UC has of course improved its citation performance even without those articles, but it is clear that they have made an outsize contribution. UC is not alone here. Many universities in the top 100 for citations in the THE world rankings owe their status to the GBDS: Anglia Ruskin, Reykjavik, Aswan, the Indian Institute of Technology Ropar, the University of Peradeniya, Desarrollo, Pontifical Javeriana and so on.

There is absolutely nothing wrong with the GBDS, nor with UC encouraging researchers to take part. The problem lies with THE and its reluctance to repair an indicator that produces serious distortions and is an embarrassment to those universities that apparently look to the THE rankings to validate their status.

Monday, September 16, 2019

What should universities do about organised cheating?

Every so often the world of higher education is swept by a big panic about systemic and widespread cheating. The latest instance is concern about contract cheating or essay mills that provide bespoke essays or papers for students.

It seems that the Australian government will introduce legislation to penalise the supply or advertising of cheating services to students. There are already laws in several American states and there have been calls for the UK to follow suit.

There is perhaps a bit of hypocrisy here. If universities in Europe, Australia and North America admit more and more students who lack the cognitive or language skills to do the required work, and if they choose to use assessment methods that are vulnerable to deception and dishonesty, such as unsupervised essays and group projects, then cheating is close to inevitable.

On the supply side there appear to be large numbers of people around the world without decent academic jobs, or jobs of any sort, who are capable of producing academic work of a high standard, sometimes worth an A grade or a first. The Internet has made it possible for lazy or incompetent students to link up with competent writers.

The Daily Mail has reported that Kenya hosts a medium-sized industry in which students and academics slave away churning out essays for British and American students. This is no doubt a hugely exploitative business, but consider the consequences of shutting down the essay mills. Many educated Kenyans are going to suffer financially. Many students will drop out, resort to other forms of cheating, or demand more support and counselling and transitional or foundation programmes.

If universities are serious about the scourge of essay mills they need to work on both the supply side and the demand side. They might start by inviting the essay writers in Kenya to apply for scholarships for undergraduate or postgraduate courses, or for posts in EAP departments.

On the demand side the solution seems simple. Stop admitting students because they show leadership ability, have overcome adversity, will make the department look like Britain, America or the world, or will help craft an interesting class, and admit them instead because they have demonstrated an ability to do the necessary work.

https://www.studyinternational.com/news/australia-essay-mills-contract-cheating-penalty-law/