Wednesday, May 10, 2017

Ranking article: The building of weak expertise: the work of global university rankers

Miguel Antonio Lim, The building of weak expertise: the work of global university rankers

University rankers are the subject of much criticism, and yet they remain influential in the field of higher education. Drawing from a two-year field study of university ranking organizations, interviews with key correspondents in the sector, and an analysis of related documents, I introduce the concept of weak expertise. This kind of expertise is the result of a constantly negotiated balance between the relevance, reliability, and robustness of rankers’ data and their relationships with their key readers and audiences. Building this expertise entails collecting robust data, presenting it in ways that are relevant to audiences, and engaging with critics. I show how one ranking organization, the Times Higher Education (THE), sought to maintain its legitimacy in the face of opposition from important stakeholders and how it sought to introduce a new “Innovation and Impact” ranking. The paper analyzes the strategies, methods, and particular practices that university rankers undertake to legitimate their knowledge—and is the first work to do so using insights gathered alongside the operations of one of the ranking agencies as well as from the rankings’ conference circuit. Rather than assuming that all of these trust-building mechanisms have solidified the hold of the THE over its audience, they can be seen as signs of a constant struggle for influence over a skeptical audience.

Higher Education 13 April 2017
DOI: 10.1007/s10734-017-0147-8

https://link.springer.com/article/10.1007%2Fs10734-017-0147-8

What good are international faculty?

In the previous post I did a bit of fiddling around with correlations and found that UK universities' scores for the international student indicator in the QS world rankings did not correlate very much with beneficial outcomes for students such as employability and course completion. They did, however, correlate quite well with spending per student.

That would suggest that British universities want lots of international students because it is good for their finances.

What about international faculty?

Comparing the scores for various outcomes with the QS international faculty score shows that in most cases correlation is low and statistically insignificant. This includes course satisfaction and satisfaction with teaching (Guardian University League Tables), and completion and student satisfaction (THE TEF simulation).

There is, however, one metric that is positively, albeit modestly, and significantly associated with international faculty, and that is the Research Excellence Framework (REF) score (from the Complete University Guide): .284 (sig 2-tailed .043; N 51).
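For anyone who wants to reproduce this kind of calculation, here is a minimal Python sketch using pandas and scipy. The file name and column names are hypothetical stand-ins for the QS and Complete University Guide data:

```python
# Minimal sketch: Pearson correlation with a two-tailed significance test,
# as reported above. File and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("uk_universities.csv")   # assumed table of indicator scores
pairs = df[["qs_international_faculty", "ref_score"]].dropna()

r, p = pearsonr(pairs["qs_international_faculty"], pairs["ref_score"])
print(f"r = {r:.3f}, sig (2-tailed) = {p:.3f}, N = {len(pairs)}")
```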

So it seems that international students are valued for the money they bring with them and international faculty for boosting research quality.

Caveat: this applies to highly ranked universities in the UK. How far it is true of other places or even less prestigious British institutions remains to be seen.


Monday, May 01, 2017

What good are international students?

There has been a big fuss in the UK about the status of international students. Many in the higher education industry are upset that the government insists on including these students in the overall total of immigrants, which might lead at some point to a reduction in their numbers. Leading twitterati have erupted in anger. Phil Baty of THE has called the government's decision "bizarre, bonkers & deeply depressing" and even invoked the name of the arch demon Enoch Powell in support.

So are international students a benefit to British universities? I have just done a quick correlation of the scores for the international students indicator in the QS World University Rankings to see whether there is any link between positive outcomes for students and the number of international students.

This is of course only suggestive. QS provides scores for only 51 UK universities included in their world top 500, and the situation might be different in other countries. Another caveat is that international students might provide net economic benefits for surrounding communities, although that is far from settled.

Here are the correlations with the QS international student score (significance 2 tailed and N in brackets).

Value added .182 (.206; 50)    From the Guardian Rankings 2016. Compares entry qualifications with degrees awarded.

Career .102 (.480; 50) Graduate-level employment or postgraduate study six months after graduation. Also from the Guardian rankings. The correlation with the graduate destinations indicator, based on the same data, in the Times Higher Education TEF simulation is even lower, .018, and turns negative after benchmarking, -.172.

Students completing degrees .128 (.376; 50). From the TEF simulation. Again, the correlation turns negative after benchmarking.

QS Employers reputation survey .234 (.140; 41). From the 2016 world rankings.

So the number of international students has a slight and statistically insignificant relationship with the quality of teaching and learning as measured by value added, graduate employability, course completion and reputation with employers. Why then are universities so desperate to get as many as possible?

This, I think, is the answer. The correlation between the QS international students indicator and spending per student, as measured by the Guardian ranking, is .414 (.003; 50), which is very significant considering the noise generated in comparisons of this sort. Of course, correlation does not equal causation, but it seems a reasonable hypothesis that it is the money brought by international students that makes them so attractive to British universities.


Sunday, April 23, 2017

UTAR and the Times Higher Education Asian University Rankings


Recently, Universiti Tunku Abdul Rahman (UTAR), a private Malaysian university, welcomed what appeared to be an outstanding performance in the Times Higher Education (THE) Asian Universities Rankings, followed by a good score in the magazine’s Young University Rankings. This has been interpreted as a remarkable achievement not just for UTAR but also for Malaysian higher education in general.

In the Asian rankings, UTAR is ranked in the top 120 and second in Malaysia behind Universiti Malaya (UM) and ahead of the major research universities, Universiti Sains Malaysia, Universiti Kebangsaan Malaysia and Universiti Putra Malaysia.

This is in sharp contrast to other rankings. There is a research-based ranking published by Middle East Technical University that puts UTAR 12th in Malaysia and 589th in Asia. The Webometrics ranking, which is mainly web-based with one research indicator, has it 17th in Malaysia and 651st in Asia.

The QS rankings, known to be kind to South East Asian universities, put UTAR in the 251-300 band for Asia and 14th= in Malaysia, behind places like Taylor's University and Multimedia University and in the same band as Universiti Malaysia Perlis and Universiti Malaysia Terengganu. UTAR does not appear in the Shanghai rankings or the Russian Round University Rankings.

Clearly, THE is the odd man out among rankings in its assessment of UTAR. I suspect that, if challenged, a spokesperson for THE might say that this is because they measure things other than research. That is very debatable. Bahram Bekhradnia of the Higher Education Policy Institute has argued in a widely cited report that these rankings are of little value because they are almost entirely research-orientated.

In fact, UTAR did not perform so well in the THE Asian rankings because of teaching, internationalisation or links with industry. It did not even do well in research. It did well because of an “outstanding” score for research impact and it got that score because of the combination of a single obviously talented researcher with a technically defective methodology.

Just take a look at UTAR’s scores for the various components in the THE Asian rankings. For Research UTAR got a very low score of 9.6, the lowest of the nine Malaysian universities featured in these rankings (100 represents the top score in all the indicators).

For Teaching it has a score of 15.9, also the lowest of the ranked Malaysian universities.

For International Orientation, it got a score of 33.2. This was not quite the worst in Malaysia. Universiti Teknologi MARA (UiTM), which does not admit non-bumiputra Malaysians, let alone international students, did worse.

For Industry Income UTAR’s score was 32.9, again surpassed by every Malaysian university except UiTM.

So how on earth did UTAR manage to get into the top 120 in Asia and second in Malaysia?

The answer is that it got an “excellent” score of 56.7 for Research Impact, measured by field-normalised citations, higher than every other Malaysian university, including UM, in these rankings.

That score is also higher than those of several major international research universities such as National Taiwan University, the Indian Institute of Technology Bombay, Kyoto University and Tel Aviv University. That alone should make the research impact score very suspicious. Also, compare it with the low score for Research, which combines three metrics: research reputation, research income and publications. Somehow UTAR has managed to have a huge impact on the research world even though it receives little money for research, does not have much of a reputation for research, and does not publish very much.

The THE research impact (citations) indicator is very problematical in several ways. It regularly produces utterly absurd results such as Alexandria University in Egypt in fourth place for research impact in the world in 2010 and St George’s, University of London (a medical school), in first place last year, or Anglia Ruskin University, a former art school, equal to Oxford and well ahead of Cambridge University.

In addition, to flog a horse that should have decomposed by now, Veltech University in Chennai, India, has, according to THE, the biggest research impact in Asia and perhaps, if it qualified for the World Rankings, the biggest in the world. This was achieved by massive self-citation by exactly one researcher and a little bit of help from a few friends.

Second in Asia for research impact, THE would have us believe, is King Abdulaziz University of Jeddah, which has been on a recruiting spree of adjunct faculty whose duties might include visiting the university but certainly do require putting its name as a secondary affiliation on research papers.

To rely on the THE rankings as a measure of excellence is unwise. There were methodological changes in 2011, 2015 and 2016, which have contributed to universities moving up or down many places even when there has been no objective change. Middle East Technical University in Ankara, for example, fell from 85th place in 2014-15 to the 501-600 band in 2015-16 and then to the 601-800 band in 2016-17. Furthermore, adding new universities means that the averages from which the final scores are calculated are likely to fluctuate.

In addition, THE has been known to recalibrate the weight given to its indicators in their regional rankings and this has sometimes worked to the advantage of whoever is the host of THE’s latest exciting and prestigious summit. In 2016, THE’s Asian rankings featured an increased weight for research income from industry and a reduced one for teaching and research reputation. This was to the disadvantage of Japan and to the benefit of Hong Kong where the Asian summit was held.

So, is UTAR really more influential among international researchers than Kyoto University or the National Taiwan University?

What actually happened to UTAR is that it has an outstanding medical researcher who is involved in a massive international medical project, with hundreds of collaborators from hundreds of institutions, that produces papers that have already been cited hundreds of times and will in the next few years be cited thousands of times. One of these papers had, by my count, 720 contributors from 470 universities and research centres and has so far received 1,036 citations, 695 in 2016 alone.

There is absolutely nothing wrong with such projects but it is ridiculous to treat every one of those 720 contributors as though they were the sole author of the paper with credit for all the citations, which is what THE does. This could have been avoided simply by using fractional counting and dividing the number of citations by the number of authors or number of affiliating institutions. This is an option available in the Leiden Ranking, which is the most technically expert of the various rankings. THE already does this for publications with over 1,000 contributors but that is obviously not enough.
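A minimal sketch of what fractional counting looks like, using the paper described above; the numbers come from the text, and this is an illustration of the principle rather than any ranker's actual pipeline:

```python
# Fractional counting: divide a paper's citations among its contributors
# instead of crediting each one with the full total.

def fractional_credit(citations: int, n_contributors: int) -> float:
    """Each contributor receives an equal share of the citations."""
    return citations / n_contributors

# The paper described above: 720 contributors, 1,036 citations so far.
full = 1036                                # whole counting: every contributor gets it all
fractional = fractional_credit(1036, 720)
print(full, round(fractional, 2))          # 1036 vs roughly 1.44 citations each
```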

I would not go as far as Bahram Bekhradnia and other higher education experts and suggest that universities should ignore rankings altogether. But if THE are going to continue to peddle such a questionable product then Malaysian universities would be well advised to keep their distance. There are now several other rankings on the market that could be used for benchmarking and marketing.

It is not a good idea for UTAR to celebrate its achievement in the THE rankings. It is quite possible that the researcher concerned will one day go elsewhere or that THE will tweak its methodology again. If either happens the university will suffer from a precipitous fall in the rankings along with a decline in its public esteem. UTAR and other Malaysian universities would be wise to treat the THE rankings with a great deal of caution and scepticism.


Thursday, April 13, 2017

University Challenge: It wasn't such a dumb ranking.

A few years ago I did a crude ranking, derived from Wikipedia, of winners and runners-up of the UK quiz show University Challenge, to show that British university rankings were becoming too complex and sophisticated. Overall it was not too dissimilar to national rankings and certainly more reasonable than the citations indicator of the THE world rankings. At the top was Oxford, followed by Cambridge and then Manchester. The first two were actually represented by constituent colleges. Manchester is probably underrepresented because it was expelled for several years after its team tried to sabotage a show by giving answers like Marx, Trotsky and Lenin to all or most of the questions, striking a blow against bourgeois intellectual hegemony or something.

Recently Paul Greatrix of Wonk HE did a list of the ten dumbest rankings ever. The University Challenge ranking was ninth because everybody knows who will win. He has a point. Cambridge and Oxford colleges are disproportionately likely to be in the finals.

Inevitably there is muttering about not enough women and ethnic minorities on the teams. The Guardian complains that only 22% of this year's contestants were women, and none of the finalists. The difference between the number of finalists and the number of competitors might, however, suggest that there is a bias in favour of women in the processes of team selection.

Anyway, here is a suggestion for anyone concerned that the University Challenge teams don't look like Britain or the world. Give each competing university or college the opportunity, if they wish, to submit two teams, one of which will be composed of women and/or aggrieved minorities, and see what happens.

James Thompson at the Unz Review has some interesting comments. It seems that general knowledge is closely associated with IQ and to a lesser extent with openness to experience. This is in fact a test of IQ aka intelligence aka general mental ability. 

So it's not such a dumb ranking after all.



Wednesday, April 12, 2017

Exactly how much is five million Euro worth converted into ranking places?

One very useful piece of information to emerge from the Trinity College Dublin (TCD) rankings fiasco is the likely effect on the rankings of injecting money into universities.

When TCD reported to Times Higher Education (THE) that it had almost no income at all, 355 Euro in total, of which 111 Euro was research income and 5 Euro from industry, it was ranked in the 201-250 band in the world university rankings. Let's be generous and say that it was 201st. But when the correct numbers were inserted, 355 million in total (of which 111 million is research income and 5 million industry income), it was in 131st= place.

So we can say crudely that increasing (or rather reporting) overall institutional income by 5 million Euro (keeping the proportions for research income and industry income constant) translates into one place in the overall world rankings.
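The back-of-the-envelope arithmetic, for anyone who wants to check it (taking 201st, the top of the band, as the generous starting point):

```python
# Rough cost of a ranking place implied by the TCD data correction
income_gap = 355_000_000 - 355      # euros added when the error was fixed
places_gained = 201 - 131           # from (roughly) 201st to 131st=
print(round(income_gap / places_gained / 1_000_000, 2), "million euro per place")
# ~5.07 million euro per place
```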

Obviously this is not going to apply as we go up the rankings. I suspect that Caltech will need a lot more than an extra 5 million Euro or 5 million anything to oust Oxford from the top of the charts.

Anyway, there it is. Five million Euro and the national flagship advances one spot up the THE world rankings. It sounds a lot but when the minister for arts, sports and tourism spends 120 Euro for hat rental, and thousands for cars and hotels, there are perhaps worse things the Irish government could do with the taxpayers' money.

Tuesday, April 11, 2017

Should graduation rates be included in rankings?


There is a noticeable trend for university rankings to become more student- and teaching-centred. Part of this is a growing interest in using graduation rates as a ranking metric. Bob Morse of US News says "[t]his is why we factor in graduation rates. Getting into college means nothing if you can't graduate."

The big problem though is that if universities can influence or control standards for graduation then the value of this metric is greatly diminished. A high graduation rate might mean effective teaching and meritocratic admissions: it might also mean nothing more than a relaxation of standards.

But we do know that dropping out or not finishing university is the road to poverty and obscurity. Think of poor Robert Zimmerman, University of Minnesota dropout, singing for a pittance in coffee bars or Kingsley Amis toiling at University College Swansea for 13 years, able to afford only ten cigarettes a day, after failing his Oxford B Litt exam. Plus all those other failures like Mick Jagger (LSE) and Bill Gates (Harvard). So it could be that graduation rates as a ranking indicator are here to stay.


Friday, April 07, 2017

Job Application

A few years ago an elderly school teacher told me about a pupil who when asked to write an application for a dream job chose Archbishop of Canterbury "because I believe in God and know lots of Bible stories." These days he'd probably be over-qualified but never mind.

So I think it's time to start sending out applications to Ranking Task Forces and the like. I know those zeros at the end of a number are important, that I should click submit AFTER filling in the data field, and that Stellenbosch is in Africa.

Update: Corrected a spelling error in the title without a complaint from anyone.

Thursday, April 06, 2017

Trinity College Shoots Itself in the Other Foot

The story so far. Trinity College Dublin (TCD) has been flourishing over the last decade according to the Shanghai and Round University Rankings (RUR) world rankings, which have stable methodologies. The university leadership has, however, been complaining about its decline in the Times Higher Education (THE) and QS rankings, which it attributes to the philistine refusal of the government to give TCD the money that it wants.

It turns out that the decline in the THE rankings was due to a laughable error. TCD had submitted incorrect data to THE, 355 Euro for total income, 111 for research income and 5 for income from industry instead of 355 million, 111 million and 5 million. Supposedly, this was the result of an "innocent mistake." 

Today, the Round University Rankings released their 2017 league table. These rankings are derived from the Global Institutional Profiles Project (GIPP), run by Thomson Reuters and now by Clarivate Analytics, and used until 2014 by THE. TCD has fallen from 102nd place to 647th, well below Maynooth and the Dublin Institute of Technology. The decline was catastrophic for the indicators based on institutional data and very slight for those derived from surveys and bibliometric information.

What happened? It was not the tight fists of the government. TCD apparently just submitted the data form to GIPP without providing data. 

No doubt another innocent mistake. It will be interesting to see what the group of experts in charge of rankings at TCD has to say about this.

By the way, University College Dublin continues to do well in these rankings, falling a little bit from 195th to 218th. 


Doing Something About Citations and Affiliations

University rankings have proliferated over the last decade. The International Rankings Expert Group's (IREG) inventory of national rankings counted 60, and there are now 40 international rankings, including global, regional, subject, business school and system rankings.

In addition, there have been a variety of spin-offs and extracts from the global rankings, especially those published by Times Higher Education, including Asian, Latin American, African, MENA, Young University and "most international universities" rankings. The value of these varies but that of the Asian rankings must now be considered especially suspect.

THE have just released the latest edition of their Asian rankings using the world rankings indicators with a recalibration of the weightings. They have reduced the weighting given to the teaching and research reputation surveys and increased that for research income, research productivity and income from industry. Unsurprisingly, Japanese universities, with good reputations but affected by budget cuts, have performed less well than in the world rankings.

These rankings have, as usual, produced some results that are rather counter intuitive and illustrate the need for THE, other rankers and the academic publishing industry to introduce some reforms in the presentation and counting of publications and citations.

As usual, the oddities in the THE Asian rankings have a lot to do with the research impact indicator, supposedly measured by citations. This, it needs to be explained, does not simply count the number of citations but compares them with the world average for over three hundred fields, five years of publications and six years of citations. Added to all that is a "regional modification", applied to half of the indicator, by which the score for each university is divided by the square root of the score for the country in which the university is located. This effectively gives a boost to everybody except those in the top-scoring country, one that can be quite significant for countries with a low citation impact.
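As a rough sketch of how the regional modification works; the exact normalisation THE uses is not public in detail, so treat the 50/50 blend and the scale of the scores as assumptions based on the description above:

```python
import math

def regionally_modified(university_impact: float, country_impact: float) -> float:
    """Blend a raw field-normalised impact score with a regionally modified
    one, in which the score is divided by the square root of the country's
    average score. Scores are assumed to be expressed relative to a world
    average of 1.0 (assumption)."""
    modified = university_impact / math.sqrt(country_impact)
    return 0.5 * university_impact + 0.5 * modified

# A university at the world average (1.0) in a low-impact country (0.25)
print(regionally_modified(1.0, 0.25))   # 1.5: a 50% boost from the modification
```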

What this means is that a university with a minimal number of papers can rack up a large and disproportionate score if it can collect large numbers of citations for a relatively small number of papers. This appears to be what has contributed to the extraordinary success of the institution variously known as Vel Tech University, Veltech University, Veltech Dr. RR & Dr. SR University and Vel Tech Rangarajan Dr Sagunthala R & D Institute of Science and Technology.

The university has scored a few local achievements, most recently ranking 58th among engineering institutions in the latest Indian NIRF rankings, but internationally, as Ben Sowter has indicated on Quora, it is way down the ladder or even unable to get onto the bottom rung.

So how did it get to be the third best university and best private university in India according to the THE Asian rankings? How could it have the highest research impact of any university in Chennai, Tamil Nadu, India and Asia, and perhaps the highest or second highest in the world?

Ben Sowter of the QS Intelligence Unit has provided the answer. It is basically due to industrial-scale self-citation.

"Their score of 100 for citations places them as the topmost university in Asia for citations, more than 6 points clear of their nearest rival. This is an indicator weighted at 30%. Conversely, and very differently from other institutions in the top 10 for citations, with a score of just 8.4 for research, they come 285/298 listed institutions. So an obvious question emerges, how can one of the weakest universities in the list for research, be the best institution in the list for citations?
The simple answer? It can’t. This is an invalid result, which should have been picked up when the compilers undertook their quality assurance checks.
It’s technically not a mistake though, it has occurred as a result of the Times Higher Education methodology not excluding self-citations, and the institution appears to have, for either this or other purposes, undertaken a clear campaign to radically promote self-citations from 2015 onwards.
In other words and in my opinion, the university has deliberately and artificially manipulated their citation records, to cheat this or some other evaluation system that draws on them.
The Times Higher Education methodology page explains: The data include the 23,000 academic journals indexed by Elsevier’s Scopus database and all indexed publications between 2011 and 2015. Citations to these publications made in the six years from 2011 to 2016 are also collected.
So let’s take a look at the Scopus records for Vel Tech for those periods. There are 973 records in Scopus on the primary Vel Tech record for the period 2011–2015 (which may explain why Vel Tech have not featured in their world ranking which has a threshold of 1,000). Productivity has risen sharply through that period from 68 records in 2011 to 433 records in 2015 - for which due credit should be afforded.
The issue begins to present itself when we look at the citation picture. "
He continues:
 "That’s right. Of the 13,864 citations recorded for the main Vel Tech affiliation in the measured period 12,548 (90.5%) are self-citations!!
A self-citation is not, as some readers might imagine, one researcher at an institution citing another at their own institution, but that researcher citing their own previous research, and the only way to a group of researchers will behave that way collectively on this kind of scale so suddenly, is to have pursued a deliberate strategy to do so for some unclear and potentially nefarious purpose.
It’s not a big step further to identify some of the authors who are most clearly at the heart of this strategy by looking at the frequency of their occurence amongst the most cited papers for Vel Tech. Whilst this involves a number of researchers, at the heart of it seems to be Dr. Sundarapandian Vaidyanathan, Dean of the R&D Center.
Let’s take as an example, a single paper he published in 2015 entitled “A 3-D novel conservative chaotic system and its generalized projective synchronization via adaptive control”. Scopus lists 144 references, 19 of which appear to be his own prior publications. The paper has been cited 114 times, 112 times by himself in other work."

In addition, the non-self citations are from a very small number of people, including his co-authors. Basically his audience is himself and a small circle of friends.
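Checking for this sort of pattern is straightforward with bibliometric data. A minimal sketch, assuming we have, for every citation a paper receives, the citing paper's author list (the data structures and names are hypothetical):

```python
# Share of an author's citations that are self-citations: a citation counts
# as a self-citation when the cited author also appears on the citing paper.

def self_citation_share(author: str, citing_author_lists: list[list[str]]) -> float:
    """citing_author_lists holds one author list per citation received."""
    if not citing_author_lists:
        return 0.0
    self_cites = sum(author in authors for authors in citing_author_lists)
    return self_cites / len(citing_author_lists)

# Toy example in the spirit of the figures quoted above
citations = [["S. Vaidyanathan"]] * 9 + [["A. N. Other"]]
print(self_citation_share("S. Vaidyanathan", citations))  # 0.9
```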

Another point is that Dr Vaidyanathan has published in a limited number of journals and conference proceedings, the most important of which are the International Journal of Pharmtech Research and the International Journal of Chemtech Research, both of which have Vaidyanathan as an associate editor. My understanding of Scopus procedures for inclusion and retention in the database is that the number of citations is very important. I was once associated with a journal that was highly praised by the Scopus reviewers for the quality of its contents but rejected because it had few citations. I wonder if Scopus's criteria include watching out for self-citations.

The Editor in Chief of the International Journal of Chemtech Research is listed as Bhavik J Bhatt, who received his PhD from the University of Iowa in 2013 and does not appear to have ever held a full-time university post.

The Editor in Chief of the International Journal of Pharmtech Research is Moklesur R Sarker, associate professor at Lincoln University College Malaysia, which in 2015 was reported to be in trouble for admitting bogus students.

I will be scrupulously fair and quote Dr Vaidyanathan.

"I joined Veltech University in 2009 as a Professor and shortly, I joined the Research and Development Centre at Veltech University. My recent research areas are chaos and control theory. I like to stress that research is a continuous process, and research done in one topic becomes a useful input to next topic and the next work cannot be carried on without referring to previous work. My recent research is an in-depth study and discovery of new chaotic and hyperchaotic systems, and my core research is done on chaos, control and applications of these areas. As per my Scopus record, I have published a total of 348 research documents. As per Scopus records, my work in chaos is ranked as No. 2, and ranked next to eminent Professor G. Chen. Also, as per Scopus records, my work in hyperchaos is ranked as No. 1, and I have contributed to around 50 new hyperchaotic systems. In Scopus records, I am also included in the list of peers who have contributed in control areas such as ‘Adaptive Control’, ‘Backstepping Control’, ‘Sliding Mode Control’ and ‘Memristors’. Thus, the Scopus record of my prolific research work gives ample evidence of my subject expertise in chaos and control. In this scenario, it is not correct for others to state that self-citation has been done for past few years with an intention of misleading others. I like to stress very categorically that the self-citations are not an intention of me or my University.         
I started research in chaos theory and control during the years 2010-2013. My visit to Tunisia as a General Chair and Plenary Speaker in CEIT-2013 Control Conference was a turning point in my research career. I met many researchers in control systems engineering and I actively started my research collaborations with foreign faculty around the world. From 2013-2016, I have developed many new results in chaos theory such as new chaotic systems, new hyperchaotic systems, their applications in various fields, and I have also published several papers in control techniques such as adaptive control, backstepping control, sliding mode control etc. Recently, I am also actively involved in new areas such as fractional-order chaotic systems, memristors, memristive devices, etc."
...
"Regarding citations, I cite the recent developments like the discovery of new chaotic and hyperchaotic systems, recent applications of these systems in various fields like physics, chemistry, biology, population ecology, neurology, neural networks, mechanics, robotics, chaos masking, encryption, and also various control techniques such as active control, adaptive control, backstepping control, fuzzy logic control, sliding mode control, passive control, etc,, and these recent developments include my works also."


His claim that self-citation was not his intention is odd. Was he citing in his sleep, or was he possessed by an evil spirit when he wrote his papers or signed off on them? The claim about citing recent developments that include his own work misses the point. Certainly somebody like Chomsky would cite himself when reviewing developments in formal linguistics, but he would also be cited by other people. Aside from himself and his co-authors, Dr Vaidyanathan is cited by almost nobody.

The problems with the citations indicator in the THE Asian rankings do not end there. Here are a few cases of universities with very low scores for research and unbelievably high scores for research impact:

King Abdulaziz University is ranked second in Asia for research impact. This is an old story and it is achieved by the massive recruitment of adjunct faculty culled from the lists of highly cited researchers.

Toyota Technological Institute is supposedly best in Japan for research impact, which I suspect would be news to most Japanese academics, but 19th for research.

Atilim University in Ankara is supposedly the best in Turkey for research impact but also has a very low score for research.

The high citations score for Quaid-i-Azam University in Pakistan results from participation in the multi-author physics papers derived from the CERN projects. In addition, there is one hyper-productive researcher in applied mathematics.

Tokyo Metropolitan University gets a high score for citations because of a few much-cited papers in physics and molecular genetics.

Bilkent University is a contributor to frequently cited multi-author papers in genetics.

According to THE, Universiti Tunku Abdul Rahman (UTAR) is the second best university in Malaysia and the best for research impact, something that will come as a surprise to anyone with the slightest knowledge of Malaysian higher education. This is because of participation in the global burden of disease study, whose papers propelled Anglia Ruskin University to the apex of British research. Other universities with disproportionate scores for research impact include Soochow University (China), Northeast Normal University (China), Jordan University of Science and Technology, Panjab University (India), COMSATS Institute of Information Technology (Pakistan) and Yokohama City University (Japan).

There are some things that the ranking and academic publishing industries need to do about the collection, presentation and distribution of publications and citations data.


1.  All rankers should exclude self-citations from citation counts. This is very easy to do, just clicking a box, and has been done by QS since 2011. It would be even better if intra-university and intra-journal citations were excluded as well.

2.  There will almost certainly be a growing problem with the recruitment of adjunct staff who will be asked to do no more than list an institution as a secondary affiliation when publishing papers. It would be sensible if academic publishers simply insisted that there be only one affiliation per author. If they do not, it should be possible for rankers to count only the first-named author.

3.  The more fields there are, the greater the chance that rankings can be skewed by strategically or accidentally placed citations. The number of fields used for normalisation should be kept reasonably small.

4. A visit to the Leiden Ranking website and a few minutes tinkering with their settings and parameters will show that citations can be used to measure several different things. Rankers should use more than one indicator to measure citations.

5. It defies common sense for any ranking to give a greater weight to citations than to publications. Rankers need to review the weighting given to their citation indicators. In particular, THE needs to think about its regional modification, which has the effect, noted above, of increasing the citations score for nearly everybody and so pushing the actual weighting of the indicator above 30 per cent.

6. Academic publishers and databases like Scopus and Web of Science need to audit journals on a regular basis.



Tuesday, April 04, 2017

The Trinity Affair Gets Worse


Trinity College Dublin (TCD) has been doing extremely well over the last few years, especially in research. It has risen in the Shanghai ARWU rankings from the 201-300 to the 151-200 band and from 174th to 102nd in the RUR rankings.

You would have thought that would be enough for any aspiring university and that they would be flying banners all over the place. But TCD has been too busy lamenting its fall in the Times Higher Education (THE) and QS world rankings, which it attributed to the reluctance of the government to give it as much money as it wanted. Inevitably, a high-powered Rankings Steering Group headed by the Provost was formed to turn TCD around.

In September last year the Irish Times reported that the reason, or part of the reason, for the fall in the THE world rankings was that incorrect data had been supplied. The newspaper said that:

"The error is understood to have been spotted when the college – which ranked in 160th place last year – fell even further in this year’s rankings.
The data error – which sources insist was an innocent mistake – is likely to have adversely affected its ranking position both this year and last."
I am wondering why "sources" were so keen to insist that it was an innocent mistake. Has someone been hinting that it might have been deliberate?

It now seems that the mistake was not just a misplaced decimal point. It was a decimal point moved six places to the left so that TCD reported a total income of 355 Euro, a research income of 111 Euro and 5 Euro income from industry instead of 355, 111, and 5 million respectively. I wonder what will happen to applications to the business school.

What is even more disturbing, although perhaps not entirely surprising, is that THE's game-changing auditors did not notice.


Sunday, March 19, 2017

The ten smartest university rankings in the world (or lists if you want to be pedantic)

Paul Greatrix at Wonk HE has just published a list of the ten dumbest rankings in the world. Some I would agree with but the choice of others seems a little odd. He objects to U-Multirank because it is expensive, which is unfair when you consider the money that universities are spending on summits, consultancies, audits, ranking task forces and the like. I personally find the Webometrics methodology comprehensible, although I admit that I am still not sure exactly what a bad practice is.

Anyway, the dumbest rankings list should be supplemented with a list of the smartest rankings. Criteria for inclusion are innovative and imaginative methodology, inclusion of formerly marginalised institutions, groups or individuals, cutting edge insights, or significant social utility. They are not in order since they are all, like all rankings and all US liberal arts colleges, unique, some of them extremely so.



  • The Campus Squirrel Listings. "The quality of an institution of higher learning can often be determined by the size, health and behavior of the squirrel population on campus." Top of the charts with five acorns are Kansas State University, Rice University, Ursinus College, Lehigh University, Susquehanna University, and the US Naval Academy.
  • The Fortunate 500 University Rankings by the Higher School of Economics Moscow uses a brilliantly sophisticated methodology that is unbiased by exam results, teaching or research. Linkoping University in Sweden is number one.
  • Ben Sowter of QS has said that his favourite ranking is GreenMetrics because it is the only one in which his alma mater, the University of Nottingham, is top. Similarly, I am very fond of the Research Ranking of African Universities (sorry, dead link) in which my former employer, Umar ibn El-Kanemi College of Education, Science and Technology, Nigeria,  is ranked 988th.
  • The Times Higher Education World University Rankings and spin-offs have done wonderful work over the years in identifying unsuspected pockets of excellence. Last year they had Anglia Ruskin University in Cambridge equal to Oxford for research impact measured by citations and well ahead of that other place in Cambridge.
  • This tradition is continued in the 2017 Asian Universities Rankings, which have discovered that Veltech University is the third best university in India and the best in Asia for research impact.
  • Princeton Review's Stone Cold Sober Universities (staying off alcohol and drugs) is very predictable. Brigham Young University in Utah is always first and the upper ranks are filled with service academies and Christian schools. As long as the Air Force Academy stays in the top ten the world can sleep safely.
  • Three years ago Huffington Post published a list of the coldest colleges in the USA. Number one was not the University of Alaska but Minnesota State University.
  • There does not seem to be a formal ranking of universities that produce comedians but if there was then Cambridge, whose graduates include John Cleese, Peter Cook and Richard Ayoade, would surely be at the top. Oxford would obviously be the best for producing dancers.





Tuesday, February 28, 2017

Will Asia start rising again?

Times Higher Education (THE) has long suffered from the curse of field-normalised citations, which without fail produce interesting (in the Chinese-curse sense) results every year.

Part of THE's citation problem is the kilo-paper issue: papers, mainly in particle physics, with hundreds or thousands of authors and hundreds or thousands of citations. The best known case is 'Combined Measurement of the Higgs Boson Mass in pp Collisions ...' in Physical Review Letters, which has 5,154 contributors.

If every contributor to such papers is given equal credit for the citations then his or her institution is awarded thousands of citations. Combined with other attributes of this indicator, this means that a succession of improbable places, such as Tokyo Metropolitan University and Middle East Technical University, has soared to the research impact peaks in the THE world rankings.

THE have already tried a couple of variations on counting citations for this sort of paper. In 2015 they introduced a cap, simply not counting any paper with more than a thousand authors. Then in 2016 they decided to give such authors a minimum credit of 5% of the citations.

That meant that in the 2014 THE world rankings an institution with one contributor to a paper with 2,000 authors and 2,000 citations would be counted as being cited 2,000 times, in 2015 not at all and in 2016 100 times. The result was that many universities in Japan, Korea, France and Turkey suffered catastrophic falls in 2015 and then made a modest comeback in 2016.

But there may be more to come. A paper by Louis de Mesnard in the European Journal of Operational Research proposes a new formula, (n+2)/3n, so that if a paper has two authors each one gets two thirds of the credit. If it has 2,000 authors, each one is assigned about a third of the credit, or 334 citations for every thousand the paper receives.
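Here is a minimal sketch of the four counting schemes side by side, using the worked example above. The 2016 rule is rendered as a flat 5% minimum, which is an assumption about the details of THE's implementation:

```python
def full_credit(citations: int, n_authors: int) -> float:
    # Pre-2015: every author's institution gets the full citation count
    return float(citations)

def capped_credit(citations: int, n_authors: int) -> float:
    # 2015: papers with more than 1,000 authors are simply not counted
    return 0.0 if n_authors > 1000 else float(citations)

def minimum_credit(citations: int, n_authors: int) -> float:
    # 2016: authors of kilo-papers receive at least 5% of the citations
    if n_authors > 1000:
        return max(0.05 * citations, citations / n_authors)
    return float(citations)

def mesnard_credit(citations: int, n_authors: int) -> float:
    # de Mesnard's proposal: each author receives (n+2)/(3n) of the credit
    return citations * (n_authors + 2) / (3 * n_authors)

# The paper above: 2,000 authors, 2,000 citations
for scheme in (full_credit, capped_credit, minimum_credit, mesnard_credit):
    print(scheme.__name__, round(scheme(2000, 2000), 1))
# full_credit 2000.0, capped_credit 0.0, minimum_credit 100.0, mesnard_credit 667.3
```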

Mesnard's paper has been given star billing in an article in THE, which suggests that the magazine is thinking about using his formula in the next world rankings.

If so, we can expect headlines about the extraordinary recovery of Asian universities in contrast to the woes of the UK and the USA suffering from the ravages of Brexit and Trump-induced depression. 


Monday, February 27, 2017

Worth Reading 8

Henk F Moed, Sapienza University of Rome

A critical comparative analysis of five world university rankings



ABSTRACT
To provide users insight into the value and limits of world university rankings, a comparative analysis is conducted of 5 ranking systems: ARWU, Leiden, THE, QS and U-Multirank. It links these systems with one another at the level of individual institutions, and analyses the overlap in institutional coverage, geographical coverage, how indicators are calculated from raw data, the skewness of indicator distributions, and statistical correlations between indicators. Four secondary analyses are presented investigating national academic systems and selected pairs of indicators. It is argued that current systems are still one-dimensional in the sense that they provide finalized, seemingly unrelated indicator values rather than offering a data set and tools to observe patterns in multi-faceted data. By systematically comparing different systems, more insight is provided into how their institutional coverage, rating methods, the selection of indicators and their normalizations influence the ranking positions of given institutions.

" Discussion and conclusions

The overlap analysis clearly illustrates that there is no such set as ‘the’ top 100 universities in terms of excellence: it depends on the ranking system one uses which universities constitute the top 100. Only 35 institutions appear in the top 100 lists of all 5 systems, and the number of overlapping institutions per pair of systems ranges between 49 and 75. An implication is that national governments executing a science policy aimed to increase the number of academic institutions in the ‘top’ of the ranking of world universities, should not only indicate the range of the top segment (e.g., the top 100), but also specify which ranking(s) are used as a standard, and argue why these were selected from the wider pool of candidate world university rankings."



Scientometrics DOI 10.1007/s11192-016-2212-y 
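The overlap analysis itself is simple set arithmetic. A toy sketch, with membership data invented purely to show the computation:

```python
from itertools import combinations

# Invented stand-ins for the five top-100 lists
top100 = {
    "ARWU":        {f"U{i}" for i in range(0, 100)},
    "Leiden":      {f"U{i}" for i in range(15, 115)},
    "THE":         {f"U{i}" for i in range(25, 125)},
    "QS":          {f"U{i}" for i in range(10, 110)},
    "U-Multirank": {f"U{i}" for i in range(40, 140)},
}

# Institutions that make every top 100
print(len(set.intersection(*top100.values())), "in all five lists")

# Pairwise overlaps, the statistic reported as ranging from 49 to 75
for a, b in combinations(top100, 2):
    print(f"{a} & {b}: {len(top100[a] & top100[b])}")
```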

Tuesday, February 21, 2017

Never mind the rankings, THE has a huge database



There has been a debate, or perhaps the beginnings of a debate, about international university rankings following the publication of Bahram Bekhradnia's report to the Higher Education Policy Institute, with comments in University World News by Ben Sowter, Phil Baty, Frank Ziegele and Frans van Vught, and Philip Altbach and Ellen Hazelkorn, and a guest post by Bekhradnia in this blog.

Bekhradnia argued that global university rankings were damaging and dangerous because they encourage an obsession with research, rely on unreliable or subjective data, and emphasise spurious precision. He suggests that governments, universities and academics should just ignore the rankings.

Times Higher Education (THE) has now published a piece by THE rankings editor Phil Baty that does not really deal with the criticism but basically says that it does not matter very much because the THE database is bigger and better than anyone else's. This he claims is "the true purpose and enduring legacy" of the THE world rankings.

Legacy? Does this mean that THE is getting ready to abandon rankings, or maybe just the world rankings, and go exclusively into the data refining business? 

Whatever Baty is hinting at, if that is what he is doing, it does seem a rather insipid defence of the rankings to say that all the criticism is missing the point because they are the precursor to a big and sophisticated database.

The article begins with a quotation from Lydia Snover, Director of Institutional Research, at MIT:

“There is no world department of education,” says Lydia Snover, director of institutional research at the Massachusetts Institute of Technology. But Times Higher Education, she believes, is helping to fill that gap: “They are doing a real service to universities by developing definitions and data that can be used for comparison and understanding.”

This sounds as though THE is doing something very impressive that nobody else has even thought of doing. But Snover's elaboration of this point in an email gives equal billing to QS and THE as definition developers and suggests the definitions and data that they provide will improve and expand in the future, implying that they are now less than perfect. She says:

"QS and THE both collect data annually from a large number of international universities. For example, understanding who is considered to be “faculty” in the EU, China, Australia, etc.  is quite helpful to us when we want to compare our universities internationally.  Since both QS and THE are relatively new in the rankings business compared to US NEWS, their definitions are still evolving.  As we go forward, I am sure the amount of data they collect and the definitions of that data will expand and improve."

Snover, by the way, is a member of the QS advisory board, as is THE's former rankings "masterclass" partner, Simon Pratt.

Baty offers a rather perfunctory defence of the THE rankings. He talks about rankings bringing great insights into the shifting fortunes of universities. If we are talking about year-to-year changes then the fact that THE purports to chart shifting fortunes is a very big bug in its methodology. Unless there has been drastic restructuring, universities do not change much in a matter of months, and any ranking that claims to detect massive shifts over a year is simply advertising its deficiencies.

The assertion that the THE rankings are the most comprehensive and balanced is difficult to take seriously. If by comprehensive it is meant that the THE rankings have more indicators than QS or Webometrics, that is correct. But the number of indicators does not mean very much if they are bundled together and the scores hidden from the public, and if some of the indicators, the teaching survey and research survey for example, correlate so closely that they are effectively the same thing. In any case, the Russian Round University Rankings have 20 indicators compared with THE's 13 in the world rankings.

As for being balanced, we have already seen Bekhradnia's analysis showing that even the teaching and international outlook criteria in the THE rankings are really about research. In addition, THE gives almost a third of its weighting to citations. In practice that is often even more, because the effect of the regional modification, now applied to half the indicator, is to boost in varying degrees the scores of everybody except those in the best performing country.

After offering a scaled down celebration of the rankings, Baty then dismisses critics while announcing that THE "is quietly [seriously?] getting on with a hugely ambitious project to build an extraordinary and truly unique global resource." 


Perhaps some elite universities, like MIT, will find the database and its associated definitions helpful but whether there is anything extraordinary or unique about it remains to be seen.







Saturday, February 18, 2017

Searching for the Gold Standard: The Times Higher Education World University Rankings, 2010-2014


Now available at the Asian Journal of University Education. The paper has, of course, already been outdated by subsequent developments in the world of university rankings.


ABSTRACT

This paper analyses the global university rankings introduced by Times Higher Education (THE) in partnership with Thomson Reuters in 2010 after the magazine ended its association with its former data provider Quacquarelli Symonds. The distinctive features of the new rankings included a new procedure for determining the choice and weighting of the various indicators, new criteria for inclusion in and exclusion from the rankings, a revised academic reputation survey, the introduction of an indicator that attempted to measure innovation, the addition of a third measure of internationalization, the use of several indicators related to teaching, the bundling of indicators into groups, and most significantly, the employment of a very distinctive measure of research impact with an unprecedentedly large weighting. The rankings met with little enthusiasm in 2010 but by 2014 were regarded with some favour by administrators and policy makers despite the reservations and criticisms of informed observers and the unusual scores produced by the citations indicator. In 2014, THE announced that the partnership would come to an end and that the magazine would collect its own data. There were some changes in 2015 but the basic structure established in 2010 and 2011 remained intact.


Saturday, February 11, 2017

What was the greatest ranking insight of 2016?

It is now difficult to imagine a world without university rankings. If they did not exist we would have to make judgements and decisions based on the self-serving announcements of bureaucrats and politicians, reputations derived from the achievements of past decades and popular and elite prejudices.

Rankings sometimes tell us things that are worth hearing. The first edition of the Shanghai rankings revealed emphatically that venerable European universities such as Bologna, the Sorbonne and Heidelberg were lagging behind their Anglo-Saxon competitors. More recently, the rise of research-based universities in South Korea and Hong Kong and the relative stagnation of Japan has been documented by global rankings. The Shanghai ARWU also shows the steady decline in the relative research capacity of a variety of US institutions including Wake Forest University, Dartmouth College, Wayne State University, the University of Oregon and Washington State University.

International university rankings have developed a lot in recent years and, with their large databases and sophisticated methodology, they can now provide us with an expanding wealth of "great insights into the strengths and shifting fortunes" of major universities.

So what was the greatest ranking insight of 2016?  Here are the first three on my shortlist. I hope to add a few more over the next couple of weeks. If anybody has suggestions I would be happy to publish them.

One. Cambridge University isn't even the best research university in Cambridge.
You may have thought that Cambridge University was one of the best research universities in the UK or Europe, perhaps even the best. But when it comes to research impact, as measured by field- and year-normalised citations with a 50% regional modification, it isn't even the best in Cambridge. That honour, according to THE, goes to Anglia Ruskin University, a former art school. Even more remarkable is that this achievement was due to the work of a single researcher. I shall keep the name a secret in case his or her office becomes a stopping point for bus tours.

Two. The University of Buenos Aires and the Pontifical Catholic University of Chile rival the top European, American and Australian universities for graduate employability. 
The top universities for graduate employability according to the Quacquarelli Symonds (QS) employer survey are pretty obvious: Harvard, Oxford, Cambridge, MIT, Stanford. But it seems that there are quite a few Latin American universities in the world top 100 for employability. The University of Buenos Aires is 25th and the Pontifical Catholic University of Chile 28th in last year's QS world rankings employer survey indicator. Melbourne is 23rd, ETH 26th, Princeton 32nd and New York University 36th.

Three. King Abdulaziz University is one of the world's leading universities for engineering.
The conventional wisdom seems settled: pick three or four from MIT, Harvard, Stanford, Berkeley, perhaps even a star rising in the East like Tsinghua or the National University of Singapore. But in the Shanghai field rankings for engineering last year, fifth place went to King Abdulaziz University in Jeddah. For highly cited researchers in engineering it is second in the world, surpassed only by Stanford.