Sunday, September 17, 2017

Criticism of rankings from India

Some parts of the world seem to be increasingly sceptical of international rankings, or at least those produced by Times Higher Education (THE). MENA (Middle East and North Africa) and Africa did not seem to be very enthusiastic about THE's snapshot or pilot rankings. Many Latin American universities have chosen not to participate in the world and regional rankings.

India also seems to be suspicious of the rankings. An article by Vyasa Shastri in the e-paper Livemint details some of the ways in which universities might attempt to manipulate rankings to their advantage.

It is well worth reading, although I have one quibble. The article refers to King Abdulaziz University recruiting faculty who list the university as their secondary affiliation when publishing papers (there are now 41 of them). The original idea was to get top marks in the Shanghai Ranking's highly cited researchers indicator. The article correctly notes that the Shanghai rankings no longer count secondary affiliations, but such affiliations can still help in the Nature and Science and publications indicators, and in the citations and publications metrics of other rankings.

Also, other Saudi universities do not recruit large numbers of researchers with secondary affiliations: there are only four for the rest of Saudi Arabia, although I notice that there are now quite a few for Chinese and Australian universities, including five for the University of Melbourne.

Last word, I hope, on Babol Noshirvani University of Technology

If you type 'Babol University of Technology' rather than 'Babol Noshirvani University of Technology' into the Scopus search box, then the university does have enough publications to meet THE's criteria for inclusion in the world rankings.

So it seems that it was those highly cited researchers in engineering that propelled the university into the research impact stratosphere. That, and a rather eccentric methodology.

Saturday, September 09, 2017

More on Babol Noshirvani University of Technology

To answer the question in the previous post, how did Babol Noshirvani University of Technology in Iran do so well in the latest THE rankings, part of the answer is that it has two highly cited researchers in engineering, Davood Domiri Ganji and Mohsen Sheikholeslami. I see no reason to question the quality of their research.

But I still have a couple of questions. First, THE says that it excludes universities whose research output was below 1,000 articles between 2012 and 2016. But checking with Scopus indicates that the university had 468 articles over that period, or 591 documents of all kinds including conference papers, book chapters and reviews, which seems way below the threshold for inclusion. Is it possible that THE has included the Babol University of Medical Sciences in the count of publications or citations?

Those documents have been cited a total of 2,601 times, which is respectable but not on a scale that would rival Oxford and Chicago. It is possible that one or a few of those articles have, for some reason, received an unusually high number of citations compared to the world average and that this has distorted the indicator score. If so, then we have yet another example of a defective methodology producing absurd results.
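As a rough back-of-the-envelope check, using only the Scopus counts quoted above (THE's indicator is field- and year-normalised, which is not reproduced here), the raw citation rate looks like this, and it shows how easily a few outliers could dominate such a small publication base:

```python
# Rough back-of-envelope using the Scopus figures quoted above.
articles = 468     # Scopus 'article' type documents, 2012-2016
documents = 591    # all document types, 2012-2016
citations = 2601   # total citations to those documents

print(f"citations per document: {citations / documents:.1f}")   # ~4.4
print(f"citations per article:  {citations / articles:.1f}")    # ~5.6

# Hypothetical scenario: if just five of those papers attracted half of the
# citations, each would average around 260, swamping a normalised average
# calculated over so few documents.
heavy_papers = 5
print(f"citations per outlier:   {citations * 0.5 / heavy_papers:.0f}")
```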




Friday, September 08, 2017

Why did Babol Noshirvani University of Technology do so well in the THE rankings?

The THE world rankings and their regional offshoots have always been a source of entertainment mixed with a little bewilderment. Every year a succession of improbable places jumps into the upper reaches of the citations indicator, which is supposed to measure global research impact. Usually it is possible to tell what happened. Often it is because of participation in a massive international physics project (although not so much over the last couple of years), contribution to a global medical or genetics survey, or even assiduous self-citation.

However, after checking with Scopus and the Web of Science, I still cannot see exactly how Babol Noshirvani University of Technology got into 14th place for this metric, equal to Oxford and ahead of Yale and Johns Hopkins, in the latest world rankings, and into the 301-350 band overall, well ahead of every other Iranian university.

Can anybody help with an explanation? 

Tuesday, September 05, 2017

Highlights from THE citations indicator


The latest THE world rankings were published yesterday. As always, the most interesting part is the field- and year-normalised citations indicator that supposedly measures research impact.

Over the last few years, an array of implausible places have zoomed into the top ranks of this metric, sometimes disappearing as rapidly as they arrived.

The first place for citations this year goes to MIT. I don't think anyone would find that very controversial.

Here are some of the institutions that feature in the top 100 of THE's most important indicator, which has a weighting of 30 per cent.

2nd    St. George's, University of London
3rd=   University of California Santa Cruz, ahead of Berkeley and UCLA
6th=   Brandeis University, equal to Harvard
11th=  Anglia Ruskin University, UK, equal to Chicago
14th=  Babol Noshirvani University of Technology, Iran, equal to Oxford
16th=  Oregon Health and Science University
31st   King Abdulaziz University, Saudi Arabia
34th=  Brighton and Sussex Medical School, UK, equal to Edinburgh
44th   Vita-Salute San Raffaele University, Italy, ahead of the University of Michigan
45th=  Ulsan National Institute of Science and Technology, best in South Korea
58th=  University of Kiel, best in Germany and equal to King's College London
67th=  University of Iceland
77th=  University of Luxembourg, equal to University of Amsterdam












Thursday, August 24, 2017

Milestone passed

The previous post was the 1,000th.

Comment by Christian Schulz

This comment is by Christian Schulz of the University of Hamburg. He points out that the University of Hamburg's rise in the Shanghai rankings was not the result of highly cited researchers moving from other institutions but of improvements in research within the university.

If this is something that applies to other German universities, then it could be that Germany has a policy of growing its own researchers rather than importing talent from around the world. It seems to have worked very well for football, so perhaps the obsession of British universities with importing international researchers is not such a good idea.


I just wanted to share with you that we did not acquire two researchers to get on the HCR list in order to get a higher rank in the Shanghai Ranking. Those two researchers are Prof. Büchel and Prof. Ravens-Sieberer. Prof. Büchel has been working at our university for over a decade now and Prof. Ravens-Sieberer has been at our university since 2008.

Please also acknowledge that our place in the Shanghai Ranking was very stable from 2010 to 2015. We were very unhappy when they decided to only use the one-year list of HCR, because in 2015 none of our researchers made it onto the 2015 list, which caused the descent from 2015 to 2016.

Guest Post by Pablo Achard

This post is by Pablo Achard of the University of Geneva. It refers to the Shanghai subject rankings. However, the problem of outliers in subject and regional rankings is one that affects all the well-known rankings and will probably become more important over the next few years.


How a single article is worth 60 places

We can’t repeat it enough: an indicator is bad when a small variation in the input is overly amplified in the output. This is the case when indicators are based on very few events.

I recently came across this issue (again) with Shanghai's subject ranking of universities. The universities of Geneva and Lausanne (Switzerland) share the same School of Pharmacy and a huge share of the articles published in this discipline are signed in the name of both institutions. But in the "Pharmacy and pharmaceutical sciences" ranking, one is ranked between the 101st and 150th position while the other is 40th. Where does this difference come from?

Comparing the scores obtained under each category gives a clue:

Indicator      Geneva   Lausanne   Weight in the final score
PUB            46       44.3       1
CNCI           63.2     65.6       1
IC             83.6     79.5       0.2
TOP            0        40.8       1
AWARD          0        0          1
Weighted sum   125.9    166.6


So the main difference between the two institutions is the score in "TOP". Actually, the difference in the weighted sums (40.7) is almost equal to the value of this score (40.8). If Geneva and Lausanne had the same TOP score, they would be 40th and 41st.
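The arithmetic is easy to check from the table above. Here is a minimal sketch; the scores and weights are those shown in the table, and the final line assumes that a single article would have given Geneva the same 40.8 TOP score that Lausanne received:

```python
# Weighted sums from the table above: score * weight, summed per institution.
weights = {"PUB": 1, "CNCI": 1, "IC": 0.2, "TOP": 1, "AWARD": 1}

geneva   = {"PUB": 46.0, "CNCI": 63.2, "IC": 83.6, "TOP": 0.0,  "AWARD": 0.0}
lausanne = {"PUB": 44.3, "CNCI": 65.6, "IC": 79.5, "TOP": 40.8, "AWARD": 0.0}

def weighted_sum(scores):
    return sum(scores[name] * weight for name, weight in weights.items())

print(round(weighted_sum(geneva), 1))    # 125.9
print(round(weighted_sum(lausanne), 1))  # 166.6

# One article in the single listed top journal is apparently worth TOP = 40.8,
# which on its own would have pulled Geneva level with Lausanne:
geneva_plus_one_article = dict(geneva, TOP=40.8)
print(round(weighted_sum(geneva_plus_one_article), 1))  # 166.7
```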

Surprisingly, a look at other institutions for that TOP indicator shows only 5 different values: 0, 40.8, 57.7, 70.7 and 100. According to the methodology page of the ranking, "TOP is the number of papers published in Top Journals in an Academic Subject for an institution during the period of 2011-2015. Top Journals are identified through ShanghaiRanking's Academic Excellence Survey […] The list of the top journals can be found here […] Only papers of 'Article' type are considered."
Looking deeper, there is just one journal in this list for Pharmacy: NATURE REVIEWS DRUG DISCOVERY. As its name indicates, this recognized journal mainly publishes ‘reviews’. A search on Web of Knowledge shows that in the period 2011-2015, only 63 ‘articles’ were published in this journal. That means a small variation in the input is overly amplified.

I searched for several institutions and rapidly found this rule: Harvard published 4 articles during these five years and got a score of 100; MIT published 3 articles and got a score of 70.7; 10 institutions published 2 articles and got a 57.7; and finally about 50 institutions published 1 article and got a 40.8.

I still don’t get why this score is so unlinear. But Lausanne published one single article in NATURE REVIEWS DRUG DISCOVERY and Geneva none (they published ‘reviews’ and ‘letters’ but no ‘articles’) and that small difference led to at least a 60 places gap between the two institutions.


This is of course just one example of what happens too often: rankers want to publish sub-rankings and end up with indicators where outliers can't be absorbed into large distributions. One article, one prize or one co-author in a large and productive collaboration all of a sudden makes very large differences in final scores and ranks.

Friday, August 18, 2017

Comment on the 2017 Shanghai Rankings

In the previous post I referred to the vulnerabilities that have developed in the most popular world rankings, THE, QS and Shanghai ARWU: indicators that carry a large weighting and can be influenced by universities that know how to work the system, or that are sometimes just plain lucky.

In the latest QS rankings four universities from Mexico, Chile, Brazil and Argentina have 90+ scores for the academic reputation indicator, which has a 40% weighting. All of these universities have low scores for citations per faculty, which would seem at odds with a stellar research reputation. In three cases QS does not even list the score in its main table.

I have spent so much time on the normalised citation indicator in the THE world and regional rankings that I can hardly bear to revisit the issue. I will just mention the long list of universities that have achieved improbable glory by a few researchers, or sometimes just one, on a multi-author international physics, medical or genetics project.

The Shanghai rankings were once known for their stability but have become more volatile recently. The villain here is the highly cited researchers indicator, which has a 20% weighting and counts the scientists included in the lists now published by Clarivate Analytics.

It seems that several universities have now become aware that if they can recruit a couple of extra highly cited researchers to the faculty they can get a significant boost in these rankings. Equally, if they should be so careless as to lose one or two, the ranking consequences could be most unfortunate.

In 2016 a single highly cited researcher was worth 10.3 points on the Shanghai HiCi indicator, or 2.06 points on the overall score after weighting, which is the difference between 500th place and 386th. That is a good deal, certainly much better than hiring a team of consultants or sending staff for excruciating transformational sharing sessions.

Of course, as the number of HiCis increases the value of each additional researcher diminishes, so it would make little difference if a top 20 or 30 university added or lost a couple of researchers.

Take a look at some changes in the Shanghai rankings between 2016 and 2017. The University of Kyoto fell three places, from 32nd to 35th, or 0.5 points, from 37.2 to 36.7. This was due to a fall in the number of highly cited researchers from seven to five, which meant a drop of 2.7 in the HiCi score, or a weighted 0.54 points in the overall score.

McMaster University rose from 83rd to 66th, gaining 2.5 overall points. Its HiCi score went from 32.4 to 42.3, equivalent to 1.98 weighted overall points, representing an increase in the number of such researchers from 10 to 15.

Further down the charts, the University of Hamburg rose from 256th, with an overall score of 15.46, to 188th, with a score of 18.69, brought about largely by an improvement in its HiCi score from zero to 15.4, which was the result of the acquisition of two researchers.
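The arithmetic behind these weighted contributions is simple enough to sketch, taking the 20% HiCi weighting from ARWU's published methodology and the indicator scores quoted above:

```python
# Weighted contribution of a change in the ARWU HiCi score to the overall score.
HICI_WEIGHT = 0.20  # highly cited researchers indicator weighting in ARWU

def overall_delta(hici_change: float) -> float:
    """Change in the weighted overall score caused by a change in the HiCi score."""
    return round(hici_change * HICI_WEIGHT, 2)

print(overall_delta(10.3))         # 2.06  one highly cited researcher in 2016
print(overall_delta(-2.7))         # -0.54 Kyoto, seven researchers down to five
print(overall_delta(42.3 - 32.4))  # 1.98  McMaster, 10 researchers up to 15
print(overall_delta(15.4 - 0.0))   # 3.08  Hamburg, zero researchers up to two
```

The Hamburg figure of 3.08 accounts for most of its overall rise of 3.23 points, with the remainder coming from other indicators.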

Meanwhile the Ecole Polytechnique of Paris fell from 303rd place to 434th partly because of the loss of its only highly cited researcher.

It is time for ShanghaiRanking to start looking around for a Plan B for its highly cited researchers indicator.









Wednesday, August 16, 2017

Problems with global rankings

There is a problem with any sort of standardised testing. A test that is useful when a score has no financial or social significance becomes less valid when coaching industries work out how to squeeze a few points out of docile candidates and motivation becomes as important as aptitude.

Similarly, a metric used to rank universities may be valid and reliable when nobody cares about the rankings. But once they are used to determine bureaucrats' bonuses, regulate immigration, guide student applications and distribute research funding, they become less accurate. Universities will learn how to apply resources in exactly the right place, submit data in exactly the right way and engage productively with the rankers. The Trinity College Dublin data scandal, for example, indicated just how much a change in reported income can affect ranks in the THE world rankings.

All of the current "big three" global rankings have indicators that have become a source of volatility and that are given a disproportionate weighting. These are the normalised citations indicator in the THE rankings, the QS academic survey and the highly cited researchers list in the Shanghai ARWU.

Examples in the next post.


Monday, August 14, 2017

Some implications of the Universitas 21 rankings

Universitas 21 (U21) produces an annual ranking not of universities but of 50 national university systems. There are 25 criteria grouped into four categories: resources, connectivity, environment and output. There is also an overall league table.

The resources section consists of various aspects of expenditure on tertiary education. Output includes publications, citations, performance in the Shanghai rankings, tertiary enrolment, graduates and graduate employment.

The top five in the overall rankings are USA, Switzerland, UK, Denmark and Sweden. No surprises there. The biggest improvements since 2013 have been by China, Malaysia, Russia, Saudi Arabia, Singapore and South Africa.

It is interesting to compare resources with output. The top ten for resources comprises six European countries (three of them in Scandinavia), plus Canada, the USA, Singapore and Saudi Arabia.

The bottom ten includes two countries from Latin America, four from Asia (including China), three from Eastern Europe, and South Africa.

There is a significant correlation of .732 between resources and output. But the association is not uniform. China is in 43rd place for resources but 21st for output. Saudi Arabia is in the top ten for resources but 33rd for output. Malaysia is 11th for resources but 38th for output.

I have constructed a table showing the relationship between resources and output by dividing the score for output by the score for resources, which gives an indication of how efficient systems are at converting money into employable graduates, instructed students and research. This is very crude, as are the data and the way in which U21 combines them, but it might have some interesting implications.
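A minimal sketch of the calculation, with invented scores purely for illustration (the actual U21 country scores are not reproduced here):

```python
# Efficiency = output score / resources score, then rank countries by the ratio.
# The scores below are made up solely to illustrate the calculation.
scores = {
    "Country A": {"resources": 40.0, "output": 60.0},
    "Country B": {"resources": 80.0, "output": 70.0},
    "Country C": {"resources": 55.0, "output": 30.0},
}

efficiency = {country: s["output"] / s["resources"] for country, s in scores.items()}

for country, ratio in sorted(efficiency.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{country}: {ratio:.2f}")

# A system with modest resources but strong output (Country A here) rises to
# the top of such a table, which is the pattern described below for China.
```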

The top ten are:
1. China
2. USA
3. Italy
4. Russia
5. Bulgaria
6. Australia
7. UK
8. Ireland
9. Israel
10. Denmark

We have heard a lot about the lavish funding given to Chinese tertiary education. But it seems that China is also very good at turning resources into research and teaching.

The bottom ten are:

41. Austria
42. Brazil
43. Serbia
44. Chile
45. Mexico
46. India
47. Turkey
48. Ukraine
49. Saudi Arabia
50. Malaysia

At the moment the causes of low efficiency are uncertain. But it seems reasonable to suppose that the limitations of primary and secondary school systems, and cultural attitudes to science and knowledge, may be significant. The results of standardised tests such as PISA and TIMSS should be given careful attention.


Sunday, August 13, 2017

The Need for a Self Citation Index

In view of the remarkable performance of Veltech University in the THE Asian rankings, rankers, administrators and publishers need to think seriously about the impact of self-citation, and perhaps also of intra-institutional citation. Here is the abstract of an article by Justin W Flatt, Alessandro Blasimme, and Effy Vayena.

Improving the Measurement of Scientific Success by Reporting a Self-Citation Index

Abstract
Who among the many researchers is most likely to usher in a new era of scientific breakthroughs? This question is of critical importance to universities, funding agencies, as well as scientists who must compete under great pressure for limited amounts of research money. Citations are the current primary means of evaluating one’s scientific productivity and impact, and while often helpful, there is growing concern over the use of excessive self-citations to help build sustainable careers in science. Incorporating superfluous self-citations in one’s writings requires little effort, receives virtually no penalty, and can boost, albeit artificially, scholarly impact and visibility, which are both necessary for moving up the academic ladder. Such behavior is likely to increase, given the recent explosive rise in popularity of web-based citation analysis tools (Web of Science, Google Scholar, Scopus, and Altmetric) that rank research performance. Here, we argue for new metrics centered on transparency to help curb this form of self-promotion that, if left unchecked, can have a negative impact on the scientific workforce, the way that we publish new knowledge, and ultimately the course of scientific advance.
Keywords: publication ethics; citation ethics; self-citation; h-index; self-citation index; bibliometrics; scientific assessment; scientific success


Saturday, August 12, 2017

The public sector: a good place for those with bad school grades

From the Economist ranking of British universities, which is based on the difference between expected and actual graduate earnings.

 That, as Basil Fawlty said in a somewhat different context, explains a lot.  

"Many of the universities at the top of our rankings convert bad grades into good jobs. At Newman, a former teacher-training college on the outskirts of Birmingham, classes are small (the staff:student ratio is 16:1), students are few (around 3,000) and all have to do a work placement as part of their degree. (Newman became a university only in 2013, though it previously had the power to award degrees.)

Part of Newman’s excellent performance can be explained because more than half its students take education-related degrees, meaning many will work in the public sector. That is a good place for those with bad school grades. Indeed, in courses like education or nursing there is no correlation between earnings and the school grades a university expects." 

Friday, August 11, 2017

Malaysia and the Rankings Yet Again

Malaysia has had a complicated relationship with global university rankings. There was a fleeting moment of glory in 2004 when Universiti Malaya, the national flagship, leaped into the top 100 of the THES-QS world rankings. Sadly, it turned out that this was the result of an error by the rankers, who thought that ethnic minorities were international faculty and students. Since then the country's leading universities have gone up and down, usually because of methodological changes rather than any merit or fault of their own.

Recently though, Malaysia seems to have adopted sensible, if not always popular, policies and made steady advances in the Shanghai rankings. There are now three universities in the top 500, UM, Universiti Sains Malaysia (USM) and Universiti Kebangsaan Malaysia (UKM). UM has been rising since 2011 although it fell a bit last year because of the loss of a single highly cited researcher listed in the Thomson Reuters database.

The Shanghai rankings rely on public records and focus on research in the sciences. For a broader-based ranking with a consistent methodology and teaching metrics we can take a look at the Round University Rankings, where UM is 268th overall. For the 20 metrics included in these rankings, UM's scores range from very good for number of faculty and reputation (except outside the region) to poor for doctoral degrees and normalised citations.

The story told by these rankings is that Malaysia is making steady progress in providing resources and facilities, attracting international students and staff, and producing a substantial amount of research in the natural sciences. But going beyond that is going to be very difficult. Citation counts indicate that Malaysian research gets little attention from the rest of the world. The Shanghai rankings report that UM has zero scores for highly cited researchers and papers in Nature and Science.

In this year's QS world rankings, UM reached 114th place overall and there are now hopes that it will soon reach the top 100. But it should be noted that UM's profile is very skewed with a score of 65.7 for academic reputation and 24.3 for citations per faculty. Going higher without an improvement in research quality will be very challenging since the reputation curve becomes very steep at this level, with dozens of survey responses needed just to go up a few points.

It might be better if Malaysia focused more on the Shanghai rankings, the Round University Rankings and the US News Best Global Universities. Progress in these rankings is often slow and gradual but their results are usually fairly consistent and reliable.







Tuesday, August 08, 2017

Excellent Series on Rankings

I have just come across a site, ACCESS, that includes a lot of excellent material on university rankings by Ruth A Pagell, who is Emeritus Faculty Librarian at Emory University and Adjunct Faculty at the University of Hawaii.

I'll provide specific links to some of the articles later.

Go here  


Saturday, August 05, 2017

There is no such thing as free tuition

It is reported that the Philippines is introducing free tuition in state universities. It will not really be free: the government will have to find P100 billion from a possible "re-allocation of resources."

If there is a graduate premium for degrees from Philippine universities then this measure will increase existing social inequalities and result in a transfer of wealth from the working class and small businesses to the privileged educated classes.

Unless lecturers work for nothing and buildings and facilities materialize, Hogwarts style, out of nothing, tuition is never free.




Who educates the world's leaders?

According to Times Higher Education (THE), the UK has educated more heads of state and government than any other country. The USA is a close second, followed by France. No doubt this will get a lot of publicity as the THE summit heads for London but, considering the state of the world, is it really something to be proud of?



Thursday, August 03, 2017

America's Top Colleges: 2017 Rankings



America's Top Colleges is published by Forbes business magazine. It is an unabashed assessment of institutions from the viewpoint of the student as investor. The metrics are post-graduate success, debt, student experience, graduation rate and academic success.

The top three colleges are Harvard, Stanford and Yale.

The top three liberal arts colleges are Pomona, Claremont McKenna and Williams.

The top three low debt private colleges are College of the Ozarks, Berea College and Princeton.

The top three STEM colleges are MIT, Caltech and Harvey Mudd College.







Wednesday, August 02, 2017

Ranking Rankings



Hobsons, the education technology company, has produced a ranking of global university rankings. The information provided is very limited and I hope there will be more in a while. Here are the top five according to a survey of international students inbound to the USA.

1.    QS World University Rankings
2.    THE World University Rankings
3.     Shanghai ARWU
4.     US News Best Global Universities
5.     Center for World University Rankings (formerly published at King Abdulaziz University).        



University of Bolton head thinks he's worth his salary



George Holmes, vice-chancellor of the University of Bolton, who has a salary of GBP 220,120 and owns a yacht and a Bentley, is not ashamed of his pay. According to an article by Camilla Turner in the Daily Telegraph, he says that he has had a very successful career and he hopes his students will get good jobs and have Bentleys.

The university is ranked 86th in the Guardian 2018 league table, which reports that 59.2% of graduates have jobs or are in postgraduate courses six months after graduation. It does not appear in the THE or QS world rankings.

Webometrics puts it 105th in the UK and 1846th in the world so I suppose he could claim to be head of a top ten per cent university.

Perhaps Bolton should start looking for the owner of a private jet for its next vice-chancellor. It might do even better.



Tuesday, August 01, 2017

Highlights from the Princeton Review

Here are the top universities in selected categories in the latest Best Colleges Ranking from Princeton Review. The rankings are based entirely on survey data and are obviously subjective and vulnerable to sampling error.

Most conservative students: University of Dallas, Texas
Most liberal students: Reed College, Oregon
Best campus food: University of Massachusetts Amherst
Happiest students: Vanderbilt University, Tennessee
Party schools: Tulane University, Louisiana
Don't inhale: US Coast Guard Academy, Connecticut
Best college library: University of Chicago, Illinois
Best-run college: University of Richmond, Virginia
Most studious students: Harvey Mudd College, California
Most religious students: Thomas Aquinas College, California
Least religious students: Reed College, Oregon
Best athletic facilities: Auburn University, Alabama.

Monday, July 31, 2017

The world is safe for another year

The Princeton Review has just published the results of its annual survey of 382 US colleges with 62 lists of various kinds. I'll publish a few of the highlights later but for the moment here is one which should make everyone happy.

"Don't inhale" refers to not using marijuana. Four of the top five places are held by service academies (Coast Guard, Naval, Army, Air Force).

The academies also get high scores in the stone-cold sober rankings (the opposite of party schools), so everyone can feel a bit safer when they sleep tonight.


Wednesday, July 19, 2017

Comments on an Article by Brian Leiter

Global university rankings are now nearly a decade and a half old. The Shanghai rankings (Academic Ranking of World Universities or ARWU) began in 2003, followed a year later by Webometrics and the THES-QS rankings which, after an unpleasant divorce, became the Times Higher Education (THE) and the Quacquarelli Symonds (QS) world rankings. Since then the number of rankings with a variety of audiences and methodologies has expanded.

We now have several research-based rankings: University Ranking by Academic Performance (URAP) from Turkey, the National Taiwan University Rankings, Best Global Universities from US News, and the Leiden Ranking, as well as rankings that include some attempt to assess and compare something other than research, namely the Round University Rankings from Russia and U-Multirank from the European Union. And, of course, we also have subject rankings, regional rankings, even age group rankings.

It is interesting that some of these rankings have developed beyond the original founders of global rankings. Leiden Ranking is now the gold standard for the analysis of publications and citations. The Russian rankings use the same Web of Science database that THE did until 2014 and include 12 of the 13 indicators used by THE, plus another eight, in a more sensible and transparent arrangement. However, both of these receive only a fraction of the attention given to the THE rankings.

The research rankings from Turkey and Taiwan are similar to the Shanghai rankings but without the elderly or long-departed Fields and Nobel award winners and with a more coherent methodology. U-Multirank is almost alone in trying to get at things that might be of interest to prospective undergraduate students.

It is regrettable that an article by Professor Brian Leiter of the University of Chicago in the Chronicle of Higher Education, 'Academic Ethics: To Rank or Not to Rank', ignores such developments and mentions only the original "Big Three": Shanghai, QS and THE. This is perhaps forgivable, since the establishment media, including THE and the Chronicle, and leading state and academic bureaucrats have until recently paid very little attention to innovative developments in university ranking. Leiter attacks the QS rankings and proposes that they should be boycotted while trying to improve the THE rankings.

It is a little odd that Leiter should be so caustic, not entirely without justification, about QS while apparently being unaware of similar or greater problems with THE.

He begins by saying that QS stands for “quirky silliness”. I would not disagree with that although in recent years QS has been getting less silly. I have been as sarcastic as anyone about the failings of QS: see here and here for an amusing commentary.

But the suggestion that QS is uniquely bad in contrast to THE is way off target. There are many issues with the QS methodology, especially with its employer and academic surveys, and it has often announced placings that seem very questionable, such as Nanyang Technological University (NTU) ahead of Princeton and Yale, or the University of Buenos Aires in the world top 100, largely as a result of a suspiciously good performance in the survey indicators. The oddities of the QS rankings are, however, no worse than some of the absurdities that THE has served up in its world and regional rankings. We have had places like Cadi Ayyad University in Marrakesh, Morocco, Middle East Technical University in Turkey, Federico Santa Maria Technical University in Chile, Alexandria University in Egypt and Veltech University in India rise to ludicrously high places, sometimes just for a year or two, as the result of a few papers or even a single highly cited author.

I am not entirely persuaded that NTU deserves its top 12 placing in the QS rankings. You can see here QS's unconvincing reply to a question that I submitted. QS claims that NTU's excellence is shown by its success in attracting foreign faculty, students and collaborators, but when you are in a country where people show their passports to drive to the dentist, being international is no great accomplishment. Even so, it is evidently world class as far as engineering and computer science are concerned and it is not impossible that it could reach an undisputed overall top ten or twenty ranking in the next decade.

While the THE top ten or twenty or even fifty looks quite reasonable, apart from Oxford in first place, there are many anomalies as soon as we start breaking the rankings apart by country or indicator, and THE has pushed some very weird data in recent years. Look at these places supposed to be regional or international centers of across-the-board research excellence as measured by citations: St George's, University of London, Brandeis University, the Free University of Bozen-Bolzano, King Abdulaziz University, the University of Iceland, Veltech University. If QS is silly, what are we to call a ranking where Anglia Ruskin University is supposed to have a greater research impact than Chicago, Cambridge or Tsinghua?

Leiter starts his article by pointing out that the QS academic survey is largely driven by the geographical distribution of its respondents and by the halo effect. This is very probably true, and to that I would add that a lot of the responses to academic surveys of this kind are likely driven by simple self-interest, with academics voting for their alma mater or current employer. QS does not allow respondents to vote for the latter, but they can vote for the former and also for grant providers or collaborators.

He says that “QS does not, however, disclose the geographic distribution of its survey respondents, so the extent of the distorting effect cannot be determined". This is not true of the overall survey. QS does in fact give very detailed figures about the origin of its respondents and there is good evidence here of probable distorting effects. There are, for example, more responses from Taiwan than from Mainland China, and almost as many from Malaysia as from Russia. QS does not, however, go down to subject level when listing geographic distribution.

He then refers to the case of University College Cork (UCC) asking faculty to solicit friends in other institutions to vote for UCC. This is definitely a bad practice, but it was in violation of QS guidelines and QS have investigated. I do not know what came of the investigation but it is worth noting that the message would not have been an issue if it had referred to the THE survey.

On balance, I would agree that THE ‘s survey methodology is less dubious than QS’s and less likely to be influenced by energetic PR campaigns. It would certainly be a good idea if the weighting of the QS survey was reduced and if there was more rigorous screening and classification of potential respondents.

But I think we also have to bear in mind that QS does prohibit respondents from voting for their own universities and it does average results out over a five-year period (formerly three years).

It is interesting that while THE does not usually combine and average survey results, it did so in the 2016-17 world rankings, combining the 2015 and 2016 survey results. This was, I suspect, because of a substantial drop in 2016 in the percentage of respondents from the arts and humanities that would, if unadjusted, have caused a serious problem for UK universities, especially those in the Russell Group.

Leiter then goes on to condemn QS for its dubious business practices, reporting that THE dropped QS for that reason. That is what THE says, but it is widely rumoured within the rankings industry that THE was also interested in the financial advantages of a direct partnership with Thomson Reuters rather than getting data from QS.

He also refers to QS's hosting of a series of "World Class events", where world university leaders pay $950 for "seminar, dinners, coffee breaks" and "learn best practice for branding and marketing your institution through case studies and expert knowledge", and to the QS Stars plan, where universities pay to be audited by QS in return for stars that they can use for promotion and advertising. I would add to his criticism that the Stars program has apparently undergone a typical "grade inflation", with the number of five-star universities increasing all the time.

Also, QS offers specific consulting services and has a large number of clients from around the world, although there are many more from Australia and Indonesia than from Canada and the US. Of the three from the US, one is MIT, which has been number one in the QS world rankings since 2012, a position it probably achieved after a change in the way in which faculty were classified.

It would, however, be misleading to suggest that THE is any better in this respect. Since 2014 it has launched a serious and unapologetic “monetisation of data” program.

There are events such as the forthcoming world "academic summit" where, for 1,199 GBP (standard university) or 2,200 GBP (corporate), delegates can get "Exclusive insight into the 2017 Times Higher Education World University Rankings at the official launch and rankings masterclass", plus a "prestigious gala dinner, drinks reception and other networking events". THE also provides a variety of benchmarking and performance analysis services, branding, advertising and reputation management campaigns, and a range of silver and gold profiles, including adverts and sponsored supplements. THE's data clients include some illustrious names like the National University of Singapore and Trinity College Dublin, plus some less well-known places such as Federico Santa Maria Technical University, Orebro University, King Abdulaziz University, National Research Nuclear University MEPhI Moscow, and Charles Darwin University.

Among THE’s activities are regional events that promise “partnership opportunities for global thought leaders” and where rankings like “the WUR are presented at these events with our award-winning data team on hand to explain them, allowing institutions better understanding of their findings”.

At some of these summits the rankings presented are trimmed and tweaked and somehow the hosts emerge in a favourable light. In February 2015, for example, THE held a Middle East and North Africa (MENA) summit that included a "snapshot ranking" that put Texas A and M University Qatar, a branch campus that offers nothing but engineering courses, in first place and Qatar University in fourth. The ranking consisted of precisely one indicator out of the 13 that make up THE's world university rankings: field- and year-normalised citations. United Arab Emirates University (UAEU) was 11th and the American University of Sharjah in the UAE 14th.

The next MENA summit was held in January 2016 in Al Ain in the UAE. There was no snapshot this time and the methodology for the MENA rankings included all 13 indicators in THE's world rankings. Host country universities were now in fifth (UAEU) and eighth place (American University of Sharjah). Texas A and M Qatar was not ranked and Qatar University fell to sixth place.

Something similar happened in Africa. In 2015, THE went to the University of Johannesburg for a summit that brought together "outstanding global thought leaders from industry, government, higher education and research" and which unveiled THE's Africa ranking based on citations (with the innovation of fractional counting) that put the host university in ninth place and the University of Ghana in twelfth.

In 2016 the show moved on to the University of Ghana, where another ranking was produced based on all 13 world ranking indicators. This time the University of Johannesburg did not take part and the University of Ghana went from 12th place to 7th.

I may have missed something, but so far I see no sign of THE Africa or MENA summits planned for 2017. If so, then African and MENA university leaders are to be congratulated for a very healthy scepticism.

To be fair, THE does not seem to have done any methodological tweaking for this year’s Asian, Asia Pacific and Latin American rankings.

Leiter concludes that American academics should boycott the QS survey but not THE's, and that they should lobby THE to improve its survey practices. That, I suspect, is pretty much a nonstarter. QS has never had much of a presence in the US anyway, and THE is unlikely to change significantly as long as its commercial dominance goes unchallenged and as long as scholars and administrators fail to see through its PR wizardry. It would be better for everybody to start looking beyond the "Big Three" rankings.





Monday, July 03, 2017

Proving anything you want from rankings

It seems that university rankings can be used to prove almost anything that journalists want to prove.

Ever since the Brexit referendum experts and pundits of various kinds have been muttering about the dread disease that is undermining or about to undermine the research prowess of British universities. The malignity of Brexit is so great that it can send its evil rays back from the future.

Last year, as several British universities tumbled down the Quacquarelli Symonds (QS) world rankings, the Independent claimed that “[p]ost-Brexit uncertainty and long-term funding issues have seen storm clouds gather over UK higher education in this year’s QS World University Rankings”.

It is difficult to figure out how anxiety about a vote that took place on June 23rd 2016 could affect a ranking based on institutional data for 2014 and bibliometric data from the previous five years.

It is just about possible that some academics or employers might have woken up on June 24th to see that their intellectual inferiors had joined the orcs to raze the ivory towers of Baggins University and Bree Poly and then rushed to send a late response to the QS opinion survey. But QS, to their credit, have taken steps to deal with that sort of thing by averaging out survey responses over a period of five years.

European and American universities have been complaining for a long time that they do not get enough money from the state and that their performance in the global rankings is undermined because they do not get enough international students or researchers. That is a bit more plausible. After all, income does account for three separate indicators in the Times Higher Education (THE) world rankings so reduced income would obviously cause universities to fall a bit. The scandal over Trinity College Dublin’s botched rankings data submission showed precisely how much a given increase in reported total income (with research and industry income in a constant proportion) means for the THE world rankings. International metrics account for 10% of the QS rankings and 7.5% of the THE world rankings. Whether a decline in income or the number of international students has a direct effect or indeed any effect at all on research output or the quality of teaching is quite another matter.

The problem with claims like this is that the QS and THE rankings are very blunt instruments that should not be used to make year by year analyses or to influence government or university policy. There have been several changes in methodology, there are fluctuations in the distribution of survey responses by region and subject and the average scores for indicators may go up and down as the number of participants changes. All of these mean that it is very unwise to make extravagant assertions about university quality based on what happens in those rankings.

Before making any claim based on ranking changes it would be a good idea to wait a few years until the impact of any methodological change has passed through the system.

Another variation in this genre is the recent claim in the Daily Telegraph that “British universities are slipping down the world rankings, with experts blaming the decline on pressure to admit more disadvantaged students.”

Among the experts is Alan Smithers of the University of Buckingham who is reported as saying “universities are no longer free to take their own decisions and recruit the most talented students which would ensure top positions in league tables”.

There is certainly good evidence that British university courses are becoming much less rigorous. Every year reports come in about declining standards everywhere. The latest is the proposal at Oxford to allow students to do take-home exams instead of timed exams.

But it is unlikely that this could show up in the QS or THE rankings. None of the global rankings has a metric that measures the attributes of graduates, except perhaps the QS employer survey. It is probable that a decline in the cognitive skills of admitted undergraduates would eventually trickle up to the quality of research students and then to the output and quality of research, but that is not something that could happen in a single year, especially when there is so much noise generated by methodological changes.

The cold reality is that university rankings can tell us some things about universities and how they change over perhaps half a decade and some metrics are better than others but it is an exercise in futility to use overall rankings or indicators subject to methodological tweaking to argue about how political or economic changes are impacting western universities.

The latest improbable claim about rankings is that Oxford's achieving parity with Cambridge in the THE reputation rankings was the result of a positive image created by appointing its first female Vice-Chancellor.

Phil Baty, THE’s editor, is reported as saying that ‘Oxford University’s move to appoint its first female Vice Chancellor sent a “symbolic” wave around the world which created a positive image for the institution among academics.’

There is a bit of a problem here. Louise Richardson was appointed Vice-Chancellor in January 2016. The polling for the 2016 THE reputation rankings took place between January and March 2016. One would expect that if the appointment of Richardson had any effect on academic opinion at all, it would be in those months. That certainly seems more likely than an impact delayed by more than a year. If the appointment did affect the reputation rankings then the effect was apparently negative, for Oxford's score fell massively from 80.4 in 2015 to 67.6 in 2016 (compared to 100 for Harvard in both years).

So, did Oxford suffer in 2016 because spiteful curmudgeons were infuriated by an upstart intruding into the dreaming spires?

The collapse of Oxford in the 2016 reputation rankings and its slight recovery in 2017 almost certainly had nothing to do with the new Vice-Chancellor.

Take a look at the table below. Oxford’s reputation score tracks the percentage of THE survey responses from the arts and humanities. It goes up when there are more respondents from those subjects and goes down when there are fewer. This is the case for British universities in general and also for Cambridge except for this year.

The general trend since 2011 has been for the gap between Cambridge and Oxford to narrow steadily, and that trend began before Oxford acquired a new Vice-Chancellor, although it accelerated and finally erased the gap this year.

What is unusual about this year’s reputation ranking is not that Oxford recovered as the number of arts and humanities respondents increased but that Cambridge continued to fall.

I wonder if it has something to do with Cambridge’s “disastrous” performance in the THE research impact (citations) indicator in recent years.  In the 2014-15 world rankings Cambridge was 28th behind places like Federico Santa Maria Technical University and Bogazici University. In 2015-16 it was 27th behind St Petersburg Polytechnic University. But a greater humiliation came in the 2016-17 rankings. Cambridge fell to 31st in the world for research impact. Even worse it was well behind Anglia Ruskin University, a former art school. For research impact Cambridge University wasn’t the best university in Europe or England. It wasn’t even the best in Cambridge, at least if you trusted the sophisticated THE rankings.

Rankings are not entirely worthless and if they did not exist no doubt they would somehow be invented. But it is doing nobody any good to use them to promote the special interests of university bureaucrats and insecure senior academics.

Table: Scores in THE reputation rankings


Year   Oxford   Cambridge   Gap    % responses arts and humanities
2011   68.6     80.7        12.1   --
2012   71.2     80.7        9.5    7%
2013   73.0     81.3        8.3    10.5%
2014   67.8     74.3        6.5    9%
2015   80.4     84.3        3.9    16%
2016   67.6     72.2        4.6    9%
2017   69.1     69.1        0      12.5%
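As a rough check on the claimed association, here is a short sketch that computes the correlation between the reputation scores and the arts-and-humanities share of responses for 2012 to 2017, the years for which the table gives both figures. With only six data points this is suggestive rather than conclusive:

```python
# Pearson correlation between reputation scores and the share of THE survey
# responses from the arts and humanities, 2012-2017 (figures from the table above).
from statistics import correlation  # requires Python 3.10+

arts_share = [7, 10.5, 9, 16, 9, 12.5]            # % of survey responses
oxford     = [71.2, 73.0, 67.8, 80.4, 67.6, 69.1]
cambridge  = [80.7, 81.3, 74.3, 84.3, 72.2, 69.1]

print(round(correlation(arts_share, oxford), 2))
print(round(correlation(arts_share, cambridge), 2))

# A clearly positive coefficient for Oxford would support the argument that its
# score tracks the subject mix of respondents rather than the new Vice-Chancellor;
# a weaker one for Cambridge would reflect its continued fall in 2017.
```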