Friday, February 08, 2019

Are Turkish Universities Declining? More misuse of Rankings


Sorry to get repetitive.

Another article has appeared offering the Times Higher Education  (THE) world rankings as evidence for the decline of national universities.

Metin Gurcan in Al-Monitor argues that Turkish universities are in decline for several reasons, including uncontrolled boutique private universities, excessive government control, academic purges, lack of training for research staff and rampant nepotism.

We have been here before. But are Turkish universities really in decline?

The main evidence offered is that there are fewer Turkish universities at the higher levels of the THE rankings. The other rankings that are now available are ignored.

It is typical of the current state of higher education journalism that many commentators seem unaware that there are now many university rankings, some of them as valid and accurate as THE's, if not better. The ascendancy of THE is largely the creation of a lazy and compliant media.

Turkish universities have certainly fallen in the THE rankings.

In 2014 there were six Turkish universities in the world's top 500 and four in the top 200. Leading the pack was Middle East Technical University (METU) in 85th place, up from the 221-250 band in 2013.

A year later there were four in the top 500 and none in the top 200. METU was in the 500-600 band.

Nepotism, purges and lack of training were not the cause. They were about as relevant here as goodness was to Mae West's diamonds. What happened was that in 2015 THE made a number of changes to the methodology of its flagship citations indicator. The country adjustment, which favoured universities in countries with low citation counts, was reduced. There was a switch from Web of Science to Scopus as the data source. Citations to mega-papers, such as those emanating from the CERN projects with thousands of contributors and thousands of citations, were no longer counted.

Some Turkish universities were heavily invested in the CERN projects, which took them to unusually high positions in 2014. In 2015 they went crashing down the THE rankings, largely as a result of these methodological adjustments.

Other rankings such as URAP and National Taiwan University show that Turkish universities, especially METU, have declined but not nearly as much or as quickly as the THE rankings appear to show. 

In the Round University Rankings there were seven Turkish universities in the top 500 in 2014, six in 2015, and seven in 2018. METU was 375th in 2014, 308th in 2015, and 437th in 2018: a significant decline, but much smaller than the one recorded by THE.

Meanwhile the US News Best Global Universities rankings show three Turkish universities, including METU, in the top 500.

I do not dispute that Turkish universities have problems, or the significance of the trends mentioned by Metin Gurcan. The evidence of the rankings is that they are declining, at least in comparison with other universities, especially in Asia. The THE world rankings are not, however, a good source of evidence.



Friday, January 18, 2019

Top universities for research impact in the emerging economies

It is always interesting to read the reactions of the good and the great of the academic world to the latest rankings from Times Higher Education (THE). Rectors, vice chancellors and spokespersons of various kinds gloat over their institutions' success. Opposition politicians and journalists demand rolling heads if a university falls.

What nobody seems to do is take a look at the ranking methodology or the indicator scores, which can often say as much about a university's success or failure as any amount of government funding, working as a team or dynamic leadership.

The citations indicator in the latest THE Emerging Economies Rankings is supposed to measure research impact and officially accounts for twenty per cent of the total weighting. In practice its weighting is effectively larger because universities in every country except the one with the most impact benefit from a country adjustment.

Here are the top ten universities in the emerging world for research impact according to THE.

1.  Jordan University of Science and Technology
2.  Cyprus University of Technology
3.  University of Desarrollo
4.  Diego Portales University
5.  Southern University of Science and Technology China
6.  University of Crete
7.  University of Cape Town
8.  Indian Institute of Technology Indore
9.  Pontifical Javeriana University
10. University of Tartu



Friday, January 11, 2019

Where does prestige come from? Age, IQ, research or money?

Prestige is a wonderful thing. Universities use it to attract capable students, prolific researchers, grants and awards and, of course, to rise in the global rankings.

This post was inspired by a series of tweets that started with a claim that the prestige of  universities was dependent on student IQ. 

That is a fairly plausible idea. There is good evidence that employers expect universities to guarantee that graduates have a certain level of cognitive ability, are reasonably conscientious and, especially in recent years, conform to prevailing social and political orthodoxy. At the moment, general cognitive ability appears to be what contributes most to graduate employability although it may be less important than it used to be.

Then there was a suggestion that when it came to prestige it was actually age and research that mattered. Someone also said that it might be money.

So I have compared these metrics or proxies with universities' scores on various reputation surveys, which could be indicative, if imperfectly, of their prestige.

I have taken the median ACT or SAT scores of admitted students at the top fifty selective colleges in the USA as a substitute for  IQ, with which they have a high correlation. The data is from the supplement to a paper in the Journal of Intelligence by Wai, Brown and Chabris. 

The  endowments of those colleges and the financial sustainability scores in the Round University Rankings are used to measure money. The  number of research publications listed in the latest CWTS Leiden Ranking represents research. 

I have looked at the correlations of these with the reputation scores in the rankings by QS (academic reputation and employer reputation), Times Higher Education (THE) (research reputation and teaching reputation), RUR (research reputation, teaching reputation and reputation outside region), and the Emerging/Trendence survey of graduate employability.

Since we are looking at a small fraction of the world's institutions in just one country the generalisability of this exercise is limited.
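For anyone who would like to try a similar exercise, here is a minimal sketch of the calculation in Python. The file name and column names are invented, and I have assumed simple Pearson correlations, so this shows the shape of the comparison rather than reproducing the figures reported below.

```python
# Minimal sketch of the correlation exercise described above.
# Assumes a hypothetical CSV with one row per institution and columns for the
# proxies and the reputation scores; all names here are placeholders.
import pandas as pd

df = pd.read_csv("colleges.csv")  # hypothetical file

proxies = ["age", "sat_act_median", "endowment",
           "leiden_publications", "rur_financial_sustainability"]
reputation = ["qs_academic", "qs_employer", "the_teaching", "the_research",
              "rur_teaching", "rur_research", "rur_outside_region",
              "emerging_trendence"]

# Pairwise Pearson correlations between each proxy and each reputation score
corr_table = df[proxies + reputation].corr().loc[proxies, reputation]
print(corr_table.round(2))
```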

So what do we come up with? First, there are several highly selective liberal arts colleges in the US that are overlooked by international rankings. About half of the top 50 schools by SAT/ACT scores in the US do not appear in the global rankings. An international undergraduate student wanting to study in the USA would do well to look beyond these rankings and consider places that are still highly selective, such as Harvey Mudd, Pomona and Amherst Colleges.

Let's take a look at the four attributes. Age doesn't matter. There is no significant correlation between an institution's age and any of the reputation indicators. The lowest correlation, -.15, is with the RUR world research reputation indicator and the highest, .36, still not significant, is with the THE teaching reputation indicator.

Research, however, is important. The correlation between total publications in the most recent Leiden Ranking and the reputation indicators varies from .48 (RUR reputation outside region) to .63 (THE teaching reputation).

So are standardised test scores. There is a significant correlation between SAT/ACT scores and the reputation indicators in the QS, RUR and Emerging/Trendence survey, ranging from .48 for the RUR world research reputation and reputation outside region to .72 for the Emerging/Trendence ranking. But the correlation with the THE teaching and research reputation indicators is not significant.

The RUR composite financial sustainability indicator correlates highly with the QS, RUR and Emerging/Trendence indicators, ranging from .47 for the QS employer survey to .71 for the RUR world teaching reputation score, but not with the THE indicators, where the correlations are only .15 for research and .16 for teaching.

Endowment value appears to be the biggest influence on reputation. It correlates significantly with all reputation indicators, ranging from .42 for the RUR world research reputation indicator to .72 for Emerging/Trendence.

Of the four inputs the one that has the highest correlation with the three RUR reputation indicators, .71, .63, and .64 and the QS academic survey, .59, is financial sustainability.

Endowment value has the highest correlation with the QS employer survey, .57, and the two THE indicators, .66 and .71. Endowment and SAT are joint top for the Emerging/Trendence employability survey, .72. 

So it seems that the best way to a good reputation, at least for selective American colleges, is money. Test scores and research output can also help. But age doesn't matter.

Monday, December 03, 2018

Interesting Times for Times Higher?

Changes may be coming for the "universities Bible", aka Times Higher Education, and its rankings, events, consultancies and so on.

It seems that TES Global is selling off its very lucrative cash cow and that, in addition to private equity firms, the RELX Group, which owns Scopus, and Clarivate Analytics are in a bidding war.

Scopus currently provides the data for the THE rankings and Clarivate used to. If one of them wins the war there may be implications for the THE rankings, especially for the citations indicator.

If anybody has information about what is happening please send a comment.

Thursday, November 15, 2018

THE uncovers more pockets of research excellence

I don't want to do this. I really would like to start blogging about whether rankings should measure third missions or developing metrics for teaching and learning. But I find it difficult to stay away from the THE rankings, especially the citations indicator.

I have a couple of questions. If someone can help please post a comment here. 

Do the presidents, vice-chancellors, directors, generalissimos, or whatever of universities actually look at or get somebody to look at the indicator scores of the THE world rankings and their spin-offs?

Does anyone ever wonder how a ranking that produces such imaginative and strange results for research influence, measured by citations, can command the respect and trust of those hard-headed engineers, MBAs and statisticians running the world's elite universities?

These questions are especially relevant as THE are releasing subject rankings. Here are the top universities in the world for research impact (citations) in various subjects. For computer science and engineering they refer to last year's rankings.

Clinical, pre-clinical and health: Tokyo Metropolitan University

Life Sciences: MIT

Physical sciences: Babol Noshirvani University of Technology

Psychology: Princeton University

Arts and humanities: Universite de Versailles Saint Quentin-en-Yvelines

Education: Kazan Federal University

Law: Iowa State University

Social sciences: Stanford University

Business and economics: Dartmouth College

Computer Science: Princeton University

Engineering and technology: Victoria University, Australia.

https://www.timeshighereducation.com/world-university-rankings/by-subject

Saturday, November 10, 2018

A modest suggestion for THE

A few years ago the Shanghai rankings did an interesting tweak on their global rankings. They deleted the two indicators that counted Nobel and Fields awards and produced an Alternative Ranking.

There were some changes. The University of California San Diego and the University of Toronto did better while Princeton and Vanderbilt did worse.

Perhaps it is time for Times Higher Education (THE) to consider doing something similar for their citations indicator. Take a look at their latest subject ranking, Clinical, Pre-clinical and Health. Here are the top ten for citations, supposedly a measure of research impact or influence.

1.   Tokyo Metropolitan University
2.   Auckland University of Technology
3.   Metropolitan Autonomous University, Mexico
4.   Jordan University of Science and Technology
5.   University of Canberra 
6.   Anglia Ruskin University
7.   University of the Philippines
8.   Brighton and Sussex Medical School
9.   Pontifical Javeriana University, Colombia
10. University of Lorraine.

If THE started producing alternative subject rankings without the citations indicator they would be a bit less interesting but a lot more credible.

Friday, November 02, 2018

Ranking Rankings: Measuring Stability

I have noticed that some rankings are prone to a large amount of churning. Universities may rise or fall dozens of places over a year, sometimes as a result of methodological changes, changes in the number or type of universities ranked, errors and corrections of errors (fortunately rare these days), changes in data collection and reporting procedures, or because there is a small number of data points.

Some ranking organisations like to throw headlines around about who's up or down, the rise of Asia, the fall of America, and so on. This is a trivialisation of any serious attempt at the comparative evaluation of universities, which do not behave like volatile financial markets. Universities are generally fairly stable institutions: most of the leading universities of the early twentieth century are still here while the Ottoman, Hohenzollern, Hapsburg and Romanov empires are long gone.

Reliable rankings should not be expected to show dramatic changes from year to year, unless there has been radical restructuring like the recent wave of mergers in France. The validity of a ranking system is questionable if universities bounce up or down dozens, scores, even hundreds of ranks every year.

The following table shows the volatility of the global rankings listed in the IREG Inventory of international rankings. U-Multirank is not listed because it does not provide overall ranks and UniRank and Webometrics do not give access to previous editions. 

Average rank change is the average number of places that each of the top thirty universities has risen or fallen between the two most recent editions of the ranking.
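For readers who want to check the arithmetic, here is a minimal sketch in Python of how the average rank change figures in the table below could be computed. The ranks in the example are invented, not taken from any ranking.

```python
# Sketch of the "average rank change" metric: the mean absolute movement of the
# top thirty universities between a ranking's two most recent editions.

def average_rank_change(previous: dict, current: dict, n: int = 30) -> float:
    """Mean absolute rank change for the top n universities in the current edition."""
    top_n = sorted(current, key=current.get)[:n]
    moves = [abs(current[u] - previous[u]) for u in top_n if u in previous]
    return sum(moves) / len(moves)

# Invented example with just three universities
previous = {"University A": 1, "University B": 2, "University C": 3}
current = {"University A": 2, "University B": 1, "University C": 3}
print(average_rank_change(previous, current, n=3))  # A and B swap, C stays: ~0.67
```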

The most stable rankings are the Shanghai ARWU, followed by the US News global rankings and the National Taiwan University rankings. The GreenMetric rankings, Reuters Innovative Universities and the high quality research indicator of Leiden Ranking show the highest levels of volatility.

This is a very limited exercise. We might get different results if we examined all of the universities in the rankings or analysed changes over several years.



Rank  Ranking                                      Country      Average rank change
1     Shanghai ARWU                                China        0.73
2     US News Best Global Universities             USA          0.83
3     National Taiwan University Rankings          Taiwan       1.43
4     THE World University Rankings                UK           1.60
5     Round University Rankings                    Russia       2.28
6     University Ranking by Academic Performance   Turkey       2.23
7     QS World University Rankings                 UK           2.33
8     Nature Index                                 UK           2.60
9     Leiden Ranking Publications                  Netherlands  2.77
10    Scimago                                      Spain        3.43
11    Emerging/Trendence                           France       3.53
12    Center for World University Ranking          UAE          4.60
13    Leiden Ranking % Publications in top 1%      Netherlands  4.77
14    Reuters Innovative Universities              USA          6.17
15    UI GreenMetric                               Indonesia    13.14

Monday, October 29, 2018

Is THE going to reform its methodology?


An article by Duncan Ross in Times Higher Education (THE) suggests that the World University Rankings are due for repair and maintenance. He notes that these rankings were originally aimed at a select group of research orientated world class universities but THE is now looking at a much larger group that is likely to be less internationally orientated, less research based and more concerned with teaching.

He says that it is unlikely that there will be major changes in the methodology for the 2019-20 rankings next year but after that there may be significant adjustment.

There is a chance that  the industry income indicator, income from industry and commerce divided by the number of faculty, will be changed. This is an indirect attempt to capture innovation and is unreliable since it is based entirely on data submitted by institutions. Alex Usher of Higher Education Strategy Associates has pointed out some problems with this indicator.

Ross seems most concerned, however, with the citations indicator which at present is normalised by field, of which there are over 300, type of publication and year of publication. Universities are rated not according to the number of citations they receive but by comparison with the world average of citations to documents of a specific type in a specific field in a specific year. There are potentially over 8,000 boxes into which any single citation could be dropped for comparison.
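To make the principle concrete, here is a simplified sketch of field normalisation with invented numbers. It illustrates the general idea of comparing each paper with the world average for its field, year and document type; it is not THE's exact procedure, which is not published in full.

```python
# Simplified sketch of field normalisation: each paper's citations are divided by
# the world average for papers of the same field, year and document type, and the
# university's score is the mean of these ratios (1.0 = world average).
from statistics import mean

# Hypothetical world baselines: (field, year, doc_type) -> average citations
world_average = {
    ("oncology", 2016, "article"): 12.4,
    ("history", 2016, "article"): 1.8,
}

# Hypothetical papers from one university: (field, year, doc_type, citations)
papers = [
    ("oncology", 2016, "article", 31),
    ("history", 2016, "article", 2),
]

normalised = [cites / world_average[(field, year, doc)]
              for field, year, doc, cites in papers]
print(round(mean(normalised), 2))  # 1.81: above the world average overall
```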

Apart from anything else, this scheme has resulted in a serious reduction in transparency. Checking the scores for Highly Cited Researchers or Nobel and Fields laureates in the Shanghai rankings can be done in a few minutes. Try comparing thousands of world averages with the citation scores of a university.

This methodology has produced a series of bizarre results, noted several times in this blog. I hope I will be forgiven for yet again listing some of the research impact superstars that THE has identified over the last few years: Alexandria University, Moscow Nuclear Research University MEPhI, Anglia Ruskin University, Brighton and Sussex Medical School, St George's University of London, Tokyo Metropolitan University, Federico Santa Maria Technical University, Florida Institute of Technology, Babol Noshirvani University of Technology, Oregon Health and Science University, Jordan University of Science and Technology, Vita-Salute San Raffaele University.

The problems of this indicator go further than just a collection of quirky anomalies. It now accords a big privilege to medical research as it once did to fundamental physics research. It offers a quick route to ranking glory by recruiting highly cited researchers in strategic fields and introduces a significant element of instability into the rankings.

So here are some suggestions for THE should it actually get round to revamping the citations indicator.

1. The number of universities around the world that do even a modest amount of research of any kind is relatively small, maybe five or six thousand. The number that can reasonably claim to have a significant global impact is much smaller, perhaps two or three hundred. Normalised citations are perhaps a reasonable way of distinguishing among the latter, but pointless or counterproductive when assessing the former. The current THE methodology might be able to tell whether a definitive literary biography by a Yale scholar has the same impact in its field as cutting-edge research in particle physics at MIT, but it is of little use in assessing the relative research output of mid-level universities in South Asia or Latin America.

THE should therefore consider reducing the weighting of citations to the same as research output or lower.

2.  A major cause of problems with the citations indicator is the failure to introduce complete fractional counting, that is, distributing credit for citations proportionately among authors or institutions. At the moment THE counts every author of a paper with fewer than a thousand authors as though each of them were the sole author of the paper. As a result, medical schools that produce papers with hundreds of authors now have a privileged position in the THE rankings, something that the use of normalisation was supposed to prevent.

THE has introduced a moderate form of fractional counting for papers with over a thousand authors but evidently this is not enough.

It seems that some rankers do not like fractional counting because it might discourage collaboration. I would not dispute that collaboration might be a good thing, although it is often favoured by institutions that cannot do very well by themselves, but that is not a sufficient reason to allow distortions like those noted above to flourish.
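For illustration, here is a minimal sketch, with invented numbers, of the difference between full counting and fractional counting of a multi-institution paper.

```python
# Full counting gives every contributing institution the paper's full citation
# count; fractional counting divides the credit by the number of institutions.
from collections import defaultdict

# (citations, list of contributing institutions) -- invented data
papers = [
    (2000, ["Physics U"] + [f"Partner {i}" for i in range(499)]),  # 500 institutions
    (40, ["Small U"]),
]

full = defaultdict(float)
fractional = defaultdict(float)
for citations, institutions in papers:
    for inst in institutions:
        full[inst] += citations                            # full credit to everyone
        fractional[inst] += citations / len(institutions)  # credit shared out

print(full["Physics U"], fractional["Physics U"])  # 2000.0 vs 4.0
print(full["Small U"], fractional["Small U"])      # 40.0 vs 40.0
```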

3. THE have a country bonus, or regional modification, which divides a university's citation impact score by the square root of the score of the country in which the university is located. This was supposed to compensate for the lack of funding and networks that afflicts some countries, which apparently does not affect their reputation scores or publication output. The effect of this bonus is to give some universities a boost derived not from their excellence but from the mediocrity or worse of their compatriots. THE reduced the coverage of this bonus to fifty percent of the indicator in 2015. It might well be time to get rid of it altogether.
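As an illustration of how such a regional modification works, here is a sketch with invented scores. The square-root adjustment and the fifty percent blend follow the description above, but THE's exact formula is not fully public, so treat this as an approximation.

```python
# Sketch of a country bonus: divide a university's citation score by the square
# root of its country's score, then blend the adjusted and unadjusted scores 50/50.
import math

def adjusted_citation_score(university_score: float, country_score: float,
                            blend: float = 0.5) -> float:
    boosted = university_score / math.sqrt(country_score)
    return blend * boosted + (1 - blend) * university_score

# A university scoring 40 in a weak country (country score 0.25) gets a big boost...
print(adjusted_citation_score(40, 0.25))  # 60.0
# ...while an identical university in a strong country (score 1.0) gets none.
print(adjusted_citation_score(40, 1.0))   # 40.0
```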

4. Although QS stopped counting self-citations in 2011, THE continue to do so. They have said that overall they make little difference. Perhaps, but as the rankings expand to include more and more universities it will become more likely that a self-citer or mutual-citer will propel undistinguished schools up the charts. There could be more cases like Alexandria University or Veltech University.

5. THE needs to think about what they are using citations to measure. Are they trying to assess research quality, in which case they should use citations per paper? Or are they trying to estimate overall research impact, in which case the appropriate metric would be total citations?

6. Normalisation by field and document type might be helpful for making fine distinctions among elite research universities but lower down it creates or contributes to serious problems when a single document or an unusually productive author can cause massive distortions. Three hundred plus fields may be too many and THE should think about reducing the number of fields. 

7. There has been a proliferation in recent years in the number of secondary affiliations. No doubt most of these are making a genuine contribution to the life of both or all of the universities with which they are affiliated. There is, however, a possibility of serious abuse if the practice continues. It would be greatly to THE's credit if they could find some way of omitting or reducing the weighting of secondary affiliations.

8. THE are talking about different models of excellence. Perhaps they could look at the Asiaweek rankings which had a separate table for technological universities or Maclean's with its separate rankings for doctoral/medical universities and primarily undergraduate schools. Different weightings could be given to citations for each of these categories.

Thursday, October 18, 2018

How many indicators do university rankings need?

The number of indicators used in international university rankings varies a lot. At one extreme we have the Russian Round University Rankings (RUR), which have 20 indicators. At the other, Nature Index and Reuters Top 100 Innovative Universities have just one.

In general, the more information provided by rankings the more helpful they are. If, however, the indicators produce very similar results then their value will be limited. The research and postgraduate teaching surveys in the THE world rankings and the RUR correlate so highly that they are in effect measuring the same thing.

There is probably an optimum number of indicators for a ranking, perhaps higher for general than for  research-only rankings, above which no further information is provided. 

A paper by Guleda Dogan of Hacettepe University, Ankara, looks at the indicators in three university rankings, the Shanghai Academic Ranking of World Universities (ARWU), the National Taiwan University Rankings (NTU) and University Ranking by Academic Performance (URAP), and finds that there is a very high degree of internal similarity:


"Results of the analyses show that the intra-indicators used in ARWU, NTU and URAP are highly similar and that they can be grouped according to their similarities. The authors also examined the effect of similar indicators on 2015 overall ranking lists for these three rankings. NTU and URAP are affected least from the omitted similar indicators, which means it is possible for these two rankings to create very similar overall ranking lists to the existing overall ranking using fewer indicators."

Wednesday, October 10, 2018

The link between rankings and standardised testing

The big hole in current international university rankings is the absence of anything that effectively measures the quality of graduates. Some rankings use staff-student ratio or income as a proxy for the provision of resources, on the assumption that the more money that is spent, or the more teachers deployed, the better the quality of teaching. QS has an employer survey that asks about the universities from which employers like to recruit, but that has many problems.

There is a lot of evidence that university graduates are valued to a large extent because they are seen as intelligent, conscientious and, depending on place and field, open-minded or conformist. A metric that correlates with these attributes would be helpful in assessing and comparing universities.

A recent article in The Conversation by Jonathan Wai suggests that the US News America's Best Colleges rankings are highly regarded partly because they measure the academic ability of admitted students, which correlates very highly with that of graduates.

The popularity of these rankings is based on their capturing the average ability of students as measured by the SAT or ACT. In a paper written with Matt Brown and Christopher Chabris in the Journal of Intelligence, Wai finds a very large correlation between the average SAT or ACT scores of students and overall scores in America's Best Colleges: .982 for national universities and .890 for liberal arts colleges.

The correlation with the THE/WSJ US college rankings is lower but still very substantial, .787, as is that with the THE World University Rankings, .659.

It seems that employers and professional schools expect universities to certify the intelligence of their graduates. The value of standardised tests such as the ACT, SAT, GRE, LSAT and GMAT, which correlate highly with one another, is that they are a fairly robust proxy for general intelligence or general mental ability. Rankings could be valuable if they provided a clue to the ability of graduates.

It is, however, a shame that the authors should support their argument by referring only to one global ranking, the THE world rankings. There are now quite a few international rankings that are as good as or better than the THE tables.

I have therefore calculated the correlations between the average SAT/ACT scores of 30 colleges and universities in the USA and their scores in various global rankings. 

The source for student scores is a supplement to the article by Wai et al, from which I have taken the top 30 institutions ranked by SAT/ACT. I have used those rankings listed in the IREG inventory that provide numerical scores and not just ranks. The GreenMetric was not used since only one US school out of these thirty, Washington University in St Louis, took part in that ranking. I used two indicators from Leiden Ranking, which does not give a composite score: total publications and the percentage of papers in the top 1% most cited.
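The sketch below shows the kind of calculation behind the table at the end of this post: a Pearson correlation between median SAT/ACT scores and each ranking's overall score, together with its p-value and the number of overlapping institutions. The data file and column names are placeholders, not the actual dataset.

```python
# Correlation between median SAT/ACT scores and overall ranking scores, with
# significance; file and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("top30_sat_act_vs_rankings.csv")  # hypothetical file

for ranking in ["arwu", "us_news_global", "cwur", "rur", "the_world", "qs"]:
    paired = df[["sat_act", ranking]].dropna()  # N varies: not every school is ranked
    r, p = pearsonr(paired["sat_act"], paired[ranking])
    print(f"{ranking}: r = {r:.2f}, p = {p:.3f}, N = {len(paired)}")
```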

It is interesting that there are many liberal arts colleges in the US that are not included in the international rankings. Prospective undergraduates looking for a college in the USA would do well to look beyond the global rankings. Harvey Mudd College, for example, is highly selective and its graduates much sought after but it does not appear in any of the rankings below.

The results are interesting. The indicator that correlates most significantly with student ability is Leiden Ranking's percentage of papers in the top 1% most cited. Next is CWUR, which does explicitly claim to measure graduate quality. The US News world rankings and the Shanghai rankings, which only include research indicators, also do well.

We are looking at just 30 US institutions here. There might be different results if we looked at other countries or a broader range of US schools.

So, it seems that if you want to look at the ability of students or graduates, an international ranking based on research is as good as or better than one that tries to measure teaching excellence with the blunt instruments currently available.



Rank  Ranking                                        Country      Correlation  Significance  N
1     Leiden Ranking: % papers in top 1% most cited  Netherlands  .65          .001*         22
2     Center for World University Ranking            UAE          .59          .003*         22
3     US News Best Global Universities               USA          .58          .004*         22
4     Shanghai ARWU                                  China        .57          .004*         24
5     Round University Rankings                      Russia       .55          .008*         22
6     THE World University Rankings                  UK           .51          .014*         22
7     QS World University Rankings                   UK           .49          .025*         21
8     University Ranking by Academic Performance     Turkey       .48          .015*         25
9     Nature Index Fractional Count                  USA          .45          .039*         21
10    National Taiwan University                     Taiwan       .32          .147          22
11    Leiden Ranking: total publications             Netherlands  .21          .342          22

*significant at 0.05 level