Thursday, January 26, 2017

Comments on the HEPI Report

The higher education industry tends to respond to global rankings in two ways. University bureaucrats and academics either get overexcited, celebrating when they are up and wallowing in self-pity when down, or they reject the idea of rankings altogether.

Bahram Bekhradnia of the Higher Education Policy Institute in the UK has published a report on international rankings which adopts the second option. University World News has several comments including a summary of the report by Bekhradnia.

To start off, his choice of rankings deserves comment. He refers to the "four main rankings", Academic Ranking of World Universities (ARWU) from Shanghai, Quacquarelli Symonds (QS), Times Higher Education (THE) and U-Multirank. It is true that the first three are those best known to the public, QS and Shanghai by virtue of their longevity and THE because of its skilled branding and assiduous cultivation of the great, the good and the greedy of the academic world. U-Multirank is chosen presumably because of its attempts to address, perhaps not very successfully, some of the issues that the author discusses.

But focusing on these four gives a misleading picture of the current global ranking scene. There are now several rankings that are mainly concerned with research -- Leiden, Scimago, URAP, National Taiwan University, US News -- and redress some of the problems with the Shanghai ranking by giving due weight to the social sciences and humanities, leaving out decades-old Nobel and Fields laureates and including more rigorous markers of quality. In addition, there are rankings that measure web activity, environmental sustainability, employability and innovation. Admittedly, they do not do any of these very well but the attempts should at least be noted and they could perhaps lead to better things.

In particular, there is now an international ranking from Russia, Round University Ranking (RUR), which could be regarded as an improved version of the THE world rankings and which tries to give more weight to teaching. It uses almost the same array of metrics as THE, plus some more, but with rational and sensible weightings: 8% for field-normalised citations, for example, rather than 30%.

Bekhradnia has several comments on the defects of current rankings. First, he says that they are concerned entirely or almost entirely with research. He notes that some indicators in the QS and THE rankings are actually, although not explicitly, about research. International faculty are probably recruited more for their research reputation than for anything else. Income from industry (THE) is of course a measure of reported funding for applied research. The QS academic reputation survey is officially about research and THE's academic reputation survey of teaching is about postgraduate supervision.

Bekhradnia is being a little unfair to THE. He asserts that if universities add to their faculty with research-only staff this will add to their faculty-student metric, supposedly a proxy for teaching quality, thus turning the indicator into a measure of research. This is true of QS, but it appears that THE does require universities to list research staff separately and excludes them from some indicators as appropriate. In any case, the number of research-only staff is quite small for most universities outside the top hundred or so.

It is true that most rankings are heavily, perhaps excessively, research-orientated but it would be a mistake to conclude that this renders them totally useless for evaluating teaching and learning. Other things being equal, a good record for research is likely to be associated with positive student and graduate outcomes such as satisfaction with teaching, completion of courses and employment.

For English universities the Research Excellence Framework (REF) score is more predictive of student success and satisfaction, according to indicators in the Guardian rankings and the recent THE Teaching Excellence Framework simulation, than the percentage of staff with educational training or certification, faculty salaries or institutional spending, although it is matched by staff-student ratio.

If you are applying to English universities and you want to know how likely you are to complete your course or be employed after graduation, probably the most useful things to know are average entry tariff (A levels), staff-student ratio and faculty scores for the latest REF. There are of course intervening variables and the arrows of causation do not always fly in the same direction, but scores for research indicators are not irrelevant to comparisons of teaching effectiveness and learning outcomes.

Next, the report deals with the issue of data, noting that internal data checks by THE and QS do not seem to be adequate. He refers to the case of Trinity College Dublin, where a misplaced decimal point caused the university to drop several places in the THE world rankings. He then goes on to criticise QS for "data scraping", that is, getting information from any available source. He notes that they caused Sultan Qaboos University (SQU) to drop 150 places in their world rankings, apparently because QS took data from the SQU website that identified non-teaching staff as teaching staff. I assume that the staff in question were administrators: if they were researchers then it would not have made any difference.

Bekhradnia is correct to point out that data from web sites is often incorrect or subject to misinterpretation. But to assume that such data is necessarily inferior to that reported by institutions to the rankers is debatable. QS has no need to be apologetic about resorting to data scraping. On balance information about universities is more likely to be correct if it comes from one of several and similar competitive sources, if it is from a source independent of the ranking organisation and the university, if it has been collected for reasons other than submission to the rankers, or if there are serious penalties for submitting incorrect data.

The best data for university evaluation and comparison is likely to be from third party databases that collect masses of information or from government agencies that require accuracy and honesty. After that institutional data from web sites and the like is unlikely to be significantly worse than that specifically submitted for ranking purposes.

There was an article in University World News in which Ben Sowter of QS took a rather defensive position with regard to data scraping. He need not have done so. In fact it would not be a bad idea for QS and others to do a bit more.

Bekhradnia goes on to criticise the reputation surveys. He notes that recycling unchanged responses over a period of five years, originally three, means that it is possible that QS is counting the votes of dead or retired academics. He also points out that the response rate to the surveys is very low. All this is correct although it is nothing new. But it should be pointed out that what is significant is not how many respondents there are but how representative they are of the group that is being investigated. The weighting given to surveys in the THE and QS rankings is clearly too much and QS's methods of selecting respondents are rather incoherent and can produce counter-intuitive results such as extremely high scores for some Asian and Latin American universities.

However, it is going too far to suggest that surveys should have no role. First, reputation and perceptions are far from insignificant. Many students would, I suspect, prefer to go to a university that is overrated by employers and professional schools than to one that provides excellent instruction and facilities but has failed to communicate this to the rest of the world.

In addition, surveys can provide a reality check when a university does a bit of gaming. For example King Abdulaziz University (KAU) has been diligently offering adjunct contracts to dozens of highly cited researchers around the world that require them to put the university as a secondary affiliation and thus allow it to get huge numbers of citations. The US News Arab Region rankings have KAU in the top five among Arab universities for a range of research indicators, publications, cited publications, citations, field weighted citation impact, publications in the top 10% and the top 25%. But its academic reputation rank was only 26, definitely a big thumbs down.

Bekhradnia then refers to the advantage that universities get in the ARWU rankings simply by being big. This is certainly a valid point. However, it could be argued that quantity is a necessary prerequisite to quality and enables the achievement of economies of scale. 

He also suggests that the practice of presenting lists in order is misleading, since a trivial difference in the raw data could mean a substantial difference in the presented ranking. He proposes that it would be better to group universities into bands. The problem with this is that even when rankers do resort to banding, it is fairly easy to calculate an overall score by adding up the published components. Bloggers and analysts do it all the time.
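A minimal sketch of that reverse calculation, using THE's published pillar weights but an entirely invented university whose overall result appears only as a band:

```python
# Even if a ranker publishes only a band (say "201-250") for the overall
# result, the overall score can be rebuilt from the published indicator
# scores and the official weights. The weights below follow THE's
# published scheme; the university and its scores are invented.

weights = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "international": 0.075,
    "industry_income": 0.025,
}

published_scores = {  # indicator scores published alongside the band
    "teaching": 32.1,
    "research": 28.4,
    "citations": 55.0,
    "international": 61.2,
    "industry_income": 39.7,
}

overall = sum(weights[k] * published_scores[k] for k in weights)
print(round(overall, 1))  # → 40.2
```

Sorting these reconstructed totals across all banded universities yields exactly the ordered list that banding was supposed to suppress.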

Bekhradnia concludes:
"The international surveys of reputation should be dropped
– methodologically they are flawed, effectively they only
measure research performance and they skew the results in
favour of a small number of institutions."

This is ultimately self-defeating. The need and the demand for some sort of ranking is too widespread to set aside. Abandon explicit rankings and we will probably have implicit rankings or recommendations by self-declared experts.

There is much to be done to make rankings better. The priority should be finding objective and internationally comparable measures of student attributes and attainment. That will be some distance in the future. For the moment what universities should be doing is to focus not on composite rankings but on the more reasonable and reliable indicators within specific rankings. 

Bekhradnia does have a very good point at the end:

"Finally, universities and governments should discount the rankings when deciding their priorities, policies and actions. In particular, governing bodies should resist holding senior management to account for performance in flawed rankings. Institutions and governments should do what they do because it is right, not because it will improve their position in the rankings."

I would add that universities should stop celebrating when they do well in the rankings. The grim fate of Middle East Technical University should be present in the mind of every university head.







Sunday, January 22, 2017

What's Wrong with Ottawa?

The University of Ottawa (UO) has been a great success over the last few years, especially in research. In 2004 it was around the bottom third of the 201-300 band in the Shanghai Academic Ranking of World Universities. By 2016 it had reached 201st place, although the Shanghai rankers still recorded it as being in the 201-300 band. Another signing of a highly cited researcher, another paper in Nature, a dozen more papers listed in the Science Citation Index and it would have made a big splash by breaking into the Shanghai top 200.

The Shanghai rankings have, apart from recent problems with the Highly Cited Researchers indicator, maintained a stable methodology so this is a very solid and remarkable achievement.

A look at the individual components of these rankings shows that UO has improved steadily in the quantity and the quality of research. The score for publications rose from 37.8 to 44.4 between 2004 and 2016, from 13.0 to 16.1 for papers in Nature and Science, and from 8.7 to 14.5 for highly cited researchers (Harvard is 100 in all cases). For productivity (five indicators divided by number of faculty) the score went from 13.2 to 21.5 (Caltech is 100).

It is well known that the Shanghai rankings are entirely about research and ignore the arts and humanities. The Russian Round University Rankings (RUR), however, get their data from the same source as THE did until two years ago, include data from the arts and humanities, and have a greater emphasis on teaching related indicators.

In the RUR rankings, UO rose from 263rd place in 2010 to 211th overall in 2015, from 384th to 378th in five combined teaching indicators and from 177th to 142nd in five combined research indicators. Ottawa is doing well for research and creeping up a bit for teaching-related criteria, although the relationship between these and actual teaching may be rather tenuous.

RUR did not rank UO in 2016. I cannot find any specific reason, but it is possible that the university did not submit data for the Institutional Profiles at Research Analytics.

Just for completeness, Ottawa is also doing well in the Webometrics ranking, which is mainly about web activity but does include a measure of research excellence. It is in the 201st spot there also.

It seems, however, that this is not good enough. In September, according to Fulcrum, the university newspaper, there was a meeting of the Board of Governors which discussed not the good results from RUR, Shanghai Ranking and Webometrics, but a fall in the Times Higher Education (THE) World University Rankings from the 201-250 band in 2015-16 to the 251-300 band in 2016-17. One board member even suggested taking THE to court.

So what happened to UO in last year's THE world rankings? The only area where it fell was for Research, from 36.7 to 21.0. In the other indicators or indicator groups, Teaching, Industry Income, International Orientation, Research Impact (citations), it got the same score or improved.

But this is not very helpful. There are actually three components in the research group of indicators, which has a weighting of 30%, two of which are scaled. A fall in the research component might be caused by a fall in its score for research reputation, a decline in its reported research income, a decline in the number of publications, a rise in the number of academic staff, or some combination of these.
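As a stylised sketch (not THE's actual calculation, which standardises each component across all ranked universities before combining), the pillar behaves like a weighted sum of one survey score and two per-staff quantities. The internal weights below mirror THE's published split of the 30-point pillar (18 for reputation, 6 each for the scaled components); all the numbers are invented:

```python
# Stylised sketch of a composite research score built from one survey
# component and two "scaled" (per-staff) components, showing how a
# reputation dip plus an income fall moves the pillar even with an
# essentially unchanged staff count. Figures are invented.

def research_pillar(reputation, research_income, publications, staff):
    # Scale the two volume measures by staff numbers, then combine with
    # illustrative internal weights (reputation dominates, as in THE's
    # scheme where the survey carries 18 of the pillar's 30 points).
    income_per_staff = research_income / staff
    pubs_per_staff = publications / staff
    return 0.6 * reputation + 0.2 * income_per_staff + 0.2 * pubs_per_staff

before = research_pillar(reputation=40.0, research_income=900.0,
                         publications=500.0, staff=1284)
after = research_pillar(reputation=35.0, research_income=830.0,  # ~7.6% fall
                        publications=500.0, staff=1281)
print(before > after)  # prints True
```

The point of the sketch is only that several inputs feed one published number, so the outside observer cannot tell which of them moved.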

The fall in UO's research score could not have been caused by more faculty. The number of full time faculty was 1,284 in 2012-13 and 1,281 in 2013-14.

There was a fall of 7.6% in Ottawa's "sponsored research income" between 2013 and 2014 but I am not sure if that is enough to produce such a large decline in the combined research indicators.

My suspicion is -- and until THE disaggregate their indicators it cannot be anything more -- that the problem lies with the 18%-weighted research reputation survey. Between 2015 and 2016 the percentage of survey respondents from the arts and humanities was significantly reduced while that from the social sciences and business studies was increased. This would be to the disadvantage of English-speaking universities, including those in Canada, relatively strong in the humanities, and to the advantage of Asian universities relatively strong in business studies. UO, for example, is ranked highly by Quacquarelli Symonds (QS) for English, Linguistics and Modern Languages, but not for Business and Management Studies or Accounting and Finance.

This might have something to do with THE wanting to get enough respondents for business studies after they had been taken out of the social sciences and given their own section. If that is the case, Ottawa might get a pleasant surprise this year since THE are now treating law and education as separate fields and may have to find more respondents to get around the problem of small sample sizes. If so, this could help UO which appears to be strong in those subjects.

It seems, according to another Fulcrum article, that the university is being advised by Daniel Calto from Elsevier. He correctly points out that citations had nothing to do with this year's decline. He then talks about the expansion in the size of the rankings with newcomers pushing in front of UO. It is unlikely that this in fact had a significant effect on the university since most of the newcomers would probably enter below the 300 position and since there has been no effect on its score for teaching, international orientation, industry income or citations (research impact).

I suspect that Calto may have been incorrectly reported. Although he says it was unlikely that citations could have had anything to do with the decline, he is reported later in the article to have said that THE's exclusion of kilo-papers (those with 1,000 or more authors) affected Ottawa. But the kilo-papers were excluded in 2015, so that could not have contributed to the fall between 2015 and 2016.

The Fulcrum article then discusses how UO might improve. M'hamed Aisati, a vice-president at Elsevier, suggests getting more citations. This is frankly not very helpful. The THE methodology means that more citations are meaningless unless they are concentrated in exactly the right fields. And if more citations are accompanied by more publications then the effect could be counter-productive.

If UO is concerned about a genuine improvement in research productivity and quality there are now several global rankings that are quite reasonable. There are even rankings that attempt to measure things like innovation, teaching resources, environmental sustainability and web activity.

The THE rankings are uniquely opaque in that they hide the scores for specific indicators; they are extremely volatile and depend far too much on dodgy data from institutions and on reputation surveys that can be extremely unstable. Above all, the citations indicator is a hilarious generator of absurdity.

The University of Ottawa, and other Canadian universities, would be well advised to forget about the THE rankings or at least not take them so seriously.


Monday, January 09, 2017

Outbreak of Rankophilia

A plague is sweeping the Universities of the West: rankophilia, an irrational and obsessive concern with position and prospects in global rankings and an unwillingness to exercise normal academic caution and scepticism.

The latest victim is Newcastle University whose new head, Chris Day, wants to make his new employer one of the best in the world. Why does he want to do that?
"His ambition - to get the university into the Top 100 in the world - is not simply a matter of personal or even regional pride, however. With universities increasingly gaining income from foreign students who often base their choices on global rankings, improving Newcastle’s position in the league tables has economic consequences."
So Newcastle is turning its back on its previous vision of becoming a "civic university" and will try to match its global counterparts. It will do that by enhancing its research reputation.
"While not rowing back from Prof Brink’s mantra of 'what are we good at, but what are we good for?', Prof Day's first week in the job saw him highlighting the need for Newcastle to concentrate on improving its reputation for academic excellence."
It is sad that Day recognises that the core business of a university is not enough and that what really matters is proper marketing and shouting to rise up the tables.

Perhaps Newcastle will ascend into the magic 100, but the history of the THE rankings over the last few years is full of universities -- Alexandria, University of Tokyo, Tokyo Metropolitan University, University of Copenhagen, Royal Holloway, University of Cape Town, Middle East Technical University and others -- that have soared in the THE rankings for a while and then fallen, often because of nothing more than a twitch of a methodological finger.

Meanwhile Cambridge is yearning to regain its place in the QS top three and Yale is putting new emphasis on science and research with an eye on the global rankings.




Sunday, January 01, 2017

Ranking Teaching Quality

There has been a lot of talk lately about the quality of teaching and learning in universities. This has always been an important element in national rankings such as the US News America's Best Colleges and the Guardian and Sunday Times rankings in the UK, measured by things like standardised test scores, student satisfaction, reputation surveys, completion rates and staff-student ratio.

There have been suggestions that university teaching staff need to be upgraded by attending courses in educational theory and practice or by obtaining some sort of certification or qualification.

The Higher Education Funding Council for England (HEFCE) has just published data on the number of staff with educational qualifications in English higher educational institutions.

The university with the largest number of staff with some sort of educational qualification is Huddersfield, which unsurprisingly is very pleased. The university's website reports HEFCE's assertion that “information about teaching qualifications has been identified as important to students and is seen as an indicator of individual and institutional commitment to teaching and learning.”

The top six universities are:
1.  University of Huddersfield
2.  Teesside University
3.  York St John University
4.  University of Chester
5= University of St Mark and St John
5= Edge Hill University.

The bottom five are:
104=   London School of Economics
104=   Courtauld Institute of Art
106.    Goldsmith's College
107=   University of Cambridge
107=   London School of Oriental and African Studies.

It seems that these data provide almost no evidence that a "commitment to teaching and learning" is linked with any sort of positive outcome. Correlations with the overall scores in the Guardian rankings and the THE Teaching Excellence Framework simulation are negative (-.550 and -.410 [-.204 after benchmarking]).

In addition, the correlation between the percentage of staff with teaching qualifications and the Guardian indicators is negative for student satisfaction with the course (-.161, insignificant), student satisfaction with teaching (-.197, insignificant), value added (-.352) and graduate employment (-.379).

But there is a positive correlation with student satisfaction with feedback (.323).

The correlations with the indicators in the THE simulation were similar: -.416 for graduate employment (-.249 after benchmarking), -.449 for completion (-.130 after benchmarking, insignificant), and -.186 for student satisfaction, insignificant (-.056 after benchmarking, insignificant).
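For readers who want to run this kind of check themselves, the underlying calculation is an ordinary Pearson correlation. The data below are invented for illustration only; the real HEFCE and Guardian figures are not reproduced here:

```python
# Pearson correlation between the percentage of staff with teaching
# qualifications and an outcome indicator, computed from scratch.
# Both lists are hypothetical, constructed only to show the mechanics.
from statistics import mean

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

pct_qualified = [85, 72, 64, 40, 33, 20, 12]   # hypothetical % with qualifications
outcome_score = [55, 60, 58, 70, 75, 82, 90]   # hypothetical outcome indicator

r = pearson(pct_qualified, outcome_score)
print(r < 0)  # prints True: a negative r, as reported above
```

A correlation of this kind says nothing about causation, of course, which is why the intervening variables discussed below matter.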

The report does cover a variety of qualifications, so it is possible that digging deeper might show that some types of credentials are more useful than others. Also, there are intervening variables: some of the high scorers, for example, are upgraded teacher training colleges with a relatively low status and a continuing emphasis on education as a subject.

Still, unless you count a positive association with feedback, there is no sign that forcing or encouraging faculty to take teaching courses and credentials will have positive effects on university teaching.

Wednesday, December 21, 2016

The University of Tokyo did not fall in the rankings. It was pushed.


Times Higher Education (THE) has published an article by Devin Stewart that refers to a crisis of Japanese universities. He says:

"After Japan’s prestigious University of Tokyo fell from its number one spot to number seven in Times Higher Education’s Asia University Rankings earlier this year, I had a chance to travel to Tokyo to interview more than 40 people involved with various parts of the country’s education system.
Students, academics and professionals told me they felt a blow to their national pride from the news of the rankings drop. I found that the THE rankings result underscored the complex problems plaguing the country’s institutions of higher learning."
If Japanese academics and university administrators do actually believe that the fall of the University of Tokyo (aka Todai) in the THE Asian rankings is an indicator of complex problems, and if they do feel that it is a blow to their national pride, then there is indeed a crisis in Japanese higher education: a failure of critical thinking and a naive trust in unstable, opaque and methodologically dubious international rankings.

The fall of Todai in the THE Asian rankings was preceded by a fall in the World University Rankings (WUR) from 23rd place in the 2014 rankings (2014-2015) to 43rd in 2015 (2015-2016). Among Asian universities in the WUR it fell from first place to third.

This was not the result of anything that happened to Todai over the course of a year.  There was no exodus of international students, no collapse of research output, no mass suicide of faculty, no sudden and miraculous disappearance of citations. It was the result of a changing methodology including the exclusion from citation counts of mega-papers, mainly in particle physics, with more than a thousand authors. This had a disproportionate impact on the University of Tokyo, whose citation score fell from 74.7 to 60.9, and some other Japanese universities.

The university made a bit of a comeback in the world rankings this year, rising to 39th (with a slightly improved citations score of 62.4) after THE did some more tweaking and gave limited credit for citations of the mega-papers.

Todai did even worse in the 2016 Asian rankings, derived from the world rankings, falling to an embarrassing seventh place behind two Singaporean, two Chinese and two Hong Kong universities. How did that happen?  There was nothing like this in other rankings. Todai's position in the Shanghai Academic Ranking of World Universities (ARWU) actually improved between 2015 and 2016, from 21st to 20th, and in the Round University Rankings from 47th to 37th, and it remained the top Asian university in the CWUR, URAP and National Taiwan University rankings.

Evidently THE saw things that others did not. They decided that Hong Kong and Mainland China were separate entities for ranking purposes and that Mainland students, faculty and collaborators in Hong Kong universities would be counted as international. The international orientation score of the University of Hong Kong (UHK) in the Asian rankings accordingly went up from 81.9 to 99.5 between 2015 and 2016. Peter Mathieson of the University of Hong Kong was aware of this and warned everyone not to get too excited. Meanwhile universities such as Hong Kong University of Science and Technology (HKUST) and Nanyang Technological University (NTU) Singapore were getting higher scores for citations, almost certainly as a result of the methodological changes.

In addition, as noted in earlier posts, THE recalibrated the weighting assigned to its indicators, reducing that given to the research and teaching reputation surveys, where Todai is a high flier, and increasing that for income from industry where Peking and Tsinghua universities have perfect scores and NTU, HKUST and UHK do better than Tokyo.

In 2015 THE issued a health warning:

"Because of changes in the underlying data, we strongly advise against direct comparisons with previous years’ World University Rankings."

They should have done that for the 2016 Asian rankings which added further changes. It is regrettable that THE has published an article which refers to a fall in the rankings. There has been no fall in any real sense. There has only been a lot of recalibration and changes in the way data is processed.

Japanese higher education should not be ashamed of any decline in quality. If there had been any, especially in research, it would have shown up in other more stable and less opaque rankings. They should, however, be embarrassed if they allow national and university policies to be driven by methodological tweaking.





Friday, December 16, 2016

A new Super-University for Ireland?

University rankings have become extremely influential over the last few years. This is not entirely a bad thing. The initial publication of the Shanghai rankings in 2003, for example, exposed the pretensions of many European universities revealing just how far behind they had fallen in scientific research.  It also showed China how far it had to go to achieve scientific parity with the West.

Unfortunately, rankings have also had malign effects. The THE and QS world rankings have acquired a great deal of respect, trust, even reverence that may not be entirely deserved. Both introduced significant methodological changes in 2015, and THE made further changes in 2016; the consequence is that there have been some remarkable rises and falls within the rankings that have had a lot of publicity but have little to do with any real change in quality.

In addition, both QS and THE have increased the number of ranked universities, which can affect the mean score for indicators from which the processed scores given to the public are derived. Both have surveys that can be biased and subjective. Both are unbalanced: QS with a 50% weighting for academic and employer surveys and THE with field- and year-normalised citations plus a partial regional modification with an official weighting of 30% (the modification means that everybody except the top scorer gets a bonus for citations). The remarkable rise of Anglia Ruskin University to parity with Oxford and Princeton in this year’s THE research impact (citations) indicator and the high placing of the Pontifical Catholic University of Chile and the National University of Colombia in QS’s employers survey are evidence that these rankings continue to be implausible and unstable. To make higher education policy dependent on their fluctuations is very unwise.
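The regional modification can be sketched roughly as follows. This is my reading of THE's published methodology, in which half the citations score is the plain field-normalised impact and half is that impact divided by the square root of the country's average impact; all figures here are invented:

```python
# Rough sketch of a country-adjusted ("regional modification") citation
# score: universities in countries whose average impact sits below the
# world baseline of 1.0 receive a boost. Numbers are invented.
import math

def modified_citation_score(university_impact, country_avg_impact):
    adjusted = university_impact / math.sqrt(country_avg_impact)
    return 0.5 * university_impact + 0.5 * adjusted

# The same raw impact, placed in two different national contexts:
low_avg_country = modified_citation_score(1.2, country_avg_impact=0.5)
high_avg_country = modified_citation_score(1.2, country_avg_impact=1.5)
print(low_avg_country > 1.2 > high_avg_country)  # prints True
```

Because the adjustment depends only on the national average, two universities with identical papers and citations can end up with quite different citation scores, which is part of what makes the indicator so volatile.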

This is particularly true of the two leading Irish universities, Trinity College Dublin (TCD)  and University College Dublin (UCD), which have in fact been advancing in the Round University Rankings produced by a Russian organisation and ShanghaiRanking’s Academic Ranking of World Universities. These two global rankings have methodologies that are generally stable and transparent.

I pointed out in 2015 that TCD had been steadily rising in the Shanghai ARWU since 2004, especially in the Publications indicator (papers in the Science Citation Index - Expanded and the Social Science Citation Index) and PCP (productivity per capita, that is the combined indicator scores divided by the number of faculty). This year, to repeat an earlier post, TCD’s publication score again went up very slightly from 31 to 31.1 (27.1 in 2004) and the PCP quite significantly from 19 to 20.8 (13.9 in 2004), compared to top scores of 100 for Harvard and Caltech respectively.

UCD has also continued to do well in the Shanghai rankings with the publications score rising this year from 34.1 to 34.2 (27.3 in 2004) and PCP from 18.0 to 18.1 (8.1 in 2004).

The Shanghai rankings are, of course, famous for not counting the arts and humanities and not trying to measure anything related to teaching. The RUR rankings from Russia are based on Thomson Reuters data, also used by THE until two years ago, and they do include publications in the humanities and teaching-related metrics. They have 12 of the 13 indicators in the THE World University Rankings, plus eight others, but with a sensible weighting, for example 8% instead of 30% for field-normalised citations.

The RUR rankings show that TCD rose from 174th overall in 2010 to 102nd in 2016 (193rd to 67th for research). UCD rose from 213th overall to 195th (157th to 69th for research), although some Irish universities such as NUI Galway, NUI Maynooth, University College Cork, and Dublin City University have fallen.

It is thoroughly disingenuous for Irish academics to claim that academic standards are declining because of a lack of funds. Perhaps they will do so in the future. But so far everything suggests that the two leading Irish universities are making steady progress especially in research.

The fall of UCD in this year’s THE rankings, TCD’s fall in 2015, and the fall of both in the QS rankings mean very little. When there are such large methodological changes it is pointless to discuss how to improve in the rankings. Methodological changes can be made and unmade and universities made and unmade, as the Middle East Technical University found in 2015 when it fell from 85th place in the THE world rankings to below 501st.

The Irish Times of November 8th had an article by Philip O’Kane that proposed that Irish universities should combine in some ways to boost their position in the global rankings.

He suggested that:
“The only feasible course of action for Ireland to avert continued sinking in the world rankings is to create a new “International University of Ireland”.

This could be a world-class research university that consists exclusively of the internationally-visible parts of all our existing institutions, and to do so at marginal cost using joint academic appointments, joint facilities and joint student registration, in a highly flexible and dynamic manner.

Those parts that are not internationally visible would be excluded from this International University of Ireland.”

It sounds like he is proposing that universities maintain their separate identities for some purposes but present a united front for international matters. A similar idea was proposed in India a while ago but was quickly shot down by Phil Baty of THE. It is most unlikely that universities could separate out data on faculty, students, income, and publications for their international bits and send those data to the rankers.

The idea of a full merger is more practical but could be pointless or even counter-productive. In 2012 a group of experts headed by Frans Van Vught suggested that UCD and TCD be merged to become a single world-class university.

The ironic thing about this idea is that a merger would help with the Shanghai rankings that university bosses are studiously pretending do not exist but would be of little or no use with the rankings that the bureaucrats and politicians do care about.

The Shanghai rankings are known for being as much about quantity as quality. A merger of TCD and UCD would produce a significant gain by combining the two universities' publications, papers in Nature and Science, and highly cited researchers. It would do no good for Nobel and Fields awards, since Trinity has two now and UCD none, so the new institution would still have only two (ShanghaiRanking does not count Peace and Literature prizes). Overall, it is likely that the new Irish super-university would rise about a dozen places in the Shanghai rankings, perhaps even getting into the top 150 (TCD is currently 162nd).

But it would probably not help with the rankings that university heads are so excited about. Many of the indicators in the QS and THE rankings are scaled in some way. A merged university would have more citations by adding together those of TCD and UCD, for instance, but QS divides citations by the number of faculty, which would also be combined. You could combine the incomes of TCD and UCD, but then the combined income would be divided by the combined staff numbers.

The only place where a merger might pay off is the reputation surveys, worth 50% in QS and 33% in THE, but the problem here is that the reputation of a new University of Dublin or Ireland or whatever it is called is likely to be inferior to that of TCD and UCD for some years to come. There are places, France and Russia for example, where merging universities is a sensible way of pooling the strengths of a multitude of small specialist schools and research centres. But for Ireland there is little point if the idea is to get ahead in the QS and THE rankings.


It would make more sense for Irish universities to focus on the Shanghai rankings where, if present trends continue, TCD will catch up with Harvard in about 240 years although by then the peaks of the intellectual world will probably be in Seoul, Shanghai, Moscow, Warsaw and Tallinn. 
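The "240 years" quip rests on a straight-line extrapolation. As a hedged sketch of the arithmetic, using the publication scores quoted earlier in this post (the horizon depends on which indicator and which period you project, so this reproduces the spirit rather than the precise figure):

```python
# A straight-line extrapolation of TCD's Shanghai publications score
# towards Harvard's benchmark score of 100, using figures quoted above.
# Illustrative only: other indicators and periods give other horizons,
# which is presumably why the post's own estimate is around 240 years.
score_2004, score_2016 = 27.1, 31.1               # TCD publications score
rate = (score_2016 - score_2004) / (2016 - 2004)  # points gained per year
years_to_harvard = (100 - score_2016) / rate
print(round(years_to_harvard))  # roughly two centuries on this metric
```

On the PCP indicator, which has risen faster, the same arithmetic gives a shorter horizon, which is why any such figure should be taken as a joke with a point rather than a forecast.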

Saturday, December 03, 2016

Yale Engages with the Rankings

Over the last few years, elite universities have become increasingly concerned with their status in the global rankings. A decade ago university heads were inclined to ignore rankings or to regard them as insignificant, biased or limited. The University of Texas at Austin, for example, did not take part in the 2010 Times Higher Education (THE) rankings, although it relented and submitted data in 2011 after learning that other US public institutions had done so and had scored better than in the preceding THES-QS rankings.

It seems that things are changing. Around the world, excellence initiatives are proliferating, and one element of these is often improving the position of aspiring universities in international rankings.

It should be a major concern that higher education policies and priorities are influenced or even determined by publications that are problematic and incomplete in several ways. Rankings count what can be counted and that usually means a strong emphasis on research. Indeed, in the case of the Taiwan, URAP and Shanghai rankings that is all they are concerned with. Attempts to measure teaching, especially undergraduate teaching, have been rather haphazard. Although the US News Best US Colleges ranking includes measures of class size, admission standards, course completion and peer evaluation, indicators in global rankings such as THE and Quacquarelli Symonds (QS) focus on inputs such as staff-student ratio or income that might have some relation to eventual student or graduate outcomes.

It is sad that some major universities are less interested in developing the assessment of teaching or student quality and more in adjusting their policies and missions to the agenda of the rankings, particularly the THE world rankings.

Yale is now jumping on the rankings carousel. For decades it has been happily sitting on top of the US News college rankings, making up the top three along with Princeton and Harvard. But Yale does much less well in the current global rankings. This year it is ranked 11th by the Shanghai rankings (9th among US universities); 15th by QS (7th among US universities, behind Nanyang Technological University and Ecole Polytechnique Federale Lausanne); and 12th in the THE world rankings (8th in the USA).

And so:

"For an example of investing where Yale must be strong, I want to touch very briefly on rankings, although I share your nervousness about being overly reliant on what are far-from-perfect indicators. With our unabashed emphasis on undergraduate education, strong teaching in Yale College, and unsurpassed residential experience, Yale has long boasted one of the very highest-ranked colleges, perennially among the top three. In the ratings of world research universities, however, we tend to be somewhere between tenth and fifteenth. This discrepancy points to an opportunity, and that opportunity is science, as it is the sciences that most differentiate Yale from those above us on such lists."


The reasons for the difference between the US and the world rankings are that Yale is relatively small compared to the other Ivy League members and the leading state universities, that it is strong in the arts and humanities, and that it has a good reputation for undergraduate teaching.

One of the virtues of global ranking is the exposure of the weaknesses of Western universities, especially in the teaching of and research in STEM subjects, and it would do Yale no harm to shift a bit from the humanities and social sciences to the hard sciences. To take account of research-based rankings with a consistent methodology, such as URAP, National Taiwan University or the Shanghai rankings, is quite sensible. But Yale is asking for trouble if it becomes overly concerned with rankings such as THE or QS that are prone to destabilising changes in methodology, rely on subjective survey data, assign disproportionate weights to certain indicators, emphasise inputs such as income or faculty resources rather than actual achievement, are demonstrably biased, and include extremely counter-intuitive results (Anglia Ruskin with a research impact equal to Princeton's and greater than Yale's; Pontifical Catholic University of Chile 28th in the world for employer reputation).

Yale would be better off if it encouraged the development of cross-national tools to measure student achievement and quality of teaching or ranking metrics that assigned more weight to the humanities and social sciences.

Monday, November 21, 2016

TOP500 Supercomputer Rankings

Every six months TOP500 publishes a list of the five hundred most powerful computer systems in the world. This is probably a good guide to the economic, scientific and technological future of the world's nation states.

The most noticeable change since November 2015 is that the number of supercomputers in China has risen dramatically from 108 to 171 systems while the USA has fallen from 200 to 171. Japan has fallen quite considerably from 37 to 27 and Germany and the UK by one each. France has added two supercomputers to reach 20.

In the whole of Africa there is exactly one supercomputer, in Cape Town. In the Middle East there are five, all in Saudi Arabia, three of them operated by Aramco.

Here is a list of countries with the number of computers in the top 500.

China 171
USA 171
Germany 32
Japan 27
France 20
UK 17
Poland 7
Italy 6
India  5
Russia 5
Saudi Arabia 5
South Korea 4
Sweden 4
Switzerland 4
Australia 3
Austria 3
Brazil 3
Netherlands 3
New Zealand 3
Denmark 2
Finland 2
Belgium 1
Canada 1
Czech Republic 1
Ireland 1
Norway 1
Singapore 1
South Africa 1
Spain 1

Friday, November 18, 2016

QS seeks a Passion Integrity Empowerment and Diversity compliant manager

The big ranking brands seem to be suffering from a prolonged fit of megalomania, perhaps caused by the toxic gases of Brexit and the victory of the deplorables. The "trusted" THE, led by the "education secretary of the world", has just made a foray into the US college ranking market, published a graduate employability ranking and is now going to the University of Johannesburg for a BRICS Plus Various Places summit.

Meanwhile the "revered" QS, creator of "incredibly successful ranking initiatives", also appears to be getting ready for bigger and better things. They are advertising for a Ranking Manager who will be

"a suitably accomplished and inspirational leader", and possess "a combination of analytical capability, thought leadership and knowledge of the global higher education landscape", "ensure an environment of Passion, Integrity, Empowerment and Diversity is maintained", be "(h)ighly analytical with extensive data modelling experience" and have "great leadership attributes".

And so on and so on. Read it yourself. If you can get through to the end without laughing you could be a suitable candidate.

I can't wait to see who gets the job.

Wednesday, November 02, 2016

More on teaching-centred rankings

The UK is proposing to add a Teaching Excellence Framework (TEF) to the famous, or infamous, Research Excellence Framework (REF). The idea is that universities are to be judged according to their teaching quality, which is to be measured by how many students manage to graduate, how satisfied students are with their courses, and whether graduates are employed or in postgraduate courses shortly after graduation.

There are apparently going to be big rewards for doing well according to these criteria. It seems that universities that want to charge high tuition fees must reach a certain level.

Does one have to be a hardened cynic to suspect that there is going to be a large amount of manipulation if this is put into effect? Universities will be ranked according to the proportion of students completing their degrees? They will make graduation requirements easier, abolish compulsory courses in difficult things like dead white poets, foreign languages or maths, or allow alternative methods of assessment such as group work, art projects and so on. We have, for example, already seen the number of first and upper second class degrees awarded by British universities rise enormously in the last few years.

Universities will be graded by student satisfaction? Just let the students know, very subtly of course, that if they say their university is no good then employers are less likely to give them jobs. Employment or postgraduate courses six months after graduation? Lots of internships and easy admissions to postgraduate courses.

In any case, it is all probably futile. A look at the Guardian University Guide rankings in a recent post here shows that if you want to find out about student outcomes six months after graduation, the most relevant number is the average entry tariff, that is, 'A' level grades three or four years earlier.

I doubt very much that employers and graduate, professional and business schools are really interested in the difference between an A and an A* grade, or even an A and a B. Bluntly, they choose candidates who they think are intelligent and trainable, something which correlates highly with 'A' level grades or, across the Anglosphere Lake, SAT, ACT and GRE scores, and who display other non-cognitive characteristics such as conscientiousness and open-mindedness. They also tend to pick people who resemble themselves as much as possible. Employers and schools tend to select candidates from those universities that are more likely to produce large numbers of graduates with the desired attributes.

Any teaching assessment exercise that does not measure or attempt to measure the cognitive skills of graduates is likely to be of little value.

In June Times Higher Education (THE) ran a simulation of the ranking of UK universities that might result from the TEF exercise. There were three indicators: student completion of courses, student satisfaction, and graduate destinations, that is, the number of graduates employed or in postgraduate courses six months after graduation. In addition to absolute scores, universities were benchmarked for gender, ethnicity, age, disability and subject.

There are many questions about the methodology of the THE exercise, some of which are raised in the comments on the THE report.

The THE simulation appears to confirm that students' academic ability is more important than anything else when it comes to their career prospects. Comparing the THE scores for graduate destinations (absolute) with the other indicators in the THE TEF simulation and the Guardian rankings we get the following correlations.

Graduate Destinations (THE absolute) and:

Average Entry Tariff (Guardian)  .772
Student completion (THE absolute)  .750
Staff student ratio (Guardian inverted)  .663
Spending per student (Guardian)  .612
Satisfaction with course (Guardian)  .486
Student satisfaction (THE absolute)  .472
Satisfaction with teaching (Guardian)  .443
Value added (Guardian)  .347
Satisfaction with feedback (Guardian)  -.239

So a high score in the THE graduate destinations metric, like its counterpart in the Guardian rankings, is associated most closely with students' academic ability and their ability to finish their degree programmes, next with staff-student ratio and spending, moderately with overall satisfaction and satisfaction with teaching, and substantially less with value added. Satisfaction with feedback has a negative association with career success, narrowly defined.

Looking at the benchmarked score for Graduate Destinations we find that the correlations are more modest than with the absolute score. But average entry tariff is still a better predictor of graduate outcomes than value added.

Graduate Destinations (THE distance from benchmark) and:

Student completion (THE benchmarked) .487
Satisfaction with course (Guardian)  .404
Staff student ratio (Guardian inverted)  .385
Average entry tariff (Guardian)  .383
Spending per student (Guardian)  .383
Satisfaction with teaching (Guardian) .324
Student satisfaction (THE benchmarked)  .305
Value added (Guardian)  .255
Satisfaction with feedback  (Guardian)  .025

It is useful to know about student satisfaction and very useful for students to know how likely they are to finish their programmes. But until rankers and government agencies figure out how to estimate the subject knowledge and cognitive skills of graduates and the impact, if any, of universities then the current trend to teaching-centred rankings will not be helpful to anyone.
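The figures in the two tables above are Pearson correlation coefficients. For anyone wanting to reproduce this kind of comparison, a minimal sketch of the calculation, with invented scores for six hypothetical universities (the actual Guardian and THE data are not reproduced here):

```python
# Minimal Pearson correlation, the statistic behind the tables above.
# The two lists are invented scores for six hypothetical universities.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

entry_tariff = [210, 180, 160, 150, 130, 120]   # hypothetical tariff points
destinations = [88, 85, 80, 74, 72, 65]         # hypothetical % in work/study

r = pearson(entry_tariff, destinations)
print(round(r, 3))  # → 0.962
```

A value near 1 means the two indicators rank universities in nearly the same order; a value near 0 means they are unrelated, and negative values, like the one for satisfaction with feedback, mean they pull in opposite directions.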

Sunday, October 23, 2016

NORTH KOREA: Some advice on how to become a world-class university


The October 8th post has been republished in University World News.

Monday, October 17, 2016

More lamentation from Dublin

Rankings have become a major weapon in the struggle of universities around the world to get their fair share or what they think is their fair share of public money. The Times Higher Education (THE) world and regional rankings are especially useful in this regard. They have a well known brand name, occasionally confused with the "Times of London", and sponsor prestigious summits at which rankers, political leaders and university heads wallow together in a warm bath of mutual flattery.

In addition, the THE rankings are highly volatile, with significant methodological changes in 2011, 2015 and 2016. Another source of instability is the growing number of ranked universities. The scores used for calculating the various indicators in these rankings are not raw numbers but standardised scores derived from means and standard deviations. So if there is an influx of new universities, the mean scores are likely to change, and consequently so are the processed scores of those above or below the mean.
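A small sketch of why an influx of newcomers moves everyone's processed scores, assuming a simple z-score standardisation of the kind described above (THE's actual procedure is more elaborate, and all the numbers here are invented for illustration):

```python
# Illustration (invented numbers): standardised indicator scores shift
# when low-scoring newcomers join the ranked population, even though the
# raw scores of the incumbents are unchanged.
from statistics import mean, pstdev

def standardise(raw, population):
    """Simple z-score of a raw score against the whole ranked population."""
    return (raw - mean(population)) / pstdev(population)

incumbents = [80.0, 60.0, 40.0]          # raw indicator scores, year 1
newcomers = [10.0, 12.0, 15.0, 18.0]     # low-scoring entrants, year 2

year1 = [standardise(s, incumbents) for s in incumbents]
year2 = [standardise(s, incumbents + newcomers) for s in incumbents]

# The university with a raw score of 60 sits exactly at the year-1 mean
# (z = 0) but looks well above average once the newcomers arrive,
# without having improved at all.
print(round(year1[1], 2), round(year2[1], 2))
```

The same mechanism works in reverse for universities below the new mean, which is one reason year-on-year movements in these rankings say so little about real change.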

The THE rankings can be interpreted to provide useful arguments whatever happens. If Western universities rise that is a sign of authentic excellence but one that is threatened by reduced funding, restrictions on foreign students and researchers, and reputations sullied by xenophobic electorates. If they fall that means of course that those threats have materialised.

The QS rankings are also sometimes unstable, having made significant methodological changes in 2015 and giving a 50% weighting to very subjective reputation indicators.

Irish universities seem to be especially fond of using these rankings as a ploy to gain public favour and largess. In 2015 Ireland's top university, Trinity College Dublin (TCD), fell seven places in the QS world rankings and 22 places in THE's.

TCD announced, of course, that government cuts had a lot to do with it. The Dean of Research said:
“Notwithstanding these combined achievements the cuts in funding and increased investments made by our global competition, continue to have a direct impact on the rankings. Trinity is battling against intense international competition, particularly from Asian universities and from certain European countries where governments are investing heavily in higher education. The continued reduction in government investment in Irish universities has impacted negatively on the international standing of our universities and our ability to compete in a global arena.”

“Trinity’s top 100 position globally and top 30 in Europe is remarkable in the context of its reduced income. Trinity’s annual budget per academic is 45% lower than that of the average university in the world top 200. It is to the credit of Trinity’s dedicated teaching and research staff that the University continues to maintain its global position against such challenges.”
“As a knowledge economy we need an excellent competitive education system.  Trinity remains a world leading research-intensive university and the knowledge and innovation created are critical for the economic development of Ireland.”
I pointed out in 2015 that TCD had been steadily rising in the Shanghai ARWU rankings since 2004, especially in the Publications indicator (papers in the Science Citation Index and the Social Science Citation Index) and PCP (productivity per capita, that is the combined indicator scores divided by the number of faculty). This year, TCD's publication score again went up very slightly from 31 to 31.1 and the PCP quite significantly from 19 to 20.8, compared to top scores of 100 for Harvard and Caltech respectively.

University College Dublin has also continued to do well in the Shanghai rankings, with the publications score rising this year from 34.1 (27.3 in 2004) to 34.2 and PCP from 18.0 (8.1 in 2004) to 18.1.

The Shanghai rankings are famous for not counting the arts and humanities or trying to measure anything related to teaching. The RUR rankings from Russia are based on Thomson Reuters data, which THE also used until two years ago, and they do include publications in the humanities and teaching-related metrics. They have 12 of the 13 indicators in the THE world rankings, plus eight others, but with sensible weightings, for example 8% instead of 30% for field-normalised citations.

The RUR rankings show that TCD rose from 174th overall in 2010 to 102nd in 2016. (193rd to 67th for research).

University College Dublin (UCD) rose from 213th overall to 195th (157th to 69th for research), although some Irish universities, NUI Galway, NUI Maynooth, University College Cork, and Dublin City University, have fallen.

Nonetheless TCD decided in March of this year to develop a rankings strategy aimed at QS and THE, with a Rankings Steering Group chaired by the Provost. The competence and knowledge displayed by such groups and committees often have little relationship to the status and salaries of their members, and that appears to be the case for TCD.

It seems that a misplaced decimal point in the financial data submitted to THE for the 2016 rankings would have left TCD with a lower rating than it deserved, and so it has withdrawn from the rankings until the error is corrected.

If TCD cannot find an administrator or a statistician to check things like that it really has no business asking for taxpayers' money. I suspect that decimal points are not misplaced -- or if they are it is to the right rather than the left --  in submissions for grants or subsidies.

This raises the question of whether the THE checking procedures are adequate. I was under the impression that if there was a change of 20% then red flags would start waving. For THE to allow a large change in reported income and therefore at least one, maybe two or three, income indicators sounds rather odd. What about that unique game changing audit?

Meanwhile UCD, 176th in the THE rankings last year, has dropped out of the top 200 altogether.

The QS rankings were also bad news for Ireland. Every university fell except for NUI Galway, and there were none in the top 100.

But has there in fact been any real decline in the quality of TCD and UCD?

The evidence of RUR and the Shanghai rankings is that the two main universities are steadily improving or at least holding their own, especially with regard to research. Possibly less highly regarded places like NUI Galway and NUI Maynooth are struggling but that could be fairly easy to remedy.

The Irish Universities Association issued a statement:

'The continued slide of the Irish Universities in the QS World University Rankings should be greeted with alarm. Strenuous efforts on the part of the universities has resulted in strong performance on some measures in the rankings such as those relating to research citations and internationalisation of the staff and student cohort. Unfortunately, this good work is being undermined by the negative impact of underfunding on key indicators such as the student:faculty ratio. The latter is highly influential in scoring in the QS rankings.
It would also appear likely that almost a decade of austerity is spilling over into the reputational component of the rankings, with consequent negative repercussions. IUA Chief Executive, Ned Costello said: “we can no longer hide from the corrosive effect which years of cutbacks are having on our higher education system. At a time when we are more dependent than ever on the talent of our people for our economic future, we simply must invest in our universities. An immediate injection of funding is required in the upcoming Budget and Estimates to fund more lecturers, deliver smaller group teaching and restore quality in our system.” '
The decline of TCD and UCD in the QS and THE rankings cannot reasonably be attributed to any real deficiencies on the part of those universities. A decline in the number of lecturers would have a negative effect on the faculty-student metric but would help indicators scaled for faculty size. The alleged decline is largely a consequence of methodological changes and adjustments, the instability resulting from the influx of new universities, and growing ranking sophistication in other places.

It is a shame that researchers and scholars should collude with those rankings that show them in a bad light while ignoring more stable and less biased ones that show a continuing and genuine improvement especially in research.

Saturday, October 08, 2016

Will North Korea Engage with the Rankings?

Kim Jong-un has declared that Kim Il-sung University must become a world-class institution. No doubt there will be chuckles at Oxford, Anglia Ruskin University, the University of Iceland and the Free University of Bozen-Bolzano, but it could be surprisingly easy if being world class means getting a high place in the rankings. After all, quite a few places now appearing in the various global and regional tables would have been just as surprising a few years ago.

First, I should mention that there already is a ranking in which Kim Il-sung University is listed: a ranking of international influence, as measured by Google search results, in which the institution is 254th.

Here is my plan for North Korea to become world class in just a few years.

1. Offer adjunct professorships to 150 researchers and ask them to put the university as a secondary affiliation. Maybe they can come and visit Pyongyang sometimes, but that is not really necessary. In a little while they will be producing 150 papers or more a year with the university's name on them, eventually one thousand over a five-year period, which will meet the threshold for inclusion in the THE world rankings.

2. Make sure that one or two of those adjunct professors are involved in multi-author projects with multiple citations (but keep the author count below 1,000). Medicine is probably a better bet than physics at the moment. This will get a good score in the THE citations indicator.

3. Make sure that research funds to the university go through something with the word industry in it. That way the university will go to the top of the THE Industry Income: Innovation indicator.

4. Don't forget the other rankings. Give the university a boost in the QS world rankings by drafting lots of research assistants, who will count in the student-faculty ratio indicator.

5. Start a branch campus somewhere and get a high score in the international indicators that nearly everybody has nowadays. If the branch is in the USA, go for Princeton Review's top party school.

6. Send a few hundred closely supervised graduate students abroad and tell them they know what to do for the QS reputation survey. When they come back as faculty with a co-authored article or two tell them they know what to do for the THE survey.

7. When Kim Il-sung University is a rising star of the university world, try hosting a summit to rise even higher. Better make sure that hotel is finished though.

Tuesday, October 04, 2016

About those predictions

On September 16th I made some predictions about the latest Times Higher Education (THE) world rankings and summit at Berkeley. My record is not perfect but probably a bit better than the professional pollsters who predicted a hung parliament at the last UK elections, a crushing defeat for Brexit and humiliation for Donald Trump in the Republican primaries.

I predicted that Trump would not be invited to give a keynote speech. I was right but it was a pity. He would certainly have added a bit of diversity to a rather bland affair and he does seem to have a talent for helping unpromising beginners into successful careers, something that the current fad for value added ranking is supposed to measure.

I also said that UC Berkeley as the summit host would get into the top ten again after falling to thirteenth last year. This has now become a tradition at THE summits. I suspect though that even THE will find it hard to get King's College London, the 2017 world summit host, into the top ten. Maybe they will have to settle for top twenty.

The prediction that adding books to the indicator mix would help British universities seems to have been fulfilled. Oxford was number one for the first time. I was also right about the renewed rise of Asia, some of it anyway. The Korean favourites, Seoul National University, POSTECH, KAIST, Sungkyunkwan University and Korea University, have all risen significantly this year.

The decline of US public universities blamed on lack of funding? Yes, although I never thought Robert Reich would say that public higher education is dying.

Danger of Brexit and immigration controls for UK universities? I did not see anything specific but I did not look very hard and probably everybody thinks it's self evident.

I have to confess that I have not counted the number of times that the words prestige and prestigious were used at the summit or in the Christopher Priest novel. In the latter it is a contraction of prestidigitation and refers to the effect, or third segment, of a stage illusion, following the setup and the performance: the moment when the rabbit is pulled out of the hat or Anglia Ruskin is revealed to have a greater world research impact than Cambridge or Imperial.

Phil Baty gave a masterclass and so did Duncan Ross. I am pretty certain that no feminists complained about this outrageous sexism, so I am prepared to admit that I was wrong there.

Incidentally, according to Wikipedia a master class is "a class given to students of a particular discipline by an expert of that discipline -- usually music, but also painting, drama, any of the arts, or on any other occasion where skills are being developed."

Saturday, October 01, 2016

Who says rankings are of no significance?

From Mansion Global 


Six High-End Homes Near America’s Top-Ranked University

Who needs dorms at Stanford when you can live in one of these?


Stanford is, in case you haven't noticed, top of the Wall Street Journal/Times Higher Education US college ranking [subscription required for full results] and, more significantly, of the list of the world's 100 most innovative universities.