Saturday, December 03, 2016

Yale Engages with the Rankings

Over the last few years, elite universities have become increasingly concerned with their status in the global rankings. A decade ago university heads were inclined to ignore rankings or to regard them as insignificant, biased or limited. The University of Texas at Austin, for example, did not take part in the 2010 Times Higher Education (THE) rankings, although it relented and submitted data in 2011 after learning that other US public institutions had done so and had scored better than in the preceding THES-QS rankings.

It seems that things are changing. Around the world, excellence initiatives are proliferating, one element of which is often improving the position of aspiring universities in international rankings.

It should be a major concern that higher education policies and priorities are influenced or even determined by publications that are problematic and incomplete in several ways. Rankings count what can be counted, and that usually means a strong emphasis on research. Indeed, in the case of the Taiwan, URAP and Shanghai rankings, that is all they are concerned with. Attempts to measure teaching, especially undergraduate teaching, have been rather haphazard. Although the US News Best US Colleges ranking includes measures of class size, admission standards, course completion and peer evaluation, indicators in global rankings such as THE and Quacquarelli Symonds (QS) focus on inputs such as staff-student ratio or income that might have some relation to eventual student or graduate outcomes.

It is sad that some major universities are less interested in developing the assessment of teaching or student quality and more interested in adjusting their policies and missions to the agenda of the rankings, particularly the THE world rankings.

Yale is now jumping on the rankings carousel. For decades it has been happily sitting on top of the US News college rankings, making up the top three along with Princeton and Harvard. But Yale does much less well in the current global rankings. This year it is ranked 11th by the Shanghai rankings (9th among US universities), 15th by QS (7th among US universities, behind Nanyang Technological University and Ecole Polytechnique Federale de Lausanne), and 12th in the THE world rankings (8th in the USA).

And so:

"For an example of investing where Yale must be strong, I want to touch very briefly on rankings, although I share your nervousness about being overly reliant on what are far-from-perfect indicators. With our unabashed emphasis on undergraduate education, strong teaching in Yale College, and unsurpassed residential experience, Yale has long boasted one of the very highest-ranked colleges, perennially among the top three. In the ratings of world research universities, however, we tend to be somewhere between tenth and fifteenth. This discrepancy points to an opportunity, and that opportunity is science, as it is the sciences that most differentiate Yale from those above us on such lists."


The reasons for the difference between the US and the world rankings are that Yale is relatively small compared to the other Ivy League members and the leading state universities, that it is strong in the arts and humanities, and that it has a good reputation for undergraduate teaching.

One of the virtues of global rankings is the exposure of the weaknesses of Western universities, especially in the teaching of, and research in, STEM subjects, and it would do Yale no harm to shift a bit from the humanities and social sciences to the hard sciences. Taking account of research-based rankings with a consistent methodology, such as URAP, National Taiwan University or the Shanghai rankings, is quite sensible. But Yale is asking for trouble if it becomes overly concerned with rankings such as THE or QS, which are inclined to destabilising changes in methodology, rely on subjective survey data, assign disproportionate weights to certain indicators, emphasise inputs such as income or faculty resources rather than actual achievement, are demonstrably biased, and include indicators that are extremely counter-intuitive (Anglia Ruskin with a research impact equal to Princeton's and greater than Yale's, the Pontifical Catholic University of Chile 28th in the world for employer reputation).

Yale would be better off if it encouraged the development of cross-national tools to measure student achievement and quality of teaching or ranking metrics that assigned more weight to the humanities and social sciences.





Monday, November 21, 2016

TOP500 Supercomputer Rankings

Every six months TOP500 publishes a list of the five hundred most powerful computer systems in the world. This is probably a good guide to the economic, scientific and technological future of the world's nation states.

The most noticeable change since November 2015 is that the number of supercomputers in China has risen dramatically, from 108 to 171 systems, while the USA has fallen from 200 to 171. Japan has fallen quite considerably, from 37 to 27, and Germany and the UK have lost one each. France has added two supercomputers to reach 20.

In the whole of Africa there is exactly one supercomputer, in Cape Town. In the Middle East there are five, all in Saudi Arabia, three of them operated by Aramco.

Here is a list of countries with the number of computers in the top 500.

China 171
USA 171
Germany 32
Japan 27
France 20
UK 17
Poland 7
Italy 6
India 5
Russia 5
Saudi Arabia 5
South Korea 4
Sweden 4
Switzerland 4
Australia 3
Austria 3
Brazil 3
Netherlands 3
New Zealand 3
Denmark 2
Finland 2
Belgium 1
Canada 1
Czech Republic 1
Ireland 1
Norway 1
Singapore 1
South Africa 1
Spain 1





Friday, November 18, 2016

QS seeks a Passion Integrity Empowerment and Diversity compliant manager

The big ranking brands seem to be suffering from a prolonged fit of megalomania, perhaps caused by the toxic gases of Brexit and the victory of the deplorables. The "trusted" THE, led by the "education secretary of the world", has just made a foray into the US college ranking market, published a graduate employability ranking, and is now going to the University of Johannesburg for a BRICS Plus Various Places summit.

Meanwhile the "revered" QS, creator of "incredibly successful ranking initiatives", also appears to be getting ready for bigger and better things. They are advertising for a Ranking Manager who will be

"a suitably accomplished and inspirational leader", and possess "a combination of analytical capability, thought leadership and knowledge of the global higher education landscape" and "ensure an environment of Passion, Integrity, Empowerment and Diversity is maintained" and be "(h)ighly analytical with extensive data modelling experience" and have "great leadership attributes".

And so on and so on. Read it yourself. If you can get through to the end without laughing you could be a suitable candidate.

I can't wait to see who gets the job.

Wednesday, November 02, 2016

More on teaching-centred rankings

The UK is proposing to add a Teaching Excellence Framework (TEF) to the famous, or infamous, Research Excellence Framework (REF). The idea is that universities are to be judged according to their teaching quality, which is to be measured by how many students manage to graduate, how satisfied students are with their courses, and whether graduates are employed or in postgraduate courses shortly after graduation.

There are apparently going to be big rewards for doing well according to these criteria. It seems that universities that want to charge high tuition fees must reach a certain level.

Does one have to be a hardened cynic to suspect that there is going to be a large amount of manipulation if this is put into effect? Universities will be ranked according to the proportion of students completing their degrees? They will make graduation requirements easier, abolish compulsory courses in difficult things like dead white poets, foreign languages or maths, or allow alternative methods of assessment such as group work, art projects and so on. We have, for example, already seen how the number of first and upper second class degrees awarded by British universities has risen enormously in the last few years.

Universities will be graded by student satisfaction? Just let the students know, very subtly of course, that if they say their university is no good then employers are less likely to give them jobs. Employment or postgraduate courses six months after graduation? Lots of internships and easy admissions to postgraduate courses.

In any case, it is all probably futile. A look at the Guardian University Guide rankings in a recent post here shows that if you want to find out about student outcomes six months after graduation, the most relevant number is the average entry tariff, that is, 'A' level grades three or four years earlier.

I doubt very much that employers and graduate, professional and business schools are really interested in the difference between an A and an A* grade, or even an A and a B. Bluntly, they choose from candidates who they think are intelligent and trainable, something which correlates highly with 'A' Level grades or, across the Anglosphere Lake, SAT, ACT and GRE scores, and who display other non-cognitive characteristics such as conscientiousness and open-mindedness. Also, they tend to pick people who generally resemble themselves as much as possible. Employers and schools tend to select candidates from those universities that are more likely to produce large numbers of graduates with the desired attributes.

Any teaching assessment exercise that does not measure or attempt to measure the cognitive skills of graduates is likely to be of little value.

In June Times Higher Education (THE) ran a simulation of a ranking of UK universities that might result from the TEF exercise. There were three indicators: student completion of courses, student satisfaction, and graduate destinations, that is, the number of graduates employed or in postgraduate courses six months after graduation. In addition to absolute scores, universities were benchmarked for gender, ethnicity, age, disability and subject.

There are many questions about the methodology of the THE exercise, some of which are raised in the comments on the THE report.

The THE simulation appears to confirm that students' academic ability is more important than anything else when it comes to their career prospects. Comparing the THE scores for graduate destinations (absolute) with the other indicators in the THE TEF simulation and the Guardian rankings we get the following correlations.

Graduate Destinations (THE absolute) and:

Average Entry Tariff (Guardian)  .772
Student completion (THE absolute)  .750
Staff student ratio (Guardian inverted)  .663
Spending per student (Guardian)  .612
Satisfaction with course (Guardian)  .486
Student satisfaction (THE absolute)  .472
Satisfaction with teaching (Guardian)  .443
Value added (Guardian)  .347
Satisfaction with feedback (Guardian)  -.239

So, a high score in the THE graduate destinations metric, like its counterpart in the Guardian rankings, is associated most closely with students' academic ability and their ability to finish their degree programmes, next with spending, moderately with overall satisfaction and satisfaction with teaching, and substantially less so with value added. Satisfaction with feedback has a negative association with career success narrowly defined.
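These figures are presumably ordinary Pearson product-moment correlations between pairs of indicator scores. As a minimal sketch of how such a coefficient is calculated (the university scores below are hypothetical illustrative numbers, not the actual THE or Guardian data):

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    # Sample covariance divided by the product of sample standard deviations.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical scores for five universities (illustration only):
graduate_destinations = [85, 78, 92, 70, 88]   # % employed or in study
average_entry_tariff = [430, 380, 510, 340, 470]  # UCAS points

print(round(pearson(graduate_destinations, average_entry_tariff), 3))
```

A coefficient near 1 indicates that universities admitting students with higher entry grades also tend to report better graduate destinations, which is the pattern the table above suggests.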

Looking at the benchmarked score for Graduate Destinations we find that the correlations are more modest than with the absolute score. But average entry tariff is still a better predictor of graduate outcomes than value added.

Graduate Destinations (THE distance from benchmark) and:

Student completion (THE benchmarked) .487
Satisfaction with course (Guardian)  .404
Staff student ratio (Guardian inverted)  .385
Average entry tariff (Guardian)  .383
Spending per student (Guardian)  .383
Satisfaction with teaching (Guardian) .324
Student satisfaction (THE benchmarked)  .305
Value added (Guardian)  .255
Satisfaction with feedback  (Guardian)  .025

It is useful to know about student satisfaction, and very useful for students to know how likely they are to finish their programmes. But until rankers and government agencies figure out how to estimate the subject knowledge and cognitive skills of graduates, and the impact, if any, of universities, the current trend towards teaching-centred rankings will not be helpful to anyone.