Monday, November 21, 2016

TOP500 Supercomputer Rankings

Every six months TOP500 publishes a list of the five hundred most powerful computer systems in the world. This is probably a good guide to the economic, scientific and technological future of the world's nation states.

The most noticeable change since November 2015 is that the number of supercomputers in China has risen dramatically from 108 to 171 systems, while the USA has fallen from 200 to 171. Japan has fallen considerably, from 37 to 27, and Germany and the UK have each lost one. France has added two supercomputers to reach 20.

In the whole of Africa there is exactly one supercomputer, in Cape Town. In the Middle East there are five, all in Saudi Arabia, three of them operated by Aramco.

Here is a list of countries with the number of systems in the top 500.

China 171
USA 171
Germany 32
Japan 27
France 20
UK 17
Poland 7
Italy 6
India  5
Russia 5
Saudi Arabia 5
South Korea 4
Sweden 4
Switzerland 4
Australia 3
Austria 3
Brazil 3
Netherlands 3
New Zealand 3
Denmark 2
Finland 2
Belgium 1
Canada 1
Czech Republic 1
Ireland 1
Norway 1
Singapore 1
South Africa 1
Spain 1

Friday, November 18, 2016

QS seeks a Passion Integrity Empowerment and Diversity compliant manager

The big ranking brands seem to be suffering from a prolonged fit of megalomania, perhaps caused by the toxic gases of Brexit and the victory of the deplorables. The "trusted" THE, led by the "education secretary of the world", has just made a foray into the US college ranking market, published a graduate employability ranking and is now going to the University of Johannesburg for a BRICS Plus Various Places summit.

Meanwhile the "revered" QS, creator of "incredibly successful ranking initiatives", also appears to be getting ready for bigger and better things. They are advertising for a Ranking Manager who will be

"a suitably accomplished and inspirational leader", and possess "a combination of analytical capability, thought leadership and knowledge of the global higher education landscape" and " ensure an environment of Passion, Integrity, Empowerment and Diversity is maintained" and be "(h)ighly analytical with extensive data modelling experience" and have "great leadership attributes".

And so on and so on. Read it yourself. If you can get through to the end without laughing you could be a suitable candidate.

I can't wait to see who gets the job.

Wednesday, November 02, 2016

More on teaching-centred rankings

The UK is proposing to add a Teaching Excellence Framework (TEF) to the famous, or infamous, Research Excellence Framework (REF). The idea is that universities are to be judged according to their teaching quality, which is to be measured by how many students manage to graduate, how satisfied students are with their courses and whether graduates are employed or in postgraduate courses shortly after graduation.

There are apparently going to be big rewards for doing well according to these criteria. It seems that universities that want to charge high tuition fees must reach a certain level.

Does one have to be a hardened cynic to suspect that there is going to be a large amount of manipulation if this is put into effect? Universities will be ranked according to the proportion of students completing their degrees? They will make graduating requirements easier, abolish compulsory courses in difficult things like dead white poets, foreign languages or maths, or allow alternative methods of assessment, group work, art projects and so on. We have, for example, already seen how the number of first and upper second class degrees awarded by British universities has risen enormously in the last few years.

Universities will be graded by student satisfaction? Just let the students know, very subtly of course, that if they say their university is no good then employers are less likely to give them jobs. Employment or postgraduate courses six months after graduation? Lots of internships and easy admissions to postgraduate courses.

In any case, it is all probably futile. A look at the Guardian University Guide rankings in a recent post here shows that if you want to find out about student outcomes six months after graduation the most relevant number is the average entry tariff, that is, 'A' level grades three or four years earlier.

I doubt very much that employers and graduate, professional and business schools are really interested in the difference between an A and an A* grade or even an A and a B. Bluntly, they choose from candidates who they think are intelligent and trainable, something which correlates highly with 'A' Level grades or, across the Anglosphere Lake, SAT, ACT and GRE scores, and who display other non-cognitive characteristics such as conscientiousness and open-mindedness. Also, they tend to pick people who generally resemble themselves as much as possible. Employers and schools tend to select candidates from those universities that are more likely to produce large numbers of graduates with the desired attributes.

Any teaching assessment exercise that does not measure or attempt to measure the cognitive skills of graduates is likely to be of little value.

In June Times Higher Education (THE) ran a simulation of a ranking of UK universities that might result from the TEF exercise. There were three indicators: student completion of courses, student satisfaction and graduate destinations, that is, the number of graduates employed or in postgraduate courses six months after graduation. In addition to absolute scores, universities were benchmarked for gender, ethnicity, age, disability and subject.

There are many questions about the methodology of the THE exercise, some of which are raised in the comments on the THE report.

The THE simulation appears to confirm that students' academic ability is more important than anything else when it comes to their career prospects. Comparing the THE scores for graduate destinations (absolute) with the other indicators in the THE TEF simulation and the Guardian rankings, we get the following correlations.

Graduate Destinations (THE absolute) and:

Average Entry Tariff (Guardian)  .772
Student completion (THE absolute)  .750
Staff student ratio (Guardian inverted)  .663
Spending per student (Guardian)  .612
Satisfaction with course (Guardian)  .486
Student satisfaction (THE absolute)  .472
Satisfaction with teaching (Guardian)  .443
Value added (Guardian)  .347
Satisfaction with feedback (Guardian)  -.239

So, a high score in the THE graduate destinations metric, like its counterpart in the Guardian rankings, is associated most closely with students' academic ability and their ability to finish their degree programmes, next with resources (staff student ratio and spending per student), moderately with overall satisfaction and satisfaction with teaching, and substantially less so with value added. Satisfaction with feedback has a negative association with career success narrowly defined.
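
For readers who want to check or reproduce this kind of comparison, here is a minimal sketch in Python. The university scores below are invented placeholders, not THE or Guardian figures, and the indicator names are only illustrative; the point is simply the Pearson correlation calculation across a matched set of institutions.

```python
# Minimal sketch: Pearson correlations between a "graduate destinations"
# score and other indicators across the same set of universities.
# All numbers here are invented placeholders, NOT THE or Guardian data.

from statistics import correlation  # Pearson's r, Python 3.10+

# Hypothetical scores for five universities (illustration only)
indicators = {
    "average_entry_tariff": [190, 165, 150, 140, 120],
    "student_completion":   [96, 93, 90, 87, 82],
    "spending_per_student": [9, 7, 6, 6, 5],
    "value_added":          [6, 5, 7, 4, 6],
}
graduate_destinations = [88, 83, 80, 76, 70]

# Correlate each indicator with the graduate destinations score
for name, scores in indicators.items():
    r = correlation(scores, graduate_destinations)
    print(f"{name:22s} r = {r:+.3f}")
```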

Looking at the benchmarked score for Graduate Destinations, we find that the correlations are more modest than with the absolute score. But average entry tariff is still a better predictor of graduate outcomes than value added.

Graduate Destinations (THE distance from benchmark) and:

Student completion (THE benchmarked) .487
Satisfaction with course (Guardian)  .404
Staff student ratio (Guardian inverted)  .385
Average entry tariff (Guardian)  .383
Spending per student (Guardian)  .383
Satisfaction with teaching (Guardian) .324
Student satisfaction (THE benchmarked)  .305
Value added (Guardian)  .255
Satisfaction with feedback  (Guardian)  .025

It is useful to know about student satisfaction, and very useful for students to know how likely they are to finish their programmes. But until rankers and government agencies figure out how to estimate the subject knowledge and cognitive skills of graduates, and the impact, if any, of universities on those skills, the current trend towards teaching-centred rankings will not be helpful to anyone.