Saturday, July 23, 2016

Another Important Ranking




Another ranking that deserves careful attention is the International Mathematical Olympiad (IMO), a competition for pre-university students, the first of which was held in Romania in 1959. The problems range from difficult algebra and pre-calculus to topics rarely covered at school, such as complex geometry and functional equations.

Twenty years ago the Olympiad was dominated by ex-communist Eastern Europe: in 1996 first place was taken by Romania, with Hungary third and Russia fourth. Now East Asia and the Chinese diaspora are dominant: this year South Korea was second, China third, Singapore fourth, Taiwan fifth, North Korea sixth, Hong Kong ninth and Japan tenth.

The USA is first this year, as it was in 2015, with an all-male team whose members have three South Asian and three Chinese surnames.

The rankings look pretty much like the PISA and TIMSS test scores. Combined with the recent coding competition and the Top500 supercomputing ranking, they suggest the intellectual and economic leaders of this century will be in East Asia and Eastern Europe including Russia.

The USA and the UK might do fairly well if they can introduce and maintain sensible immigration and educational selection policies.

The American success, unfortunately, is not good enough for the conventional education media. The team is not diverse enough: no women, no historically underrepresented minorities. So far nobody has protested about the absence of transgender or openly gay students but perhaps their time will eventually come.

Education Week reports that:

"According to Mark Saul, the director of competitions for the Mathematical Association of America, not a single African-American or Hispanic student—and only a handful of girls—has ever made it to the Math Olympiad team in its 50 years of existence."

To overcome this problem, the events leading up to the Olympiad now include competitions that test creativity and collaboration and are judged subjectively.

"In the past few years, MathCounts added two new middle school programs to try to diversify its participant pool—National Math Club and the Math Video Challenge.

"Schools or teachers who sign up for the National Math Club receive a kit full of activities and resources, but there's no special teacher training and no competition attached.

"The Math Video Challenge is a competition, but a collaborative one. Teams of four students make a video illustrating a math problem and its real-world application.

"After the high-pressure Countdown round at this year's national MathCounts competition, in which the top 12 students went head to head solving complex problems in rapid fire, the finalists for the Math Video Challenge took the stage to show their videos. The demographics of that group looked quite different from those in the competition round—of the 16 video finalists, 13 were girls and eight were African-American students. The video challenge does not put individual students on the hot seat—so it's less intimidating by design. It also adds the element of artistic creativity to attract a new pool of students who may not see themselves as "math people."

"An 8th grade team from the Ron Clark Academy, an independent middle school in Atlanta that serves low-income students, was among the finalists. The students illustrated a complicated multistep problem entirely through rap. None had ever been involved in a math competition before."

In other words, the competitions will be less and less about mathematics and more and more about making rap videos and the like. No doubt Russia, China and Korea will be flocking to the US to see how it's done. Much the same thing has been happening with national competitive debating.



Here are this year's results and those for 2015 and 1996.


Rank 2016 | Team | Rank 2015 | Rank 1996
1 | USA | 1 | 2
2 | South Korea | 3 | 8
3 | China | 2 | 6
4 | Singapore | 10 | 25
5 | Taiwan | 18 | 20
6 | North Korea | 4 | --
7= | Russia | 8 | 4
7= | UK | 22 | 5
9 | Hong Kong | 28 | 27
10 | Japan | 22 | 11
11 | Vietnam | 5 | 7
12= | Canada | 9 | 16
12= | Thailand | 12 | 47
14 | Hungary | 20 | 3
15= | Brazil | 22 | 52
15= | Italy | 29 | 25
17 | Philippines | 36 | 74
18 | Bulgaria | 29 | 11
19 | Germany | 27 | 10
20= | Romania | 13 | 1
20= | Indonesia | 29 | 70
22 | Israel | 40 | 15
23 | Mexico | 19 | 53
24 | Iran | 7 | 9
25= | Australia | 6 | 23
25= | France | 14 | 32
25= | Peru | 16 | --
28 | Kazakhstan | 28 | 25
29 | Turkey | 20 | 19
30= | Armenia | 26 | 34
30= | Croatia | 15 | 34
30= | Ukraine | 11 | 18
33 | Mongolia | 35 | 44
34 | India | 34 | 37
35= | Bangladesh | 33 | --
35= | Belarus | 39 | 21
37= | Czech Republic | 45 | 28
37= | Sweden | 60 | 40
39 | Macau | 35 | 48
40 | Serbia | 40 | 29
41 | Saudi Arabia | 41 | --
42 | Poland | 17 | 13
43 | Switzerland | 45 | 62
44 | Netherlands | 43 | 59
45 | Bosnia & Herzegovina | 43 | 57
46 | Austria | 60 | 42
47 | Portugal | 52 | 13
48 | Syria | 54 | --
49 | Spain | 72 | 48
50= | Lithuania | 65 | 32
50= | Greece | 51 | 22
52 | Belgium | 56 | 31
53 | New Zealand | 49 | --
54 | Azerbaijan | 48 | 58
55 | Slovakia | 33 | 17
56 | Malaysia | 57 | 72
57 | Argentina | 52 | 29
58 | South Africa | 55 | 43
59= | Costa Rica | 67 | --
59= | Georgia | 42 | 30
61 | Estonia | 70 | 55
62 | Tajikistan | 64 | --
63= | Moldova | 38 | 41
63= | Slovenia | 73 | 44
63= | Cyprus | 63 | 69
66= | Sri Lanka | 70 | 53
66= | Colombia | 49 | 46
68 | El Salvador | 95 | --
69= | Albania | 77 | 67
69= | Turkmenistan | 58 | 72
71= | Finland | 82 | 39
71= | Paraguay | 67 | --
73 | Macedonia | 74 | 46
74 | Latvia | 79 | 33
75 | Ireland | 77 | 61
76 | Tunisia | 75 | --
77= | Kosovo | 86 | --
77= | Uzbekistan | 58 | --
79 | Morocco | 80 | 65
80 | Nicaragua | 82 | --
81 | Denmark | 69 | 48
82 | Algeria | 62 | --
83 | Ecuador | 80 | --
84= | Kyrgyzstan | 92 | 67
84= | Norway | 65 | 37
86 | Venezuela | 96 | --
87 | Puerto Rico | 90 | --
88= | Montenegro | 89 | --
88= | Nigeria | 88 | --
90 | Iceland | 75 | 56
91= | Chile | 97 | 71
91= | Pakistan | 85 | --
93 | Uruguay | 93 | --
94 | Trinidad & Tobago | 82 | 60
95 | Luxembourg | 97 | --
96= | Cambodia | 86 | --
96= | Myanmar | -- | --
98 | Uganda | 100 | --
99 | Kenya | -- | --
100= | Honduras | -- | --
100= | Madagascar | -- | --
102 | Jamaica | 102 | --
103 | Botswana | 103 | --
104= | Egypt | -- | --
104= | Ghana | 101 | --
106 | Tanzania | 106 | --
107= | Iraq | -- | --
107= | Liechtenstein | 90 | --
109 | Laos | -- | --




Monday, July 18, 2016

I reported plagiarism in a PhD, but my university ignored it



From the Guardian, 8th July, by Anonymous. The PhD thesis consists of some poems and a long essay. I'm not sure which is worse: awarding a PhD for what appears to be such a slender amount of research, the unchecked plagiarism, or the author's need to remain anonymous.

"I vetted the thesis and found that it had 75 pages with uncredited verbatim sentences, often more than one per page. Sometimes they were from items cited in the bibliography, suggesting amateur citation skills – cut and paste instead of paraphrase (even though the candidate knew full well to use quote marks when quoting elsewhere in the thesis). On at least 10 occasions sentences were from items not cited in the bibliography at all, but from academic articles and reviews."..."This PhD sets a precedent that suggests other candidates would not have their doctorates stripped from them for using multiple uncredited texts in their creative writing. This also sets a precedent that a PhD with nearly a 100 verbatim borrowings in its critical writing does not lead to the removal of the doctorate from the doctor. Once it’s passed, it’s passed."

Saturday, July 16, 2016

This could be the most important ranking of all


TOP500 has announced its latest list of supercomputers. First place goes to China, which now has 167 systems in the top 500 compared to 165 for the USA.

"A new Chinese supercomputer, the Sunway TaihuLight, captured the number one spot on the latest TOP500 list of supercomputers released on Monday morning at the ISC High Performance conference (ISC) being held in Frankfurt, Germany.  With a Linpack mark of 93 petaflops, the system outperforms the former TOP500 champ, Tianhe-2, by a factor of three. The machine is powered by a new ShenWei processor and custom interconnect, both of which were developed locally, ending any remaining speculation that China would have to rely on Western technology to compete effectively in the upper echelons of supercomputing."

Here are the number of systems located in various other countries:

Japan 34
Germany 26
France 20
India 15
UK 13
South Korea 10
Russia 7
Italy 5
Saudi Arabia 5
Sweden 5
Australia 5
Brazil 4
South Africa 1
Canada 1
Singapore 1

Apart from Saudi Arabia's five systems, there are no supercomputers on the list in any Muslim country. There are none in Africa except for the single system in South Africa. There are none in Latin America outside Brazil.

China has reached parity with the US and is now heading for supremacy.




Monday, July 11, 2016

More on THE’s bespoke rankings






Times Higher Education (THE) have just announced another regional ranking, this time for Latin America, at another prestigious summit in Bogota, Colombia.

It seems that THE have entered a stage of imperial overreach, announcing new projects, publishing various indicators as stand-alone rankings and moving into previously unranked corners of the world. They have tried rankings for the BRICS and other emerging economies, Africa, the Middle East and North Africa (MENA), Asia and Latin America, and apparently have plans to enter the lucrative US college ranking business and the UK teaching-orientated market.

The venture into regional rankings has been accompanied by a noticeable tendency to tailor their rankings to the benefit of the hosts of their summit series, which is presumably what they mean by bespoke. 

The MENA universities summit in Qatar in February 2015 was introduced by a single-indicator (citations) "snapshot" ranking which put Texas A&M University at Qatar, a branch campus that offered nothing but engineering courses, at the top. In case anyone is wondering, this was the result of a single faculty member, with a joint appointment at the mother campus in Texas, who was on the list of authors of a hugely cited physics paper. Qatar University was fourth. If the Teaching or Research indicator cluster had been used, the ranking would have been rather different.

In this snapshot, United Arab Emirates University was 11th and the American University of Sharjah 17th.

In January 2016 THE produced another ranking for the MENA summit held at the United Arab Emirates University in Al Ain, UAE, in February. This time THE simply used all of the indicators in the world rankings, not just the citations indicator. The UAE University was fifth and the American University of Sharjah eighth. Texas A&M University at Qatar was not included and Qatar University was sixth.

THE then went to Africa for a summit at the University of Johannesburg. Once again they produced a citations-based ranking, but this time they used fractional counting, dividing the citations to mega-papers among the contributing researchers and institutions. When the ranking was announced, the University of Johannesburg was in ninth place, ahead of Cadi Ayyad University of Marrakech in Morocco, which was a participant in various large-scale physics projects. If THE had calculated citations without using fractional counting, as it did in the 2014-15 world rankings, then Cadi Ayyad would have been second in Africa, and if they had used the overall results of the world rankings it would have been fourth.

At the next African summit, at the University of Ghana in April 2016, THE used all the indicators of their world rankings without further adjustment. The world ranking methodology had been changed in 2015 to dilute the distorting effects of the regional modification to the citations indicator, and papers with very large numbers of authors were no longer counted.

In these rankings Cadi Ayyad was outscored by the University of Ghana, the local flagship. Using the world ranking methodology the University of Ghana went from the twelfth place it was given at the Johannesburg summit to seventh.

Next we come to the recent Asian University Rankings announced by THE at a summit held at the Hong Kong University of Science and Technology.

The big surprise of these rankings was the fall of the University of Tokyo (Todai) from first to seventh place, behind the National University of Singapore, Nanyang Technological University (NTU), rising from tenth, Peking University, the University of Hong Kong, Tsinghua University and the Hong Kong University of Science and Technology (HKUST).

The fall of Tokyo seems implausible. In the research-based Shanghai ranking the University of Tokyo has always been in first place among Asian universities, and last year the top six in Asia were all Japanese or Israeli institutions. There was no Hong Kong university in the top 150 and HKUST was behind at least 26 other Asian universities. Singaporean institutions also trailed behind the leading Japanese universities. Tokyo was also the top Asian university in the URAP and CWUR rankings.

In addition, Tokyo had been top of THE's Asian rankings ever since they started. It is hard to see how Tokyo could fall and NTU rise so quickly. Universities are big things, often with tens of thousands of students, thousands of faculty, budgets of millions of dollars, and thousands of papers, citations and patents. Changes like this are often associated with changes in methodology or occasionally with mergers or structural reorganisation.

Early this year there was an important seminar at Ural Federal University in Ekaterinburg that discussed the consequences of changes in rankings methodology. With the latest edition of the THE Asian university rankings we have a perfect example of the damage that such changes can do to some universities and the benefits they might confer on others.

THE has informed the University of Tokyo and other Japanese universities that the reason why they are falling in the rankings is that they are not international enough and that they are not funded well enough. THE is not being entirely disinterested here. Their world rankings have three indicators that measure income and three that measure international orientation in various ways.

And it does seem strange that after cruising along for a few years at the top of the Asian charts Todai should suddenly plunge to seventh place, overtaken by two Chinese, two Singaporean and two Hong Kong universities, one of which was summit host HKUST.

What happened was that in 2015 and 2016 THE made a number of methodological decisions that worked to the advantages of universities in Hong Kong and Singapore and to the disadvantage of those in Japan, especially the University of Tokyo.

First, there were several methodological changes to the 2015 world rankings. The first of these was not counting multinational papers with more than 1,000 authors. This had been a major problem of the THE rankings: in combination with some other features of their citations indicator, it meant that a university with a few hyper-papers and a low total number of papers could soar to the top of the citations chart. Over the years a succession of unlikely institutions have been proclaimed world or regional leaders for research impact: Alexandria University, Moscow State Engineering Physics Institute, Rice University, Tokyo Metropolitan University, Federico Santa Maria Technical University, Scuola Normale Superiore di Pisa.
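A toy calculation, with invented numbers, shows how the mechanism works: under whole counting, every listed institution gets full credit for a hyper-paper's citations, so a small university with one such paper can out-score a much larger one with a solid ordinary record.

```python
# Toy illustration (invented numbers) of the hyper-paper effect under
# whole counting: each listed institution gets full credit for every citation.
small_papers = 200                 # a university with a low total output
hyper_citations = 5_000            # one multinational physics hyper-paper
other_citations = 1_000            # citations to its other 199 papers

big_papers = 20_000                # a large university with a solid record
big_citations = 300_000

print((hyper_citations + other_citations) / small_papers)  # 30.0 per paper
print(big_citations / big_papers)                           # 15.0 per paper
# The small university scores twice as high on citations per paper despite
# contributing almost nothing to the hyper-paper.
```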

It was a good idea for THE to do something about the multi-author problem, but the obvious remedy was to introduce fractional counting of citations, so that if a university contributed one out of 100 authors to a paper it would get 1/100th of the total citation count. This is a perfectly feasible option: it has been done by the Leiden Ranking, recently by the US News subject rankings, and by THE itself for the 2015 Africa ranking.
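A minimal sketch of fractional counting as just described; the numbers are illustrative only.

```python
# Fractional counting: a university's share of a paper's citations equals
# its share of the author list. Numbers are illustrative.
def fractional_citations(papers):
    """papers: list of (citations, university_authors, total_authors)."""
    return sum(c * uni / total for c, uni, total in papers)

# One author out of 2,000 on a 5,000-citation hyper-paper now contributes
# just 2.5 citations, instead of all 5,000 under whole counting.
print(fractional_citations([(5_000, 1, 2_000), (40, 3, 3)]))  # 2.5 + 40 = 42.5
```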

The solution chosen by THE worked to the disadvantage of some Japanese universities that had contributed to such mega-projects, especially Tokyo Metropolitan University, which had a perfect score for research impact in 2014, something that it liked to brag about in adverts. In contrast, universities in Hong Kong and Singapore did better for citations in 2015 because they were not involved in such projects.

Something else that helped universities in Hong Kong was that in 2015 THE started counting students and faculty from mainland China as international when they went to Hong Kong, which boosted the international outlook scores for Hong Kong universities. Peter Mathieson of the University of Hong Kong noticed and warned everybody not to get too excited.

In addition to this, THE has, as noted in an earlier post, recalibrated its world ranking indicators, reducing the weighting for the research and teaching reputation surveys, where Todai does very well, and increasing that for industry income, where Peking and Tsinghua have perfect scores of 100 and NTU, HKUST and the University of Hong Kong do better than the University of Tokyo.

By the way, the speakers at the Asian summit included the heads of the University of Hong Kong, the National University of Singapore, the Hong Kong University of Science and Technology, but nobody from Japan. 

And getting back to the Latin American summit in Colombia, THE did another bit of recalibration, lopping off 10% from the citations indicator and giving it to the research and teaching indicators. The result was that Federico Santa Maria University, Valparaiso, third in Latin America and best in Chile in the world rankings, was demoted to 13th place. The University of the Andes, Bogota, was, as one would now expect, tenth.

08/10/16 Updated to include a reference to the 2016 MENA summit.

Monday, July 04, 2016

Has someone been upgrading the simulation?




An article in the Independent by Matthew Norman suggests that we have slipped into a parallel universe, propelled through hyper-space into another reality. The evidence is Jeremy Corbyn as Leader of the Opposition, Leicester City top of the English Premier League, Brexit, Novak Djokovic losing at Wimbledon and so on.

My suspicion is that we haven't landed in a parallel universe. It is more likely that we are living in a computer simulation which from time to time needs to be updated, leading to temporary anomalies like neutrinos going backwards in time or Wales advancing to the Euro 2016 semi-finals.

Perhaps we will wake up tomorrow and find that something even more improbable has occurred, the inauguration of President Trump, Pete Best joining Paul McCartney for a reunion tour or Boris Johnson going into a monastery.

Or Google Inc. as number five research institution in the world, up from 195th two years ago, ahead of Yale and Princeton. That might actually give us a clue as to who is running the simulation.

Wednesday, June 29, 2016

Can Grit Save Higher Education?



American, Australian and British universities are facing a serious input crisis. The number of students capable of anything resembling a conventional university education is drying up and the spectre of extinction is haunting British and American campuses. London Metropolitan University is closing two campuses and cutting 400 jobs. Twelve of those jobs will be managerial ones, so the situation must be really desperate. Hull University has closed a campus at Scarborough. The Open University is closing several regional centres, with 500-plus jobs at risk.

Meanwhile small colleges in the US are shutting down: Dowling College, New York; Burlington College, Vermont; Tennessee Temple University; and no doubt more to come.

Many English-speaking universities have tried to cover the deficit by aggressively recruiting international students. That is helping a little, and science and technology departments in the Russell Group, the Group of Eight and the Ivy League are becoming increasingly dependent on graduate students and faculty of East Asian origin or descent. But the number of students who can perform adequately at degree level is not infinite. There are signs that the Flynn Effect has run its course even in China, and there seems to be an increasingly large amount of test and credential fraud, plagiarism and ghost-writing associated with the influx of international students.

The problem is compounded by the pressure to admit increasing numbers of historically underrepresented groups who may come with substantial loans and grants but are often inadequately prepared for higher education. Such students frequently find that attending classes with classmates who perform much better is a deeply painful and humiliating experience, all the more so since they have from childhood been steeped in a warm bath of self-esteem and excused almost any anti-social behaviour.

The admission of increasing numbers of unprepared students can also have serious consequences down the road, towards and after graduation. More students with poor ACT or SAT scores, or students failing to graduate on time, if ever, mean that ranking scores will suffer, with serious consequences for applications and admissions, and that employers and graduate schools will be less welcoming.

So we have increasingly desperate efforts to find something, anything, that will predict academic success but where less able students can do just as well or better. The problem is that so far nothing has been found that matches the predictive validity of standardised tests that are highly correlated with general intelligence.

There has even been a serious proposal by a group of elite admissions officers to reward applicants for doing ordinary things like babysitting or punish them for a lack of authenticity in their resume-compliant extracurricular activities. Taken seriously, this would effectively randomise university admissions and turn US higher education into a flat swamp of mediocrity.

The latest in a succession of attempts to find the really effective non-cognitive factor that will transform American higher education and achieve the holy grail of true diversity is something called "grit", supposedly discovered by Angela Duckworth, a psychologist at the University of Pennsylvania, certified genius and author of an instant New York Times best seller. Grit is supposed to be a combination of passion and perseverance and is claimed to be important in determining job and academic success. It is allegedly a better predictor of success than IQ, health, good looks, or even social intelligence. It is, it would seem, "going to change the world".

Unfortunately, a thorough meta-analysis suggests that grit is almost the same thing as conscientiousness, a long established personality trait, and that its impact on academic success is modest and much less than that of cognitive ability.

And so the search for the Really Significant Non-Cognitive Factor continues.

Sunday, June 26, 2016

David Cameron, Donald Trump, Leicester City, University Rankings and the end of deference





Recently there have been several setbacks for the experts on both sides of the Atlantic. In May 2015 the pollsters, armed with all the techniques of scientific social science, got it very wrong about the UK general election, drastically underestimating the Conservative margin of victory.

Now they seem to be doing even worse with Donald Trump. Pundits have queued up to denounce him as a racist, misogynist, transphobic, xenophobic, anti-semitic, Ku Klux Klan appeasing liar. Successive ceilings above which he could not possibly rise have been declared, only to evaporate and be replaced by another. And yet he has won the Republican nomination.

Pundits, critics and mainstream journalists are now predicting that his campaign will implode, that he will never have enough money, needs the support of the party grandees, does not have enough support among women, Hispanics, gays or trans people, is a poor organiser, does not read from a teleprompter, has disgusting hair, talks in slow monosyllables, fails to grasp the nuances of various things and so on. Perhaps this time the army of the qualified and credentialed will be right and we will have another wonderful four or eight years of a Clinton presidency with Bill as the First Gentleman. Or perhaps not.

Then there is Leicester City winning the Premier League title, against 5,000 to one odds. At the beginning of the season, was there anyone who predicted anything other than relegation?

The press has had a multitude of stories about the experts of various kinds who have been humbled by Leicester's rise. John Micklethwait of the Economist has recounted how every year but the last he bet on the team winning the Premiership title (later on the story was betting that they would come top of their division). Had he not forgotten to do so this year he would have won 100,000 pounds.

I too have a story about the perils of underestimating Leicester City. A few decades ago I was the owner of a complete set of LCFC autographs, some of which I got from my father, who went around the town as an inspector for the Ministry of Pensions and National Insurance and occasionally got souvenirs from the organisations whose books he helped tidy up.

I got Gordon Banks's by queueing up at Lewis's in Humberstone Gate, and Jimmy Walsh's, then Leicester's captain newly arrived from Glasgow Celtic, because he lived down the road in a semi-detached now worth 130,000 pounds (Jamie Vardy's car is worth more than that), although I had to suffer the public embarrassment of being called a wee laddie by Mrs Walsh.

But at some point in the early or late seventies, after Leicester left Division One, I gave the autographs away to somebody I can't remember. Today four signatures from that era, plus Norman Wisdom's, are worth 275 pounds on eBay. That was an error even worse than selling an authentic vinyl copy of the Dylan Royal Albert Hall bootleg, now worth thirty dollars, for ten.

And of course we have the Brexit vote. It is unlikely that there has ever been such unanimity about any matter of public concern from the various components of the dominant elite. Nearly every vice chancellor, all the managers of the Premier League, plus an imposing array of pop stars, rock stars and film stars have admonished the great unread (some of whom probably don't have passports!) that no decent person could possibly even dream of voting Leave.

The universities further elaborated on the perils of Brexit. Think about all the money that we get from the EU for research. The merit of this argument may have been blunted by the revelation that the field that got the most from the EU was Education.

The polls, conducted with the latest markers of rigour such as sample sizes and margins of error, appeared to confirm that the British electorate was fully aware of the wisdom of their intellectual betters.

But clearly the academic elite had absolutely no idea about what was going on in the minds of over half of the population just as they had no idea of what was going on in the minds of Republican voters.

One wonders whether the economic catastrophe supposed to follow Brexit will actually be so catastrophic. A fall in the value of the pound or the FTSE index is not really a problem if you don't have any shares or pounds to start with.

It is possible that the university rankers may also be suffering a loss of credibility. Last year the "revered" QS reported that the National University of Singapore and Nanyang Technological University were overtaking the Ivy League, a claim that met with some scepticism even, or especially, in Singapore, and the universally "trusted" and prestige-dispensing THE rankers do not seem to have received very much support for their pilot projects in Africa and the Middle East.

Perhaps the age of deference to expertise is coming to an end.



Article in the Japan Times and an Unsuccessful Comment




Here is an article published in the Japan Times, followed by a comment which I attempted to post.

Todai tumbles from top of Asia university rankings to seventh place
The University of Tokyo, locally known as Todai, lost its crown in this year’s Times Higher Education Asia University rankings released Monday.
After occupying the number one spot for the past three years, the University of Tokyo came seventh in a list of Asia’s top 200 universities. Times Higher Education described this year’s results as “challenging” for Japan and blamed a lack of funding and poor international outlook for the country’s position.
Singapore achieved unprecedented success in this year’s rankings by taking the top two places, with the National University of Singapore at the top and Nanyang Technological University joint ranked second with Peking University, the highest-ranked Chinese institution.
A total of 39 Japanese universities made it into the top 200, with 14 listed in the top 100.
Kyoto University (11th), Tohoku University (23rd), Tokyo Institute of Technology (24th) and Osaka University (30th) all made the top 30, with four more ranked in the top 50.
Phil Baty, a Times Higher Education rankings editor, said, “Japan claims almost a fifth — 39 — of Asia’s top 200 universities in this year’s table, making it the most-represented nation, in joint place with China.
“However, while the list proves Japan has strength in depth, the majority of its universities appear in the bottom half of the table; just 14 Japanese institutions make the top 100, compared with 22 in China.
“Furthermore, Japan’s top-ranked institution — the University of Tokyo in seventh place — has been knocked off the number one position after three years at the helm. It is Japan’s only top 10 representative. Meanwhile, China, Singapore and Hong Kong have two, while South Korea has three.”
Baty notes that the past few years have seen a shift in the balance of power from West to East in terms of higher education funding and performance.
Comment

Todai fell from 1st place in last year's THE Asian rankings to 7th this year, while Nanyang Technological University rose from 10th to 2nd. Changes in international outlook and funding levels could not have had such a large effect in the space of just 12 months.

It should be noted that this year THE recalibrated the weighting of its indicators. That for research and teaching reputation, in which Todai does much better than the Singapore and Hong Kong universities, was reduced from 33% to 25%. The weighting for industry income, in which Todai has an average score and Nanyang Technological University an almost perfect one, was increased from 2.5% to 7.5%.

In addition, THE has changed the process of collecting and analysing citations data, including not counting large-scale multi-author projects, in a way that has worked to the detriment of the University of Tokyo and to the advantage of the Singaporean universities.

The recommendations of THE should be taken with a big bucket of salt.


Wednesday, June 22, 2016

THE's bespoke Asian rankings: the strange decline of the University of Tokyo and the rise of Singapore




Times Higher Education (THE), in conjunction with their prestigious summit in Hong Kong, have revealed this year's Asian University Rankings which use essentially the same methodology as the world rankings but with some recalibration.

The most noticeable aspect of the new rankings is that the University of Tokyo (UT), which was first in 2013, 2014 and 2015, has now suddenly dropped to seventh place, behind the National University of Singapore (NUS) in first place, Nanyang Technological University (NTU) in Singapore up from tenth to second, Peking University, the University of Hong Kong, Tsinghua University and Hong Kong University of Science and Technology.

Tokyo is not the only Japanese university to suffer in these rankings. Tokyo Institute of Technology has gone from 15th last year to 24th, Osaka University from 18th to 30th and Tokyo Metropolitan University from 33rd to 52nd. 

The rise of NTU and the fall of Tokyo need some explanation. When we are talking about institutions with thousands of students and faculty that produce thousands of papers, citations and patents, it is not good enough to say that one has been investing and networking and the other has not. The time from the publication of budgets via research proposals to publication and citation is usually closer to a decade than to a year.

Let's take a look at the details. Between 2015 and this year UT suffered a modest fall for teaching (a cluster of five indicators), international outlook and industry income, a substantial fall of 5.6 points for research (a cluster of three indicators) and a large fall from 76.1 to 67.8 points for field-normalised citations.

Evidently the methodological changes introduced last year by THE and Elsevier, their new data partners, have had an effect on UT's citations score. The changes were: excluding papers, mostly in physics, with very large numbers of authors; switching from the Web of Science to Scopus as the source of data about papers and citations; and reducing the impact of the "regional modification" that awards a bonus to universities in countries with a low citation impact.
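As the regional modification has been publicly described, a university's field-normalised citation score is divided by the square root of its country's average score, which flatters universities in low-impact countries; from 2015 the adjustment was reportedly blended half-and-half with the unadjusted score. A hedged sketch of that description, not THE's actual code:

```python
import math

# Hedged sketch of the "regional modification" as publicly described:
# divide a university's citation impact by the square root of its country's
# average impact, then (from 2015) blend 50/50 with the unadjusted score.
def regional_modification(uni_impact, country_avg, blend=0.5):
    adjusted = uni_impact / math.sqrt(country_avg)
    return blend * adjusted + (1 - blend) * uni_impact

# The same raw impact of 1.2 is worth more in a low-impact country (avg 0.5)
# than in an average one (avg 1.0).
print(round(regional_modification(1.2, 0.5), 2))  # 1.45
print(round(regional_modification(1.2, 1.0), 2))  # 1.2
```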

Meanwhile NUS rose 4.7 points for citations and NTU 9.7 points. It would seem then that these changes contributed significantly to Tokyo's decline and to the ascent of NUS and even more so that of NTU.

There is another factor at work. THE have told us that they did some recalibration, that is, changing the weighting of the indicators. They reduced the weighting of the teaching reputation survey from 15% to 10% and that of the research reputation survey from 18% to 15%. The weighting for research productivity and for research income was increased from 6% to 7.5% each, and that for income from industry from 2.5% to 7.5%.

So why did THE do this? It seems that it was done after consulting with Asian universities because "many Asian institutions have only relatively recently arrived on the world stage, with investment focused on recent decades, so have had less time to accumulate reputation around the world."

But one could say something similar about all the indicators: Asian universities have only recently arrived on the world stage and so have had less time to accumulate research funds or research expertise, build up their faculty, develop international networks and so on.

And why give the large extra weighting to industry income? Because "many Asian nations have put their universities at the forefront of economic growth plans, where industry links are crucial". Perhaps some countries have plans where industry links are not crucial, or perhaps other criteria are equally or more crucial. In any case, industry income is a very questionable indicator. Alex Usher of Higher Education Strategy Associates has already pointed out some of its flaws.

Anyway, whatever THE's ostensible reasons for this recalibration, the consequences are quite clear. Taking points from the reputation surveys has worked to the disadvantage of UT, which in THE's 2015 reputation ranking had scores of 18.0 for teaching reputation and 19.8 for research reputation, and in favour of NUS, which had scores of 9.2 and 10.9. The scores for NTU, the University of Hong Kong and Hong Kong University of Science and Technology are much lower and are withheld. It is not clear what the exact effect is, since this year the reputation scores are subject to an "exponential component" which has presumably reduced the spread of scores and therefore UT's advantage.

It is not possible to determine the effect of giving extra weighting to research productivity and research income since these are bundled with other indicators.

Giving a greater weight to industry income has hurt UT, which has a score of only 50.8, and helped NTU with a score of 99.9, the University of Hong Kong with a perfect score of 100 and Hong Kong University of Science and Technology with a score of 68.1.
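Back-of-the-envelope arithmetic with the scores just quoted shows the size of the effect. Raising the industry income weighting from 2.5% to 7.5% adds an extra 5% of each university's indicator score to its overall total, before the offsetting cuts to the reputation weightings are considered:

```python
# Industry income scores quoted above; extra weight is 7.5% - 2.5% = 5%.
scores = {
    "University of Tokyo": 50.8,
    "NTU": 99.9,
    "University of Hong Kong": 100.0,
    "HKUST": 68.1,
}
for uni, s in scores.items():
    print(f"{uni}: +{0.05 * s:.1f} overall points")
# UT gains about 2.5 points and NTU about 5.0, a swing of roughly 2.5 points
# toward NTU from this one recalibration alone (ignoring the weight taken
# from the reputation surveys, where UT does far better).
```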

It appears that Japanese universities do relatively badly in these rankings and those in Singapore and Hong Kong do so well largely because of the changes last year in the collection and processing of citations data and the recalibration this year of the indicator weightings.

The co-host of the Asian summit was Hong Kong University of Science and Technology, and the list of "prestigious university leaders from around the world" includes those from Hong Kong, Singapore and China but not from Japan.



Sunday, June 19, 2016

Worth reading 6: The Berlin principles



Just heard about this from Gary Barron.

Barron, Gary R.S. 2016. "The Berlin Principles on Ranking Higher Education Institutions: limitations, legitimacy, and value conflict." Higher Education, Online First, pp.1-17.

Abstract

University rankings have been widely criticized and examined in terms of the environment they create for universities. In this paper I reverse the question by examining how ranking organizations have responded to criticisms. I contrast ranking values and evaluation with those practiced by academic communities. I argue that the business of ranking higher education institutions is not one that lends itself to isomorphism with scholarly values and evaluation and that this dissonance creates reputational risk for ranking organizations. I argue that such risk caused global ranking organizations to create the Berlin Principles on Ranking Higher Education Institutions, which I also demonstrate are decoupled from actual ranking practices. I argue that the Berlin Principles can be best regarded as a legitimizing practice to institutionalize rankings and symbolically align them with academic values and systems of evaluation in the face of criticism. Finally, I argue that despite dissonance between ranking and academic evaluation there is still enough similarity that choosing to adopt rankings as a strategy to distinguish one's institution can be regarded as a legitimate option for universities.


Dot Connection Time


Singapore-based World Scientific Publishing, whose subscription lists were used to collect names for the QS academic opinion survey, are advertising a new book, Top the IELTS: Opening the Gates to Top QS-Ranked Universities, by Kaiwen Leong of Nanyang Technological University and Elaine Leong.

Nanyang Technological University is ranked 13th in the QS world rankings, ahead of Yale, Johns Hopkins and King's College London, and third in the Asian rankings.

World Scientific owns Imperial College Press.

Imperial College is eighth in the QS world rankings, ahead of Chicago and Princeton.

Friday, June 17, 2016

Dumbing Down at Oxbridge




The relentless levelling of British universities continues. The latest sign is a report from Oxford, where the university is getting ready to crack down on colleges that make their students work too hard. Some of them apparently have to write as many as three essays a week, and most work at least 40 hours a week, some longer, reportedly twice as much as at places like Northumbria University.

Many commentators have mocked the poor fragile students who cannot cope with a fifty-hour week. After all, that is nothing to what they can expect if they start legal, medical or research careers.

Something else that is a bit disturbing is that Oxford students apparently need so much time to do that amount of work. One would expect the admissions system at Oxford to select academically capable students who can do as little work as those at Northumbria and still perform much better. If Oxford students can only stay ahead by working so hard, doesn't this mean that Oxford is failing to find the most intelligent students and has to make do with diligent mediocrities instead?

The villain of the piece is probably the abolition of the essay based Oxford entrance exam in 1995 (Cambridge abolished theirs in 1986) which threw the burden of selection onto A level grades and interviews. The subsequent wholesale inflation of A level grades has meant that an undue importance is now given to interviews which have been shown repeatedly to be of limited value as a selection tool, particularly at places like Oxbridge where the interviewers have sometimes been biased and eccentric.

So Oxford and Cambridge are now planning to reintroduce written admission tests. They had better do it quickly if they want their graduates to compete with the Gaokao-hardened students from the East.


Thursday, June 09, 2016

THE is coming to America



Times Higher Education (THE) has just announced that American university rankings are not fit for purpose.

We have heard that before. In 2009 THE said the same thing about the world rankings that they had published in partnership with the consulting firm Quacquarelli Symonds (QS) since 2004.

The subsequent history of THE's international rankings provides little evidence that the magazine is qualified to make such a claim.

The announcement of 2009 was followed by months of consultation with all sorts of experts and organisations. In the end the world rankings of 2010, powered by data from Thomson Reuters (TR), were not quite what anyone had expected. There was an increased dependence on self-submitted data, a reduced but still large emphasis on subjective surveys, and four different measures of income, reduced to three in 2011. Altogether there were 14 indicators, reduced to 13 in 2011, all but two of which were bundled into three super-indicators, making it difficult for anyone to figure out exactly why any institution was falling or rising.

There were also some extraordinary elements in the 2010 rankings, the most obvious of which was placing Alexandria University in 4th place in the world for research impact.
The rankings received a chorus of criticism mixed with some faint praise for trying hard. Philip Altbach of Boston College summed up the whole affair pretty well.

“Some of the rankings are clearly inaccurate. Why do Bilkent University in Turkey and the Hong Kong Baptist University rank ahead of Michigan State University, the University of Stockholm, or Leiden University in Holland? Why is Alexandria University ranked at all in the top 200? These anomalies, and others, simply do not pass the smell test."  
THE and TR returned to the drawing board. They did some tweaking here and there and in 2011 got Alexandria University out of the top 200, although more oddities would follow over the next few years, usually associated with the citations indicator. Tokyo Metropolitan University, Cadi Ayyad University of Marrakech, Federico Santa Maria Technical University, Middle East Technical University and the University of the Andes were at one point or another declared world class for research impact across the full range of the disciplines.

Eventually the anomalies got too much, and after breaking with TR in 2015 THE decided to do a bit of spring cleaning and tidy things up.


For many universities and countries the results of the 2015 methodological changes were catastrophic. There was a massive churning, with universities going up and down the tables. Universite Paris-Sud, the Korea Advanced Institute of Science and Technology, Bogazici University and Middle East Technical University fell scores of places.

THE claimed that this was an improvement. If it was then the previous editions must have been hopelessly inadequate. But if the previous rankings were the gold standard of rankings then those methodological changes were surely nothing but gratuitous vandalism.

THE has also ventured into far away regions with snapshot or pilot rankings. The Middle East was treated to a ranking with a single indicator that put Texas A&M University at Qatar, a branch campus housing a single faculty, in first place. For Africa there was a ranking consisting of data extracted from the world rankings without any modification of the indicators, which did not seem to impress anyone.


So one wonders where THE got the chutzpah to tell the Americans that their rankings are not fit for purpose. After all, US News was doing rankings for two decades before THE, and its America's Best Colleges includes metrics on retention and reputation as well as resources and selectivity. Also, there are now several rankings that already deal directly with the concerns raised by THE.


The Forbes/CCAP rankings include measures of student satisfaction, degree of student indebtedness, on-time graduation and career success.

The Brookings Institution has a value-added ranking that includes data from the College Scorecard.

The Economist has produced a very interesting ranking that compares expected and actual value added.

So exactly what is THE proposing to do?

It seems that there will be a student engagement survey, which apparently will be launched this week and will cover 1,000 institutions. They will also use data on cost, graduation rates and salaries from the Integrated Postsecondary Education Data System (IPEDS) and the College Scorecard. Presumably they are looking for some way of monetising all of this, so probably large chunks of the data will only be revealed as part of benchmarking or consultancy packages.

I suspect that the new rankings will look something like the Guardian university league tables just published in the UK, but much bigger.

The Guardian rankings include measures of student satisfaction, selectivity, spending, staff-student ratio and value added. The latter compares entry qualifications with the number of students getting good degrees (a first or upper second).
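For readers unfamiliar with the idea, here is a minimal sketch of a value-added score: predict the share of good degrees from entry qualifications, then rate each university by the gap between its actual and predicted shares. The linear fit and the invented numbers are my own illustration, not the Guardian's published method (it requires Python 3.10+ for statistics.covariance).

```python
# Minimal value-added sketch with invented data: (entry tariff, share of
# firsts/upper seconds). The linear fit is illustrative, not the Guardian's.
import statistics

unis = {"A": (420, 0.82), "B": (320, 0.70), "C": (320, 0.58), "D": (250, 0.55)}

tariffs = [t for t, _ in unis.values()]
good = [g for _, g in unis.values()]
slope = statistics.covariance(tariffs, good) / statistics.variance(tariffs)
intercept = statistics.mean(good) - slope * statistics.mean(tariffs)

for name, (t, g) in unis.items():
    print(f"{name}: value added = {g - (intercept + slope * t):+.2f}")
# B out-performs its intake (+0.05); C under-performs (-0.07) with the same
# intake; A and D perform roughly as expected.
```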

It seems that THE are planning something different from the research-centred, industry-orientated university rankings that they have been doing so far and are venturing out into new territory: institutions that are two or three tiers below the elite and do little or no research.

There could be a market for this kind of ranking, but it is very far from certain that THE are capable of doing it or that it would be financially feasible.



Tuesday, May 31, 2016

UK rises in the U21 system rankings

The comparison of national higher education systems by Universitas 21 shows that the UK has risen from 10th place to 4th since 2012.

These rankings consist of four groups of indicators: resources, connectivity, environment and output. Since 2012 British higher education has risen from 27th to 12th for resources, from 13th to 10th for environment and from 6th to 4th for connectivity. It was in second place for output in both 2012 and 2016, but its score rose from 62.2 to 69.9 over the four years.

Every few months, whenever any sort of ranking is published, there is an outcry from British universities that austerity and government demands and interference and immigration controls are ruining higher education.

If the U21 rankings have any validity then it would seem that British universities have been very generously funded in comparison to other countries.

Perhaps they could return some of the money or at least say thank you to the state that has been so kind to them.