Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Wednesday, March 02, 2016
The Decline of Free Speech in American Universities
The Foundation for Individual Rights in Education has just released its list of the ten worst colleges for free speech in the US. Here they are, along with the incidents that put them on the list.
Mount St Mary's University, Maryland
Two faculty members were sacked for criticising the president's plan to get rid of low-performing students, "drowning the bunnies" as he so charmingly put it. They were later reinstated.
Northwestern University
Laura Kipnis was investigated for sexual harassment for writing an essay criticising the sexual harassment mania sweeping US colleges. She was cleared only after writing an account of her persecution in the Chronicle of Higher Education.
Louisiana State University
Teresa Buchanan was fired for using profanity in the classroom for a pedagogical reason.
University of California San Diego
The administration attempted to defund a student newspaper for making fun of "safe spaces".
St Mary's University of Minnesota
An adjunct classics professor was fired for sexual harassment, which may have had something to do with an authentic production of Seneca's Medea. He was also fired from his other job as a janitor (!).
University of Oklahoma
Two fraternity members were expelled for leading a racist chant, even though the Supreme Court has ruled that offensive speech is protected by the First Amendment.
Marquette University
John McAdams was suspended for criticising an instructor who had suppressed a student's negative comments about same-sex marriage.
Colorado College
A student was suspended for unchivalrous remarks about African American women on Yik Yak.
University of Tulsa
A student was removed from class because of Facebook posts written by his fiancé criticising a professor.
Wesleyan University
The student government voted to remove funding from a student newspaper that was mildly critical of Black Lives Matter.
Wednesday, February 24, 2016
Britain leads in sniffing edge research
There must be some formula that can predict the scientific research that will go viral on social media or reach the pages of the popular press or even Times Higher Education (THE).
The number of papers in the natural and social sciences is getting close to uncountable. So why out of all of them has THE showcased a study of the disgust reported by students when sniffing sweaty T-shirts from other universities?
Anyway, here is a suggestion to the authors for a follow-up study. Have the students read the latest QS, THE or Shanghai world rankings before having a sniff and see if that makes any difference to the disgust experienced.
Tuesday, February 23, 2016
Should the UK stay in the EU?
There are 130 universities in the UK and the vice-chancellors of 103 of them have signed a letter praising the role of the European Union in supporting the UK's world-class universities.
There are some notable names missing (University College London, Manchester, Warwick and York), but such a high degree of consensus among the higher bureaucracy is rather suspicious.
The Ranking Effect
The observer effect in science refers to the changes that a phenomenon undergoes as a result of being observed.
Similarly, in the world of university ranking, indicators that may once have been useful measures of quality sometimes become less so once they are included in the global rankings.
Counting highly cited researchers might have been a good measure of research quality but its value was greatly reduced once someone found out how easy it was to recruit adjunct researchers and to get them to list their new "employer" as an affiliation.
International students and international faculty could also be markers of quality but not so much if universities are deliberately recruiting under-qualified staff and students just to boost their rankings score.
Or, as Irish writer Eoin O'Malley aptly puts it in the online magazine Village, "As Goodhart's Law warns us: when a measure becomes a target, it ceases to be a good measure".
O'Malley argues that reputation surveys are little more than an exercise in name reputation, that Nobel awards do not measure anything important (although I should point out that Trinity College Dublin would not get any credit for Samuel Beckett since the Shanghai Rankings do not count literature awards), and that the major criteria used by rankers do not measure anything of interest to students.
Irish universities have experienced a disproportionate impact from the methodological changes introduced by QS and Times Higher Education towards the end of last year. I suspect that Dr O'Malley's criticism will have a receptive audience.
Friday, January 15, 2016
Aussies not impressed with THE any more
Back in 2012 The Australian published a list of the most influential figures in Australian higher education. In 14th place was Phil Baty, the editor of the Times Higher Education (THE) World University Rankings.
Recently, the newspaper came out with another influential list full of the usual bureaucrats and boosters plus the Australian dollar at number five. Then at number 10 was not a person, not even Times Higher Education, but "rankings". A step up for rankings but a demotion for THE.
To make things worse for THE, the Leiden Ranking and the Shanghai Academic Ranking of World Universities were designated the leaders.
Then we have a reference to "new and increasingly obscure league tables peddled by unreliable metrics merchants, with volatile methodologies triggering inexplicably spectacular rises and falls from grace."
But what are those new and increasingly obscure league tables? They can't be URAP, the National Taiwan University Rankings, the QS world rankings, or Scimago, because those are not new. The US News Best Global Universities and the Russian Round University Ranking are new but so far their methodology is not volatile. Webometrics can be a bit volatile sometimes but it is also not new. Maybe they are referring to the QS subject rankings.
Or could it be that The Australian is thinking of the THE World University Rankings? What happened last autumn to universities in France, Korea and Turkey was certainly a case of volatile methodology. But new? Maybe The Australian has decided that the methodology was changed so much that it constituted a new league table.
Sunday, January 10, 2016
Diversity Makes You Brighter ... if You're a Latino Stockpicker in Texas or Chinese in Singapore
Nearly everybody, or at least those who run the western mainstream media, agrees that some things are sacred. Unfortunately, this is not always obvious to the uncredentialled who from time to time need to be beaten about their empty heads with the "findings" of "studies".
So we find that academic papers often with small or completely inappropriate samples, minimal effect sizes, marginal significance levels, dubious data collection procedures, unreproduced results or implausible assumptions are published in top flight journals, cited all over the Internet or even showcased in the pages of the "quality" or mass market press.
For example, anyone with any sort of mind knows that the environment is the only thing that determines intelligence.
So in 2009 we had an article in the Journal of Neuroscience that supposedly proved that a stimulating environment would make not only its beneficiaries more intelligent but also the children of the experimental subjects.
A headline in the Daily Mail proclaimed that "Mothers who enjoyed a stimulating childhood 'have brainier babies'".
The first sentence of the report claims that "[a] mother's childhood experiences may influence not only her own brain development but also that of her sons and daughters, a study suggests."
Wonderful. This could, of course, be an argument for allowing programs like Head Start to run for another three decades so that their effects would show up in the next generation. Then the next sentence gives the game away.
"Researchers in the US found that a stimulating environment early in life improved the memory of female mice with a genetic learning defect."
Notice that the experiment involved mice and not humans or any other mammal bigger than a ferret, that it improved memory and nothing else, and that the subjects had a genetic learning defect.
Still, that did not stop the MIT Technology Review from reporting Moshe Szyf of McGill University as saying that "[i]f the findings can be conveyed to human, it means that girls' education is important not just to their generation but to the next one."
All of this, if confirmed, would be a serious blow against modern evolutionary theory. The MIT Technology Review got it right when it spoke about a comeback for Lamarckianism. But if there is anything scientists should have learnt over the last few decades it is that an experiment that appears to overthrow current theory, not to mention common sense and observation, is often flawed in some way. Confronted with evidence in 2011 that neutrinos were travelling faster than light, physicists with CERN reviewed their experimental procedures until they found that the apparent theory busting observation was caused by a loose fibre optic cable.
If a study had shown that a stimulating environment had a negative effect on the subjects or on the next generation or that it was stimulation for fathers that made the difference, would it have been cited in the Daily Mail or the MIT Technology Review? Would it even have been published in the Journal of Neuroscience? Wouldn't everybody have been looking for the equivalent of a loose cable?
A related idea that has reached the status of unassailable truth is that the famous academic achievement gap between Asians and Whites on the one hand and African Americans and Hispanics on the other could be eradicated by some sort of environmental manipulation such as spending money, providing safe spaces or laptops, boosting self-esteem or fine-tuning teaching methods.
A few years ago Science, the apex of scientific research, published a paper by Geoffrey L. Cohen, Julio Garcia, Nancy Apfel and Allison Master that claimed a few minutes spent writing an essay affirming students' values (the control group wrote about somebody else's values) would start a process leading to an improvement in their relative academic performance. This applied only to low-achieving African American students.
I suspect that anyone with any sort of experience of secondary school classrooms would be surprised by the claim that such a brief exercise could have such a disproportionate impact.
The authors in their conclusion say:
"Finally, our apparently disproportionate results rested on an obvious precondition: the existence in the school of adequate material, social, and psychological resources and support to permit and sustain positive academic outcomes. Students must also have had the skills to perform significantly better. What appear to be small or brief events in isolation may in reality be the last element required to set in motion a process whose other necessary conditions already lay, not fully realised, in the situation."
In other words, the experiment would not work unless there were "adequate material, social, and psychological resources and support" in the school, and unless students had "the skills to perform significantly better".
Is it possible that a school with all those resources, support and skills might also be one where students, mentors, teachers or classmates might just somehow leak who was in the experimental and who was in the control group?
Perhaps the experiment really is valid. If so we can expect to see millions of US secondary school students and perhaps university students writing their self affirmation essays and watch the achievement gap wither away.
In 2012, this study made the top 20 of studies that PsychFileDrawer would like to see reproduced, along with studies showing that participants were more likely to give up trying to solve a puzzle if they ate radishes than if they ate cookies, that anxiety-reducing interventions boost exam scores, that music training raises IQ, and, of course, Rosenthal and Jacobson's famous study showing that teacher expectations can change students' IQ.
Geoffrey Cohen has provided a short list of studies that he claims replicate his findings. I suspect that only someone already convinced of the reality of self affirmation would be impressed.
Another variant of the environmental determinism creed is that diversity (racial or maybe gender, although certainly not intellectual or ideological) is a wonderful thing that enriches the lives of everybody. There are powerful economic motives for universities to believe this, and so we find that a succession of dubious studies are showcased as though they were the last and definitive word on the topic.
The latest such study is by Sheen S. Levine, David Stark and others and was the basis for an op-ed in the New York Times (NYT).
The background is that the US Supreme Court back in 2003 had decided that universities could not admit students on the basis of race but they could try to recruit more minority students because having large numbers of a minority group would be good for everybody. Now the court is revisiting the issue and asking whether racial preferences can be justified by the benefits they supposedly provide for everyone.
Levine and Stark in their NYT piece claim that they can and refer to a study that they published with four other authors in the Proceedings of the National Academy of Sciences. Essentially, this involved an experiment in simulated stock trading, and it was found that homogenous "markets" in Singapore and Kingsville, Texas (ethnically Chinese and Latino respectively) were less accurate in pricing stocks than those that were ethnically diverse, with participants from minority groups (Indian and Malay in Singapore; non-Hispanic White, Black and Asian in Texas).
They argue that:
"racial and ethnic diversity matter for learning, the core purpose of a university. Increasing diversity is not only a way to let the historically disadvantaged into college, but also to promote sharper thinking for everyone.
Our research provides such evidence. Diversity improves the way people think. By disrupting conformity, racial and ethnic diversity prompts people to scrutinize facts, think more deeply and develop their own opinions. Our findings show that such diversity actually benefits everyone, minorities and majority alike."
From this very specific exercise the authors conclude that diversity is beneficial for American universities which are surely not comparable to a simulated stock market.
Frankly, if this is the best they can do to justify diversity then it looks as though affirmative action in US education is doomed.
Looking at the original paper also suggests that quite different conclusions could be drawn. It is true that in each country the diverse market was more accurate than the homogenous one (Chinese in Singapore, Latino in Texas) but the homogenous Singapore market was more accurate than the diverse Texas market (see fig. 2) and very much more accurate than the homogenous Texas market. Notice that this difference is obscured by the way the data is presented.
There is a moral case for affirmative action provided that it is limited to the descendants of the enslaved and the dispossessed but it is wasting everybody's time to cherry-pick studies like these to support questionable empirical claims and to stretch their generalisability well beyond reasonable limits.
Wednesday, January 06, 2016
Towards a transparent university ranking system
For the last few years global university rankings have been getting more complicated and more "sophisticated".
Data makes its way from branch campuses, research institutes and far-flung faculties and departments and is analysed, decomposed, recomposed, scrutinised for anomalies and outliers, and then enters the files of the rankers where it is normalised, standardised, square-rooted, weighted and/or subjected to regional modification. Sometimes what comes out the other end makes sense: Harvard in first place, Chinese high fliers flying higher. Sometimes it stretches academic credulity: Alexandria University in fourth place in the world for research impact, King Abdulaziz University in the world's top ten for mathematics.
The transparency of the various indicators in the global rankings varies.
Checking the scores for Nature and Science papers and indexed publications in the Shanghai rankings is easy if you have access to the Web of Knowledge. It is also not difficult to check the numbers of faculty and students on the QS, Times Higher Education (THE) and US News web sites.
On the other hand, getting into the data behind the THE citations is close to impossible. Citations are normalised by field, year of publication and year of citation. Then, until last year the score for each university was adjusted by division by the square root of the citation impact score of the country in which it was located. Now this applies to half the score for the indicator. Reproducing the THE citations score is impossible for almost everybody since it requires calculating the world average citation score for 250 or 300 fields and then the total citation score for every country.
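Since the arithmetic is hard to visualise, here is a minimal sketch of the kind of calculation described above. Every figure in it (the field averages, the papers, the country impact score) is invented for illustration; this is not THE's actual code or data, only the shape of the normalisation and the square-root adjustment.

```python
# Minimal sketch of the normalisation described above (illustrative numbers only).
# Each paper's citations are divided by the world average for its field and year;
# the resulting score is then partly adjusted by the square root of the country's
# citation impact ("regional modification"), here applied to half the score.

from math import sqrt

# Hypothetical world average citations per paper, by (field, year)
world_avg = {("oncology", 2013): 12.0, ("mathematics", 2013): 3.0}

# Hypothetical papers for one university: (field, year, citations)
papers = [("oncology", 2013, 24), ("mathematics", 2013, 6), ("mathematics", 2013, 0)]

# Field- and year-normalised impact: average of citations / world average
raw_impact = sum(c / world_avg[(f, y)] for f, y, c in papers) / len(papers)

# Hypothetical citation impact of the university's country
country_impact = 0.64

# Half the score left as-is, half divided by the square root of the country impact
adjusted = 0.5 * raw_impact + 0.5 * raw_impact / sqrt(country_impact)

print(round(raw_impact, 3), round(adjusted, 3))
```

Even in this toy form it is clear why outsiders cannot reproduce the indicator: the world averages for hundreds of fields and the citation impact of every country are needed before the first university score can be checked.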
It is now possible to access third party data from sources such as Google, World Intellectual Property Organisation and various social media such as LinkedIn. One promising development is the creation of public citation profiles by Google Scholar.
The Cybermetrics Lab in Spain, publishers of the Webometrics Ranking Web of Universities, has announced the beta version of a ranking based on nearly one million individual profiles in the Google Scholar Citations database. The object is to see whether this data can be included in future editions of the Ranking Web of Universities.
It uses data from the institutional profiles and counts the citations in the top ten public profiles for each institution, excluding the first profile.
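The counting rule is simple enough to sketch. The profile citation counts below are invented; only the rule itself (take the top ten profiles, discard the largest) comes from the description above.

```python
# Sketch of the counting rule described above: for each institution, take the
# citation counts of its top ten public profiles and sum them after discarding
# the largest ("first") profile. All figures are invented for illustration.

def institution_score(profile_citations):
    """profile_citations: list of citation counts for one institution's profiles."""
    top_ten = sorted(profile_citations, reverse=True)[:10]
    return sum(top_ten[1:])  # exclude the first (largest) profile

example = {
    "University A": [90000, 42000, 30000, 25000, 20000, 18000,
                     15000, 12000, 10000, 9000, 5000],
    "University B": [50000, 48000, 46000, 30000, 28000, 26000,
                     20000, 18000, 15000, 14000],
}

ranking = sorted(example, key=lambda u: institution_score(example[u]), reverse=True)
for uni in ranking:
    print(uni, institution_score(example[uni]))
```

Dropping the top profile is a sensible precaution: a single superstar, or a mis-affiliated account, would otherwise dominate an institution's score.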
The ranking is incomplete since many researchers and institutions have not participated fully. There are, for example, no Russian institutions in the top 600. In addition, there are technical issues such as the duplication of profiles.
The leading university is Harvard, which is well ahead of its closest rival, the University of Chicago. English-speaking universities are dominant, with 17 of the top 20 places going to US institutions and three, Oxford, Cambridge and University College London, going to the UK.
Overall the top twenty are:
1. Harvard University
2. University of Chicago
3. Stanford University
4. University of California Berkeley
5. Massachusetts Institute of Technology (MIT)
6. University of Oxford
7. University College London
8. University of Cambridge
9. Johns Hopkins University
10. University of Michigan
11. Michigan State University
12. Yale University
13. University of California San Diego
14. UCLA
15. Columbia University
16. Duke University
17. University of Washington
18. Princeton University
19. Carnegie Mellon University
20. Washington University St Louis.
The top universities in selected countries and regions are:
Africa: University of Cape Town, South Africa 244th
Arab Region: King Abdullah University of Science and Technology, Saudi Arabia 148th
Asia and Southeast Asia: National University of Singapore 40th
Australia and Oceania: Australian National University 57th
Canada: University of Toronto 22nd
China: Zhejiang University 85th
France: Université Paris 6 Pierre and Marie Curie 133rd
Germany: Ludwig Maximilians Universität München 194th
Japan: Kyoto University 100th
Latin America: Universidade de São Paulo 164th
Middle East: Hebrew University of Jerusalem 110th
South Asia: Indian Institute of Science Bangalore 420th.
This seems plausible and sensible so it is likely that the method could be extended and improved.
Tuesday, January 05, 2016
Worth Reading 5: Another Year, Another Methodology
International Higher Education, published by Boston College, has an article by Ellen Hazelkorn and Andrew Gibson that reviews the recent changes in the methodology of the brand name world rankings.
Nothing new here, except that they have noticed the Round University Rankings from Russia.
Tuesday, December 22, 2015
Worth Reading 4: Rankings influence public perceptions of German universities
Der Ranking-Effekt: Zum Einfluss des „Shanghai-Rankings“ auf die medial dargestellte Reputation deutscher Universitäten (The Ranking Effect: How the "Shanghai Ranking" influences the mediated reputation of German universities)
Tim Hegglin and Mike S. Schäfer
Publizistik (2015) 60:381–402, DOI 10.1007/s11616-015-0246-
English abstract:
Increasingly, universities find themselves in a competition about public visibility and reputation in which media portrayals play a crucial role. But universities are complex, heterogeneous institutions which are difficult to compare. University rankings offer a seemingly simple solution for this problem: they reduce the complexity inherent to institutions of higher education to a small number of measures and easy-to-understand ranking tables, which may be particularly attractive for media as they conform to news values and media preferences. Therefore, we analyze whether the annual publications of the "Shanghai Ranking" influence media coverage about the included German universities. Based on a content analysis of broadsheet print media, our data show that a ranking effect exists: after the publication of the ranking results, included universities are presented as more reputable in the media. This effect is particularly strong among better ranked universities. It does not, however, increase over a 10-year time period.
Thanks to Christian Scholz of the University of Hamburg for alerting me to this paper.
Monday, December 21, 2015
Worth Reading 3
Matthew David, Fabricating World Class: Global university league tables, status differentiation and myths of global competition
accepted for publication in the British Journal of Sociology of Education
This paper finds that UK media coverage of global university rankings is strongly biased towards the Russell Group, which supposedly consists of elite research-intensive universities. Coverage emphasises the superiority of some US universities and interprets whatever happens in the rankings as evidence that British universities, especially those in the Russell Group, need and deserve as much money as they want.
For example, he quotes the Daily Mail as saying in 2007 that "Vice chancellors are now likely to seize on their strong showing [in the THES-QS world university rankings] to press the case for the £3,000 a-year cap on tuition fees to be lifted when it is reviewed in 2009," while the Times in 2008, when UK universities slipped, said: "Vice chancellors and commentators voiced concern that, without an increase in investment, Britain's standing as a first-class destination for higher education could be under threat"
The media and elite universities also claim repeatedly that lavishly funded Asian universities are overtaking the impoverished and neglected schools of the West.
David argues that none of this is supported by the actual data of the rankings. He looks at the top 200 of the three well-known rankings (QS, THE and ARWU) up to 2012.
I would agree with most of these conclusions, especially the argument that the rankings data he uses do not support either US superiority or the rise of Asia.
I would go further and suggest that changes to the QS rankings in 2008 and 2015, ad hoc adjustments to the employer survey in 2011 and 2012, changes in the rules for the submission of data, variations in the degree of engagement with the rankings, and the instability resulting from an unstable pool of ranked universities would render the QS rankings invalid as a measure of any but the most obvious trends.
Similarly THE rankings, started in 2010, underwent substantial changes in 2011 and then in 2015. Between those years there were fluctuations for many universities because a few papers could have a disproportionate impact on the citations indicator and again because the pool of ranked universities from which indicator means are calculated is unstable.
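A toy example may make the point about unstable pools clearer. The numbers below are invented; the only point is that a score standardised against the pool of ranked universities moves when the pool moves, even if the university's own raw performance does not.

```python
# Toy illustration: an indicator score standardised against the pool of ranked
# universities changes when the pool changes, even though the university's own
# raw value stays the same. All numbers are invented.

from statistics import mean, stdev

def z_score(value, pool):
    return (value - mean(pool)) / stdev(pool)

university_raw = 55.0

pool_2014 = [30, 40, 45, 50, 55, 60, 70, 80]         # original pool of ranked universities
pool_2015 = pool_2014 + [20, 22, 25, 28, 30, 35]      # pool expanded with weaker entrants

print(round(z_score(university_raw, pool_2014), 2))   # score against the old pool
print(round(z_score(university_raw, pool_2015), 2))   # same raw value, different score
```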
If, however, we take the Shanghai rankings over the course of eleven years and look at the full five hundred rankings then we do find that Asia, or more accurately some of it, is rising.
The number of Chinese universities in the ARWU top 500 rose from 16 in 2004 to 44 in 2015. The number of South Korean universities rose from 8 to 12, and Australian from 14 to 20.
But the number of Indian universities remained unchanged at three, while the number of Japanese fell from 36 to 18.
David does not argue that Asia is not rising, merely that looking at the top level of the rankings does not show that it is.
What is probably more important in the long run is the comparative performance not of universities but of secondary school systems. Here the future of the US, the UK and continental Europe does indeed look bleak while that of East Asia and the Chinese diaspora is very promising.
Saturday, December 19, 2015
Go East Young (and Maybe not so Young) Man and Woman!
Mary Collins, a distinguished immunologist at University College London (UCL), is leaving to take up an academic appointment in Japan. Going with her is her husband Tim Hunt, a Nobel winner, who was the victim of a particularly vicious witch hunt about some allegedly sexist remarks over dinner. She had apparently applied for the Japanese post before the uproar but her departure was hastened by the disgraceful way he was treated by UCL.
Could this be the beginning of a massive drain of academic talent from the West to Asia? What would it take to persuade people like Nicholas Christakis, Erika Christakis, Jason Richwine, Mark Regnerus, Andrea Quenette, K.C. Johnson, and Matt Tyler to trade in abuse and harassment by the "progressive" academic establishment for a productive scholarly or administrative career in Korea, Japan, China or the Pacific Rim?
Meanwhile Russia and the Arab Gulf are also stepping up their recruitment of foreign scientists. Has Mary Collins started a new trend?
Friday, December 18, 2015
THE Adjectival Hyperbole
Times Higher Education (THE) has always had, or tried to have, a good opinion of itself and its rankings.
A perennial highlight of the ranking season is the flurry of adjectives used by THE to describe its summits (prestigious, exclusive) and the rankings and their methodology. Here is a selection:
"the sharper, deeper insights revealed by our new and more rigorous world rankings"
"Robust, transparent and sophisticated"
"the most comprehensive, sophisticated and balanced global rankings in the world."
"a dramatically improved ranking system"
"our dramatic innovations"
"our tried, trusted and comprehensive combination of 13 performance indicators remains in place, with the same carefully calibrated weightings"
"our most comprehensive, inclusive and insightful World University Rankings to date"
The problem is that if the rankings are so robust and sophisticated then what is the point of a dramatic improvement? If there is a dramatic improvement one year is there a need for more dramatic improvements in the next? And are there no limits to the rigor of the methodology and the sharpness and depth of the insights?
Monday, December 14, 2015
Why are university bosses paid so much?
Times Higher Education (THE) has an article by Ellie Bothwell about the earnings of university heads in the USA and the UK. The US data is from the Chronicle of Higher Education.
The sums paid are in some cases extraordinary. Maybe Lee Bollinger of Columbia deserves $4,615,230 but $1,634,000 for the head of Tulane?
On the other side of the Atlantic the biggest earner is the head of Nottingham Trent University. To the lay reader that makes as much sense as the manager of Notts County or Plymouth Argyle outearning Manchester City or Chelsea.
THE argues that there is little correlation between the salaries of the top earning twenty American and British university heads and university prestige as measured by position in the overall THE world rankings.
It would actually be very surprising if a large correlation were found since there is an obvious restriction of range effect if only the top 20 are considered. If we looked at the entire spectrum of salaries we would almost certainly get a much greater correlation. I suspect that THE is trying to deflect criticism that its rankings measure wealth and age rather than genuine quality.
THE do not give any numbers so I have calculated the correlation between the salaries of the US heads and overall scores in the brand name rankings. Maybe I'll get round to the British salaries next week.
The Pearson correlation coefficient between the salaries of the 20 most highly paid university heads in the US and overall THE world rankings scores is only .259, which is not statistically significant.
The correlation is greater when we compare salaries with the US News (USN) America's Best Colleges and the Shanghai Academic Ranking of World Universities. The top 20 US salaries have a .362 correlation with the overall scores in the 2015 America's Best Colleges (not significant) and .379 (significant at the 0.05 level [1 tailed]) with the total scores in the 2015 ARWU.
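For anyone who wants to repeat the exercise, the calculation is straightforward. The sketch below uses placeholder figures rather than the actual salary and ranking-score data, which readers would need to collect from the Chronicle and the rankers themselves.

```python
# Minimal sketch of the correlation exercise described above, using scipy's
# Pearson r. The salary and ranking-score figures are placeholders, not the
# real Chronicle of Higher Education or rankings data.

from scipy.stats import pearsonr

salaries = [4615230, 1634000, 1500000, 1400000, 1350000,   # top 20 presidential salaries
            1300000, 1250000, 1200000, 1150000, 1100000,
            1080000, 1050000, 1020000, 1000000, 980000,
            960000, 940000, 920000, 900000, 880000]

ranking_scores = [96, 60, 75, 55, 80, 50, 70, 45, 65, 40,  # overall scores of their universities
                  72, 38, 58, 35, 52, 33, 48, 30, 44, 28]

r, p = pearsonr(salaries, ranking_scores)
print(f"r = {r:.3f}, p = {p:.3f}")
```

With only twenty cases, even a moderate r is unlikely to reach significance, which is one more reason to treat the top-20 comparison cautiously.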
That suggests that American university heads are recruited with the object of doing well in the things that count in the USN rankings and more so in the Shanghai rankings. Or perhaps that the THE rankings are not so good at measuring the things that the heads are supposed to do.
Of course, if we looked at the whole range of university salaries and university quality there would probably be different results.
By the way, there is almost zero correlation between the top 20 salaries and university size as measured by the number of students.
Thursday, December 03, 2015
Not as Elite as They Thought
British higher education is very definitely not a flat system. There is an enormous difference between Oxford or LSE and the University of Bolton or the University of East London in terms of research output and quality, graduate outcomes, public perceptions, student attributes and just about anything else you could think of.
The most obvious dividing line in the UK university world is between the post-1992 and pre-1992 universities. The former were mostly polytechnics run by local authorities that did not award their own degrees, provided sub-degree courses and did little research.
Another line was drawn in 1994. Named after the hotel (only four stars but it is "old", "famous", "grand" and "impressive") where the inaugural meeting was held, the Russell Group now has 24 members, including of course Oxford and Cambridge, and claims to include only elite research intensive universities. Definitely no riff-raff.
The home page of the group gives a good idea of its priorities:
Our universities are global leaders in research, but it is vital they receive sufficient funding and support
A high-quality, research-led education requires proper funding at both undergraduate and postgraduate level
Collaboration with business is a key part of the work of our universities but Government could foster more innovation
Our universities are global businesses competing for staff, students and funding with the best in the world.
Like all good clubs, membership is not cheap. In 2012 the Universities of Durham, Exeter and York and Queen Mary University of London paid £500,000 apiece to join.
They may have been wasting their money.
A paper by Vikki Boliver of Durham University, whose research does not appear to have received any sort of funding, finds that analysis of data on research activity, teaching quality, economic resources, academic selectivity and socioeconomic student mix reveals four tiers within UK tertiary education. They are:
- A very small premier league composed of Oxford and Cambridge
- A second tier composed of 22 members of the Russell Group plus 17 of the other old universities -- the first three alphabetically are Aberdeen, Bath and Birmingham
- A third tier with 13 old and 54 post-1992 universities -- starting with Abertay, Aberystwyth, and University of the Arts Bournemouth
- A fourth tier of 19 post-1992 universities -- starting with Anglia Ruskin, Bishop Grosseteste and University College Birmingham.
It looks like some of the Russell Group are in danger of descending into the abyss of the Tier Three riff-raff.
Incidentally, taking a look at the well known world rankings, the US News Best Global Universities has a gap of 12 places between Cambridge, second of the Tier 1 universities, and Imperial College, best of the Tier 2 schools.
The Shanghai rankings similarly have a gap of ten places between Oxford and University College London.
But there are only four places in the THE World University Rankings between Cambridge and Imperial and one between Oxford and UCL in the QS world rankings.
Another finding is that the differences between teaching quality in the old and new universities are relatively minor compared to the amount and impact of research.
Does that explain why the Russell Group are so hostile to initiatives like AHELO and U-Multirank?
Wednesday, November 18, 2015
Are they trying to hide something?
Seven of the Australian Group of Eight elite universities have said that they have boycotted the Quacquarelli Symonds (QS) Graduate Employability Rankings which are due to be announced next week at the latest QS-Apple in Melbourne.
A spokeswoman for the Group, reported in The Australian, said:
“All of these rankings have their place and we are very happy to participate in them,” Ms Thomson said.
"However, the integrity and robustness of the data is critical in ensuring an accurate picture and we have some concerns around some of the data QS requested, particularly as it relates to student details and industry partners. These go to the heart of issues around privacy and confidentiality.
“We were also concerned about transparency with the methodology — we need to know how it will be used before we hand over information. There is no doubt that there are challenges in establishing a ranking of this nature and we will be very happy to work with QS in refining its pilot.”
I am not QS's number one fan but I wonder just how much the Group of Eight are really bothered about transparency and confidentiality. Could it be that they are afraid that such rankings might reveal that they are not quite as good at some things as they think they are?
Earlier this year the Household, Income and Labour Dynamics in Australia (HILDA) Survey reported that graduates of younger universities such as James Cook and Charles Darwin and some technological universities had higher incomes than those from the Group.
Spokespersons for the Group were not amused. They were "perplexed" and "disappointed" with the results which were "skewed" and "clearly anomalous".
The counterparts of the Group of Eight in the UK's Russell Group and the League of European Research Universities (LERU) have already shown that they do not like the U-Multirank rating tool, which the League considers a "serious threat to higher education".
Universities such as those in the Ivy League, the Group of Eight, LERU and the Russell Group have a bit of a problem. They do a lot of things: research, innovation, political indoctrination, sponsorship of sports teams, and instruction in professional and scientific disciplines.
They also signal to employers that their graduates are sufficiently intelligent to do cognitively complex tasks. Now that A-levels and SATs have been dumbed down, curricular standards eroded, students admitted and faculty appointed and promoted for political and social reasons, an undergraduate degree from an elite institution means a lot less than it used to.
Still, organisations must survive and so the elite will continue to value rankings that count historical data like the Nobel awards, reputation, income and citations. They will be very uneasy about anything that probes too deeply into what they actually provide in return for bloated salaries and tuition fees.
Monday, November 16, 2015
Maybe QS were on to something
I recently posted on the implausibility of Quacquarelli Symonds (QS) putting the National University of Singapore and Nanyang Technological University ahead of Yale and Columbia in the latest World University Rankings. This remarkable achievement was largely due to high scores for the reputation surveys and international students and faculty, none of which have very much validity.
But recent events at Yale suggest that maybe QS know something. Students there have been excited not about the persecution of religious minorities in Myanmar and the Middle East, the possibility of war in Eastern Europe, terrorist attacks in Paris and Beirut or even the decay of public services in the US, but about a sensible comment from an administrator about Halloween costumes that appeared to presume too much about their maturity and intelligence.
It seems that the Master of Silliman College was insufficiently hysterical about some cautious and diffident remarks about free speech by his wife, the college's Associate Master. A viral video showed him being screeched at by a student.
Later, there was some of the usual grovelling about failing students.
The students certainly have been failed. Their parents should have spoken to them about the right way to treat domestic servants and the university administration should have told them to grow up.
But the most interesting question is what is going to happen when Yale undergraduates become faculty and the current faculty become administrators. How can they possibly hope to compete with graduates, teachers and researchers from the rigorous and selective university systems that are developing in East and Southeast Asia?
Comparing Engineering Rankings
Times Higher Education (THE) have just come out with another subject ranking, this time for Engineering and Technology. Here are the top five.
1. Stanford
2. Caltech
3. MIT
4. Cambridge
5. Berkeley
Nanyang Technological University is 20th, Tsinghua University 26th, and Zhejiang University 47th.
These rankings are very different from the US News ranking for Engineering.
There the top five are:
1. Tsinghua
2. MIT
3. Berkeley
4. Zhejiang
5. Nanyang Technological University.
Stanford is 8th, Cambridge 35th and Caltech 62nd.
So what could possibly explain such a huge difference?
Basically, the two rankings are measuring rather different things. THE give a third of their weighting to reputation. Supposedly there are two indicators -- postgraduate teaching reputation and research reputation -- but it is likely that they are so closely correlated that they are really measuring the same thing. Another chunk goes to income in three flavors, institutional, research, and industry. Another 30% goes to citations normalised by field and year.
The US News ranking puts more emphasis on measures of quantity rather than quality and output rather than input, and ignores teaching reputation, international faculty and students, and faculty student ratio. In these rankings Tsinghua is first for publications and Caltech 165th, while Caltech is 46th for normalised citation impact and Tsinghua 186th.
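A toy calculation shows how easily the two approaches can produce different orderings. The indicator scores and weights below are invented and deliberately simplified; they are not the actual THE or US News weightings, only an illustration of how an impact-heavy scheme and a volume-heavy scheme can reverse a ranking.

```python
# Toy illustration: the same two universities swap places when one scheme
# weights normalised citation impact heavily and the other weights publication
# volume heavily. Indicator scores and weights are invented.

def weighted_score(indicators, weights):
    return sum(indicators[k] * w for k, w in weights.items())

universities = {
    "Quality U":  {"citation_impact": 95, "publication_volume": 40},
    "Quantity U": {"citation_impact": 55, "publication_volume": 98},
}

impact_heavy = {"citation_impact": 0.7, "publication_volume": 0.3}   # impact-led emphasis
volume_heavy = {"citation_impact": 0.3, "publication_volume": 0.7}   # volume-led emphasis

for name, scheme in [("impact-heavy", impact_heavy), ("volume-heavy", volume_heavy)]:
    order = sorted(universities,
                   key=lambda u: weighted_score(universities[u], scheme),
                   reverse=True)
    print(name, order)
```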
On balance, I suspect that it is more likely that there will be a transition from quantity to quality than the other way round so we can expect Tsinghua and Zhejiang to close the gap in the THE rankings if they continue in their present form.
Friday, November 13, 2015
Are global rankings losing their credibility? (from WONK HE)
Originally published in WONK HE 27/10/2015
Richard is an academic and expert on university rankings. He writes in depth on rankings at his blog: University Rankings Watch.
The international university ranking scene has become increasingly complex, confusing and controversial. It also seems that the big name brands are having problems balancing popularity with reliability and validity. All this is apparent from the events of the last two months, which have seen the publication of several major rankings.
The first phase of the 2015 global ranking season ended with the publication of the US News (USN) Best Global Universities. We have already seen the 2015 editions of the big three brand names, the Academic Ranking of World Universities (ARWU) produced by the Centre for World-Class Universities at Shanghai Jiao Tong University, the Quacquarelli Symonds (QS) World University Rankings and the Times Higher Education (THE) World University Rankings. Now a series of spin-offs has begun.
In addition, a Russian organisation, Round University Ranking (RUR), has produced another set of league tables. Apart from a news item on the website of the International Ranking Expert Group these rankings have received almost no attention outside Russia, Eastern Europe and the CIS. This is very unfortunate since they do almost everything that the other rankings do and contain information that the others do not.
One sign of the growing complexity of the ranking scene is that USN, QS, ARWU and THE are producing a variety of by-products including rankings of new universities, subject rankings, best cities for students, reputation rankings and regional rankings, with no doubt more to come. They are also assessing more universities than ever before. THE used to take pride in ranking only a small elite group of world universities. Now they are talking about being open and inclusive and have ranked 800 universities this year, as did QS, while USN has expanded from 500 to 750 universities. Only the Shanghai rankers have remained content with a mere 500 universities in their general rankings.
Academic Ranking of World Universities (ARWU)
All three of the brand name rankings have faced issues of credibility. The Shanghai ARWU has had a problem with the massive recruitment of adjunct faculty by King Abdulaziz University (KAU) in Jeddah. This was initially aimed at the highly cited researchers indicator in the ARWU, which simply counts the number of researchers affiliated to universities, no matter whether their affiliation has lasted an academic lifetime or began the day before ARWU did the counting. The Shanghai rankers deftly dealt with this issue by simply not counting secondary affiliations in the new lists of highly cited researchers supplied by Thomson Reuters in 2014.
That, however, did not resolve the problem entirely. Those researchers have not stopped putting KAU as a secondary affiliation, and even if they no longer affect the highly cited researchers indicator they can still help a lot with publications and papers in Nature and Science, both of which are counted in the ARWU. These part-timers (and some may not even be that) have already ensured that KAU, according to ARWU, is the top university in the world for publications in mathematics.
The issue of secondary affiliation is one that is likely to become a serious headache for rankers, academic publishers and databases in the next few years. Already, undergraduate teaching in American universities is dominated by a huge reserve army of adjuncts. It is not impossible that in the near future some universities may find it very easy to offer minimal part-time contracts to talented researchers in return for listing as an affiliation, and then see a dramatic improvement in ranking performance.
ARWU's problem with the highly cited researchers coincided with Thomson Reuters producing a new list and announcing that the old one would no longer be updated. Last year, Shanghai combined the old and new lists and this produced substantial changes for some universities. This year they continued with the two lists and there was relatively little movement in this indicator or in the overall rankings. But next year they will drop the old list altogether and just use the new one, and there will be further volatility. ARWU have, however, listed the number of highly cited researchers in the old and new lists, so most universities should be aware of what is coming.
Quacquarelli Symonds (QS) World University Rankings
The Quacquarelli Symonds (QS) World University Rankings have been regarded with disdain by many British and American academics, although they do garner some respect in Asia and Latin America. Much of the criticism has been directed at the academic reputation survey, which is complex, opaque and, judging from QS's regular anti-gaming measures, susceptible to influence from universities. There have also been complaints about the staff student ratio indicator being a poor proxy for teaching quality and the bias of the citations per faculty indicator towards medicine and against engineering, the social sciences and the arts and humanities.
QS have decided to reform their
citations indicator by treating the five large subject groups
as contributing equally to the indicator score. In addition, QS omitted papers,
most of them in physics, with a very large number of listed authors and
averaged responses to the surveys over a period of five years in an attempt to
make the rankings less volatile.
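QS have not published the precise calculation, but the basic idea of letting each of the five broad faculty areas contribute equally to the citations indicator can be sketched in a few lines of Python. Everything below (the min-max scaling, the simple averaging and the input format) is an assumption made for illustration, not QS's actual method.

FACULTY_AREAS = [
    "arts_humanities", "engineering_technology",
    "life_sciences_medicine", "natural_sciences", "social_sciences",
]

def normalise(values):
    # Min-max scale a dict of {university: value} to a 0-100 range.
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {u: 100.0 * (v - lo) / span for u, v in values.items()}

def citations_indicator(citations_by_area, faculty_counts):
    # Score each faculty area separately, then average with equal weights,
    # so a strong medical school can only ever supply a fifth of the indicator.
    area_scores = []
    for area in FACULTY_AREAS:
        per_faculty = {u: citations_by_area[area][u] / faculty_counts[u]
                       for u in faculty_counts}
        area_scores.append(normalise(per_faculty))
    return {u: sum(s[u] for s in area_scores) / len(FACULTY_AREAS)
            for u in faculty_counts}

Whatever arithmetic QS actually use, the design choice is the same: capping each field's contribution at a fifth of the indicator is what flattens the advantage of medically heavy universities.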
The result of all this was that some universities rose and others fell.
Imperial College London went from 2nd to 8th while the London School of Economics rose from 71st to 35th. In Italy, the Polytechnics of Milan
and Turin got a big boost while venerable universities suffered dramatic
relegation. Two Indian institutions moved into the top two hundred; some Irish
universities such as Trinity College Dublin, University College Dublin and
University College Cork went down, while others such as the National University of
Ireland Galway and the University of Limerick went up.
There has always been a considerable amount of noise in these rankings
resulting in part from small fluctuations in the employer and academic surveys.
In the latest rankings these combined with methodological changes to produce
some interesting fluctuations. Overall the general pattern was that
universities that emphasise the social sciences, the humanities and engineering
have improved at the expense of those that are strong in physics and medicine.
Perhaps the most remarkable of this year’s changes was the rise of two
Singaporean universities, the National University of Singapore (NUS) and
Nanyang Technological University (NTU), to 12th and 13th place respectively, a change
that has met with some scepticism even in Singapore. They are now above Yale,
EPF Lausanne and King’s College London. While the changes to the citations
component were significant, another important reason for the rise of these two
universities was their continuing remarkable performance in the academic and
employer surveys. NUS is in the top ten in the world for academic reputation
and employer reputation with a perfect score of 100, presumably rounded up, in
each. NTU is 52nd for the academic survey and 39th for the employer survey, with scores in the nineties for both.
Introducing a moderate degree of field normalisation was probably a
smart move. QS were able to reduce the distortion resulting from the database’s
bias to medical research without risking the multiplication of strange results
that have plagued the THE citations indicator. They have not,
however, attempted to reform the reputation surveys, which continue to have a
combined 50% weighting, and until they do so these rankings are unlikely to
achieve full recognition from the international academic community.
Times Higher Education (THE) World University Rankings
The latest THE world rankings were published on
September 30th and like QS, THE have done some tweaking of
their methodology. They had broken with Thomson Reuters at the end of
2014 and started using data from Scopus, while doing the analysis and
processing in-house. They were able to analyse many more papers and citations
and conduct a more representative survey of research and postgraduate
supervision. In addition they omitted multi-author and multi-cited papers and
reduced the impact of the “regional modification”.
Consequently there was a large dose of volatility. The results were so
different from those of 2014 that they seemed to reflect an entirely new
system. THE did, to their credit, do the decent thing and
state that direct comparisons should not be made to previous years. That,
however, did not stop scores of universities and countries around the world
from announcing their success. Those that had suffered have for the most part
kept quiet.
There were some remarkable changes. At the very top, Oxford and
Cambridge surged ahead of Harvard which fell to sixth place. University College
Dublin, in contrast to the QS rankings, rose as did Twente and Moscow State,
the Karolinska Institute and ETH Zurich.
On the other hand, many universities in France, Korea, Japan and Turkey
suffered dramatic falls. Some of those universities had been participants in
the CERN projects and so had benefitted in 2014 from the huge number of
citations derived from their papers. Some were small and produced few papers so
those citations were divided by a small number of papers. Some were located in
countries that performed poorly and so got help from a “regional modification”
(the citation impact score of the university is divided by the square root of
the average citation impact score of the whole country). Such places suffered
badly from this year’s changes.
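For anyone who wants the arithmetic spelled out, here is a minimal Python sketch of the regional modification exactly as described in the parenthesis above. It reproduces only that single formula; THE's full procedure, including how the modified score is blended with the unmodified one, is not shown, and the numbers are invented.

from math import sqrt

def regional_modification(university_impact, country_average_impact):
    # Divide the university's citation impact score by the square root of
    # the average citation impact score of its country, as described above.
    return university_impact / sqrt(country_average_impact)

# A university scoring 0.9 in a country whose average impact is 0.49 is
# lifted to 0.9 / 0.7, roughly 1.29, while the same university in a country
# averaging 1.0 keeps its raw score of 0.9.
print(regional_modification(0.9, 0.49))
print(regional_modification(0.9, 1.0))

This is why universities in low-scoring countries did so well in 2014, and why reducing the impact of the modification this year hit them so hard.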
It is a relief that THE have finally done something
about the citations indicator and it would be excellent if they continued with
further reforms such as fractional counting, reducing the indicator’s overall
weighting, not counting self-citations and secondary affiliations and getting
rid of the regional modification altogether.
Unfortunately, if the current round of reforms represents an improvement,
and on balance it probably does, then the very different results of 2014 and
before call into question THE’s repeated claims to be trusted, robust
and sophisticated. If the University of Twente deserves to be in the top 150
this year then the 2014 rankings which had them outside the top 200 could not
possibly be valid. If the Korea Advanced Institute of Science and Technology
(KAIST) fell 66 places then either the 2015 rankings or those of 2014 were
inaccurate, or they both were. Unless there is some sort of major restructuring
such as an amalgamation of specialist schools or the shedding of inconvenient
junior colleges or branch campuses, large organisations like universities
simply do not and cannot change that much over the course of 12 months or less.
It would have been more honest, although probably not commercially
feasible, for THE to declare that they were starting with a
completely new set of rankings and to renounce the 2009-14 rankings in the way
that they had disowned the rankings produced in cooperation with QS between
2004 and 2008. THE seem to be trying to trade on the basis of
their trusted methodology while selling results suggesting that that
methodology is far from trustworthy. They are of course doing just what a
business has to do. But that is no reason why university administrators and
academic experts should be so tolerant of such a dubious product.
These rankings also contain quite a few small or specialised
institutions that would appear to be on the borderline of a reasonable
definition of an “independent university with a broad range of subjects”:
Scuola Normale Superiore di Pisa and Scuola Superiore Sant’Anna, both part of
the University of Pisa system, Charité-Universitätsmedizin Berlin, an affiliate
of two universities, St George’s, University of London, a medical school,
Copenhagen Business School, Rush University, the academic branch of a private
hospital in Chicago, the Royal College of Surgeons in Ireland, and the National
Research Nuclear University (MEPhI) in Moscow, specialising in physics. Even if THE have
not been too loose about who is included, the high scores achieved by such
narrowly focussed institutions call the validity of the rankings into
question.
Round University Rankings
In general the THE rankings have received a broad and
respectful response from the international media and university
managers, and criticism has largely been confined to outsiders and
specialists. This is in marked contrast to the Round University Rankings (RUR) released by a
Russian organisation early in September. These are based entirely on data
supplied by Thomson Reuters, THE’s data provider and analyst
until last year. They contain a total of 20 indicators, including 12 out of the
13 in the THE rankings. Unlike THE, RUR do not bundle
indicators together in groups so it is possible to tell exactly why
universities are performing well or badly.
The RUR rankings are not elegantly presented but the content is more
transparent than THE, more comprehensive than QS, and apparently
less volatile than either. It is a strong indictment of the international
higher education establishment that these rankings are ignored while THE’s are
followed so avidly.
Best Global Universities
The second edition of US News’s Best Global
Universities was published at the beginning of October. US
News is best known for its ranking of American colleges and
universities and has been cautious about venturing into the global arena.
These rankings are fairly similar to the Shanghai ARWU, containing only
research indicators and making no pretence to measure teaching or graduate
quality. The methodology avoids some elementary mistakes. It does not give too
much weight to any one indicator, with none getting more than 12.5%, and
measures citations in three different ways. For eight indicators a log
transformation was applied before the calculation of z-scores to reduce the
effect of outliers and statistical anomalies.
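The phrase deserves a gloss: taking logs before standardising stops one or two extreme values from dominating an indicator's z-scores. The function below is a hypothetical illustration with invented numbers, not US News's actual formula.

import math

def log_z_scores(values):
    # Take natural logs first, then compute ordinary z-scores, so that a
    # single enormous value is pulled in towards the rest of the field.
    logged = [math.log(v) for v in values]   # assumes strictly positive values
    mean = sum(logged) / len(logged)
    sd = (sum((x - mean) ** 2 for x in logged) / len(logged)) ** 0.5 or 1.0
    return [(x - mean) / sd for x in logged]

# The outlier 5000 would swamp a raw z-score calculation; after the log
# transform it ends up much closer to the other universities.
print(log_z_scores([10, 20, 40, 80, 5000]))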
This year US News went a little way towards reducing
the rankers’ obsession with citations by including conferences and books in the
list of criteria.
Since they do not include any non-research indicators these rankings are
essentially competing with the Shanghai ARWU and it is possible that they may
eventually become the first choice for internationally mobile graduate
students.
But at the moment it seems that the traditional media and the higher
education establishment have lost none of their fascination with the snakes and
ladders game of THE and QS.