Times Higher Education (THE) has always had, or tried to have, a good opinion of itself and its rankings.
A perennial highlight of the ranking season is the flurry of adjectives THE uses to describe its summits (prestigious, exclusive) and its rankings and their methodology. Here is a selection:
"the sharper, deeper insights revealed by our new and more rigorous world rankings"
"Robust, transparent and sophisticated"
"the most comprehensive, sophisticated and balanced global rankings in the world."
"a dramatically improved ranking system"
"our dramatic innovations"
"our tried, trusted and comprehensive combination of 13 performance indicators remains in place, with the same carefully calibrated weightings"
"our most comprehensive, inclusive and insightful World University Rankings to date"
The problem is that if the rankings are so robust and sophisticated, what is the point of a dramatic improvement? If there is a dramatic improvement one year, is there a need for more dramatic improvements the next? And are there no limits to the rigor of the methodology and the sharpness and depth of the insights?
Monday, December 14, 2015
Why are university bosses paid so much?
Times Higher Education (THE) has an article by Ellie Bothwell about the earnings of university heads in the USA and the UK. The US data is from the Chronicle of Higher Education.
The sums paid are in some cases extraordinary. Maybe Lee Bollinger of Columbia deserves $4,615,230 but $1,634,000 for the head of Tulane?
On the other side of the Atlantic the biggest earner is the head of Nottingham Trent University. To the lay reader that makes as much sense as the manager of Notts County or Plymouth Argyle outearning Manchester City or Chelsea.
THE argues that there is little correlation between the salaries of the top earning twenty American and British university heads and university prestige as measured by position in the overall THE world rankings.
It would actually be very surprising if a large correlation were found since there is an obvious restriction of range effect if only the top 20 are considered. If we looked at the entire spectrum of salaries we would almost certainly get a much greater correlation. I suspect that THE is trying to deflect criticism that its rankings measure wealth and age rather than genuine quality.
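To make the restriction-of-range point concrete, here is a minimal simulation (all numbers invented) showing how the correlation between salary and quality shrinks when only the best-paid handful is kept:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 200 universities whose heads' salaries loosely track "quality".
quality = rng.normal(size=200)
salary = 0.7 * quality + rng.normal(scale=0.7, size=200)

full_r = np.corrcoef(quality, salary)[0, 1]

# Keep only the 20 highest-paid heads, as in the THE comparison.
top20 = np.argsort(salary)[-20:]
restricted_r = np.corrcoef(quality[top20], salary[top20])[0, 1]

print(f"full-range correlation:  {full_r:.2f}")
print(f"top-20-only correlation: {restricted_r:.2f}")
```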
THE do not give any numbers so I have calculated the correlation between the salaries of the US heads and overall scores in the brand name rankings. Maybe I'll get round to the British salaries next week.
The Pearson correlation coefficient between the salaries of the 20 most highly paid university heads in the US and overall THE world rankings scores is only .259, which is not statistically significant.
The correlation is greater when we compare salaries with the US News (USN) America's Best Colleges and the Shanghai Academic Ranking of World Universities. The top 20 US salaries have a .362 correlation with the overall scores in the 2015 America's Best Colleges (not significant) and .379 (significant at the 0.05 level [1 tailed]) with the total scores in the 2015 ARWU.
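For readers who want to check the arithmetic, this is the kind of calculation involved. The arrays below are placeholders rather than the actual salary and score data, and scipy's pearsonr returns a two-tailed p-value, so it is halved for a one-tailed test:

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder data standing in for the 20 salaries and the matching overall
# ranking scores; the real figures come from the Chronicle and the ranking tables.
rng = np.random.default_rng(1)
salaries = rng.uniform(700_000, 4_600_000, size=20)
scores = rng.uniform(30, 100, size=20)

r, p_two = pearsonr(salaries, scores)
p_one = p_two / 2 if r > 0 else 1 - p_two / 2   # one-tailed test for a positive correlation
print(f"r = {r:.3f}, one-tailed p = {p_one:.3f}")
```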
That suggests that American university heads are recruited with the object of doing well in the things that count in the USN rankings and more so in the Shanghai rankings. Or perhaps that the THE rankings are not so good at measuring the things that the heads are supposed to do.
Of course, if we looked at the whole range of university salaries and university quality there would probably be different results.
By the way, there is almost zero correlation between the top 20 salaries and university size as measured by the number of students.
Thursday, December 03, 2015
Not as Elite as They Thought
British higher education is very definitely not a flat system. There is an enormous difference between Oxford or LSE and the University of Bolton or the University of East London in terms of research output and quality, graduate outcomes, public perceptions, student attributes and just about anything else you could think of.
The most obvious dividing line in the UK university world is between the post-1992 and pre-1992 universities. The former were mostly polytechnics run by local authorities that did not award their own degrees, provided sub-degree courses and did little research.
Another line was drawn in 1994. Named after the hotel (only four stars but it is "old", "famous", "grand" and "impressive") where the inaugural meeting was held, the Russell Group now has 24 members, including of course Oxford and Cambridge, and claims to include only elite research intensive universities. Definitely no riff-raff.
The home page of the group gives a good idea of its priorities:
Our universities are global leaders in research, but it is vital they receive sufficient funding and support
A high-quality, research-led education requires proper funding at both undergraduate and postgraduate level
Collaboration with business is a key part of the work of our universities but Government could foster more innovation
Our universities are global businesses competing for staff, students and funding with the best in the world.
Like all good clubs, membership is not cheap. In 2012 the Universities of Durham, Exeter and York and Queen Mary, University of London paid £500,000 apiece to join.
They may have been wasting their money.
A paper by Vikki Boliver of Durham University, whose research does not appear to have received any sort of funding, finds that analysis of data on research activity, teaching quality, economic resources, academic selectivity and socioeconomic student mix reveals four tiers within UK tertiary education. They are:
- A very small premier league composed of Oxford and Cambridge
- A second tier composed of 22 members of the Russell Group plus 17 of the other old universities -- the first three alphabetically are Aberdeen, Bath and Birmingham
- A third tier with 13 old and 54 post-1992 universities -- starting with Abertay, Aberystwyth, and Arts University Bournemouth
- A fourth tier of 19 post-1992 universities -- starting with Anglia Ruskin, Bishop Grosseteste and University College Birmingham.
It looks like some of the Russell Group are in danger of descending into the abyss of the Tier Three riff-raff.
Incidentally, taking a look at the well known world rankings, the US News Best Global Universities has a gap of 12 places between Cambridge, second of the Tier 1 universities, and Imperial College, best of the Tier 2 schools.
The Shanghai rankings similarly have a gap of ten places between Oxford and University College London.
But there are only four places in the THE World University Rankings between Cambridge and Imperial and one between Oxford and UCL in the QS world rankings.
Another finding is that the differences in teaching quality between the old and new universities are relatively minor compared with the differences in the amount and impact of research.
Does that explain why the Russell Group are so hostile to initiatives like AHELO and U-Multirank?
Wednesday, November 18, 2015
Are they trying to hide something?
Seven of the Australian Group of Eight elite universities have said that they have boycotted the Quacquarelli Symonds (QS) Graduate Employability Rankings which are due to be announced next week at the latest QS-Apple in Melbourne.
A spokeswoman for the Group, quoted in The Australian, said:
“All of these rankings have their place and we are very happy to participate in them,” Ms Thomson said.
"However, the integrity and robustness of the data is critical in ensuring an accurate picture and we have some concerns around some of the data QS requested, particularly as it relates to student details and industry partners. These go to the heart of issues around privacy and confidentiality.
“We were also concerned about transparency with the methodology — we need to know how it will be used before we hand over information. There is no doubt that there are challenges in establishing a ranking of this nature and we will be very happy to work with QS in refining its pilot.”
I am not QS's number one fan but I wonder just how much the Group of Eight are really bothered about transparency and confidentiality. Could it be that they are afraid that such rankings might reveal that they are not quite as good at some things as they think they are?
Earlier this year the Household, Income and Labour Dynamics in Australia (HILDA) Survey reported that graduates of younger universities such as James Cook and Charles Darwin and some technological universities had higher incomes than those from the Group of Eight.
Spokespersons for the Group were not amused. They were "perplexed" and "disappointed" with the results which were "skewed" and "clearly anomalous".
The counterparts of the Group of Eight in the UK's Russell Group and the League of European Research Universities (LERU) have already shown that they do not like the U-Multirank rating tool, which the League considers a "serious threat to higher education".
Universities such as those in the Ivy League, the Group of Eight, LERU and the Russell Group have a bit of a problem. They do a lot of things: research, innovation, political indoctrination, sponsorship of sports teams, instruction in professional and scientific disciplines.
They also signal to employers that their graduates are sufficiently intelligent to do cognitively complex tasks. Now that A-levels and SATs have been dumbed down, curricular standards eroded, students admitted and faculty appointed and promoted for political and social reasons, an undergraduate degree from an elite institution means a lot less than it used to.
Still, organisations must survive and so the elite will continue to value rankings that count historical data like the Nobel awards, reputation, income and citations. They will be very uneasy about anything that probes too deeply into what they actually provide in return for bloated salaries and tuition fees.
Monday, November 16, 2015
Maybe QS were on to something
I recently posted on the implausibility of Quacquarelli Symonds (QS) putting the National University of Singapore and Nanyang Technological University ahead of Yale and Columbia in the latest World University Rankings. This remarkable achievement was largely due to high scores for the reputation surveys and international students and faculty, none of which have very much validity.
But recent events at Yale suggest that maybe QS know something. Students there have been excited not by the persecution of religious minorities in Myanmar and the Middle East, the possibility of war in Eastern Europe, terrorist attacks in Paris and Beirut or even the decay of public services in the US but by a sensible comment from an administrator about Halloween costumes that appeared to presume too much about their maturity and intelligence.
It seems that the Master of Silliman College was insufficiently hysterical about some cautious and diffident remarks about free speech by his wife and Assistant Master. A viral video showed him being screeched at by a student.
Later, there was some of the usual grovelling about failing students.
The students certainly have been failed. Their parents should have spoken to them about the right way to treat domestic servants and the university administration should have told them to grow up.
But the most interesting question is what is going to happen when Yale undergraduates become faculty and the current faculty become administrators. How can they possibly hope to compete with graduates, teachers and researchers from the rigorous and selective university systems that are developing in East and Southeast Asia?
Comparing Engineering Rankings
Times Higher Education (THE) have just come out with another subject ranking, this time for Engineering and Technology. Here are the top five.
1. Stanford
2. Caltech
3. MIT
4. Cambridge
5. Berkeley
Nanyang Technological University is 20th, Tsinghua University 26th, and Zhejiang University 47th.
These rankings are very different from the US News ranking for Engineering.
There the top five are:
1. Tsinghua
2. MIT
3. Berkeley
4. Zhejiang
5. Nanyang Technological University.
Stanford is 8th, Cambridge 35th and Caltech 62nd.
So what could possibly explain such a huge difference?
Basically, the two rankings are measuring rather different things. THE give a third of their weighting to reputation. Supposedly there are two indicators -- postgraduate teaching reputation and research reputation -- but it is likely that they are so closely correlated that they are really measuring the same thing. Another chunk goes to income in three flavors: institutional, research and industry. Another 30% goes to citations normalised by field and year.
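As a rough sketch of how such a composite is put together (the weights below only approximate the scheme just described and the indicator scores are invented):

```python
# Approximate weights and invented indicator scores; the point is simply the weighted sum.
weights = {
    "teaching_reputation": 0.165,
    "research_reputation": 0.165,
    "citations": 0.30,
    "income": 0.20,    # institutional, research and industry income lumped together
    "other": 0.17,     # internationalisation, staffing ratios, etc.
}
scores = {
    "teaching_reputation": 85.0,
    "research_reputation": 88.0,
    "citations": 95.0,
    "income": 60.0,
    "other": 70.0,
}

overall = sum(weights[k] * scores[k] for k in weights)
print(f"overall score: {overall:.1f}")
```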
The US News ranking puts more emphasis on measures of quantity rather than quality and output rather than input, and ignores teaching reputation, international faculty and students, and faculty-student ratio. In these rankings Tsinghua is first for publications and Caltech 165th, while Caltech is 46th for normalised citation impact and Tsinghua 186th.
On balance, I suspect that it is more likely that there will be a transition from quantity to quality than the other way round so we can expect Tsinghua and Zhejiang to close the gap in the THE rankings if they continue in their present form.
Friday, November 13, 2015
Are global rankings losing their credibility? (from WONK HE)
Originally published in WONK HE 27/10/2015
The international university ranking scene has become increasingly complex, confusing and controversial. It also seems that the big name brands are having problems balancing popularity with reliability and validity. All this is apparent from the events of the last two months which have seen the publication of several major rankings.
The first phase of the 2015 global ranking season ended with the publication of the US News's (USN) Best Global Universities. We have already seen the 2015 editions of the big three brand names, the Academic Ranking of World Universities (ARWU) produced by the Centre for World-Class Universities at Shanghai Jiao Tong University, the Quacquarelli Symonds (QS) World University Rankings and the Times Higher Education (THE) World University Rankings. Now a series of spin-offs has begun.
In addition, a Russian organisation, Round University Ranking (RUR), has produced another set of league tables. Apart from a news item on the website of the International Ranking Expert Group these rankings have received almost no attention outside Russia, Eastern Europe and the CIS. This is very unfortunate since they do almost everything that the other rankings do and contain information that the others do not.
One sign of the growing complexity of the ranking scene is that USN, QS, ARWU and THE are producing a variety of by-products including rankings of new universities, subject rankings, best cities for students, reputation rankings and regional rankings, with no doubt more to come. They are also assessing more universities than ever before. THE used to take pride in ranking only a small elite group of world universities. Now they are talking about being open and inclusive and have ranked 800 universities this year, as did QS, while USN has expanded from 500 to 750 universities. Only the Shanghai rankers have remained content with a mere 500 universities in their general rankings.
Academic Ranking of World Universities (ARWU)
All three of the brand name rankings have faced issues of credibility. The Shanghai ARWU has had a problem with the massive recruitment of adjunct faculty by King Abdulaziz University (KAU) in Jeddah. This was initially aimed at the highly cited researchers indicator in the ARWU, which simply counts the number of researchers affiliated to universities no matter whether their affiliation has lasted an academic lifetime or began the day before ARWU did the counting. The Shanghai rankers deftly dealt with this issue by simply not counting secondary affiliations in the new lists of highly cited researchers supplied by Thomson Reuters in 2014.
That, however, did not resolve the problem entirely. Those researchers have not stopped putting KAU as a secondary affiliation and even if they no longer affected the highly cited researchers indicator they could still help a lot with publications and papers in Nature and Science, both of which are counted in the ARWU. These part-timers – and some may not even be that – have already ensured that KAU, according to ARWU, is the top university in the world for publications in mathematics.
The issue of secondary affiliation is one that is likely to become a serious headache for rankers, academic publishers and databases in the next few years. Already, undergraduate teaching in American universities is dominated by a huge reserve army of adjuncts. It is not impossible that in the near future some universities may find it very easy to offer minimal part-time contracts to talented researchers in return for listing as an affiliation and then see a dramatic improvement in ranking performance.
ARWU’s problem with the highly cited researchers coincided with Thomson Reuters producing a new list and announcing that the old one would no longer be updated. Last year, Shanghai combined the old and new lists and this produced substantial changes for some universities. This year they continued with the two lists and there was relatively little movement in this indicator or in the overall rankings. But next year they will drop the old list altogether and just use the new one and there will be further volatility. ARWU have, however, listed the number of highly cited researchers in the old and new lists so most universities should be aware of what is coming.
Quacquarelli Symonds (QS) World University Rankings
The Quacquarelli Symonds (QS) World University Rankings have been regarded with disdain by many British and American academics although they do garner some respect in Asia and Latin America. Much of the criticism has been directed at the academic reputation survey which is complex, opaque and, judging from QS’s regular anti-gaming measures, susceptible to influence from universities. There have also been complaints about the staff student ratio indicator being a poor proxy for teaching quality and the bias of the citations per faculty indicator towards medicine and against engineering, the social sciences and the arts and humanities.
QS have decided to reform their citations indicator by treating the five large subject groups as contributing equally to the indicator score. In addition, QS omitted papers, most of them in physics, with a very large number of listed authors and averaged responses to the surveys over a period of five years in an attempt to make the rankings less volatile.
The result of all this was that some universities rose and others fell. Imperial College London went from 2nd to 8th while the London School of Economics rose from 71st to 35th. In Italy, the Polytechnics of Milan and Turin got a big boost while venerable universities suffered dramatic relegation. Two Indian institutions moved into the top two hundred, some Irish universities such as Trinity College Dublin, University College Dublin and University College Cork went down and some such as National University of Ireland Galway and the University of Limerick went up.
There has always been a considerable amount of noise in these rankings resulting in part from small fluctuations in the employer and academic surveys. In the latest rankings these combined with methodological changes to produce some interesting fluctuations. Overall the general pattern was that universities that emphasise the social sciences, the humanities and engineering have improved at the expense of those that are strong in physics and medicine.
Perhaps the most remarkable of this year’s changes was the rise of two Singaporean universities, the National University of Singapore (NUS) and Nanyang Technological University (NTU), to 12th and 13th place respectively, a change that has met with some scepticism even in Singapore. They are now above Yale, EPF Lausanne and King’s College London. While the changes to the citations component were significant, another important reason for the rise of these two universities was their continuing remarkable performance in the academic and employer surveys. NUS is in the top ten in the world for academic reputation and employer reputation with a perfect score of 100, presumably rounded up, in each. NTU is 52nd for the academic survey and 39th for employer with scores in the nineties for both.
Introducing a moderate degree of field normalisation was probably a smart move. QS were able to reduce the distortion resulting from the database’s bias to medical research without risking the multiplication of strange results that have plagued the THE citations indicator. They have not, however, attempted to reform the reputation surveys which continue to have a combined 50% weighting and until they do so these rankings are unlikely to achieve full recognition from the international academic community.
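The equal-weighting idea behind the reformed citations indicator can be sketched as follows; the numbers are invented and QS's actual normalisation is more elaborate than this:

```python
import numpy as np

# Invented citations-per-faculty figures for one university in each of the
# five QS subject areas, plus invented world averages for those areas.
citations_per_faculty = {
    "arts_humanities": 2.0,
    "engineering_technology": 15.0,
    "life_sciences_medicine": 60.0,
    "natural_sciences": 40.0,
    "social_sciences": 5.0,
}
world_average = {
    "arts_humanities": 1.5,
    "engineering_technology": 12.0,
    "life_sciences_medicine": 55.0,
    "natural_sciences": 35.0,
    "social_sciences": 4.0,
}

# Normalise within each area, then let every area count equally.
area_scores = [citations_per_faculty[a] / world_average[a] for a in citations_per_faculty]
indicator = np.mean(area_scores)
print(f"field-normalised citations indicator: {indicator:.2f}")
```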
Times Higher Education (THE) World University Rankings
The latest THE world rankings were published on September 30th and like QS, THE have done some tweaking of their methodology. They had broken with Thomson Reuters at the end of 2014 and started using data from Scopus, while doing the analysis and processing in-house. They were able to analyse many more papers and citations and conduct a more representative survey of research and postgraduate supervision. In addition they omitted multi-author and multi-cited papers and reduced the impact of the “regional modification”.
Consequently there was a large dose of volatility. The results were so different from those of 2014 that they seemed to reflect an entirely new system. THE did, to their credit, do the decent thing and state that direct comparisons should not be made to previous years. That, however, did not stop scores of universities and countries around the world from announcing their success. Those that had suffered have for the most part kept quiet.
There were some remarkable changes. At the very top, Oxford and Cambridge surged ahead of Harvard which fell to sixth place. University College Dublin, in contrast to the QS rankings, rose as did Twente and Moscow State, the Karolinska Institute and ETH Zurich.
On the other hand, many universities in France, Korea, Japan and Turkey suffered dramatic falls. Some of those universities had been participants in the CERN projects and so had benefitted in 2014 from the huge number of citations derived from their papers. Some were small and produced few papers so those citations were divided by a small number of papers. Some were located in countries that performed poorly and so got help from a “regional modification” (the citation impact score of the university is divided by the square root of the average citation impact score of the whole country). Such places suffered badly from this year’s changes.
It is a relief that THE have finally done something about the citations indicator and it would be excellent if they continued with further reforms such as fractional counting, reducing the indicator’s overall weighting, not counting self-citations and secondary affiliations and getting rid of the regional modification altogether.
Unfortunately, if the current round of reforms represents an improvement, and on balance it probably does, then the very different results of 2014 and before call into question THE’s repeated claims to be trusted, robust and sophisticated. If the University of Twente deserves to be in the top 150 this year then the 2014 rankings which had them outside the top 200 could not possibly be valid. If the Korea Advanced Institute of Science and Technology (KAIST) fell 66 places then either the 2015 rankings or those of 2014 were inaccurate, or they both were. Unless there is some sort of major restructuring such as an amalgamation of specialist schools or the shedding of inconvenient junior colleges or branch campuses, large organisations like universities simply do not and cannot change that much over the course of 12 months or less.
It would have been more honest, although probably not commercially feasible, for THE to declare that they were starting with a completely new set of rankings and to renounce the 2009-14 rankings in the way that they had disowned the rankings produced in cooperation with QS between 2004 and 2008. THE seem to be trying to trade on the basis of their trusted methodology while selling results suggesting that that methodology is far from trustworthy. They are of course doing just what a business has to do. But that is no reason why university administrators and academic experts should be so tolerant of such a dubious product.
These rankings also contain quite a few small or specialised institutions that would appear to be on the borderline of a reasonable definition of an “independent university with a broad range of subjects”: Scuola Normale Superiore di Pisa and Scuola Superiore Sant’Anna, both part of the University of Pisa system, Charité-Universitätsmedizin Berlin, an affiliate of two universities, St George’s, University of London, a medical school, Copenhagen Business School, Rush University, the academic branch of a private hospital in Chicago, the Royal College of Surgeons in Ireland, and the National Research Nuclear University (MEPhI) in Moscow, specialising in physics. Even if THE have not been too loose about who is included, the high scores achieved by such narrowly focussed institutions call the validity of the rankings into question.
Round University Rankings
In general the THE rankings have received a broad and respectful response from the international media and university managers, and criticism has largely been confined to outsiders and specialists. This is in marked contrast to the rankings released by a Russian organisation early in September. These are based entirely on data supplied by Thomson Reuters, THE’s data provider and analyst until last year. They contain a total of 20 indicators, including 12 out of the 13 in the THE rankings. Unlike THE, RUR do not bundle indicators together in groups so it is possible to tell exactly why universities are performing well or badly.
The RUR rankings are not elegantly presented but the content is more transparent than THE, more comprehensive than QS, and apparently less volatile than either. It is a strong indictment of the international higher education establishment that these rankings are ignored while THE’s are followed so avidly.
Best Global Universities
The second edition of the US News’s Best Global Universities was published at the beginning of October. The US News is best known for the ranking of American colleges and universities and it has been cautious about venturing into the global arena. These rankings are fairly similar to the Shanghai ARWU, containing only research indicators and making no pretence to measure teaching or graduate quality. The methodology avoids some elementary mistakes. It does not give too much weight to any one indicator, with none getting more than 12.5%, and measures citations in three different ways. For eight indicators log manipulation was done before the calculation of z-scores to eliminate outliers and statistical anomalies.
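A minimal sketch of that log-then-standardise step, assuming a natural log and ordinary z-scores (US News's exact transform may differ):

```python
import numpy as np

# Invented raw values for a right-skewed indicator such as total citations.
raw = np.array([120, 300, 450, 800, 1200, 2500, 6000, 25000], dtype=float)

# Take logs first so a single outlier does not dominate ...
logged = np.log(raw)

# ... then convert to z-scores (standard deviations from the mean).
z = (logged - logged.mean()) / logged.std()
print(np.round(z, 2))
```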
This year US News went a little way towards reducing the rankers’ obsession with citations by including conferences and books in the list of criteria.
Since they do not include any non-research indicators these rankings are essentially competing with the Shanghai ARWU and it is possible that they may eventually become the first choice for internationally mobile graduate students.
But at the moment it seems that the traditional media and higher education establishment have lost none of their fascination for the snakes and ladders game of THE and QS.
Thursday, October 29, 2015
Worth Reading 2
University ranking methodologies. An interview with Ben Sowter about the Quacquarelli Symonds World University Ranking
Alberto Baccini, Antonio Banfi, Giuseppe De Nicolao, Paola Galimberti
RT. A Journal on Research Policy & Evaluation 1 (2015)
Thursday, October 22, 2015
Even the Spectator reads the THE rankings
The influence of the global rankings, especially the Times Higher Education (THE) World University Rankings, appears to have no limits.
An article by Harry Mount in the Spectator describes the changing educational background of the leaders of the Labour Party. The top ranks used to be filled by graduates of Oxford (Denis Healey, Harold Wilson, Tony Blair, the Milibands, Ed Balls), Cambridge (Tristram Hunt) and Edinburgh (Gordon Brown).
Now they have been replaced by the alumni of Brunel and Birkbeck (John McDonnell), Sussex (Hilary Benn and Owen Smith), Nottingham (Michael Dugher), Westminster (Gloria De Piero) and Hull (Tom Watson and Rosie Winterton). Jeremy Corbyn lasted a year at the Polytechnic of North London, now London Metropolitan University.
Mount observes that Oxford was second in the latest edition of the THE world rankings, Hull 401st and London Metropolitan unranked.
It is only fair to point out that participation in the THE rankings is voluntary so maybe London Metropolitan could have been ranked if they had bothered to send in the data.
Not everyone is impressed by the THE rankings. "Tony Dark" comments
"Amusing to note the reference to the Times Higher Education world ranking: this allegedly authoritative table is produced by a handful of hacks, and their hired statisticians, from a journal so insignificant that hardly anyone even in universities reads it. The other allegedly authoritative table, emanating from an organisation called QS, is largely driven by another clique of journos who split from the Times Higher . And the heads of multi million pound universities quail before the wondrous listings generated by these miniscule cabals. A mad world, my masters."
Sunday, October 18, 2015
Going Up and Going Down
A revised version of a previous post has been posted at University World News. Readers are welcome to comment here.
Sunday, October 11, 2015
More on Politics and Rankings
The Higher Education Minister of Malaysia has praised the country's leading university, Universiti Malaya (UM) for getting into the top 150 of the Quacquarelli Symonds (QS) World University Rankings. He also noted that UM and other Malaysian universities had done well in the QS subject rankings.
The problem with relying on QS or Times Higher Education (THE) is that they are prone to volatility because of reliance on reputation surveys that can be unstable outside the top dozen or so universities. Things have been made worse this year by methodological changes. In the case of QS one change was to give more credit to citations in the humanities and social sciences thereby helping universities that publish mainly or entirely in English.
A more consistent view of university performance might be found in the Shanghai or US News rankings.
Rankings Become Big Politics
University performance in global rankings has become a favorite weapon of politicians around the world. Scotland's First Minister has noted that there are five Scottish universities in the top 200 of the Times Higher Education World University Rankings and that the Scottish government will "continue to work with our universities to make sure that they continue to be that fantastic success story".
She did not mention that there are only two Scottish universities in the top 200 of the Shanghai rankings and in the US News Best Global Universities.
Thursday, October 08, 2015
Tokyo Metropolitan University is Still in the Japanese Top Ten
Until recently Tokyo Metropolitan University had an advertisement with Times Higher Education proclaiming their perfect score of 100 for citations. This year the score fell to 72.2 and so now they just say "TMU ranks 9th among Japanese universities in the Times Higher Education World University Rankings 2015-2016".
I hope they got a discount.
Saturday, October 03, 2015
Where Should Rankers get Data From?
Times Higher Education (THE) have started publishing some basic university statistics on their rankings page: number of students, student-staff ratio, international students and female-male ratio.
Already some observers have noted that the data does not always match that found in institutional and official sources. I have heard that the number of students given for several German universities is significantly lower than that found in other sources.
The Online Citizen in Singapore has found that the island's two leading tertiary institutions, National University of Singapore and Nanyang Technological University, have claimed 34% and 33% international students respectively on the THE site although in 2013 the Minister of Education had claimed that the proportion of international students in Singaporean universities was only 16%.
There are several plausible and innocent explanations for this and similar discrepancies. It could be that part-time students, branch campuses, online students, permanent residents, research institutes, commuters living in Malaysia are counted in one set of figures but not the other.
But there is a serious and general problem with institutional data for university rankings. Even if everybody concerned is completely honest, there are many points at which ambiguous definitions, conflicting estimates, duplication or omission of data can undermine the accuracy of ranking indicators. In the case of Germany there might be some argument over whether doctoral candidates count as students or teaching and/or research staff.
QS used to have a validation hierarchy starting with national statistics, followed by institutional data, data from websites, old data, third party data and smart averages in that order. If it is still applied rigorously this would be the best approach.
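One way such a validation hierarchy might be expressed in code; the source names and their order follow the description above, while the function and example values are hypothetical:

```python
# Source priority following the hierarchy described above.
SOURCE_ORDER = ["national_statistics", "institutional_return", "website",
                "previous_year", "third_party", "smart_average"]

def resolve(values_by_source):
    """Return (value, source) from the highest-priority source that has data."""
    for source in SOURCE_ORDER:
        value = values_by_source.get(source)
        if value is not None:
            return value, source
    return None, None

# No national figure for student numbers, so fall back to the institution's own return.
print(resolve({"institutional_return": 31250, "website": 30000}))
```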
I understand that both QS and THE reserve the right to overrule institutional data although how severe they are I do not know. THE have a particularly difficult task since they allow universities to opt in or out as they please. Should THE be too strict about the data supplied a university might simply decide not to be ranked for a year.
On balance, it is probably good sense for ranking organisations to rely on publicly accessible data when they can and to minimise input from universities.
Friday, October 02, 2015
Very Interesting Rankings from Times Higher Education
The latest edition of the Times Higher Education (THE) World University Rankings has just been published, along with a big dose of self-flattery and congratulations to the winners of what is beginning to look more like a lottery than an objective exercise in comparative assessment.
The background to the story is that at the end of last year THE broke with their data suppliers Thomson Reuters (TR) and announced the dawn of a new era of transparency and accountability.
There were quite a few things wrong with the THE rankings, especially with the citations indicator which supposedly measured research impact and was given nearly a third of the total weighting. This meant that THE was faced with a serious dilemma. Keeping the old methodology would be a problem but radical reform would raise the question of why THE would want to change what they claimed was a uniquely trusted and sophisticated methodology with carefully calibrated indicators.
It seems that THE have decided to make a limited number of changes but to postpone making a decision about other issues.
They have broadened the academic reputation survey, sending out forms in more languages and getting more responses from outside the USA. Respondents are now drawn from those with publications in the Scopus database, much larger than the Web of Science, as was information about publications and citations. In addition, THE have excluded 649 “freakish” multi-author papers from their calculations and diluted the effect of the regional modification that boosted the scores in the citations indicator of low performing countries.
These changes have led to implausible fluctuations with some institutions rising or falling dozens or hundreds of places. Fortunately for THE, the latest winners are happy to trumpet their success and the losers so far seem to have lapsed into an embarrassed silence.
When they were published on the 30th of September the rankings provided lots of headline fodder about who was up or down.
The Irish Times announced that the rankings showed Trinity College Dublin had fallen while University College Dublin was rising.
In the Netherlands the University of Twente bragged about its “sensationally higher scores”.
Study International asserted that “Asia Falters” and that Britain and the US were still dominant in higher education.
The London Daily Telegraph claimed that European universities were matching the US.
The Hindu found something to boast about by noting that India was at last the equal of co-BRICS member Brazil.
Russian media celebrated the remarkable achievement of Lomonosov Moscow State University in rising 35 places.
And, of course, the standard THE narrative was trotted out again. British universities are wonderful but they will only go on being wonderful if they are given as much money as they want and are allowed to admit as many overseas students as they want.
The latest rankings support this narrative of British excellence by showing Oxford and Cambridge overtaking Harvard, which was pushed into sixth place. But is such a claim believable? Has anything happened in the labs or lecture halls at any of those places between 2014 and 2015 to cause such a shift?
In reality, what probably happened was that the Oxbridge duo were not actually doing anything better this year but that Harvard’s eclipse came from a large drop from 92.9 to 83.6 points for THE’s composite teaching indicator. Did Harvard’s teaching really deteriorate over twelve months? It is more likely that there were relatively fewer American respondents in the THE survey but one cannot be sure because there are four other statistics bundled into the indicator.
While British universities appeared to do well, French ones appeared to perform disastrously. The École Normale Supérieure recorded a substantial gain going from 78th to 54th place but every other French institution in the rankings fell, sometimes by dozens of places. École Polytechnique went from 61st place to 101st, Université Paris-Sud from 120th to 188th, the University of Strasbourg from the 201-225 band to 301-350, in every case because of a substantial fall in the citations indicator. If switching to Scopus was intended to help non-English speaking countries it did not do France any good.
Meanwhile, the advance of Asia has apparently come to an end or gone into screeching reverse. Many Asian universities slipped down the ladder although the top Chinese schools held their ground. Some Japanese and Korean universities fell dozens of places. The University of Tokyo went from 23rd to 43rd place, largely because of a fall in the citations indicator from 74.7 points to 60.9, and the University of Kyoto from 59th to 88th with another drop in the score for citations. Among the casualties was Tokyo Metropolitan University which used to advertise its perfect score of 100 for citations on the THE website. This year, stripped of the citations for mega-papers in physics, its citation score dropped to a rather tepid 72.2.
The Korean flagships have also foundered. Seoul National University fell 35 places and the Korea Advanced Institute of Science and Technology (KAIST) 66, largely because of a decline in the scores for teaching and research. Pohang University of Science and Technology (POSTECH) fell 50 places, losing points in all indicators except income from industry.
The most catastrophic fall was in Turkey. There were four Turkish universities in the top 200 last year. All of them have dropped out. Several Turkish universities contributed to the Large Hadron Collider project with its multiple authors and multiple citations and they also benefited from producing comparatively few research papers and from the regional modification, which gave them artificially high scores for the citations indicator in 2014 but not this year.
The worst case was Middle East Technical University which had the 85th place in 2014, helped by an outstanding score of 92 for citations and reasonable scores for the other indicators. This year it was in the 501-600 band with reduced scores for everything except Industry Income and a very low score of 28.8 for citations.
The new rankings appear to have restored the privilege given to medical research. In the upper reaches we find St George’s, University of London, a medical school, which according to THE is the world's leading university for research impact, Charité - Universitätsmedizin Berlin, a teaching hospital affiliated to Humboldt University and the Free University of Berlin, and Oregon Health and Science University.
It also appears that THE's methodology continues to give an undeserved advantage to small or specialized institutions such as Scuola Superiore Sant’Anna in Pisa, which does not appear to be a truly independent university, the Copenhagen Business School, and Rush University in Chicago, the academic branch of a private hospital.
These rankings appear so far to have got a good reception in the mainstream press, although it is likely that before long we will hear some negative reactions from independent experts and from Japan, Korea, France, Italy and the Middle East.
THE, however, have just postponed the hard decisions that they will eventually have to make.
Monday, September 28, 2015
Japanese Barbarians Out to Crush Humanities!
The international education media has been getting very excited recently about what appeared to be an extraordinary act of cultural vandalism by the Japanese Ministry of Education.
It seems that the ministry has been behaving like the Taliban on a rampage through the Louvre and has ordered public universities to stop teaching the humanities and social sciences.
Noah Smith, an Assistant Professor of Finance at Stony Brook University SUNY and a freelance writer, wrote that public universities had been ordered to stop teaching social sciences, humanities and law, although apparently the "order" was non-binding.
Meanwhile Takamitsu Sawa announced in the Japan Times that the humanities were under attack and that someone on the ministry's panel of learned persons had said that students should study accounting software instead of Samuelson's Economics and translation instead of Shakespeare.
Eventually, the Financial Times revealed that that the ministry had been misinterpreted and that the abolition of the humanities referred to a number of unneeded teacher training programs. This was supported by an authoritative comment by a former government official.
So it seems that Samuelson and Shakespeare are safe from the rampage of utilitarian barbarians.
Perhaps Japanese universities can now adopt the best practices of Columbia and the University at Buffalo for the teaching of art.
Sunday, September 27, 2015
Latest on the THE Rankings Methodology
Times Higher Education (THE) have now officially announced the methodology of next week's World University Rankings. There are some changes although major problems are still not addressed.
First, THE is now getting data from Scopus rather than Thomson Reuters. The Scopus database is more inclusive -- it covers 22,000 publications compared to 12,000 -- and includes more papers from non-English speaking countries so this may give an advantage to some universities in Eastern Europe and Asia.
Second, THE has tried to make its reputation survey more inclusive, making forms available in an additional six languages and reducing the bias to the USA.
Third, 649 papers with more than 1,000 listed authors, mainly in physics, will not be counted for the citations indicator.
Fourth, the citations indicator will be divided into two with equal weighting. One half will be with and one half without the "regional modification" by which the overall citations impact score of a university is divided by the square root of the score for the country in which it is located. In previous editions of these rankings this modification gave a big boost to universities in low scoring countries such as Chile, India, Turkey and Russia.
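A small sketch of the modification and the new 50/50 blend, with invented numbers:

```python
import math

# Invented field-normalised citation impact scores (world average = 1.0).
university_impact = 0.9
country_average_impact = 0.5   # a country with weak overall citation impact

# The regional modification divides by the square root of the country average,
# which flatters universities in low-scoring national systems.
modified = university_impact / math.sqrt(country_average_impact)

# Under the new scheme the indicator blends modified and unmodified halves equally.
blended = 0.5 * university_impact + 0.5 * modified
print(f"unmodified {university_impact:.2f}, modified {modified:.2f}, blended {blended:.2f}")
```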
It is likely that institutions such as Bogazici University, Panjab University, Federico Santa Maria Technical University and Universite Cadi Ayyad, which have benefited from contributing to mega-papers such as those emanating from the Large Hadron Collider project, will suffer from the exclusion of these papers from the citations indicator. Their pain will be increased by the dilution of the regional modification.
It is possible that such places may get some compensation in the form of more responses in the reputation survey or higher publication counts in the Scopus database but that is far from certain. I suspect that several university administrators are going to be very miserable next Thursday.
There is something else that should not be forgotten. The scores published by THE are not raw data but standardised scores derived from standard deviations and means. Since THE are including more universities in this year's rankings and since most of them are likely to have low scores for most indicators it follows that the overall mean scores of ranked universities will fall. This will have the effect of raising the standardised scores of the 400 or so universities that score above the mean. It is likely that this effect will vary from indicator to indicator and so the final overall scores will be even more unpredictable and volatile.
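A quick simulation illustrates the direction of this effect, assuming simple z-score standardisation as described (THE's actual scaling may be more elaborate, and all the scores here are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# 400 established universities plus 400 weaker new entrants (invented scores).
old_cohort = rng.normal(50, 10, size=400)
new_entrants = rng.normal(30, 8, size=400)

def z(x, pool):
    return (x - pool.mean()) / pool.std()

raw_score = 70.0
print(f"z-score against the original 400: {z(raw_score, old_cohort):.2f}")
print(f"z-score against all 800:          "
      f"{z(raw_score, np.concatenate([old_cohort, new_entrants])):.2f}")
# Adding low scorers drags the mean down, so the same raw score standardises higher.
```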
Tuesday, September 22, 2015
Looking Inside the Engine: The Structure of the Round University Rankings
Many of those interested in international university rankings have been frustrated by the lack of transparency in the Quacquarelli Symonds (QS) and the Times Higher Education (THE) rankings.
The QS rankings assign a fifty per cent weighting to two surveys collected from a variety of channels -- I think six for the employer survey and five for the academic survey -- with different and fluctuating response rates.
The THE rankings have lumped five indicators in a Teaching cluster, three in a Research cluster and three in an International cluster. So how can anyone figure out just what is causing a university to rise or fall in the rankings?
A major step forward in transparency has now come with the recent publication of the Round University Rankings (RUR) by a Russian organisation that uses data from Thomson Reuters (TR), who provided the data for the Times Higher Education world and regional rankings from 2009 until the end of last year.
RUR have published the separate scores for all of the indicators. They have retained 12 out of the 13 indicators used in the THE rankings from 2011 to 2014, dropping income from industry as a percentage of research income, and added another eight.
I doubt that RUR could afford to pay TR very much for the data and I suspect that TR's motive in allowing the dissemination of such a large amount of information is to preempt THE or anyone else trying to move upstream in the drive to monetise data.
It is now possible to see whether the various indicators are measuring the same thing and hence are redundant, whether and to what extent they are associated with other indicators and whether there is any link between markers of input and markers of output.
Here is a crude analysis of a very small sample of sixteen universities, one in every fifty in the RUR rankings, starting with Harvard and ending with the Latvia Transport and Telecom Institute. I hope that a more detailed analysis of the entire corpus can be done in a few weeks.
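For anyone who wants to try the same kind of exercise, the sketch below shows how the pairwise correlations can be computed; the six universities and their scores are invented stand-ins, not the actual sample used here.

```python
import pandas as pd

# Invented indicator-group scores for six hypothetical universities.
scores = pd.DataFrame({
    "Teaching":                 [95, 80, 62, 55, 40, 30],
    "Research":                 [97, 75, 60, 50, 45, 25],
    "International Diversity":  [70, 85, 40, 60, 30, 35],
    "Financial Sustainability": [92, 78, 55, 48, 42, 28],
})

# Pairwise Pearson correlations between the indicator groups.
print(scores.corr(method="pearson").round(3))
```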
The combined indicator groups
Three groups, Teaching, Research, Financial Sustainability, are fairly closely associated with one another. The teaching cluster correlates .634 with Research and .735 with Financial Sustainability. Research correlates .702 with Financial Sustainability.
The International Diversity group appears to be the odd one out here. It correlates significantly with Research (.555) but not with Teaching or Financial Sustainability. This suggests that internationalisation, at least in the form of recruiting more international students, may not always be a strong marker of quality.
The Reputation Indicators
Looking at the three reputation indicators, teaching, international teaching and research, we can see that for practical purposes they are measuring the same thing. The correlation between the Research Reputation and Teaching Reputation scores is .986 and between Research Reputation and International Teaching Reputation .925. Between Teaching Reputation and International Teaching Reputation it is .941.
Alex Usher of Higher Education Strategy Associates has claimed a correlation of .99 between teaching and research reputation scores in the THE rankings up to 2014. The figures from the RUR rankings are a bit lower but essentially the reputation indicators are measuring the same thing, whatever it is, and there is no need to count them more than once.
Other Unnecessary Indicators
Turning to the staffing indicators, the correlation between Academic Staff per Student and Academic Staff per Bachelor Degrees is a high .834. The latter, which has not appeared in any previous ranking, could be omitted without a significant loss of information.
There is an extremely high correlation, .989, between Citations per Academic and Research Staff and Papers per Academic and Research Staff. It sounds rather counter-intuitive, but it seems that as a measure of research productivity one is as good as the other, at least when dealing with more than a few hundred elite universities.
There is a correlation of .906 between Institutional Income per Academic Staff and Institutional Income per Student.
It would appear, then, that the THE rankings of 2011-2014, with 13 indicators, had already passed the point at which additional indicators become redundant and provide no further information.
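One way of spotting such redundancy mechanically is sketched below; the indicator names and scores are invented and the .9 cut-off is an arbitrary choice, so this is only an illustration of the idea.

```python
import itertools
import pandas as pd

def redundant_pairs(scores, threshold=0.9):
    # Return pairs of indicators whose Pearson correlation exceeds the threshold.
    corr = scores.corr(method="pearson")
    return [(a, b, round(corr.loc[a, b], 3))
            for a, b in itertools.combinations(scores.columns, 2)
            if abs(corr.loc[a, b]) > threshold]

# Invented scores: papers and citations per staff move almost in lockstep.
scores = pd.DataFrame({
    "Papers per Staff":       [9.1, 7.2, 5.0, 3.1, 1.8, 0.9],
    "Citations per Staff":    [8.8, 7.0, 5.2, 3.0, 1.9, 1.0],
    "International Students": [40, 15, 30, 10, 25, 5],
})
print(redundant_pairs(scores))   # flags only the papers/citations pair
```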
Inputs and Outputs
There are some clues about the possible relationship between indicators that could be regarded as inputs and those that might be counted as outputs.
Academic Staff per Student does not correlate significantly with Teaching Reputation (.350, sig. .183). It is positively and significantly associated only with Doctoral Degrees per Bachelor Degrees (.510). Its correlation with the overall score is, however, quite high and significant at .552.
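For readers wondering where the significance figures come from, the sketch below shows how a p-value follows from a correlation coefficient and the number of cases; with sixteen universities it reproduces, approximately, the .350 and .183 quoted above.

```python
from math import sqrt
from scipy import stats

def pearson_p_value(r, n):
    # Two-tailed p-value for a Pearson correlation r based on n cases.
    t = r * sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

print(round(pearson_p_value(0.350, 16), 3))   # about 0.18: not significant
print(round(pearson_p_value(0.350, 100), 3))  # the same r with 100 cases would be highly significant
```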
There is some evidence that a diverse international faculty might have a positive impact on research output and quality. The correlations between International Faculty and Normalised Citation Impact, Papers per Academic and Research Staff and the overall score are positive and significant. On the other hand, the correlations of international collaboration and of international students with the overall score are weak and insignificant.
Money seems to help, at least as far as research is concerned. There are moderately high and significant correlations between Institutional Income per Academic Staff and Citations per Academic and Research Staff, Papers per Academic and Research Staff, Normalised Citation Impact and the overall score.
Research Income per Academic Staff correlates highly and significantly with Teaching Reputation, International Teaching Reputation, Research Reputation, Citations per Academic and Research Staff, Papers per Academic and Research Staff, Normalised Citation Impact and the overall score.
Saturday, September 19, 2015
Who's Interested in the QS World University Rankings?
And here are the first ten results (excluding this blog and the QS page) from a Google search for this year's QS world rankings. Compare with ARWU and RUR. Does anyone notice any patterns?
Canada falls in World University Rankings' 2015 list
UBC places 50th, SFU 225th in QS World University Rankings