
Sunday, June 13, 2021

The Remarkable Revival of Oxford and Cambridge


There is nearly always a theme for the publication of global rankings. Often it is the rise of Asia, or parts of it. For a while it was the malign grasp of Brexit which was crushing the life out of British research or the resilience of American science in the face of the frenzied hostility of the great orange beast. This year it seems that the latest QS world rankings are about the triumph of Oxford and other elite UK institutions and their leapfrogging their US rivals. Around the world, quite a few other places are also showcasing their splendid achievements.

In the recent QS rankings Oxford has moved up from overall fifth to second place and Cambridge from seventh to third while University College London, Imperial College London, and Edinburgh have also advanced. No doubt we will soon hear that this is because of transformative leadership, the strength that diversity brings, working together as a team or a family, although I doubt whether any actual teachers or researchers will get a bonus or a promotion for their contributions to these achievements.

But was it leadership or team spirit that pushed Oxford and Cambridge into the top five? That is very improbable. Whenever there is a big fuss about universities rising or falling significantly in the rankings in a single year it is a safe bet that it is the result of an error, the correction of an error, or a methodological flaw or tweak of some kind.

Anyway, this year's Oxbridge advances had as much to do with leadership, internationalization, or reputation as goodness had with Mae West's diamonds. It was entirely due to a remarkable rise for both places in the score for citations per faculty: Oxford from 81.3 to 96, and Cambridge from 69.2 to 92.1. There was no such change for any of the other indicators.

Normally, there are three ways in which a university can rise in QS's citations indicator. One is to increase the number of publications while maintaining the citation rate. Another is to improve the citation rate while keeping output constant. The third is to reduce the number of faculty physically or statistically.

None of these seem to have happened at Oxford and Cambridge. The number of publications and citations has been increasing but not sufficiently to cause such a big jump. Nor does there appear to have been a drastic reduction of faculty in either place.

In any case it seems that Oxbridge is not alone in its remarkable progress this year. For citations, ETH Zurich rose from 96.4 to 99.8, the University of Melbourne from 75 to 89.7, the National University of Singapore from 72.9 to 90.6, and Michigan from 58 to 70.5. It seems that at the top levels of these rankings nearly everybody is rising except for MIT, which already has the top score of 100, although it is noticeable that the increases get smaller the nearer we get to the top.

It is theoretically possible that this might be the result of a collapse in the raw citation scores of front runner MIT, which would raise everybody else's scores as long as it remained at the top, but there is no evidence of either a massive collapse in citations or a massive expansion of research and teaching staff.

But then as we go to the other end of the ranking we find universities' citations scores falling: University College Cork from 23.4 to 21.8, Universitas Gadjah Mada from 1.7 to 1.5, UCSI University Malaysia from 4.4 to 3.6, and the American University in Cairo from 5.7 to 4.2.

It seems there is a bug in the QS methodology. The indicator scores published by QS are not raw data but standardized scores based on standard deviations from the mean: the mean score is set at fifty and the top score at one hundred. Over the last few years the number of ranked universities has been increasing, and the new ones tend to perform less well than the established ones, especially for citations. In consequence, the mean number of citations per faculty has declined, and universities scoring above the mean therefore see an increase in their standardized scores, which are derived from the deviation from the mean. If this interpretation is incorrect I'm very willing to be corrected.
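
To make the mechanism concrete, here is a minimal Python sketch of the kind of scaling described above, using invented citations-per-faculty figures rather than real QS data. Raw values are turned into z-scores, the mean is anchored at fifty and the top score at one hundred; adding a batch of weaker newcomers to the pool is then enough to lift the published scores of every above-average university even though their raw numbers are unchanged.

import statistics

def standardize(raw):
    # Toy version of the scaling described above: z-scores rescaled so that
    # the mean maps to 50 and the highest score maps to 100.
    mean = statistics.mean(raw)
    sd = statistics.pstdev(raw)
    z = [(x - mean) / sd for x in raw]
    top = max(z)
    return [50 + 50 * (value / top) for value in z]

# Invented citations-per-faculty figures for an established cohort...
established = [120, 90, 60, 40, 30, 25, 20, 15]
# ...and the same cohort after weaker, newly ranked universities join the pool.
expanded = established + [8, 6, 5, 4, 3, 2]

before = standardize(established)
after = standardize(expanded)[:len(established)]
for raw, b, a in zip(established, before, after):
    print(f"raw {raw:>3}: {b:5.1f} -> {a:5.1f}")

In this toy example the second-placed institution's published score rises by roughly five points without any change in its raw output, which is essentially the pattern described above for Oxford and Cambridge.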

This has an impact on the relative positions of Oxbridge and the leading American universities. Oxford and Cambridge rely on their scores in the academic and employer surveys and on international faculty and students to stay in the top ten. Compared to Harvard, Stanford and MIT they do not perform well for quantity or quality of research. So the general inflation of citations scores gives them more of a boost than the US leaders, and their total scores rise.

It is likely that Oxford and Cambridge's moment of glory will be brief, since in the next couple of years QS will have to do some recentering to prevent citation indicator scores bunching up in the high nineties. The two universities will then fall again, although that will probably not be attributed to a sudden collapse of leadership or a failure to work as a team.

It will be interesting to see if any of this year's rising universities will make an announcement that they don't really deserve any praise for their illusory success in the rankings.



Thursday, March 10, 2011

The THE Reputation Rankings

Times Higher Education have constructed a reputation ranking from the data collected for last year's World University Rankings. There is a weighting of two thirds for research and one third for postgraduate teaching. The top five are:

1.  Harvard
2.  MIT
3.  Cambridge
4.  UC Berkeley
5.  Stanford

Scores are given only for the top fifty universities. Then another fifty are sorted in bands of ten without scores. Evidently, the number of responses favouring universities outside the top 100 was so small that it was not worth listing.

This means that the THE reputational survey reveals significant differences between Harvard and MIT or between Cambridge and Oxford, but it would be of no help to those wondering whether to study or work at the University of Cape Town or the University of KwaZulu-Natal, or Trinity College Dublin or University College Dublin.

The scores for research reputation (top fifty for total reputation scores only) show a moderate correlation with the THE citations indicator (.422) and, perhaps surprisingly, a higher correlation with the citations per faculty score on the QS World University Rankings of 2010 (.538).

Looking at the QS academic survey, which asked only about research, we can see that there was an insignificant correlation of .213 between the QS survey scores and the score for citations per faculty in the QS rankings (THE reputation ranking top 50 only). However, there was a higher correlation of .422 between the QS survey and the THE citations indicator, the same as that between the THE research reputation scores and the THE citations indicator.

Comparing the two research surveys with a third party, the citations indicator in the Scimago 2010 rankings, the THE research reputation survey did better with a correlation of .438 compared to an insignificant .188 for the QS academic survey.

This seems to suggest that the THE reputational survey does a better job of differentiating between the world's elite universities. But once we leave the top 100 it is perhaps less helpful, and there may still be a role for the QS rankings.

Sunday, June 18, 2017

Comparing the THE and QS Academic Reputation Surveys

Times Higher Education (THE) has just published its 2017 reputation rankings which include 100 universities. These are based on a survey distributed between January and March of this year and will be included, after standardisation, in the 2017-18 (or 2018) World University Rankings scheduled for publication in a few months. In the forthcoming world rankings the reputation survey will be divided into two metrics in the research and teaching indicator groups, with a combined weighting of 33 percent. The survey asked about research and postgraduate teaching but since the correlation between these two questions is very high there is effectively only one indicator.

The QS world rankings released last week included scores derived from two surveys, one of academics with a 40% weighting and one of employers with 10%. The academic survey was concerned only with research.

The methodology of the THE survey is relatively simple. The respondents are drawn from the database of researchers with publications in Scopus indexed journals, in other words those who get to be listed as corresponding author. THE claims that this makes them experienced senior researchers although in many parts of the world being a member or leader of a research team often has more to do with politics than merit.

In contrast, the QS methodology has changed quite a lot over the last few years. It began with scouring the mailing lists of World Scientific, a Singapore based academic publisher with links to Imperial College London, then adding various other channels including lists supplied by institutions and sign up facilities for potential respondents. The result is a survey that appears more inclusive than THE's with more respondents from outside the elite but one whose validity may be rather suspect.

The THE ranking found that there were six super-brand universities that stood out from everyone else, Harvard, MIT, Stanford, Cambridge, Oxford, and Berkeley. There was a big gap between Berkeley and number seven Princeton and then the long smooth slope continues. 

After that, the ranking is dominated by English speaking universities, with the USA contributing 42, the UK 10, Canada 3 and Australia 3.  East Asia and the Chinese diaspora (Hong Kong, Taiwan and Singapore) are fairly well represented, while South and Central Asia, the Middle East and Africa are absent.

For any survey a great deal depends on how the forms are distributed. Last year, the THE survey had a lot more responses from the social sciences, including economics and business studies, and fewer from the arts and the humanities, and that contributed to some Asian universities rising and some British ones falling.

Such falls are typically attributed in the education establishment media to anxiety about the looming horrors of Brexit, the vicious snatching of research funds and the rising tide of hostility to international students.

This year British universities did a bit better in the THE reputation ranking, with five going up, three staying put and three going down. No doubt we will soon hear about the invigorating effects of Brexit and the benefits of austerity. Perhaps it might also have something to do with the proportion of survey responses from the arts and humanities going up from 9% to 12.5%, something that would surely benefit UK universities.

The QS reputation indicator has the same universities in the top six but not in quite the same order: Cambridge, fourth in THE, is second in the QS indicator. After that it starts looking very different. Number seven is the University of Tokyo, which THE puts in 11th place for academic reputation. Other Asian universities do much better in the QS indicator. The National University of Singapore is 11th (27th in THE), Nanyang Technological University Singapore is 50th (THE 81-90 band), Peking University is 14th (THE 17th), and Chulalongkorn University Thailand is 99th (not in the THE top 100).

It is noticeable that Latin American universities such as  the University of Sao Paulo, the University of Buenos Aires and the Pontifical Catholic University of Chile get a higher placing in the QS indicator than they do in the THE ranking as do some Southern European universities such as Barcelona, Sapienza and Bologna.

The THE reputation ranking gives us a snapshot of the current views of the world's academic elite and probably underestimates the rising universities of Greater China and Korea. QS cast their nets further and have probably caught a few of tomorrow's world class institutions although I suspect that the Latin American high fliers, apart from Sao Paulo, are very overrated.




Monday, September 19, 2016

Update on previous post

The reputation data used by THE in the 2016 world rankings, for which the world is breathlessly waiting, is that which was used in their reputation rankings  released last May and collected between January and March.

Therefore, the distribution of responses from disciplinary groups this year was 9% for the arts and humanities and 15% for social sciences and 13% for business (28% for the last two combined). In 2015 it was 16% for the arts and humanities and 19% for the social sciences (which then included business).

Since UK universities are relatively strong in the humanities and Asian universities relatively strong in business studies the result of this was a shift in the reputation rankings away from the UK and towards Asian universities. Oxford fell from 3rd (score 80.4) to 5th (score 69.1) in the reputation rankings and Bristol and Durham dropped out of the top 100 while Tsinghua University rose from 26th place to 18th, Peking University from 32nd to 21st and Seoul National University from 51-60 to 45th.

In the forthcoming world rankings British universities (although threatened by Brexit) ought to do better because of the inclusion of books in the publications and citations indicators and certain Asian universities, but by no means all, may do better because their citations for mega-projects will be partially restored.

Notice that THE have also said that this year they will combine the reputation scores for 2015 and 2016, something that is unprecedented. Presumably this will reduce the fall of UK universities in the reputation survey. Combined with the inclusion of books in the database, this may mean that UK universities may not fall this year and may even go up a bit (ATBB).  

Wednesday, September 07, 2016

The shadow of Brexit falls across the land


The western chattering and scribbling classes sometimes like to reflect on their superiority to the pre-scientific attitudes of the local peasantry, astrology, nationalism and religion and things like that. But it seems that the credentialled elite of Britain are now in the grip of a great fear of an all pervading spirit called Brexit whose malign power is unlimited in time and space.

Thus the Independent tells us that university rankings (QS in this case) show that "post Brexit uncertainty and long-term funding issues" have hit UK higher education.

The Guardian implies that Brexit has something to do with the decline of British universities in the rankings without actually saying so.

"British universities have taken a tumble in the latest international rankings, as concern persists about the potential impact of Brexit on the country’s higher education sector. "

Many British universities have fallen in the QS rankings this year but the idea that Brexit has anything to do with it is nonsense. The Brexit vote was on June 23rd, well after QS's deadlines for submitting respondents for the reputation surveys and updating institutional data. The citations indicator refers to the period 2011-2015.

The belief that rankings reveal the dire effects of funding cuts and immigration restrictions is somewhat more plausible but fundamentally untenable.

Certainly, British universities have taken some blows in the QS rankings this year. Of the 18 universities in the top 100 in 2015 two are in the same place this year, two have risen and 14 have fallen. This is associated with a general decline in performance in the academic reputation indicator which accounts for 40% of the overall score.

Of those 18 universities three, Oxford, Cambridge and Edinburgh, hold the same rank in the academic reputation indicator, one, King's College London, has risen and fourteen are down.

The idea that the reputation of British universities is suffering because survey respondents have heard that the UK government is cutting spending or tightening up on visa regulations is based on some unlikely assumptions about how researchers go about completing reputation surveys.

Do researchers really base their assessment of research quality on media headlines, often inaccurate and alarmist? Or do they make an honest assessment of performance over the last few years or even decades? Or do they vote according to their self interest, nominating their almae matres or former employers?

I suspect that the decline of British universities in the QS reputation indicator has little to do with perceptions about British universities and a lot more to do with growing sophistication about and interest in rankings in the rest of the world, particularly in East Asia and maybe parts of continental Europe.






Thursday, September 18, 2014

QS World University Rankings 2014



Publisher

QS (Quacquarelli Symonds)



Scope

Global. 701+ universities.


Top Ten


Place   University
1       MIT
2=      Cambridge
2=      Imperial College London
4       Harvard
5       Oxford
6       University College London
7       Stanford
8       California Institute of Technology (Caltech)
9       Princeton
10      Yale



Countries with Universities in the Top Hundred


Country         Number of Universities
USA             28
UK              19
Australia       8
Netherlands     7
Canada          5
Switzerland     4
Japan           4
Germany         3
China           3
Korea           3
Hong Kong       3
Denmark         2
Singapore       2
France          2
Sweden          2
Ireland         1
Taiwan          1
Finland         1
Belgium         1
New Zealand     1



Top Ranked in Region


North America                 MIT
Africa                        University of Cape Town
Europe                        Cambridge and Imperial College London
Latin America                 Universidade de Sao Paulo
Asia                          National University of Singapore
Central and Eastern Europe    Lomonosov Moscow State University
Arab World                    King Fahd University of Petroleum and Minerals
Middle East                   Hebrew University of Jerusalem



Noise Index

In the top 20, this year's QS world rankings are less volatile than the previous edition but more so than the THE rankings or Shanghai ARWU. The top 20 universities in 2013 rose or fell an average of 1.45 places. The most remarkable change was the rise of Imperial College and Cambridge to second place behind MIT and ahead of Harvard.


Ranking                                                   Average place change of universities in the top 20
QS World Rankings 2013-2014                               1.45
QS World Rankings 2012-2013                               1.70
ARWU 2013-2014                                            0.65
Webometrics 2013-2014                                     4.25
Center for World University Ranking (Jeddah) 2013-2014    0.90
THE World Rankings 2012-2013                              1.20


Looking at the top 100 universities, the  QS rankings  are little different from last year. The average university in the top 100 moved up or down 3.94 places compared to 3.97 between 2012 and 2013. These rankings are more reliable than this year's ARWU, which was affected by the new lists of highly cited researchers, and last year's THE rankings.

Ranking                                                   Average place change of universities in the top 100
QS World Rankings 2013-2014                               3.94
QS World Rankings 2012-2013                               3.97
ARWU 2013-2014                                            4.92
Webometrics 2013-2014                                     12.08
Center for World University Ranking (Jeddah) 2013-2014    10.59
THE World Rankings 2012-2013                              5.36
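
For readers who want to check or extend these figures, the noise index used here is simply the mean absolute change in position for the universities that were in a given year's top 20 or top 100. A minimal sketch of the calculation, using invented ranks rather than the actual ranking data:

def noise_index(previous, current, top_n=20):
    # Mean absolute change in rank for the universities in the top_n of the
    # earlier edition; both arguments map university name to rank.
    cohort = [u for u, rank in previous.items() if rank <= top_n]
    return sum(abs(previous[u] - current[u]) for u in cohort) / len(cohort)

# Invented example: one university holds its place, three swap around.
ranks_2013 = {"A": 1, "B": 2, "C": 3, "D": 4}
ranks_2014 = {"A": 1, "B": 4, "C": 2, "D": 3}
print(noise_index(ranks_2013, ranks_2014, top_n=4))  # (0 + 2 + 1 + 1) / 4 = 1.0

A real calculation would also need a rule for universities that drop out of the ranked list altogether, which is ignored here.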




Methodology (from topuniversities)

1. Academic reputation (40%)

Academic reputation is measured using a global survey, in which academics are asked to identify the institutions where they believe the best work is currently taking place within their field of expertise.
For the 2014/15 edition, the rankings draw on almost 63,700 responses from academics worldwide, collated over three years. Only participants’ most recent responses are used, and they cannot vote for their own institution. Regional weightings are applied to counter any discrepancies in response rates.
The advantage of this indicator is that it gives a more equal weighting to different discipline areas than research citation counts. Whereas citation rates are far higher in subjects like biomedical sciences than they are in English literature, for example, the academic reputation survey weights responses from academics in different fields equally.
It also gives students a sense of the consensus of opinion among those who are by definition experts. Academics may not be well positioned to comment on teaching standards at other institutions, but it is well within their remit to have a view on where the most significant research is currently taking place within their field.

2. Employer reputation (10%)

The employer reputation indicator is also based on a global survey, taking in almost 28,800 responses for the 2014/15 edition. The survey asks employers to identify the universities they perceive as producing the best graduates. This indicator is unique among international university rankings.
The purpose of the employer survey is to give students a better sense of how universities are viewed in the job market. A higher weighting is given to votes for universities that come from outside of their own country, so it’s especially useful in helping prospective students to identify universities with a reputation that extends beyond their national borders. 

3. Student-to-faculty ratio (20%)

This is a simple measure of the number of academic staff employed relative to the number of students enrolled. In the absence of an international standard by which to measure teaching quality, it provides an insight into the universities that are best equipped to provide small class sizes and a good level of individual supervision.

4. Citations per faculty (20%)

This indicator aims to assess universities’ research output. A ‘citation’ means a piece of research being cited (referred to) within another piece of research. Generally, the more often a piece of research is cited by others, the more influential it is. So the more highly cited research papers a university publishes, the stronger its research output is considered.
QS collects this information using Scopus, the world’s largest database of research abstracts and citations. The latest five complete years of data are used, and the total citation count is assessed in relation to the number of academic faculty members at the university, so that larger institutions don’t have an unfair advantage.

5 & 6. International faculty ratio (5%) and international student ratio (5%)

The last two indicators aim to assess how successful a university has been in attracting students and faculty members from other nations. This is based on the proportion of international students and faculty members in relation to overall numbers. Each of these contributes 5% to the overall ranking results.

Tuesday, April 01, 2014

Comparing the THE and QS Reputation Rankings

This year's Times Higher Education (THE) Reputation Rankings were  a bit boring, at least at the top, and that is just what they should be.

The top ten are almost the same as last year. Harvard is still first and MIT is second. Tokyo has dropped out of the top ten to 11th place and has been replaced by Caltech. Stanford is up three places and is now third. Cambridge and Oxford are both down one place. Further down, there is some churning but it is difficult to see any clear and consistent trends, although the media have done their best to find stories, UK universities falling or sliding or slipping, no Indian or Irish or African universities in the top 100.

These rankings may be more interesting for who is not there than for who is. There are some notable absentees from the top 100. Last year Tokyo Metropolitan University was, according to THE and data providers Thomson Reuters (TR), first in the world, along with MIT, for research impact. Yet it fails to appear in the top 100 in a reputation survey in which research has a two thirds weighting. Rice University, joint first in the world for research impact with Moscow State Engineering Physics Institute in 2012, is also absent. How is this possible? Am I missing something?

In general, the THE-TR reputation survey, the data collection for which was contracted out to the pollsters Ipsos Mori CT, appears to be quite rigorous and reliable. Survey forms were sent out to a clearly defined group, researchers with papers in the ISI indexes. THE claim that this means that their respondents must therefore be active producers of academic research. That is stretching it a bit. Getting your name on an article published in a reputable journal might mean a high degree of academic competence or it could just mean having some sort of influence over the research process. I have heard a report about an Asian university where researchers were urged to put their heads of department on the list of co-authors. Still, on balance it seems that the respondents to the THE survey are mostly from a stable group, namely those who have usually made some sort of contribution to a research paper of sufficient merit to be included in an academic journal.

TR also appear to have used a systematic approach in sending out the survey forms. When the first survey was being prepared in 2010 they announced that the forms would be emailed according to the number of researchers recorded by UNESCO in 2007. It is not clear if this procedure has been followed strictly over the last four years. Oceania, presumably Australia and New Zealand, appears to have a very large number of responses this year, 10%, although TR reported in 2010 that UNESCO found only 2.1% of the world's researchers in that region.

The number of responses received appears reasonably large although it has declined recently. In 2013 TR collected 10,536 responses, considerably fewer than in 2012, when the figure was 16,639. Again, it is not clear what happened.

The number of responses from the various subject areas has changed somewhat. Since 2012 the proportion from the social sciences has gone from 19% to 22%, as has that from engineering and technology, while the life sciences have gone from 16% to 22%.

QS do not publish reputation surveys but it is possible to filter their ranking scores to find out how universities performed on their academic survey.

The QS approach is less systematic. They started out using the subscription lists of World Scientific, a Singapore based academic publishing company with links to Imperial College London. Then they added respondents from  Mardev, a publisher of academic lists, to beef up the number of names in the humanities. Since then the balance has shifted with more names coming from Mardev with some topping up from World Scientific. QS have also added a sign up facility where people are allowed to apply to receive survey forms. That was suspended in April 2013 but has recently been revived. They have also asked universities to submit lists of potential respondents and respondents to suggest further names. The  exact number of responses coming from all these different sources is not known.

Over the last few years QS have made their survey rather more rigorous. First, respondents were not allowed to vote for the universities where they were currently employed. They were restricted to one response per computer and universities were not allowed to solicit votes or instruct staff who to vote for or who not to vote for. Then they were told not to promote any form of participation in the surveys.

In addition to methodological changes, the proportion of responses from different countries has changed significantly since 2007, with a large increase from Latin America, especially Brazil and Mexico, the USA and the larger European countries, and a fall in those from India, China and the Asia-Pacific region. All of this means that it is very difficult to figure out whether the rise or fall of a university reflects a change in methodology or distribution of responses or a genuine shift in international reputation.

Comparing the THE-TR and QS surveys there is some overlap at the top. The top five are the same in both although in a different order: Harvard, MIT, Stanford, Oxford and Cambridge.

After that, we find that the QS academic survey favours universities in Asia-Pacific and Latin America. Tokyo is seventh according to QS but THE-TR have it in 11th place. Peking is 19th for QS and 41st for THE-TR. Sao Paulo is 51st in the QS indicator but is in the 81-90 band in the THE-TR rankings. The Autonomous National University of Mexico (UNAM) is not even in THE-TR's top 100 but QS put it 48th.

On the other hand Caltech, Moscow State University, Seoul National University and Middle East Technical University do much better with THE-TR than with QS.

I suspect that the QS survey is tapping a younger less experienced pool of respondents from less regarded universities and from countries with high aspirations but so far limited achievements.






Friday, May 11, 2018

Ranking Insights from Russia

The ranking industry is expanding and new rankings appear all the time. Most global rankings measure research publications and citations. Others try to add to the mix indicators that might have something to do with teaching and learning. There is now a  ranking that tries to capture various third missions.

The Round University Rankings published in Russia are in the tradition of holistic rankings. They give a 40% weighting to research, 40% to teaching, 10% to international diversity and 10% to financial sustainability. Each group contains five equally weighted indicators. The data is derived from Clarivate Analytics, who also contribute to the US News Best Global Universities Rankings.

These rankings are similar to the THE rankings in that they attempt to assess quality rather than quantity but they have 20 indicators instead of 13 and assign sensible weightings. Unfortunately, they receive only a fraction of the attention given to the THE rankings.

They are, however, very  valuable since they dig deeper into the data than  other global rankings. They also show that there is a downside to measures of quality and that data submitted directly by institutions should be  treated with caution and perhaps scepticism.

Here are the top universities for each of the RUR indicators.

Teaching
Academic staff per students: VIB (Flemish Institute of Biotechnology), Belgium
Academic staff per bachelor degrees awarded: University of Valladolid, Spain
Doctoral degrees per academic staff: Kurdistan University of Medical Science, Iran
Doctoral degrees per bachelor degrees awarded:  Jawaharlal Nehru University, India
World teaching reputation: Harvard University, USA.

Research
Citations per academic and research staff: Harvard
Doctoral degrees per admitted PhD: Al Farabi Kazakh National University
Normalised citation impact: Rockefeller University, USA
Share of international co-authored papers: Free University of Berlin
World research reputation: Harvard.

International diversity
Share of international academic staff: American University of Sharjah, UAE
Share of international students: American University of Sharjah
Share of international co-authored papers: Innopolis University, Russia
Reputation outside region: Voronezh State Technical University, Russia
International Level: EPF Lausanne, Switzerland.

Financial sustainability:
Institutional income per academic staff: Universidade Federal Do Ceara, Brazil
Institutional income per student: Rockefeller University
Papers per research income: Novosibirsk State University of Economics and Management, Russia
Research income per academic and research staff: Istanbul Technical University, Turkey
Research income per institutional income: A C Camargo Cancer Center, Brazil.

There are some surprising results here. The most obvious is Voronezh State Technical University which is first for reputation outside its region (Asia, Europe and so on), even though its overall scores for reputation and for international diversity are very low. The other top universities for this metric are just what you would expect, Harvard, MIT, Stanford, Oxford and so on. I wonder whether there is some sort of bug in the survey procedure, perhaps something like the university's supporters being assigned to Asia and therefore out of region. The university is also in second place in the world for papers per research income despite very low scores for the other research indicators.

There are other oddities such as Novosibirsk State University of Economics and Management placed first for papers per research income and Universidade Federal Do Ceara for institutional income per academic staff. These may result from anomalies in the procedures for reporting and analysing data, possibly including problems in collecting data on income and staff.

It also seems that medical schools and specialist or predominantly postgraduate institutions such as Rockefeller University, the Kurdistan University of Medical Science, Jawaharlal Nehru University and VIB have a big advantage with these indicators, since they tend to have favourable faculty student ratios, sometimes boosted by large numbers of clinical and research-only staff, and a large proportion of doctoral students.

Jawaharlal Nehru University is a mainly postgraduate university so a high placing for academic staff per bachelor degrees awarded is not unexpected although I am surprised that it is ahead of Yale and Princeton. I must admit that the third place here for the University of Baghdad needs some explanation.

The indicator doctoral degrees per admitted PhD might identify universities that do a good job of selection and training and get large numbers of doctoral candidates through the system. Or perhaps it identifies universities where doctoral programmes are so lacking in rigour that nearly everybody can get their degree once admitted. The top ten of this indicator includes De Montfort University, Shakarim University, Kingston University, and the University of Westminster, none of which are famous for research excellence across the range of disciplines.

Measures of international diversity have become a staple of global rankings since they are fairly easy to collect. The problem is that international orientation may have something to do with quality, but it may also simply be a necessary attribute of being in a small country next to larger countries with the same or similar language and culture. The top ten for the international student indicator includes the Central European University and the American University of Sharjah. For international faculty it includes the University of Macau and Qatar University.

To conclude, these indicators suggest that self submitted institutional data should be used sparingly and that data from third party sources may be preferable. Also, while ranking by quality instead of quantity is sometimes advisable it also means that anomalies and outliers are more likely to appear.







Thursday, August 11, 2016

Value Added Ranking


There has been a lot of talk about ranking universities by factors other than the usual mix of contributions to research and innovation, reputation surveys and inputs such as spending, teaching resources or student quality.

The emerging idea is that universities should be assessed according to their ability to teach students or to inculcate desirable skills or attributes.

Much of this is powered by the growing awareness that American and European secondary schools are failing to produce sufficient numbers of students with the ability  to undertake and complete anything that could realistically be called a university education. It is unlikely that this is the fault of the schools. The unavoidable  verdict of recent research is that the problem with schools has very little to do with institutional racism, a lack of grit, resilience or the current X factor or the failure to adopt Finnish, Chinese or Singaporean teaching methods. It is simply that students entering the school system are on average less intelligent than they were and those leaving are consequently also less intelligent.

There is now a market for rankings that will measure the quality of universities not by their resources, wealth or research output but by their ability to add value to students and to prepare them for employment or to enable them to complete their courses.

This could, however, lead to massively perverse consequences. If universities are assessed according to the percentage of entrants who graduate within a certain period or according to their employability, then there could be a temptation to dilute graduation requirements.

Nevertheless, the idea of adding value is one that is clearly becoming more popular. It can be seen in the attempt to introduce a national rating system in the US and in the UK to use the proposed Teaching Excellence Framework (TEF) to rank universities.

One UK ranking that includes a value added measure is the Guardian University Guide. This includes eight indicators, three of which measure student satisfaction. Other indicators are staff student ratio and spending per student. There is also a measure of student outcomes, that is graduate level employment or entry into a postgraduate course within six months; one of student quality, measured by A level qualifications; and one of value added, that is the difference between students' entry level exam results and their eventual degree results.

It is therefore possible to get a rough idea of what factors might actually produce positive student outcomes.

The overall ranking for 2015-16 starts by being quite conventional with the top three places going to Cambridge, Oxford and St Andrews. Some might be surprised by Exeter in 9th place and Loughborough in 11th,  ahead of LSE and UCL.

Measuring student quality by exam scores produces unsurprising results at the top. Cambridge is first followed by Oxford and Imperial. For staff student ratio the top three are UCL, Oxford and SOAS and for spending per student Oxford, Cambridge and the University of the Arts London.

For student satisfaction with courses, Bath, Keele and UEA are in the lead while Oxford is 5th and Cambridge 12th. It's when we look at the Value Added that we find some really unusual results. The top three are Gloucester, Edinburgh and Abertay.

After plugging the indicator scores into an SPSS file we can calculate the correlations between the desired outcome, that is graduate level employment or postgraduate study and a variety of possible associated factors.

Here in descending order are the correlations with career prospects:

average entry tariff .820
student staff ratio .647
spending per student .569
satisfaction with course  .559
satisfaction with teaching   .531
value added .335
satisfaction with feedback -.171.
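
For anyone who would rather not use SPSS, the same exercise can be reproduced in a few lines of Python. This is only a sketch: the column names and the tiny dataset are invented for illustration, and in practice the published Guardian league table would be loaded instead.

import pandas as pd

# Invented illustrative rows; replace with the published Guardian data.
df = pd.DataFrame({
    "entry_tariff":     [560, 520, 480, 430, 400, 370, 340, 310, 280, 250],
    "value_added":      [5.2, 6.1, 5.8, 6.4, 5.5, 6.8, 7.0, 6.2, 7.4, 7.1],
    "career_prospects": [88, 84, 80, 74, 70, 66, 64, 60, 58, 55],
})

# Pearson correlation of each indicator with career prospects.
print(df.corr(method="pearson")["career_prospects"].sort_values(ascending=False))

# Split institutions into thirds by entry tariff and repeat within each band.
df["band"] = pd.qcut(df["entry_tariff"], 3, labels=["bottom", "middle", "top"])
for band, group in df.groupby("band", observed=True):
    print(band, round(group["value_added"].corr(group["career_prospects"]), 3))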

It would seem that if you want to know which university is best for career prospects then the most important piece of data is the average academic ability of the students. The student staff ratio and money spent are also significant, as are satisfaction with courses and teaching.

The correlation between value added and career prospects is much less and rather modest.

The universities were divided into thirds according to average entry tariff. In the top third of universities there was a strong correlation between career prospects and average entry level tariff, .628, and a modest one with spending, .355. Nothing else was associated with career success.

In the middle third the factor most associated with career prospects was course satisfaction, .498, followed by average entry tariff, .449, staff student ratio, .436, and satisfaction with teaching, .362. Satisfaction with feedback and value added were insignificant.

However, for the least selective third of universities, the picture was rather different. The factor most strongly associated with career success was satisfaction with feedback, .493, followed by value added, .479, course satisfaction, .470, satisfaction with teaching, .439, and average entry tariff, .401. The relationship with spending and staff student ratio was insignificant.

The evidence of the Guardian rankings is that value added would only be of interest to students at or applying to the least selective third of UK universities. For the rest it is of no importance. It is debatable whether it is worth making it the centre of a new set of rankings.












Sunday, May 15, 2016

The THE reputation rankings: Much ado about not very much

Every so often, especially in North America and Western Europe, there is a panic about the impact of government policies on higher education, usually the failure to provide as much money as universities want, or sometimes as many overseas students as they need to fill lecture halls or cover budget deficits. Global university rankings have a lot to do with the onset and spread of these panics.

True to form, the British  "quality" media have been getting into a tizzy over the latest edition of the Times Higher Education (THE) world reputation ranking. According to Javier Espinoza, education editor of the Telegraph, top UK universities have been under pressure to admit minority and state school students and have also had difficulty in recruiting foreign students. This has somehow caused them to forget about doing research or teaching the most able students. It seems that academics from countries around the world, where such problems are of course unknown, are reacting by withholding their votes from British universities when responding to the THE survey and transferring their approval to the rising stars of Asia.

This supposedly has caused UK institutions to slide down the rankings and two of them, Bristol and Durham, have even dropped out of the top 100 altogether into the great dark pit of the unranked.

The Guardian notes that Oxford and Cambridge are falling and are now only just in the world's top five while the Independent quotes Phil Baty, as saying that "our evidence - from six massive global surveys over six years, including the views of more than 80,000 scholars - proves the balance of power in higher education and research is slowly shifting from the West to the East". 

This, it would seem, is all because of cuts in funding and restrictions on the entry of overseas students and faculty.

All this is rather implausible. First of all, these are reputation rankings. They refer only to one indicator that accounts for 33 percent of the World University Rankings that will appear later this year. It is not certain that the other indicators will go in the same direction.

Secondly, these rankings have not been standardised as they will be when included in the world rankings, which means that the huge gap between the Big Six (Harvard, MIT, Berkeley, Stanford, Oxford and Cambridge) and the rest is laid bare, as it will not be in the autumn, and so we can get a rough idea of how many academics were voting for each university. A crude guess is that by the time we get down to around 50th place the number of votes will be around five hundred, and even fewer when we reach 100th place.

This means that below the 50 mark a shift in the opinion of a few dozen respondents could easily push a university up or down into a new band or even into or out of the top 100.

Another thing we should remember is that the expertise of the researchers in the Scopus database, from which respondents are drawn, is  exaggerated. The qualification for receiving a survey form is being the corresponding author of a publication listed in the Scopus database. There is much anecdotal evidence that in some places winning research grants or getting the corresponding author slot has more to do with politics than with merit. The THE survey is better than QS's, which allows anyone with an academic email address to take part, but it does not guarantee that every respondent is an unbiased and senior researcher.

We should also note that, unlike the US News and QS survey indicators, THE takes no measures to damp down year to year fluctuations. Nor does it do anything to prevent academics from supporting their own universities in the survey.

So, do we really need to get excited about a few dozen "senior researchers" withdrawing their support from British universities?

The credibility of these rankings is further undermined by apparent changes in the distribution of responses by subject group. According to the methodology page in Times Higher Education for 2015, 16% of the responses were from the arts and humanities and 19% were from the social sciences, which in that year included business studies and economics. This year, according to the THE methodology page, 9% of the responses were from the arts and humanities, 15% were from the social sciences and 13% were from business and economics, adding up to 28%.

In other words the responses from the arts and humanities have apparently fallen by 7 percentage points, or around 700 responses, and the combined responses from social sciences and business and economics have apparently risen by nine points, or about 900 responses.

If these numbers are accurate then there has been among survey respondents a very substantial shift from the arts and humanities to the social sciences (inclusive of business and economics) and it is possible that this could be sufficient to cause the recorded decline in the reputation scores of British universities which usually do much better  in the arts and humanities than in the social sciences.

In the THE subject group rankings last year, Durham, for example, was 28th for arts and humanities in the THE 2015-16 World University Rankings and 36th for the social sciences. Exeter was 71st for arts and humanities and 81st for the social sciences.

At the same time some of those rising  Asian universities were definitely  stronger in the social sciences than in the humanities: Peking was 52nd for social sciences and 84th for arts and humanities, Hong Kong 39th for social sciences and 44th for arts and humanities, Nanyang Technological University 95th for social sciences and outside the top 100 universities for the arts and humanities.

It is possible that such a symmetrical change could be the result of changes in the way disciplines are classified or even a simple transposition of data. So far, THE have given no indication that this was the case.

It is interesting that an exception to the narrative of British decline is the London Business School, which has risen from the 91-100 band to 81-90.

The general claim that the views of 80,000 academics over six years are evidence of a shift from west to east is also somewhat tenuous. There have been several changes in the collection and organisation of data over the last few years that could affect the outcomes of the reputation  survey.

Between 2010-2011 and 2016 the percentage of responses from the social sciences (originally including  business and economics) has risen from 19% to 28 % for social sciences plus business and economics counted separately. Those for clinical and health sciences and life sciences  have fallen somewhat while there has been a slight rise for the arts and humanities, with a large spike in 2015.

The number of responses from the Asia Pacific region and the Middle East has risen from 25% to 36%, while those from the Americas (North and Latin) have fallen from 44% to 25%. The number of languages in which the survey is administered has increased from eight in 2011 to fifteen this year.

The source of respondents has shifted from the Thomson Reuters Web of Science to Scopus, which includes more publications from languages other than English.

The value of these changes is not disputed here but they should make everybody very cautious about using the reputation rankings to make large claims about what is happening to British universities or what the causes of their problems are.




Thursday, October 26, 2006

The World’s Best Science Universities?

The Times Higher Education Supplement (THES) has now started to publish lists of the world’s top 100 universities in five disciplinary areas. The first to appear were those for science and technology.

THES publishes scores for its peer review by people described variously as “research-active academics” or just as “smart people” of the disciplinary areas along with the number of citations per paper. The ranking is, however, based solely on the peer review, although a careless reader might conclude that the citations were considered as well.

We should ask for a moment what a peer review, essentially a measure of a university’s reputation, can accomplish that an analysis of citations cannot. A citation is basically an indication that another researcher has found something of interest in a paper. The number of citations of a paper indicates how much interest a paper has aroused among the community of researchers. It coincides closely with the overall quality of research, although occasionally a paper may attract attention because there is something very wrong with it.

Citations then are a good measure of a university’s reputation for research. For one thing, votes are weighted. A researcher who publishes a great deal has more votes and his or her opinion will have more weight than someone who publishes nothing. There are abuses of course. Some researchers are rather too fond of citing themselves and journals have been known to ask authors to cite papers by other researchers whose work they have published but such practices do not make a substantial difference.

In providing the number of citations per paper as well as the score for peer review, THES and their consultants, QS Quacquarelli Symonds, have really blown their feet off. If the scores for peer review and the citations are radically different it almost certainly means that there is something wrong with the review. The scores are in fact very different and there is something very wrong with the review.

This post will review the THES rankings for science.

Here are the top twenty universities for the peer review in science:

1. Cambridge
2. Oxford
3. Berkeley
4. Harvard
5. MIT
6. Princeton
7. Stanford
8. Caltech
9. Imperial College, London
10. Tokyo
11. ETH Zurich
12. Beijing (Peking University)
13. Kyoto
14. Yale
15. Cornell
16. Australian National University
17. Ecole Normale Superieure, Paris
18. Chicago
19. Lomonosov Moscow State University
20. Toronto


And here are the top 20 universities ranked by citations per paper:


1. Caltech
2. Princeton
3. Chicago
4. Harvard
5. Johns Hopkins
6. Carnegie-Mellon
7. MIT
8. Berkeley
9. Stanford
10. Yale
11. University of California at Santa Barbara
12. University of Pennsylvania
13. Washington (Saint Louis?)
14. Columbia
15. Brown
16. University of California at San Diego
17. UCLA
18. Edinburgh
19. Cambridge
20. Oxford


The most obvious thing about the second list is that it is overwhelmingly dominated by American universities, with the top 17 places going to the US. Cambridge and Oxford, first and second in the peer review, are 19th and 20th by this measure. Imperial College London, Beijing, Tokyo, Kyoto and the Australian National University are in the top 20 for peer review but not for citations.

Some of the differences are truly extraordinary. Beijing is 12th for peer review and 77th for citations, Kyoto 13th and 57th, the Australian National University 16th and 35th, Ecole Normale Superieure, Paris 17th and 37th, Lomonosov Moscow State University 18th and 82nd, the National University of Singapore 25th and 75th, Sydney 35th and 70th, and Toronto 20th and 38th. Bear in mind that there are almost certainly several universities that were not in the peer review top 100 but have more citations per paper than some of these institutions.

It is no use saying that citations are biased against researchers who do not publish in English. For better or worse, English is the lingua franca of the natural sciences and technology and researchers and universities that do not publish extensively in English will simply not be noticed by other academics. Also, a bias towards English does not explain the comparatively poor performance by Sydney, ANU and the National University of Singapore and their high ranking on the peer review.

Furthermore, there are some places for which no citation score is given. Presumably, they did not produce enough papers to be even considered. But if they produce so few papers, how could they become so widely known that their peers would place them in the world’s top 100? These universities are:

Indian Institutes of Technology (all of them)
Monash
Auckland
Universiti Kebangsaan Malaysia
Fudan
Warwick
Tokyo Institute of Technology
Hong Kong University of Science and Technology
Hong Kong
St. Petersburg
Adelaide
Korea Advanced Institute of Science and Technology
New York University
King’s College London
Nanyang Technological University
Vienna Technical University
Trinity College Dublin
Universiti Malaya
Waterloo

These universities are overwhelmingly East Asian, Australian and European. None of them appear to be small, specialized universities that might produce a small amount of high quality research.

The peer review and citations per paper thus give a totally different picture. The first suggests that Asian and European universities are challenging those of the United States and that Oxford and Cambridge are the best in the world. The second indicates that the quality of research of American universities is still unchallenged, that the record of Oxford and Cambridge is undistinguished and that East Asian and Australian universities have a long way to go before being considered world class in any meaningful sense of the word.

A further indication of how different the two lists are can be found by calculating their correlation. Overall, the correlation is, as expected, weak (.390). For Asia-Pacific (.217) and for Europe (.341) it is even weaker and statistically insignificant. If we exclude Australia from the list of Asia-Pacific universities and just consider the remaining 25, there is almost no association at all between the two measures. The correlation is .099, for practical purposes no better than chance. Whatever criteria the peer reviewers used to pick Asian universities, quality of research could not have been among them.

So has the THES peer review found out something that is not apparent from other measures? Is it possible that academics around the world are aware of research programmes that have yet to produce large numbers of citations? This, frankly, is quite implausible since it would require that nascent research projects have an uncanny tendency to concentrate in Europe, East Asia and Australia.

There seems to be no other explanation for the overrepresentation of Europe, East Asia and Australia in the science top 100 than a combination of a sampling procedure that included a disproportionate number of respondents from these regions, allowing or encouraging respondents to nominate universities in their own regions or even countries and a disproportionate distribution of forms to certain countries within regions.

I am not sure whether this is the result of extreme methodological naivety, with THES and QS thinking that they are performing some sort of global affirmative action by rigging the vote in favour of East Asia and Europe or whether it is a cynical attempt to curry favour with those regions that are involved in the management education business or are in the forefront of globalization.

Whatever is going on, the peer review gives a very false picture of current research performance in science. If people are to apply for universities or accept jobs or award grants in the belief that Beijing is better at scientific research than Yale, ANU than Chicago, Lomonosov than UCLA, Tsinghua than Johns Hopkins then they are going to make bad decisions.

If this is unfair then there is no reason why THES or QS should not indicate the following:

The universities and institutions to which the peer review forms were sent.
The precise questions that were asked.
The number of nominations received by universities from outside their own regions and countries.
The response rate.
The criteria by which respondents were chosen.

Until THES and /or QS do this, we can only assume that the rankings are an example of how almost any result can be produced with the appropriate, or inappropriate, research design.

Wednesday, May 30, 2018

Why are US universities doing so well in the THE reputation rankings?

For the last couple of years the higher education media has tried to present any blip in the fortunes of UK universities as one of the malign effects of Brexit, whose toxic rays are unlimited by space, time or logic. Similarly, if anything unpleasant happens to US institutions, it is often linked to the evil spell of the great orange devil, who is scaring away international students, preventing the recruitment of the scientific elites of the world, or even being insufficiently credulous of the latest settled science.

So what is the explanation for the remarkable renaissance of US higher education apparently revealed by the THE reputation survey published today?

Is Trump working his magic to make American colleges great again?

UCLA is up four places, Carnegie Mellon seven, Cornell six, University of Washington six, Pennsylvania three. In contrast, several European and Asian institutions have fallen, University College London and the University of Kyoto by two places, Munich by seven, and Moscow State University by three.

In the previous post I noted that this year's survey had seen an increased response from engineering and computer science and a reduced one from the social sciences and the arts and humanities. As expected, LSE has tumbled five places and Oxford has fallen one place. Surprisingly, Caltech has fallen as well.

Some schools that are strong in engineering, such as Nanyang Technological University and Georgia Institute of Technology, have done well but I do not know if that is a full explanation for  the success of US universities.

I suspect that US administrators have learned that influencing reputation is easier than maintaining scientific and intellectual standards and that a gap is emerging between perceptions and actual achievements.

It will be interesting to see if these results are confirmed by the reputation indicators included in the QS, Best Global Universities, and Round University Rankings.


Sunday, September 26, 2021

What is a University Really for ?

Louise Richardson, Vice-Chancellor of the University of Oxford, has seen fit to enlighten us about the true purpose of a university. It is, it seems, to inculcate appropriate deference to the class of certified experts.

Professor Richardson remarked at the latest Times Higher Education (THE) academic summit that she was embarrassed that "we" had educated the Conservative politician Michael Gove who said, while talking about Brexit, that people had had enough of experts.

So now we know what universities are really about.  Not about critical discussion, cutting-edge research, skepticism, the disinterested pursuit of truth but about teaching respect for experts.

A few years ago I wrote a post suggesting we were now in a world where the expertise of the accredited experts was declining along with public deference. I referred to the failure of political scientists to predict the nomination of Trump, the election of Trump, the rise of Leicester City, the Brexit vote. It looks like respect for experts has continued to decline, not entirely without reason.

Professor Richardson thinks that Gove's disdain for the Brexit experts is cause for embarrassment. While it is early days for the real effects of Brexit to become clear, it is as yet far from obvious that it has been an unmitigated disaster. It is, moreover, a little ironic that the remark was made at the latest THE academic summit where the annual world rankings were announced. Richardson remarked that she was delighted that her university was once again ranked number one.

The irony is that the THE world rankings are probably the least expert of the global rankings although they are apparently the most prestigious at least among those institutions that are known for being prestigious.

Let's have another look at THE's Citations Indicator which is supposed to measure research quality or impact and accounts for nearly a third of the total weighting. (Regular readers of this blog can skim or skip the next few lines. ) Here are the top five from this year's rankings.

1.   University of Cape Coast

2.   Duy Tan University

3.   An Najah National University

4.   Aswan University

5.   Brighton and Sussex Medical School.

This is not an academic version of the imaginary football league tables that nine-year-old children used to construct. Nor is it the result of massive cheating by the universities concerned. It is quite simply the outcome of a hopelessly flawed system. THE, or rather its data analysts, appear to be aware of the inadequacies of this indicator but somehow meaningful reform keeps getting postponed. One day historians will search the THE archives to find the causes of this inability to take very simple and obvious measures to produce a sensible and credible ranking. I suspect that the people in control of THE policy are averse to anything that might involve any distraction from the priority of monetising as much data as possible. Nor is there any compelling reason for a rush to reform when universities like Oxford are unconcerned about the inadequacies of the current system.

Here are the top five for income from industry which is supposed to have something to do with innovation.

1.   Asia University Taiwan

2.   Istanbul Technical University

3.   Khalifa University

4.   Korea Advanced Institute of Science and Technology (KAIST)

5.   LMU Munich.

This is a bit better. It is not implausible that KAIST or Munich is a world leader for innovation. But in general, this indicator is also inadequate for any purpose other than providing fodder for publicity. See the scathing review by Alex Usher.

Would any tutor or examiner at Oxford give any credit to a student who claimed that Ghana, Vietnam and Palestine were centers of international research impact? The universities concerned are all doing a remarkable job of teaching in many respects, but that is not what THE is ostensibly giving them credit for.

In addition, the THE world rankings fail to meet satisfactory standards with regard to basic validity. Looking at the indicator scores for the top 200 universities in the most recent world rankings we can see that the correlation between research and teaching is 0.92. In effect these are not two distinct metrics. They are measuring essentially the same thing. A quick look at the methodology suggests that what they are comparing is income (total institutional income for teaching, research income for research), reputation (the opinion surveys for research and teaching) and investment in doctoral programmes.

On the other hand, the citations indicator does not correlate significantly with research or teaching and correlates negatively with industry income.

One can hardly blame THE for wanting to make as much money as possible. But surely we can expect something better from supposedly elite institutions that claim to value intellectual and scientific excellence. If Oxford and its peers wish to restore public confidence in the experts there is no better way than saying to THE that we will not submit data to THE until you produce something a little less embarrassing.