Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Monday, February 06, 2017
Is Trump keeping out the best and the brightest?
One of several strange things about the legal challenge to Trump's executive order on refugees and immigration is the claim, in an amicus brief filed by dozens of companies, many of them at the cutting edge of the high-tech economy, that the order makes it hard to "recruit, hire and retain some of the world's best employees." The proposed, now frozen, restrictions would, moreover, be a "barrier to innovation" and prevent companies from attracting "great talent." They point out that many Nobel prize winners are immigrants.
Note that these are "tech giants", not meat packers or farmers, and that they are talking about great talent and the best employees, not the good or adequate or possibly employable after a decade of ESL classes and community college.
So let us take a look at the seven countries included in the proposed restrictions. Are they likely to be the source of large numbers of future high-tech entrepreneurs, Nobel laureates and innovators?
The answer is almost certainly no. None of the Nobel prize winners (not counting Peace and Literature) so far have been born in Yemen, Iraq, Iran, Somalia, Sudan, Libya or Syria, although there has been an Iranian-born winner of the Fields Medal for mathematics.
The general level of the higher educational system in these countries does not inspire confidence that they are bursting with great talent. Of the seven, only Iran has any universities in the Shanghai rankings: the University of Tehran and Amirkabir University of Technology.
The Shanghai rankings are famously selective, so take a look at the rank of the top universities in the Webometrics rankings, which are the most inclusive, ranking more than 12,000 institutions this year.
The position of the top universities from the seven countries is as follows:
University of Babylon, Iraq 2,654
University of Benghazi, Libya 3,638
Kismayo University, Somalia 5,725
University of Khartoum, Sudan 1,972
Yemeni University of Science and Technology 3,681
Tehran University of Medical Science, Iran 478
Damascus Higher Institute of Applied Science and Technology, Syria 3,757.
It looks as though the only country remotely capable of producing innovators, entrepreneurs and scientists is Iran.
Finally, let's look at the scores of students from these countries on the GRE verbal and quantitative tests in 2011-12.
For verbal reasoning, Iran has a score of 141.3, Sudan 140.6, Syria 142.7, Yemen 141, Iraq 139.2, and Libya 137.1. The mean score is 150.8 with a standard deviation of 8.4.
For quantitative reasoning, Iran has a score of 157.5, equal to France, Sudan 148.5, Syria 152.7, Yemen 148.6, Iraq 146.4, Libya 145.5. The mean score is 151.4 with a standard deviation of 8.7.
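A quick bit of arithmetic puts those numbers in perspective: expressed as standard deviations from the overall test means, the country averages look like this. This is a rough sketch using only the figures quoted above.

```python
# Rough z-scores for the country means quoted above (GRE 2011-12).
# z = (country_mean - overall_mean) / standard_deviation
verbal = {"Iran": 141.3, "Sudan": 140.6, "Syria": 142.7,
          "Yemen": 141.0, "Iraq": 139.2, "Libya": 137.1}
quant = {"Iran": 157.5, "Sudan": 148.5, "Syria": 152.7,
         "Yemen": 148.6, "Iraq": 146.4, "Libya": 145.5}

def z_scores(scores, mean, sd):
    return {country: round((x - mean) / sd, 2) for country, x in scores.items()}

print("Verbal:      ", z_scores(verbal, mean=150.8, sd=8.4))
print("Quantitative:", z_scores(quant, mean=151.4, sd=8.7))
# Every verbal mean sits roughly one standard deviation or more below the
# overall mean; on the quantitative test only Iran is well above it (about +0.7 SD).
```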
It seems that of the seven countries only Iran is likely to produce any significant numbers of workers capable of contributing to a modern economy.
No doubt there are other reasons why Apple, Microsoft and Twitter should be concerned about Trump's executive order. Perhaps they are worried about Russia, China, Korea or Poland being added to the restricted list. Perhaps they are thinking about farmers whose crops will rot in the fields, ESL teachers with nothing to do or social workers and immigration lawyers rotting at their desks. But if they really do believe that Silicon Valley will suffer irreparable harm from the proposed restrictions then they are surely mistaken.
Sunday, February 05, 2017
Guest post by Bahram Bekhradnia
I have just received this reply from Bahram Bekhradnia, President of the Higher Education Policy Institute, in response to my review of his report on global university rankings.
My main two points which I think are not reflected in your blog – no doubt because I was not sufficiently clear – are
· First, the international rankings – with the exception of U-Multirank, which has other issues – almost exclusively reflect research activity and performance. Citations and publications of course are explicitly concerned with research, and as you say “International faculty are probably recruited more for their research reputation than for anything else. Income from industry (THE) is of course a measure of reported funding for applied research. The QS academic reputation survey is officially about research and THE's academic reputation survey of teaching is about postgraduate supervision.” And I add (see below) that faculty to student ratios reflect research activity and are not an indicator of a focus on education. There is not much argument that indicators of research dominate the rankings.
Yet although they ignore pretty well all other aspects of universities’ activities they claim nevertheless to identify the "best universities". They certainly do not provide information that is useful to undergraduate students, nor even actually to postgraduate students whose interest will be at discipline not institution level. If they were also honest enough to say simply that they identify research performance there would be rather less objection to the international rankings.
That is why it is so damaging for universities, their governing bodies – and even governments – to pay so much attention to improving their universities' performance in the international rankings. Resources – time and money – are limited and attaching priority to improving research performance can only be right for a very small number of universities.
· Second, the data on which they are based are wholly inadequate. 50% of the QS and 30% of the Times Higher rankings are based on nothing more than surveys of "opinion", including in the case of QS the opinions of dead respondents. But no less serious is that the data on which the rankings are based – other than the publications and prize-related data – are supplied by universities themselves and unaudited, or are ‘scraped’ from a variety of other sources including universities' websites, and cannot be compared one with the other. Those are the reasons for the Trinity College Dublin and Sultan Qaboos fiascos. One UAE university told me recently they had (mistakenly) submitted information about external income in UAE Dirhams instead of US Dollars – an inflation of 350% that no-one had noticed. Who knows what other errors there may be – the ranking bodies certainly don’t.
In reply to some of the detailed points that you make:
In order to compare institutions you need to be sure that the data relating to each are compiled on a comparable basis, using comparable definitions et cetera. That is why the ranking bodies, rightly, have produced their own data definitions to which they ask institutions to adhere when returning data. The problem of course is that there is no audit of the data that are returned by institutions to ensure that the definitions are adhered to or that the data are accurate. Incidentally, that is also why there is far less objection to national rankings, which can, if there are robust national data collection and audit arrangements, have fewer problems with regard to comparability of data.
But at least there is the attempt with institution-supplied data to ensure that they are on a common basis and comparable. That is not so with data ‘scraped’ from random sources, and that is why I say that data scraping is such a bad practice. It produces data which are not comparable, but which QS nevertheless uses to compare institutions.
You say that THE, at least, omit faculty on research only contracts when compiling faculty to student ratios. But when I say that FSRs are a measure of research activity I am not referring to research only faculty. What I am pointing out is that the more research a university does the more academic faculty it is likely to recruit on teaching and research contracts. These will inflate the faculty to student ratios without necessarily increasing the teaching capacity over a university that does less research, consequently has fewer faculty but whose faculty devote more of their time to teaching. And of course QS even includes research contract faculty in FSR calculations. FSRs are essentially a reflection of research activity.
Monday, January 30, 2017
Getting satisfaction: look for universities that require good A level grades
If you are applying to a British university and you are concerned not with personal transformation, changing your life or social justice activism but with simple things like enjoying your course and finishing it and getting a job what would you look for? Performance in global rankings? Staff salaries? Spending? Staff student ratios?
Starting with student satisfaction, here are a few basic correlations between scores for overall student satisfaction on the Guardian UK rankings and a number of variables from the Guardian rankings, the Times Higher Education TEF simulation (THE), the Hefce survey of educational qualifications, and the THE survey of vice-chancellors' pay.
Average Entry Tariff (Guardian) .479**
Staff student ratio (Guardian) .451**
Research Excellence Framework score (via THE) .379**
Spending per student (Guardian) .220*
Vice chancellor salary (via THE) .167
Average salary (via THE) .031
Total staff (via THE) .099
Total students (via THE) .065
Teaching qualifications (Hefce) -.161 (English universities only)
If there is one single thing that best predicts how satisfied you will be it is average entry tariff (A level grades). The number of staff compared to students, REF score, and spending per student also correlate significantly with student satisfaction.
None of the following are of any use in predicting student satisfaction: vice chancellor salary, average staff salary, total staff, total students or percentage of faculty with teaching qualifications.
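For anyone who wants to reproduce this sort of exercise, the calculation is nothing more exotic than a Pearson correlation. Below is a minimal sketch; the figures in it are invented placeholders rather than the Guardian, THE or Hefce data used here, and the asterisks follow the usual .05/.01 significance convention.

```python
# Minimal sketch: Pearson correlations with overall student satisfaction.
# All numbers below are invented placeholders, not the actual dataset.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "satisfaction":        [86, 90, 83, 88, 80, 92, 85, 79],
    "entry_tariff":        [140, 180, 120, 165, 110, 200, 150, 105],
    "staff_student_ratio": [14, 11, 17, 12, 19, 10, 15, 20],
    "spend_per_student":   [6.1, 8.5, 5.2, 7.9, 4.8, 9.3, 6.7, 4.5],
})

for col in ["entry_tariff", "staff_student_ratio", "spend_per_student"]:
    r, p = pearsonr(df["satisfaction"], df[col])          # correlation and p-value
    flag = "**" if p < 0.01 else "*" if p < 0.05 else ""  # significance markers
    print(f"{col:20s} r = {r:+.3f}{flag}")
```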
Thursday, January 26, 2017
Comments on the HEPI Report
The higher education industry tends to respond to global rankings in two ways. University bureaucrats and academics either get overexcited, celebrating when they are up, wallowing in self-pity when down, or they reject the idea of rankings altogether.
Bahram Bekhradnia of the Higher Education Policy Institute in the UK has published a report on international rankings which adopts the second option. University World News has several comments including a summary of the report by Bekhradnia.
To start off, his choice of rankings deserves comment. He refers to the "four main rankings", Academic Ranking of World Universities (ARWU) from Shanghai, Quacquarelli Symonds (QS), Times Higher Education (THE) and U-Multirank. It is true that the first three are those best known to the public, QS and Shanghai by virtue of their longevity and THE because of its skilled branding and assiduous cultivation of the great, the good and the greedy of the academic world. U-Multirank is chosen presumably because of its attempts to address, perhaps not very successfully, some of the issues that the author discusses.
But focusing on these four gives a misleading picture of the current global ranking scene. There are now several rankings that are mainly concerned with research -- Leiden, Scimago, URAP, National Taiwan University, US News -- and redress some of the problems with the Shanghai ranking by giving due weight to the social sciences and humanities, leaving out decades old Nobel and Fields laureates and including more rigorous markers of quality. In addition, there are rankings that measure web activity, environmental sustainability, employability and innovation. Admittedly, they do not do any of these very well but the attempts should at least be noted and they could perhaps lead to better things.
In particular, there is now an international ranking from Russia, Round University Ranking (RUR), which could be regarded as an improved version of the THE world rankings and which tries to give more weight to teaching. It uses almost the same array of metrics as THE plus some more, but with rational and sensible weightings: 8% for field normalised citations, for example, rather than 30%.
Bekhradnia has several comments on the defects of current rankings. First, he says that they are concerned entirely or almost entirely with research. He claims that there are indicators in the QS and THE rankings that are actually, although not explicitly, about research. International faculty are probably recruited more for their research reputation than for anything else. Income from industry (THE) is of course a measure of reported funding for applied research. The QS academic reputation survey is officially about research and THE's academic reputation survey of teaching is about postgraduate supervision.
Bekhradnia is being a little unfair to THE. He asserts that if universities add to their faculty with research-only staff this will add to their faculty student metric, supposedly a proxy for teaching quality, thus turning the indicator into a measure of research. This is true of QS but it appears that THE does require universities to list research staff separately and excludes them from some indicators as appropriate. In any case, the number of research-only staff is quite small outside the top hundred or so for most universities.
It is true that most rankings are heavily, perhaps excessively, research-orientated but it would be a mistake to conclude that this renders them totally useless for evaluating teaching and learning. Other things being equal, a good record for research is likely to be associated with positive student and graduate outcomes such as satisfaction with teaching, completion of courses and employment.
For English universities the Research Excellence Framework (REF) score is more predictive of student success and satisfaction, according to indicators in the Guardian rankings and the recent THE Teaching Excellence Framework simulation, than the percentage of staff with educational training or certification, faculty salaries or institutional spending, although it is matched by staff student ratio.
If you are applying to English universities and you want to know how likely you are to complete your course or be employed after graduation, probably the most useful things to know are average entry tariff (A levels), staff student ratio and faculty scores for the latest REF. There are of course intervening variables and the arrows of causation do not always fly in the same direction but scores for research indicators are not irrelevant to comparisons of teaching effectiveness and learning outcomes.
Next, the report deals with the issue of data, noting that internal data checks by THE and QS do not seem to be adequate. He refers to the case of Trinity College Dublin where a misplaced decimal point caused the university to drop several places in the THE world rankings. He then goes on to criticise QS for "data scraping", that is, getting information from any available source. He notes that QS caused Sultan Qaboos University (SQU) to drop 150 places in its world rankings, apparently because QS took data from the SQU website that identified non-teaching staff as teaching. I assume that the staff in question were administrators: if they were researchers then it would not have made any difference.
Bekhradnia is correct to point out that data from websites is often incorrect or subject to misinterpretation. But to assume that such data is necessarily inferior to that reported by institutions to the rankers is debatable. QS has no need to be apologetic about resorting to data scraping. On balance, information about universities is more likely to be correct if it comes from one of several similar and competing sources, if it is from a source independent of the ranking organisation and the university, if it has been collected for reasons other than submission to the rankers, or if there are serious penalties for submitting incorrect data.
The best data for university evaluation and comparison is likely to be from third-party databases that collect masses of information or from government agencies that require accuracy and honesty. After that, institutional data from websites and the like is unlikely to be significantly worse than that specifically submitted for ranking purposes.
There was an article in University World News in which Ben Sowter of QS took a rather defensive position with regard to data scraping. He need not have done so. In fact it would not be a bad idea for QS and others to do a bit more.
Bekhradnia goes on to criticise the reputation surveys. He notes that recycling unchanged responses over a period of five years, originally three, means that it is possible that QS is counting the votes of dead or retired academics. He also points out that the response rate to the surveys is very low. All this is correct although it is nothing new. But it should be pointed out that what is significant is not how many respondents there are but how representative they are of the group that is being investigated. The weighting given to surveys in the THE and QS rankings is clearly too much and QS's methods of selecting respondents are rather incoherent and can produce counter-intuitive results such as extremely high scores for some Asian and Latin American universities.
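The point about representativeness is easy to demonstrate with a toy simulation: a large sample that over-represents one group of respondents can give a worse estimate of overall opinion than a much smaller random one. The sketch below uses entirely invented numbers.

```python
# Toy illustration: a large but biased sample estimates a population mean
# worse than a small random one. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Population of 100,000 academics rating a university from 0 to 100,
# split into two groups with different average opinions.
group_a = rng.normal(60, 10, 80_000)   # 80% of the population
group_b = rng.normal(75, 10, 20_000)   # 20% of the population
population = np.concatenate([group_a, group_b])

# Small simple random sample of 300.
random_sample = rng.choice(population, 300, replace=False)

# Large sample of 10,000 that draws half its respondents from group B.
biased_sample = np.concatenate([rng.choice(group_a, 5_000, replace=False),
                                rng.choice(group_b, 5_000, replace=False)])

print(f"True mean:            {population.mean():.1f}")
print(f"Random sample (300):  {random_sample.mean():.1f}")
print(f"Biased sample (10k):  {biased_sample.mean():.1f}")
```

The large biased sample lands several points above the true mean while the small random sample stays close to it: size is no cure for an unrepresentative pool of respondents.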
However, it is going too far to suggest that surveys should have no role. First, reputation and perceptions are far from insignificant. Many students would, I suspect, prefer to go to a university that is overrated by employers and professional schools than to one that provides excellent instruction and facilities but has failed to communicate this to the rest of the world.
In addition, surveys can provide a reality check when a university does a bit of gaming. For example King Abdulaziz University (KAU) has been diligently offering adjunct contracts to dozens of highly cited researchers around the world that require them to put the university as a secondary affiliation and thus allow it to get huge numbers of citations. The US News Arab Region rankings have KAU in the top five among Arab universities for a range of research indicators, publications, cited publications, citations, field weighted citation impact, publications in the top 10 % and the top 25%. But its academic reputation rank was only 26, definitely a big thumbs down.
Bekhradnia then refers to the advantage that universities get in the ARWU rankings simply by being big. This is certainly a valid point. However, it could be argued that quantity is a necessary prerequisite to quality and enables the achievement of economies of scale.
He also suggests that the practice of presenting lists in order is misleading since a trivial difference in the raw data could mean a substantial difference in the presented ranking. He proposes that it would be better to group universities into bands. The problem with this is that when rankers do resort to banding, it is fairly easy to calculate an overall score by adding up the published components. Bloggers and analysts do it all the time.
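Here, for illustration, is the sort of arithmetic involved. The weights below are THE's published pillar weights (30% each for teaching, research and citations, 7.5% for international outlook and 2.5% for industry income); the pillar scores are invented. Publish those components alongside a band and the composite, and therefore an ordering within the band, comes straight back.

```python
# Sketch: reconstructing an overall score from published pillar scores.
# Weights are THE's stated pillar weights; the scores are made up.
weights = {"teaching": 0.30, "research": 0.30, "citations": 0.30,
           "international_outlook": 0.075, "industry_income": 0.025}

def overall(scores):
    return sum(weights[k] * scores[k] for k in weights)

university_x = {"teaching": 55.0, "research": 48.0, "citations": 80.0,
                "international_outlook": 70.0, "industry_income": 40.0}

print(f"Reconstructed overall score: {overall(university_x):.1f}")
# Even if the ranker only publishes a band (say 201-250), the pillar scores
# released alongside the band let anyone rebuild the composite and re-rank
# the band's members.
```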
Bekhradnia concludes:
"The international surveys of reputation should be dropped
– methodologically they are flawed, effectively they only
measure research performance and they skew the results in
favour of a small number of institutions."
This is ultimately self-defeating. The need and the demand for some sort of ranking is too widespread to set aside. Abandon explicit rankings and we will probably have implicit rankings or recommendations by self-declared experts.
There is much to be done to make rankings better. The priority should be finding objective and internationally comparable measures of student attributes and attainment. That will be some distance in the future. For the moment what universities should be doing is to focus not on composite rankings but on the more reasonable and reliable indicators within specific rankings.
Bekhradnia does have a very good point at the end:
"Finally, universities and governments should discount therankings when deciding their priorities, policies and actions.In particular, governing bodies should resist holding seniormanagement to account for performance in flawed rankings.Institutions and governments should do what they do becauseit is right, not because it will improve their position in therankings."
I would add that universities should stop celebrating when they do well in the rankings. The grim fate of Middle East Technical University should be present in the mind of every university head.
Sunday, January 22, 2017
What's Wrong with Ottawa?
The University of Ottawa (UO) has been a great success over the last few years, especially in research. In 2004 it was around the bottom third of the 202-300 band in the Shanghai Academic Ranking of World Universities. By 2016 it had reached the 201st place, although the Shanghai rankers still recorded it as being in the 201-300 band. Another signing of a highly cited researcher, another paper in Nature, a dozen more papers listed in the Science Citation Index and it would have made a big splash by breaking into the Shanghai top 200.
The Shanghai rankings have, apart from recent problems with the Highly Cited Researchers indicator, maintained a stable methodology so this is a very solid and remarkable achievement.
A look at the individual components of these rankings shows that UO has improved steadily in the quantity and the quality of research. The score for publications rose from 37.8 to 44.4 between 2004 and 2016, from 13.0 to 16.1 for papers in Nature and Science, and from 8.7 to 14.5 for highly cited researchers (Harvard is 100 in all cases). For productivity (five indicators divided by number of faculty) the score went from 13.2 to 21.5 (Caltech is 100).
It is well known that the Shanghai rankings are entirely about research and ignore the arts and humanities. The Russian Round University Rankings (RUR), however, get their data from the same source as THE did until two years ago, include data from the arts and humanities, and have a greater emphasis on teaching related indicators.
In the RUR rankings, UO rose from 263rd place in 2010 to 211th overall in 2015, from 384th to 378th in five combined teaching indicators and from 177th to 142nd in five combined research indicators. Ottawa is doing well for research and creeping up a bit for teaching related criteria, although the relationship between these and actual teaching may be rather tenuous.
RUR did not rank UO in 2016. I cannot find any specific reason but it is possible that the university did not submit data for the Institutional Profiles at Research Analytics.
Just for completeness, Ottawa is also doing well in the Webometrics ranking, which is mainly about web activity but does include a measure of research excellence. It is in the 201st spot there also.
It seems, however, that this is not good enough. In September, according to Fulcrum, the university newspaper, there was a meeting of the Board of Governors which discussed not the good results from RUR, Shanghai Ranking and Webometrics, but a fall in the Times Higher Education (THE) World University Rankings from the 201-250 band in 2015-16 to the 250-300 band in 2016-17. One board member even suggested taking THE to court.
So what happened to UO in last year's THE world rankings? The only area where it fell was for Research, from 36.7 to 21.0. In the other indicators or indicator groups, Teaching, Industry Income, International Orientation, Research Impact (citations), it got the same score or improved.
But this is not very helpful. There are actually three components in the research group of indicators, which has a weighting of 30%, two of which are scaled. A fall in the research component might be caused by a fall in its score for research reputation, a decline in its reported research income, a decline in the number of publications, a rise in the number of academic staff, or some combination of these.
The fall in UO's research score could not have been caused by more faculty. The number of full time faculty was 1,284 in 2012-13 and 1,281 in 2013-14.
There was a fall of 7.6% in Ottawa's "sponsored research income" between 2013 and 2014 but I am not sure if that is enough to produce such a large decline in the combined research indicators.
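A back-of-the-envelope check suggests it probably is not. THE's published methodology gives the research pillar a reputation component worth 18% of the overall score and income and publications components worth 6% each, with the latter two scaled by staff numbers. The sketch below is a deliberately crude version of that arithmetic: the reputation, income and publication figures are invented, only the staff numbers come from the figures quoted above, and THE's real calculation standardises each component against the whole ranked population before weighting.

```python
# Illustrative sketch only: a research "pillar" built from a reputation
# score plus income and publications scaled by academic staff numbers.
# The internal 18/6/6 split follows THE's published weights, but the
# normalisation here is simplified and the input figures are invented.

def research_pillar(reputation, income, publications, staff):
    income_per_staff = income / staff
    papers_per_staff = publications / staff
    # Scale the per-staff figures against arbitrary reference values so
    # everything sits on a 0-100 range for this toy example.
    income_score = min(100, 100 * income_per_staff / 100_000)  # $100k per staff = 100
    papers_score = min(100, 100 * papers_per_staff / 5)        # 5 papers per staff = 100
    return (18 * reputation + 6 * income_score + 6 * papers_score) / 30

# Same reputation and output, but a 7.6% fall in reported research income:
before = research_pillar(reputation=30, income=90_000_000, publications=4_500, staff=1_284)
after = research_pillar(reputation=30, income=83_160_000, publications=4_500, staff=1_281)
print(f"{before:.1f} -> {after:.1f}")
```

Under these toy assumptions a 7.6% fall in income shaves only about a point off the pillar, nowhere near enough to explain a drop from 36.7 to 21.0.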
My suspicion is -- and until THE disaggregate their indicators it cannot be anything more -- that the problem lies with the 18% weighted survey of postgraduate teaching. Between 2015 and 2016 the percentage of survey respondents from the arts and humanities was significantly reduced while that from the social sciences and business studies was increased. This would be to the disadvantage of English speaking universities, including those in Canada, relatively strong in the humanities and to the advantage of Asian universities relatively strong in business studies. UO, for example, is ranked highly by Quacquarelli Symonds (QS) for English, Linguistics and Modern Languages, but not for Business Management Studies and Finance and Accounting.
This might have something to do with THE wanting to get enough respondents for business studies after they had been taken out of the social sciences and given their own section. If that is the case, Ottawa might get a pleasant surprise this year since THE are now treating law and education as separate fields and may have to find more respondents to get around the problem of small sample sizes. If so, this could help UO which appears to be strong in those subjects.
It seems, according to another Fulcrum article, that the university is being advised by Daniel Calto from Elsevier. He correctly points out that citations had nothing to do with this year's decline. He then talks about the expansion in the size of the rankings with newcomers pushing in front of UO. It is unlikely that this in fact had a significant effect on the university since most of the newcomers would probably enter below the 300 position and since there has been no effect on its score for teaching, international orientation, industry income or citations (research impact).
I suspect that Calto may have been incorrectly reported. Although he says it was unlikely that citations could have had anything to do with the decline, he is reported later in the article to have said that THE's exclusion of kilo-papers (with 1,000 authors) affected Ottawa. But the kilo-papers were excluded in 2015 so that could not have contributed to the fall between 2015 and 2016.
The Fulcrum article then discusses how UO might improve. M'hamed Aisati, a vice-president at Elsevier, suggests getting more citations. This is frankly not very helpful. The THE methodology means that more citations are meaningless unless they are concentrated in exactly the right fields. And if more citations are accompanied by more publications then the effect could be counter-productive.
If UO is concerned about a genuine improvement in research productivity and quality there are now several global rankings that are quite reasonable. There are even rankings that attempt to measure things like innovation, teaching resources, environmental sustainability and web activity.
The THE rankings are uniquely opaque in that they hide the scores for specific indicators; they are extremely volatile; and they depend far too much on dodgy data from institutions and on reputation surveys that can be extremely unstable. Above all, the citations indicator is a hilarious generator of absurdity.
The University of Ottawa, and other Canadian universities, would be well advised to forget about the THE rankings or at least not take them so seriously.
Monday, January 09, 2017
Outbreak of Rankophilia
A plague is sweeping the universities of the West: rankophilia, an irrational and obsessive concern with position and prospects in global rankings, and an unwillingness to exercise normal academic caution and scepticism.
The latest victim is Newcastle University whose new head, Chris Day, wants to make his new employer one of the best in the world. Why does he want to do that?
"His ambition - to get the university into the Top 100 in the world - is not simply a matter of personal or even regional pride, however. With universities increasingly gaining income from foreign students who often base their choices of global rankings, improving Newcastle’s position in the league tables has economic consequences."So Newcastle is turning its back on its previous vision of becoming a "civic university" and will try to match its global counterparts. It will do that by enhancing its research reputation.
"While not rowing back from Prof Brink’s mantra of “what are we good at, but what are we good for?”, Prof Day’s first week in the job saw him highlighting the need for Newcastle to concentrate on improving its reputation for academic excellence."
It is sad that Day seems to believe that the core business of a university is not enough and that what really matters is proper marketing and shouting to rise up the tables. Meanwhile, Cambridge is yearning to regain its place in the QS top three and Yale is putting new emphasis on science and research with an eye on the global rankings.

Perhaps Newcastle will ascend into the magic 100, but the history of the THE rankings over the last few years is full of universities -- Alexandria, University of Tokyo, Tokyo Metropolitan University, University of Copenhagen, Royal Holloway, University of Cape Town, Middle East Technical University and others -- that have soared in the THE rankings for a while and then fallen, often because of nothing more than a twitch of a methodological finger.
Sunday, January 01, 2017
Ranking Teaching Quality
There has been a lot of talk lately about the quality of teaching and learning in universities. This has always been an important element in national rankings such as the US News America's Best Colleges and the Guardian and Sunday Times rankings in the UK, measured by things like standardised test scores, student satisfaction, reputation surveys, completion rates and staff student ratio.
There have been suggestions that university teaching staff need to be upgraded by attending courses in educational theory and practice or by obtaining some sort of certification or qualification.
The Higher Education Funding Council of England (HEFCE) has just published data on the number of staff with educational qualifications in English higher educational institutions.
The university with the largest number of staff with some sort of educational qualification is Huddersfield which unsurprisingly is very pleased. The university's website reports HEFCE's assertion that “information about teaching qualifications has been identified as important to students and is seen as an indicator of individual and institutional commitment to teaching and learning.”
The top six universities are:
1. University of Huddersfield
2. Teesside University
3. York St John University
4. University of Chester
5= University of St Mark and St John
5= Edge Hill University.
The bottom five are:
104= London School of Economics
104= Courtauld Institute of Art
106. Goldsmith's College
107= University of Cambridge
107= School of Oriental and African Studies (SOAS).
It seems that these data provide almost no evidence that a "commitment to teaching and learning" is linked with any sort of positive outcome. Correlations with the overall scores in the Guardian rankings and the THE Teaching Excellence Framework simulation are negative (-.550 and -.410 [-.204 after benchmarking]).
In addition, the correlation between the percentage of staff with teaching qualifications and the Guardian indicators is negative for student satisfaction with the course (-.161, insignificant), student satisfaction with teaching (-.197, insignificant), value added (-.352) and graduate employment (-.379).
But there is a positive correlation with student satisfaction with feedback (.323).
The correlations with the indicators in the THE simulation were similar: graduate employment -.416 (-.249 after benchmarking), completion -.449 (-.130, insignificant, after benchmarking), and student satisfaction -.186, insignificant (-.056 after benchmarking, also insignificant).
The report does cover a variety of qualifications so it is possible that digging deeper might show that some types of credentials are more useful than others. Also, there are intervening variables: some of the high scorers, for example, are upgraded teacher training colleges with a relatively low status and a continuing emphasis on education as a subject.
Still, unless you count a positive association with feedback, there is no sign that forcing or encouraging faculty to take teaching courses and credentials will have positive effects on university teaching.
Wednesday, December 21, 2016
The University of Tokyo did not fall in the rankings. It was pushed.
Times Higher Education (THE) has published an article by Devin Stewart that refers to a crisis of Japanese universities. He says:
"After Japan’s prestigious University of Tokyo fell from its number one spot to number seven in Times Higher Education’s Asia University Rankings earlier this year, I had a chance to travel to Tokyo to interview more than 40 people involved with various parts of the country’s education system.
Students, academics and professionals told me they felt a blow to their national pride from the news of the rankings drop. I found that the THE rankings result underscored the complex problems plaguing the country’s institutions of higher learning."
The fall of Todai in the THE Asian rankings was preceded by a fall in the World University Rankings (WUR) from 23rd place in the 2014 rankings (2014-2015) to 43rd in 2015 (2015-2016). Among Asian universities in the WUR it fell from first place to third.
This was not the result of anything that happened to Todai over the course of a year. There was no exodus of international students, no collapse of research output, no mass suicide of faculty, no sudden and miraculous disappearance of citations. It was the result of a changing methodology including the exclusion from citation counts of mega-papers, mainly in particle physics, with more than a thousand authors. This had a disproportionate impact on the University of Tokyo, whose citation score fell from 74.7 to 60.9, and some other Japanese universities.
The university made a bit of a comeback in the world rankings this year, rising to 39th (with a slightly improved citations score of 62.4) after THE did some more tweaking and gave limited credit for citations of the mega-papers.
Todai did even worse in the 2016 Asian rankings, derived from the world rankings, falling to an embarrassing seventh place behind two Singaporean, two Chinese and two Hong Kong universities. How did that happen? There was nothing like this in other rankings. Todai's position in the Shanghai Academic Ranking of World Universities (ARWU) actually improved between 2015 and 2016, from 21st to 20th, and in the Round University Rankings from 47th to 37th, and it remained the top Asian university in the CWUR, URAP and National Taiwan University rankings.
Evidently THE saw things that others did not. They decided that Hong Kong and Mainland China were separate entities for ranking purposes and that Mainland students, faculty and collaborators in Hong Kong universities would be counted as international. The international orientation score of the University of Hong Kong (UHK) in the Asian rankings accordingly went up from 81.9 to 99.5 between 2015 and 2016. Peter Mathieson of the University of Hong Kong was aware of this and warned everyone not to get too excited. Meanwhile universities such as Hong Kong University of Science and Technology (HKUST) and Nanyang Technological University (NTU) Singapore were getting higher scores for citations, almost certainly as a result of the methodological changes.
In addition, as noted in earlier posts, THE recalibrated the weighting assigned to its indicators, reducing that given to the research and teaching reputation surveys, where Todai is a high flier, and increasing that for income from industry where Peking and Tsinghua universities have perfect scores and NTU, HKUST and UHK do better than Tokyo.
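The effect of recalibration alone is easy to illustrate. In the toy example below both universities' indicator scores are identical in both years and only the weights change; the scores and both weight vectors are invented for illustration, not THE's actual numbers.

```python
# Toy example: identical indicator scores, different weights, different winner.
# All scores are invented; the two weight vectors are illustrative only.
scores = {
    "Todai-like": {"teaching": 80, "research": 85, "citations": 60,
                   "international": 40, "industry": 50},
    "Rival":      {"teaching": 65, "research": 70, "citations": 75,
                   "international": 75, "industry": 95},
}

old_weights = {"teaching": 0.30, "research": 0.30, "citations": 0.30,
               "international": 0.075, "industry": 0.025}
new_weights = {"teaching": 0.25, "research": 0.27, "citations": 0.30,
               "international": 0.10, "industry": 0.08}

def composite(s, w):
    return sum(w[k] * s[k] for k in w)

for name, s in scores.items():
    print(f"{name:12s} old {composite(s, old_weights):.1f}  new {composite(s, new_weights):.1f}")
```

The "Todai-like" university leads under the first weighting and trails under the second without anything at the university having changed, which is the kind of reshuffle a recalibrated ranking can produce on its own.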
In 2015 THE issued a health warning:
"Because of changes in the underlying data, we strongly advise against direct comparisons with previous years’ World University Rankings."
They should have done that for the 2016 Asian rankings which added further changes. It is regrettable that THE has published an article which refers to a fall in the rankings. There has been no fall in any real sense. There has only been a lot of recalibration and changes in the way data is processed.
Japanese higher education should not be ashamed of any decline in quality. If there had been one, especially in research, it would have shown up in other, more stable and less opaque rankings. It should, however, be embarrassed if it allows national and university policies to be driven by methodological tweaking.
Friday, December 16, 2016
A new Super-University for Ireland?
University rankings have become extremely influential over the last few years. This is not entirely a bad thing. The initial publication of the Shanghai rankings in 2003, for example, exposed the pretensions of many European universities, revealing just how far behind they had fallen in scientific research. It also showed China how far it had to go to achieve scientific parity with the West.
Unfortunately, rankings have also had malign effects. The THE and QS world rankings have acquired a great deal of respect, trust, even reverence that may not be entirely deserved. Both introduced significant methodological changes in 2015, and THE made further changes in 2016; the consequence is that there have been some remarkable rises and falls within the rankings that have had a lot of publicity but have little to do with any real change in quality.
In addition, both QS and THE have increased the number of ranked universities, which can affect the mean score for indicators from which the processed scores given to the public are derived. Both have surveys that can be biased and subjective. Both are unbalanced: QS with a 50% weighting for academic and employer surveys, and THE with field- and year-normalised citations plus a partial regional modification carrying an official weighting of 30% (the modification means that everybody except the top scorer gets a bonus for citations). The remarkable rise of Anglia Ruskin University to parity with Oxford and Princeton in this year’s THE research impact (citations) indicator, and the high placing of the Pontifical Catholic University of Chile and the National University of Colombia in QS’s employer survey, are evidence that these rankings continue to be implausible and unstable. To make higher education policy dependent on their fluctuations is very unwise.
This is particularly true of the two leading Irish universities, Trinity College Dublin (TCD) and University College Dublin (UCD), which have in fact been advancing in the Round University Rankings produced by a Russian organisation and in ShanghaiRanking’s Academic Ranking of World Universities. These two global rankings have methodologies that are generally stable and transparent.
I pointed out in 2015 that TCD had been steadily rising in the Shanghai ARWU since 2004, especially in the Publications indicator (papers in the Science Citation Index-Expanded and the Social Science Citation Index) and PCP (productivity per capita, that is the combined indicator scores divided by the number of faculty). This year, to repeat an earlier post, TCD’s publication score again went up very slightly, from 31 to 31.1 (27.1 in 2004), and the PCP quite significantly, from 19 to 20.8 (13.9 in 2004), compared to top scores of 100 for Harvard and Caltech respectively.
UCD has also continued to do well in the Shanghai rankings, with the publications score rising this year from 34.1 to 34.2 (27.3 in 2004) and PCP from 18.0 to 18.1 (8.1 in 2004).
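For readers unfamiliar with how PCP is constructed, ShanghaiRanking's published methodology describes it as the weighted scores of the other five indicators divided by the number of full-time-equivalent academic staff, rescaled so that the top institution (Caltech in recent tables) scores 100. The sketch below follows that description with invented inputs; the indicator scores and staff numbers are placeholders, not real data.

# Sketch of ARWU's per capita performance (PCP) indicator, following the
# published description: weighted scores of the other five indicators
# divided by FTE academic staff, rescaled so the top scorer gets 100.
# Indicator scores and staff numbers are invented placeholders.
weights = {"Alumni": 0.10, "Award": 0.20, "HiCi": 0.20, "N&S": 0.20, "PUB": 0.20}

def pcp_raw(scores, fte_staff):
    return sum(w * scores[name] for name, w in weights.items()) / fte_staff

caltech = pcp_raw({"Alumni": 70, "Award": 90, "HiCi": 60, "N&S": 70, "PUB": 50}, 350)
tcd     = pcp_raw({"Alumni": 15, "Award": 20, "HiCi": 12, "N&S": 12, "PUB": 31}, 500)

top = caltech   # assume Caltech is the top raw scorer, as in the real table
print(f"Caltech PCP: {100 * caltech / top:.1f}")   # 100.0 by construction
print(f"TCD PCP:     {100 * tcd / top:.1f}")       # about 18.9 with these inputs

This also makes clear why faculty numbers matter as much as output: a university can raise its PCP either by producing more or by reporting fewer staff.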
The Shanghai rankings are, of course, famous for not counting the arts and humanities and not trying to measure anything related to teaching. The RUR rankings from Russia are based on Thomson Reuters data, also used by THE until two years ago, and they do include publications in the humanities and teaching-related metrics. They have 12 of the 13 indicators in the THE World University Rankings, plus eight others, but with a sensible weighting, for example 8% instead of 30% for field-normalised citations.
The RUR rankings show that TCD rose from 174th overall in 2010 to 102nd in 2016 (193rd to 67th for research). UCD rose from 213th overall to 195th (157th to 69th for research), although some Irish universities such as NUI Galway, NUI Maynooth, University College Cork, and Dublin City University have fallen.
It is thoroughly disingenuous for Irish academics to claim that academic standards are declining because of a lack of funds. Perhaps standards will decline in the future, but so far everything suggests that the two leading Irish universities are making steady progress, especially in research.
The fall of UCD in this year’s THE rankings, TCD’s fall in 2015, and the fall of both in the QS rankings mean very little. When there are such large methodological changes it is pointless to discuss how to improve in the rankings. Methodological changes can be made and unmade, and universities made and unmade, as the Middle East Technical University found in 2015 when it fell from 85th place in the THE world rankings to below 501st.
The Irish Times of November 8th had an article by Philip O’Kane that proposed that Irish universities should combine in some ways to boost their position in the global rankings. He suggested that:
“The only feasible course of action for Ireland to avert continued sinking in the world rankings is to create a new “International University of Ireland”.
This could be a world-class research university that consists exclusively of the internationally-visible parts of all our existing institutions, and to do so at marginal cost using joint academic appointments, joint facilities and joint student registration, in a highly flexible and dynamic manner.
Those parts that are not internationally visible would be excluded from this International University of Ireland.”
It sounds like he is proposing that universities maintain their separate identities for some purposes but present a united front for international matters. This idea was proposed in India a while ago but was quickly shot down by Phil Baty of THE. It is most unlikely that universities could separate out the faculty, student, income and publication data of their international bits and send it to the rankers.
The idea of a full merger is more practical but could be pointless or even counter-productive. In 2012 a group of experts, headed by European Commissioner Frans Van Vught, suggested that UCD and TCD be merged to become a single world class university.
The ironic thing about this idea is that a merger would help with the Shanghai rankings that university bosses are studiously pretending do not exist but would be of little or no use with the rankings that the bureaucrats and politicians do care about.
The Shanghai rankings are known for being as much about quantity as quality. A merger of TCD and UCD would produce a significant gain for the combined university by adding together the numbers of publications, papers in Nature and Science, and highly cited researchers. It would do no good for Nobel and Fields awards, since Trinity has two now and UCD none, so the new institution would still only have two (ShanghaiRanking does not count Peace and Literature). Overall, it is likely that the new Irish super-university would rise about a dozen places in the Shanghai rankings, perhaps even getting into the top 150 (TCD is currently 162nd).
But it would probably not help with the rankings that university heads are so excited about. Many of the indicators in the QS and THE rankings are scaled in some way. You might get more citations by adding together those of TCD and UCD, for instance, but QS divides them by the number of faculty, which would also be combined if there were a merger. You could combine the incomes of TCD and UCD, but then the combined income would be divided by the combined staff numbers.
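A quick arithmetic sketch makes the point. Any per-capita indicator of a merged institution is a weighted average of the two original ratios, so it always lands between them and can never exceed the better one. The figures below are invented purely for illustration.

# Invented figures showing why merging does not help with scaled indicators:
# the merged ratio is a weighted average of the two original ratios.
tcd_citations, tcd_faculty = 60_000, 1_500
ucd_citations, ucd_faculty = 70_000, 2_000

tcd_ratio = tcd_citations / tcd_faculty                                        # 40.0
ucd_ratio = ucd_citations / ucd_faculty                                        # 35.0
merged_ratio = (tcd_citations + ucd_citations) / (tcd_faculty + ucd_faculty)   # about 37.1

print(tcd_ratio, ucd_ratio, round(merged_ratio, 1))

The same applies to income per staff member: the totals rise after a merger, but so do the denominators.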
The only place where a merger would be of any use is the survey criteria, 50% in QS and 33% in THE, but the problem here is that the reputation of a new University of Dublin or Ireland or whatever it is called is likely to be inferior to that of TCD and UCD for some years to come. There are places where merging universities is a sensible way of pooling the strengths of a multitude of small specialist schools and research centres, for example France and Russia. But for Ireland there is little point if the idea is to get ahead in the QS and THE rankings.
It would make more sense for Irish universities to focus on the Shanghai rankings where, if present trends continue, TCD will catch up with Harvard in about 240 years, although by then the peaks of the intellectual world will probably be in Seoul, Shanghai, Moscow, Warsaw and Tallinn.
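The assumptions behind that catch-up figure are not spelled out, but a back-of-the-envelope extrapolation from the ARWU scores quoted above gives numbers of the same order, if one assumes Harvard stays parked at 100 and TCD keeps improving at its 2004-2016 pace.

# Rough linear extrapolation from the ARWU figures quoted in this post,
# assuming Harvard stays at 100 and TCD improves at its 2004-2016 rate.
# The post's ~240-year figure presumably rests on slightly different assumptions.
def years_to_catch_up(score_2004, score_2016, target=100.0):
    annual_gain = (score_2016 - score_2004) / 12
    return (target - score_2016) / annual_gain

print(round(years_to_catch_up(27.1, 31.1)))   # Publications: about 207 years
print(round(years_to_catch_up(13.9, 20.8)))   # PCP: about 138 years

Either way the conclusion is the same: at current rates of progress, catching Harvard is a project measured in centuries.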
Saturday, December 03, 2016
Yale Engages with the Rankings
Over the last few years, elite universities have become increasingly concerned with their status in the global rankings. A decade ago university heads were inclined to ignore rankings or to regard them as insignificant, biased or limited. The University of Texas at Austin, for example, did not take part in the 2010 Times Higher Education (THE) rankings, although it relented and submitted data in 2011 after learning that other US public institutions had done so and had scored better than in the preceding THES-QS rankings.
It seems that things are changing. Around the world, excellence initiatives, one element of which is often improving the position of aspiring universities in international rankings, are proliferating.
It should be a major concern that higher education policies and priorities are influenced or even determined by publications that are problematic and incomplete in several ways. Rankings count what can be counted, and that usually means a strong emphasis on research. Indeed, in the case of the Taiwan, URAP and Shanghai rankings that is all they are concerned with. Attempts to measure teaching, especially undergraduate teaching, have been rather haphazard. Although the US News Best Colleges ranking includes measures of class size, admission standards, course completion and peer evaluation, indicators in global rankings such as THE and Quacquarelli Symonds (QS) focus on inputs such as staff-student ratio or income that might have some relation to eventual student or graduate outcomes.
It is sad that some major universities are less interested in developing the assessment of teaching or student quality and more in adjusting their policies and missions to the agenda of the rankings, particularly the THE world rankings.
Yale is now jumping on the rankings carousel. For decades it has been happily sitting on top of the US News college rankings, making up the top three along with Princeton and Harvard. But Yale does much less well in the current global rankings. This year it is ranked 11th by the Shanghai rankings (9th among US universities), 15th by QS (7th among US universities, and behind Nanyang Technological University and Ecole Polytechnique Federale Lausanne), and 12th in the THE world rankings (8th in the USA).
And so:
"For an example of investing where Yale must be strong, I want to touch very briefly on rankings, although I share your nervousness about being overly reliant on what are far-from-perfect indicators. With our unabashed emphasis on undergraduate education, strong teaching in Yale College, and unsurpassed residential experience, Yale has long boasted one of the very highest-ranked colleges, perennially among the top three. In the ratings of world research universities, however, we tend to be somewhere between tenth and fifteenth. This discrepancy points to an opportunity, and that opportunity is science, as it is the sciences that most differentiate Yale from those above us on such lists."
The reasons for the difference between the US and the world rankings are that Yale is relatively small compared to the other Ivy League members and the leading state universities, that it is strong in the arts and humanities, and that it has a good reputation for undergraduate teaching.
One of the virtues of global rankings is the exposure of the weaknesses of western universities, especially in the teaching of, and research in, STEM subjects, and it does no harm for Yale to shift a bit from the humanities and social sciences to the hard sciences. To take account of research-based rankings with a consistent methodology, such as URAP, National Taiwan University or the Shanghai rankings, is quite sensible. But Yale is asking for trouble if it becomes overly concerned with rankings such as THE or QS that are inclined to destabilising changes in methodology, rely on subjective survey data, assign disproportionate weights to certain indicators, emphasise inputs such as income or faculty resources rather than actual achievement, are demonstrably biased, and include indicators that are extremely counter-intuitive (Anglia Ruskin with a research impact equal to Princeton's and greater than Yale's, the Pontifical Catholic University of Chile 28th in the world for employer reputation).
Yale would be better off if it encouraged the development of cross-national tools to measure student achievement and quality of teaching or ranking metrics that assigned more weight to the humanities and social sciences.
Monday, November 21, 2016
TOP500 Supercomputer Rankings
Every six months TOP500 publishes a list of the five hundred most powerful computer systems in the world. This is probably a good guide to the economic, scientific and technological future of the world's nation states.
The most noticeable change since November 2015 is that the number of supercomputers in China has risen dramatically from 108 to 171 systems while the USA has fallen from 200 to 171. Japan has fallen quite considerably from 37 to 27 and Germany and the UK by one each. France has added two supercomputers to reach 20.
In the whole of Africa there is exactly one supercomputer, in Cape Town. In the Middle East there are five, all in Saudi Arabia, three of them operated by Aramco.
Here is a list of countries with the number of computers in the top 500.
China 171
USA 171
Germany 32
Japan 27
France 20
UK 17
Poland 7
Italy 6
India 5
Russia 5
Saudi Arabia 5
South Korea 4
Sweden 4
Switzerland 4
Australia 3
Austria 3
Brazil 3
Netherlands 3
New Zealand 3
Denmark 2
Finland 2
Belgium 1
Canada 1
Czech Republic 1
Ireland 1
Norway 1
Singapore 1
South Africa 1
Spain 1
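For anyone who wants to reproduce or update this tally, the TOP500 site publishes the full list for each edition. A minimal sketch is below; it assumes the list has been downloaded as a CSV with a "Country" column, and the file name is a placeholder.

# Tally systems per country from a downloaded TOP500 list.
# Assumes a CSV export with a "Country" column; the file name is a placeholder.
import csv
from collections import Counter

with open("top500_nov2016.csv", newline="", encoding="utf-8") as f:
    countries = Counter(row["Country"] for row in csv.DictReader(f))

for country, systems in countries.most_common():
    print(f"{country}: {systems}")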
Friday, November 18, 2016
QS seeks a Passion Integrity Empowerment and Diversity compliant manager
The big ranking brands seem to be suffering from a prolonged fit of megalomania, perhaps caused by the toxic gases of Brexit and the victory of the deplorables. The "trusted" THE, led by the "education secretary of the world", has just made a foray into the US college ranking market, published a graduate employability ranking and is now going to the University of Johannesburg for a BRICS Plus Various Places summit.
Meanwhile the "revered" QS, creator of "incredibly successful ranking initiatives" also appears to be getting ready for bigger and better things. They are advertising for a Ranking Manager who will be
"a suitably accomplished and inspirational leader", and possess "a combination of analytical capability, thought leadership and knowledge of the global higher education landscape" and " ensure an environment of Passion, Integrity, Empowerment and Diversity is maintained" and be "(h)ighly analytical with extensive data modelling experience" and have "great leadership attributes".
And so on and so on. Read it yourself. If you can get through to the end without laughing you could be a suitable candidate.
I can't wait to see who gets the job.
Wednesday, November 02, 2016
More on teaching-centred rankings
The UK is proposing to add a Teaching Excellence Framework (TEF) to the famous, or infamous, Research Excellence Framework (REF). The idea is that universities are to be judged according to their teaching quality which is to be measured by how many students manage to graduate, how satisfied students are with their courses and whether graduates are employed or in postgraduate courses shortly after graduation.
There are apparently going to be big rewards for doing well according to these criteria. It seems that universities that want to charge high tuition fees must reach a certain level.
Does one have to be a hardened cynic to suspect that there is going to be a large amount of manipulation if this is put into effect? Universities will be ranked according to the proportion of students completing their degrees? They will make graduating requirements easier, abolish compulsory courses in difficult things like dead white poets, foreign languages or maths, or allow alternative methods of assessment, group work, art projects and so on. We have, for example, already seen how the number of first and upper second class degrees awarded by British universities has risen enormously in the last few years.
Universities will be graded by student satisfaction? Just let the students know, very subtly of course, that if they say their university is no good then employers are less likely to give them jobs. Employment or postgraduate courses six months after graduation? Lots of internships and easy admissions to postgraduate courses.
In any case, it is all probably futile. A look at the Guardian University Guide rankings in a recent post here shows that if you want to find out about student outcomes six months after graduation, the most relevant number is the average entry tariff, that is, 'A' level grades three or four years earlier.
I doubt very much that employers and graduate, professional and business schools are really interested in the difference between an A and an A* grade, or even an A and a B. Bluntly, they choose candidates who they think are intelligent and trainable, something which correlates highly with 'A' level grades or, across the Anglosphere Lake, SAT, ACT and GRE scores, and who display other non-cognitive characteristics such as conscientiousness and open-mindedness. Also, they tend to pick people who resemble themselves as much as possible. Employers and schools tend to select candidates from those universities that are more likely to produce large numbers of graduates with the desired attributes.
Any teaching assessment exercise that does not measure or attempt to measure the cognitive skills of graduates is likely to be of little value.
In June, Times Higher Education (THE) ran a simulation of the ranking of UK universities that might result from the TEF exercise. There were three indicators: student completion of courses, student satisfaction, and graduate destinations, that is, the number of graduates employed or in postgraduate courses six months after graduation. In addition to absolute scores, universities were benchmarked for gender, ethnicity, age, disability and subject.
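THE's article does not spell out exactly how the benchmarks are calculated. A common approach to this sort of adjustment, sketched below purely as an assumption, is indirect standardisation: work out the score a university would be expected to get given its mix of students and subjects, then report the gap between its actual score and that expectation.

# A sketch of one common benchmarking approach (indirect standardisation).
# This is an assumption about how such benchmarks are typically built,
# not a description of THE's actual procedure; all figures are invented.
sector_completion = {"humanities": 0.88, "sciences": 0.84, "business": 0.80}

def expected_completion(subject_mix):
    """Completion rate expected from the university's subject mix alone."""
    return sum(share * sector_completion[subj] for subj, share in subject_mix.items())

subject_mix = {"humanities": 0.5, "sciences": 0.3, "business": 0.2}
actual = 0.90

benchmark = expected_completion(subject_mix)   # 0.852 with these figures
distance = actual - benchmark                  # +0.048: above expectation

print(f"benchmark: {benchmark:.3f}, distance from benchmark: {distance:+.3f}")

A real exercise would benchmark on several characteristics at once (gender, ethnicity, age, disability and subject), but the logic is the same.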
There are many questions about the methodology of the THE exercise, some of which are raised in the comments on the THE report.
The THE simulation appears to confirm that students' academic ability is more important than anything else when it comes to their career prospects. Comparing the THE scores for graduate destinations (absolute) with the other indicators in the THE TEF simulation and the Guardian rankings, we get the following correlations.
Graduate Destinations (THE absolute) and:
Average Entry Tariff (Guardian) .772
Student completion (THE absolute) .750
Staff student ratio (Guardian inverted) .663
Spending per student (Guardian) .612
Satisfaction with course (Guardian) .486
Student satisfaction (THE absolute) .472
Satisfaction with teaching (Guardian) .443
Value added (Guardian) .347
Satisfaction with feedback (Guardian) -.239
So, a high score in the THE graduate destinations metric, like its counterpart in the Guardian rankings, is associated most closely with students' academic ability and their ability to finish their degree programmes, next with staff-student ratio and spending, moderately with overall satisfaction and satisfaction with teaching, and substantially less so with value added. Satisfaction with feedback has a negative association with career success narrowly defined.
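For anyone who wants to check or extend these figures, the calculation is a straightforward set of Pearson correlations. The sketch below assumes the THE TEF simulation scores and the Guardian indicators have been merged by hand into one table; the file name and column names are hypothetical stand-ins.

# Correlate each indicator with graduate destinations, assuming the THE TEF
# simulation and Guardian figures have been merged into one CSV by hand.
# The file name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("tef_guardian_merged.csv")   # one row per UK university

target = "graduate_destinations_absolute"
indicators = [
    "average_entry_tariff", "student_completion_absolute",
    "staff_student_ratio_inverted", "spend_per_student",
    "satisfaction_with_course", "student_satisfaction_absolute",
    "satisfaction_with_teaching", "value_added", "satisfaction_with_feedback",
]

# Pearson correlation of each indicator with graduate destinations,
# sorted from strongest to weakest
correlations = df[indicators].corrwith(df[target]).sort_values(ascending=False)
print(correlations.round(3))

Swapping the absolute destinations column for the benchmarked one reproduces the second set of correlations below.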
Looking at the benchmarked score for Graduate Destinations we find that the correlations are more modest than with the absolute score. But average entry tariff is still a better predictor of graduate outcomes than value added.
Graduate Destinations (THE distance from benchmark) and:
Student completion (THE benchmarked) .487
Satisfaction with course (Guardian) .404
Staff student ratio (Guardian inverted) .385
Average entry tariff (Guardian) .383
Spending per student (Guardian) .383
Satisfaction with teaching (Guardian) .324
Student satisfaction (THE benchmarked) .305
Value added (Guardian) .255
Satisfaction with feedback (Guardian) .025
It is useful to know about student satisfaction and very useful for students to know how likely they are to finish their programmes. But until rankers and government agencies figure out how to estimate the subject knowledge and cognitive skills of graduates, and the impact, if any, of universities on them, the current trend towards teaching-centred rankings will not be helpful to anyone.