If you are applying to a British university and you are concerned not with personal transformation, changing your life or social justice activism, but with simple things like enjoying your course, finishing it and getting a job, what would you look for? Performance in global rankings? Staff salaries? Spending? Staff-student ratios?
Starting with student satisfaction, here are a few basic correlations between scores for overall student satisfaction in the Guardian UK rankings and a number of variables from the Guardian rankings, the Times Higher Education (THE) TEF simulation, the Hefce data on educational qualifications, and the THE survey of vice-chancellors' pay.
Average entry tariff (Guardian): .479**
Staff-student ratio (Guardian): .451**
Research Excellence Framework score (via THE): .379**
Spending per student (Guardian): .220*
Vice-chancellor salary (via THE): .167
Average salary (via THE): .031
Total staff (via THE): .099
Total students (via THE): .065
Teaching qualifications (Hefce; English universities only): -.161
If there is one single thing that best predicts how satisfied you will be, it is average entry tariff (A-level grades). Staff-student ratio, REF score and spending per student also correlate significantly with student satisfaction.
None of the following is of any use in predicting student satisfaction: vice-chancellor salary, average staff salary, total staff, total students or the percentage of faculty with teaching qualifications.
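For anyone who wants to check or extend these figures, a minimal sketch of the calculation is below. The file and column names are hypothetical, and I assume the usual convention that ** marks p < .01 and * marks p < .05.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical file: one row per university, with columns merged from the
# Guardian rankings, the THE TEF simulation and the Hefce data.
df = pd.read_csv("uk_universities.csv")

predictors = [
    "entry_tariff", "staff_student_ratio", "ref_score",
    "spend_per_student", "vc_salary", "avg_salary",
    "total_staff", "total_students", "pct_teaching_qualified",
]

for col in predictors:
    pair = df[["overall_satisfaction", col]].dropna()
    r, p = pearsonr(pair["overall_satisfaction"], pair[col])
    stars = "**" if p < 0.01 else "*" if p < 0.05 else ""
    print(f"{col:24s} r = {r:+.3f}{stars}  (n = {len(pair)})")
```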
Thursday, January 26, 2017
Comments on the HEPI Report
The higher education industry tends to respond to global rankings in one of two ways. University bureaucrats and academics either get overexcited, celebrating when they are up and wallowing in self-pity when they are down, or they reject the idea of rankings altogether.
Bahram Bekhradnia of the Higher Education Policy Institute in the UK has published a report on international rankings which adopts the second option. University World News has several comments including a summary of the report by Bekhradnia.
To start off, his choice of rankings deserves comment. He refers to the "four main rankings": the Academic Ranking of World Universities (ARWU) from Shanghai, Quacquarelli Symonds (QS), Times Higher Education (THE) and U-Multirank. It is true that the first three are those best known to the public, QS and Shanghai by virtue of their longevity and THE because of its skilled branding and assiduous cultivation of the great, the good and the greedy of the academic world. U-Multirank is presumably chosen because of its attempts to address, perhaps not very successfully, some of the issues that the author discusses.
But focusing on these four gives a misleading picture of the current global ranking scene. There are now several rankings that are mainly concerned with research -- Leiden, Scimago, URAP, National Taiwan University, US News -- and that redress some of the problems with the Shanghai ranking by giving due weight to the social sciences and humanities, leaving out decades-old Nobel and Fields laureates and including more rigorous markers of quality. In addition, there are rankings that measure web activity, environmental sustainability, employability and innovation. Admittedly, they do not do any of these very well, but the attempts should at least be noted and they could perhaps lead to better things.
In particular, there is now an international ranking from Russia, the Round University Ranking (RUR), which could be regarded as an improved version of the THE world rankings and which tries to give more weight to teaching. It uses almost the same array of metrics as THE, plus some more, but with rational and sensible weightings: 8% for field-normalised citations, for example, rather than 30%.
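To see why the weighting matters, here is a toy comparison. The THE-style weights below match THE's published overall scheme (30/30/30/7.5/2.5), but the RUR-style split and the indicator scores are invented for illustration; only the 8% figure for citations comes from RUR.

```python
# Toy indicator scores (0-100) for one university; the values are invented.
scores = {"teaching": 55, "research": 60, "citations": 95,
          "international": 70, "industry": 40}

# THE gives citations 30% of the overall weight; a RUR-like scheme spreads
# that weight elsewhere and gives citations only 8%. The RUR-like split is
# a simplification, not the published weightings.
the_weights = {"teaching": 0.30, "research": 0.30, "citations": 0.30,
               "international": 0.075, "industry": 0.025}
rur_like = {"teaching": 0.40, "research": 0.40, "citations": 0.08,
            "international": 0.10, "industry": 0.02}

for name, w in (("THE weights", the_weights), ("RUR-like weights", rur_like)):
    composite = sum(scores[k] * w[k] for k in scores)
    print(f"{name}: {composite:.1f}")
```

A university whose strongest suit is the citations indicator gains nearly eight points from the heavier weighting alone, with no change in anything it actually does.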
Bekhradnia has several comments on the defects of current rankings. First, he says that they are concerned entirely or almost entirely with research. He claims that there are indicators in the QS and THE rankings that are actually, although not explicitly, about research. International faculty are probably recruited more for their research reputation than for anything else. Income from industry (THE) is of course a measure of reported funding for applied research. The QS academic reputation survey is officially about research, and THE's academic reputation survey of teaching is about postgraduate supervision.
Bekhradnia is being a little unfair to THE. He asserts that if universities add to their faculty with research-only staff this will add to their faculty-student metric, supposedly a proxy for teaching quality, thus turning the indicator into a measure of research. This is true of QS, but it appears that THE does require universities to list research-only staff separately and excludes them from some indicators as appropriate. In any case, the number of research-only staff is quite small for most universities outside the top hundred or so.
It is true that most rankings are heavily, perhaps excessively, research-orientated but it would be a mistake to conclude that this renders them totally useless for evaluating teaching and learning. Other things being equal, a good record for research is likely to be associated with positive student and graduate outcomes such as satisfaction with teaching, completion of courses and employment.
For English universities the Research Excellence Framework (REF) score is more predictive of student success and satisfaction, according to indicators in the Guardian rankings and the recent THE Teaching Excellence Framework simulation, than the percentage of staff with educational training or certification, faculty salaries or institutional spending, although it is matched by staff-student ratio.
If you are applying to English universities and you want to know how likely you are to complete your course or be employed after graduation, probably the most useful things to know are average entry tariff (A levels), staff-student ratio and faculty scores in the latest REF. There are, of course, intervening variables, and the arrows of causation do not always fly in the same direction, but scores for research indicators are not irrelevant to comparisons of teaching effectiveness and learning outcomes.
Next, the report deals with the issue of data, noting that internal data checks by THE and QS do not seem to be adequate. He refers to the case of Trinity College Dublin, where a misplaced decimal point caused the university to drop several places in the THE world rankings. He then goes on to criticise QS for "data scraping", that is, getting information from any available source. He notes that QS caused Sultan Qaboos University (SQU) to drop 150 places in its world rankings, apparently because it took data from the SQU website that identified non-teaching staff as teaching staff. I assume that the staff in question were administrators: if they were researchers then it would not have made any difference.
Bekhradnia is correct to point out that data from websites is often incorrect or subject to misinterpretation. But the assumption that such data is necessarily inferior to that reported by institutions to the rankers is debatable. QS has no need to be apologetic about resorting to data scraping. On balance, information about universities is more likely to be correct if it comes from one of several similar and competing sources, if it is from a source independent of the ranking organisation and the university, if it has been collected for reasons other than submission to the rankers, or if there are serious penalties for submitting incorrect data.
The best data for university evaluation and comparison is likely to come from third-party databases that collect masses of information or from government agencies that require accuracy and honesty. After that, institutional data from websites and the like is unlikely to be significantly worse than data specifically submitted for ranking purposes.
There was an article in University World News in which Ben Sowter of QS took a rather defensive position with regard to data scraping. He need not have done so. In fact it would not be a bad idea for QS and others to do a bit more.
Bekhradnia goes on to criticise the reputation surveys. He notes that recycling unchanged responses over a period of five years, originally three, means that it is possible that QS is counting the votes of dead or retired academics. He also points out that the response rate to the surveys is very low. All this is correct although it is nothing new. But it should be pointed out that what is significant is not how many respondents there are but how representative they are of the group that is being investigated. The weighting given to surveys in the THE and QS rankings is clearly too much and QS's methods of selecting respondents are rather incoherent and can produce counter-intuitive results such as extremely high scores for some Asian and Latin American universities.
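The point about representativeness is easy to demonstrate with a toy simulation (all figures invented): a large but self-selected panel gives a worse estimate of a university's true standing than a much smaller representative sample.

```python
import random

random.seed(0)

# Invented "true" population: 100,000 academics, 30% of whom would name
# University X as excellent in their field.
population = [1] * 30_000 + [0] * 70_000
random.shuffle(population)

# Representative sample: small, but drawn evenly from the population.
representative = random.sample(population, 300)

# Large but biased sample: academics who rate X highly are three times as
# likely to respond to the survey.
weights = [3 if vote else 1 for vote in population]
biased = random.choices(population, weights=weights, k=10_000)

print("true share:              0.300")
print(f"representative (n=300):  {sum(representative) / len(representative):.3f}")
print(f"biased (n=10,000):       {sum(biased) / len(biased):.3f}")
```

The biased panel lands near 0.56 despite its size, while the small representative sample stays close to the true 0.30.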
However, it is going too far to suggest that surveys should have no role. First, reputation and perceptions are far from insignificant. Many students would, I suspect, prefer to go to a university that is overrated by employers and professional schools than to one that provides excellent instruction and facilities but has failed to communicate this to the rest of the world.
In addition, surveys can provide a reality check when a university does a bit of gaming. For example, King Abdulaziz University (KAU) has been diligently offering adjunct contracts to dozens of highly cited researchers around the world that require them to list the university as a secondary affiliation, thus allowing it to collect huge numbers of citations. The US News Arab Region rankings have KAU in the top five among Arab universities for a range of research indicators: publications, cited publications, citations, field-weighted citation impact, and publications in the top 10% and the top 25%. But its academic reputation rank was only 26, definitely a big thumbs down.
Bekhradnia then refers to the advantage that universities get in the ARWU rankings simply by being big. This is certainly a valid point. However, it could be argued that quantity is a necessary prerequisite to quality and enables the achievement of economies of scale.
He also suggests that the practice of presenting lists in order is misleading, since a trivial difference in the raw data could mean a substantial difference in the presented ranking. He proposes that it would be better to group universities into bands. The problem with this is that when rankers do resort to banding, it is fairly easy to calculate an overall score by adding up the published components. Bloggers and analysts do it all the time.
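A sketch of that reverse-engineering exercise is below; the pillar names, scores and weights are illustrative stand-ins for whatever a ranker actually publishes alongside its bands.

```python
# Published pillar scores for two universities that the ranker presents
# only in a band (say 401-500). All numbers here are invented.
weights = {"teaching": 0.30, "research": 0.30, "citations": 0.30,
           "international": 0.075, "industry": 0.025}

universities = {
    "University A": {"teaching": 28.1, "research": 25.4, "citations": 61.2,
                     "international": 55.0, "industry": 38.7},
    "University B": {"teaching": 31.5, "research": 29.8, "citations": 48.9,
                     "international": 62.3, "industry": 41.0},
}

# Recover an approximate overall score, and hence an implied order within
# the band that the ranker chose not to publish.
overall = {name: sum(scores[k] * weights[k] for k in weights)
           for name, scores in universities.items()}
for name, score in sorted(overall.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```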
Bekhradnia concludes:

"The international surveys of reputation should be dropped – methodologically they are flawed, effectively they only measure research performance and they skew the results in favour of a small number of institutions."

This is ultimately self-defeating. The need and the demand for some sort of ranking is too widespread to set aside. Abandon explicit rankings and we will probably get implicit rankings in the form of recommendations by self-declared experts.
There is much to be done to make rankings better. The priority should be finding objective and internationally comparable measures of student attributes and attainment. That is still some distance in the future. For the moment, what universities should do is focus not on composite rankings but on the more reasonable and reliable indicators within specific rankings.
Bekhradnia does have a very good point at the end:

"Finally, universities and governments should discount the rankings when deciding their priorities, policies and actions. In particular, governing bodies should resist holding senior management to account for performance in flawed rankings. Institutions and governments should do what they do because it is right, not because it will improve their position in the rankings."

I would add that universities should stop celebrating when they do well in the rankings. The grim fate of Middle East Technical University should be present in the mind of every university head.
Sunday, January 22, 2017
What's Wrong with Ottawa?
The University of Ottawa (UO) has been a great success over the last few years, especially in research. In 2004 it was around the bottom third of the 202-300 band in the Shanghai Academic Ranking of World Universities. By 2016 it had reached the 201st place, although the Shanghai rankers still recorded it as being in the 201-300 band. Another signing of a highly cited researcher, another paper in Nature, a dozen more papers listed in the Science Citation Index and it would have made a big splash by breaking into the Shanghai top 200.
The Shanghai rankings have, apart from recent problems with the Highly Cited Researchers indicator, maintained a stable methodology so this is a very solid and remarkable achievement.
A look at the individual components of these rankings shows that UO has improved steadily in the quantity and the quality of research. The score for publications rose from 37.8 to 44.4 between 2004 and 2016, from 13.0 to 16.1 for papers in Nature and Science, and from 8.7 to 14.5 for highly cited researchers (Harvard is 100 in all cases). For productivity (five indicators divided by number of faculty) the score went from 13.2 to 21.5 (Caltech is 100).
It is well known that the Shanghai rankings are entirely about research and ignore the arts and humanities. The Russian Round University Rankings (RUR), however, get their data from the same source as THE did until two years ago, include data from the arts and humanities, and place a greater emphasis on teaching-related indicators.
In the RUR rankings, UO rose from 263rd place in 2010 to 211th overall in 2015, from 384th to 378th on the five combined teaching indicators and from 177th to 142nd on the five combined research indicators. Ottawa is doing well for research and creeping up a bit on teaching-related criteria, although the relationship between these and actual teaching may be rather tenuous.
RUR did not rank UO in 2016. I cannot find any specific reason but it is possible that the university did not submit data for the Institutional Profiles at Research Analytics.
Just for completeness, Ottawa is also doing well in the Webometrics ranking, which is mainly about web activity but does include a measure of research excellence. It is in the 201st spot there also.
It seems, however, that this is not good enough. In September, according to Fulcrum, the university newspaper, there was a meeting of the Board of Governors which discussed not the good results from RUR, Shanghai Ranking and Webometrics, but a fall in the Times Higher Education (THE) World University Rankings from the 201-250 band in 2015-16 to the 250-300 band in 2016-17. One board member even suggested taking THE to court.
So what happened to UO in last year's THE world rankings? The only area where it fell was for Research, from 36.7 to 21.0. In the other indicators or indicator groups, Teaching, Industry Income, International Orientation, Research Impact (citations), it got the same score or improved.
But this is not very helpful. There are actually three components in the research group of indicators, which has a total weighting of 30%, and two of them are scaled. A fall in the research score might be caused by a fall in research reputation, a decline in reported research income, a decline in the number of publications, a rise in the number of academic staff, or some combination of these.
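As a rough illustration of the arithmetic: the 18/6/6 split below follows THE's published division of the research pillar into reputation, income and productivity, but the sub-scores are invented and the scaling is simplified, so this is a sketch of the mechanism rather than a reconstruction of Ottawa's actual numbers.

```python
# Hypothetical sub-scores (0-100) for the THE research pillar, which THE
# publishes only as a single 30%-weighted number. The 18/6/6 split follows
# THE's stated methodology; every score below is invented.
def research_pillar(reputation, income_per_staff, papers_per_staff):
    return (0.18 * reputation
            + 0.06 * income_per_staff
            + 0.06 * papers_per_staff) / 0.30

# A 2015-like profile versus a 2016-like one: because reputation carries
# 18 of the 30 points, a drop in the survey sub-score dominates the pillar.
print(round(research_pillar(reputation=34, income_per_staff=44, papers_per_staff=37), 1))  # 36.6
print(round(research_pillar(reputation=10, income_per_staff=40, papers_per_staff=35), 1))  # 21.0
```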
The fall in UO's research score could not have been caused by more faculty. The number of full time faculty was 1,284 in 2012-13 and 1,281 in 2013-14.
There was a fall of 7.6% in Ottawa's "sponsored research income" between 2013 and 2014 but I am not sure if that is enough to produce such a large decline in the combined research indicators.
My suspicion is -- and until THE disaggregate their indicators it cannot be anything more -- that the problem lies with the 18%-weighted academic reputation survey. Between 2015 and 2016 the percentage of survey respondents from the arts and humanities was significantly reduced while that from the social sciences and business studies was increased. This would be to the disadvantage of English-speaking universities, including those in Canada, which are relatively strong in the humanities, and to the advantage of Asian universities that are relatively strong in business studies. UO, for example, is ranked highly by Quacquarelli Symonds (QS) for English, Linguistics and Modern Languages, but not for Business and Management Studies or Accounting and Finance.
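A toy example of the mechanism being suggested here (all numbers invented): if a university's reputation votes come disproportionately from the arts and humanities, shrinking that group's share of respondents lowers its overall survey score even if no individual changes their mind.

```python
# Share of reputation "votes" a hypothetical university receives from
# respondents in each discipline group (invented, held constant).
votes = {"arts_humanities": 0.08, "social_sci_business": 0.02, "stem": 0.03}

# Discipline mix of survey respondents in two hypothetical survey rounds.
mix_2015 = {"arts_humanities": 0.16, "social_sci_business": 0.24, "stem": 0.60}
mix_2016 = {"arts_humanities": 0.09, "social_sci_business": 0.31, "stem": 0.60}

def survey_score(mix):
    # Overall score as a respondent-weighted average of per-field vote shares.
    return sum(mix[field] * votes[field] for field in votes)

print(f"{survey_score(mix_2015):.4f}")  # 0.0356
print(f"{survey_score(mix_2016):.4f}")  # 0.0314
```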
This might have something to do with THE wanting to get enough respondents for business studies after they had been taken out of the social sciences and given their own section. If that is the case, Ottawa might get a pleasant surprise this year since THE are now treating law and education as separate fields and may have to find more respondents to get around the problem of small sample sizes. If so, this could help UO which appears to be strong in those subjects.
It seems, according to another Fulcrum article, that the university is being advised by Daniel Calto from Elsevier. He correctly points out that citations had nothing to do with this year's decline. He then talks about the expansion in the size of the rankings with newcomers pushing in front of UO. It is unlikely that this in fact had a significant effect on the university since most of the newcomers would probably enter below the 300 position and since there has been no effect on its score for teaching, international orientation, industry income or citations (research impact).
I suspect that Calto may have been incorrectly reported. Although he says it was unlikely that citations could have had anything to do with the decline, he is reported later in the article to have said that THE's exclusion of kilo-papers (those with more than 1,000 authors) affected Ottawa. But the kilo-papers were excluded in 2015, so that could not have contributed to the fall between 2015 and 2016.
The Fulcrum article then discusses how UO might improve. M'hamed Aisati, a vice-president at Elsevier, suggests getting more citations. This is frankly not very helpful. The THE methodology means that more citations are meaningless unless they are concentrated in exactly the right fields. And if more citations are accompanied by more publications then the effect could be counter-productive.
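A simplified sketch of why more output can hurt: the normalisation below is reduced to citations per paper divided by a field average, which is a crude stand-in for THE's actual procedure, and all the numbers are invented.

```python
# Crude stand-in for a field-normalised citation score: each paper's
# citations divided by the world average for its field, then averaged.
def normalised_impact(papers):
    return sum(cites / field_avg for cites, field_avg in papers) / len(papers)

# Invented portfolio: a few well-cited papers in low-citation fields.
portfolio = [(40, 8), (25, 8), (12, 6)]
print(round(normalised_impact(portfolio), 2))   # 3.38

# Adding many papers cited at roughly the field average dilutes the score,
# so more publications without unusually well-placed citations can lower it.
portfolio += [(10, 10)] * 7
print(round(normalised_impact(portfolio), 2))   # 1.71
```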
If UO is concerned about a genuine improvement in research productivity and quality there are now several global rankings that are quite reasonable. There are even rankings that attempt to measure things like innovation, teaching resources, environmental sustainability and web activity.
The THE rankings are uniquely opaque in that they hide the scores for specific indicators; they are extremely volatile; and they depend far too much on dodgy data from institutions and on reputation surveys that can be extremely unstable. Above all, the citations indicator is a hilarious generator of absurdity.
The University of Ottawa, and other Canadian universities, would be well advised to forget about the THE rankings or at least not take them so seriously.
Monday, January 09, 2017
Outbreak of Rankophilia
A plague is sweeping the universities of the West: rankophilia, an irrational and obsessive concern with position and prospects in global rankings and an unwillingness to exercise normal academic caution and scepticism.
The latest victim is Newcastle University, whose new head, Chris Day, wants to make his new employer one of the best in the world. Why does he want to do that?

"His ambition - to get the university into the Top 100 in the world - is not simply a matter of personal or even regional pride, however. With universities increasingly gaining income from foreign students who often base their choices on global rankings, improving Newcastle's position in the league tables has economic consequences."

So Newcastle is turning its back on its previous vision of becoming a "civic university" and will try to match its global counterparts. It will do that by enhancing its research reputation:

"While not rowing back from Prof Brink's mantra of 'what are we good at, but what are we good for?', Prof Day's first week in the job saw him highlighting the need for Newcastle to concentrate on improving its reputation for academic excellence."

It is sad that Day apparently accepts that the core business of a university is not enough and that what really matters is proper marketing and shouting to rise up the tables.

Meanwhile Cambridge is yearning to regain its place in the QS top three and Yale is putting new emphasis on science and research with an eye on the global rankings.
Perhaps Newcastle will ascend into the magic 100, but the history of the THE rankings over the last few years is full of universities -- Alexandria, University of Tokyo, Tokyo Metropolitan University, University of Copenhagen, Royal Holloway, University of Cape Town, Middle East Technical University and others -- that have soared in the THE rankings for a while and then fallen, often because of nothing more than a twitch of a methodological finger.
Sunday, January 01, 2017
Ranking Teaching Quality
There has been a lot of talk lately about the quality of teaching and learning in universities. This has always been an important element in national rankings such as the US News America's Best Colleges and the Guardian and Sunday Times rankings in the UK, measured by things like standardised test scores, student satisfaction, reputation surveys, completion rates and staff-student ratios.
There have been suggestions that university teaching staff need to be upgraded by attending courses in educational theory and practice or by obtaining some sort of certification or qualification.
The Higher Education Funding Council of England (HEFCE) has just published data on the number of staff with educational qualifications in English higher educational institutions.
The university with the largest number of staff with some sort of educational qualification is Huddersfield, which unsurprisingly is very pleased. The university's website reports HEFCE's assertion that “information about teaching qualifications has been identified as important to students and is seen as an indicator of individual and institutional commitment to teaching and learning.”
The top six universities are:
1. University of Huddersfield
2. Teesside University
3. York St John University
4. University of Chester
5= University of St Mark and St John
5= Edge Hill University.
The bottom five are:
104= London School of Economics
104= Courtauld Institute of Art
106. Goldsmith's College
107= University of Cambridge
107= London School of Oriental and African Studies.
It seems that these data provide almost no evidence that a "commitment to teaching and learning" is linked with any sort of positive outcome. Correlations with the overall scores in the Guardian rankings and the THE Teaching Excellence Framework simulation are negative (-.550 and -.410; -.204 after benchmarking).
In addition, the correlation between the percentage of staff with teaching qualifications and the Guardian indicators is negative for student satisfaction with the course (-.161, insignificant), student satisfaction with teaching (-.197, insignificant), value added (-.352) and graduate employment (-.379).
But there is a positive correlation with student satisfaction with feedback (.323).
The correlations with the indicators in the THE simulation were similar: graduate employment -.416 (-.249 after benchmarking), completion -.449 (-.130 after benchmarking, insignificant), and student satisfaction -.186, insignificant (-.056 after benchmarking, insignificant).
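Whether a correlation of this size counts as "significant" depends mainly on how many institutions are in the sample, which is not stated above; assuming something in the region of 100 to 120 English institutions (my guess), the conventional two-tailed cut-off works out at roughly .18-.20 for p < .05, which is consistent with the labels used here.

```python
from scipy import stats

# Two-tailed critical value of Pearson's r for a given sample size, from
# the relationship t = r * sqrt((n - 2) / (1 - r^2)).
def critical_r(n, alpha):
    t = stats.t.ppf(1 - alpha / 2, df=n - 2)
    return (t**2 / (t**2 + n - 2)) ** 0.5

for n in (100, 110, 120):
    print(n, round(critical_r(n, 0.05), 3), round(critical_r(n, 0.01), 3))
```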
The report does cover a variety of qualifications, so it is possible that digging deeper might show that some types of credentials are more useful than others. Also, there are intervening variables: some of the high scorers, for example, are upgraded teacher training colleges with relatively low status and a continuing emphasis on education as a subject.
Still, unless you count a positive association with feedback, there is no sign that forcing or encouraging faculty to take teaching courses and credentials will have positive effects on university teaching.