Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Thursday, May 07, 2020
Observations on the Indian Ranking Boycott
Seven Indian Institutes of Technology (IITs) -- Delhi, Bombay, Guwahati, Kanpur, Kharagpur, Madras, and Roorkee -- have announced that they will be boycotting this year's Times Higher Education (THE) World University Rankings. The move has been coming for some time. Indian universities have not performed well in most rankings but they have done especially badly in THE's.
Take a look at the latest THE world rankings and the performance of three elite institutions. IIT Delhi (IITD) and IIT Bombay (IITB) are in the 401-500 band, and the Indian Institute of Science (IISc) is in the 301-350 band.
It is noticeable that these three all do much better in the QS world rankings where IIT Delhi is 182nd, IIT Bombay 152nd, and IISc 184th. That no doubt explains why these Institutes are boycotting THE but still engaging with QS.
It should be pointed out that with regard to research THE probably treats the Institutes better than they deserve. The Shanghai rankings, which are concerned only with research, have IITD and IITB in the 701-800 band, and IISc at 401-500. In the US News Best Global Universities IITD is 654th, IITB 513th, and IISc 530th.
The dissatisfaction with THE is understandable. Indeed it might be surprising that the IITs have taken so long to take action. They complain about transparency and the parameters. They have a point, in fact several points. The THE rankings are uniquely opaque: they combine eleven indicators into three clusters so it is impossible for a reader to figure out exactly why a university is doing so well or so badly for teaching or research. THE's income and international metrics, three of each, also work against Indian universities.
It is, however, noticeable that a few Indian universities have done surprisingly well in the THE world rankings: IIT Ropar and IIT Indore are in the top 400 and IIT Gandhinagar in the top 600 thanks to high scores for citations. IIT Ropar is credited with a score of 100, making it fourth in the world behind those giants of research impact: Aswan University, Brandeis University, and Brighton and Sussex Medical School.
Regular readers of this blog will know what is coming next. IIT Ropar has contributed to 15 papers related to the multi-author and hugely cited Global Burden of Disease Study (GBDS). That is slightly less than 1.5% of its total papers over the relevant period, but those papers account for well over 40% of its citations.
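For readers who like to see the arithmetic, here is a toy calculation in Python. The citation figures are invented for illustration and are not Ropar's actual numbers; the point is simply that a tiny share of output can carry the bulk of the citations.

```python
# Toy illustration: invented citation counts, not IIT Ropar's actual figures.
papers = [4] * 985 + [250] * 15   # 985 ordinary papers, 15 GBDS-style mega-papers

share_of_papers = 15 / len(papers)
share_of_citations = (250 * 15) / sum(papers)

print(f"{share_of_papers:.1%} of papers")        # 1.5% of papers
print(f"{share_of_citations:.1%} of citations")  # ~48.8% of citations
```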
It would be relatively simple for the mutinous seven to recruit one or two researchers involved in the GBDS and in a few years -- assuming the current methodology or something like it continues -- they too would be getting near "perfect" scores for citations and heading for top three hundred spots.
They may, however, have judged that the THE methodology is going to be changed sooner or later -- now looking like a little bit later -- or that aiming for the QS reputation surveys is more cost effective. Or perhaps they were simply unaware of exactly how to get a good score in the THE rankings.
It is sad that the Indian debate over ranking has largely been limited to comparisons between THE and QS. There are other rankings that are technically better in some ways and are certainly better suited to Indian circumstances. The Round University Ranking, which has 20 indicators and a balanced weighting, has IISc in 62nd place with extremely good scores for financial sustainability and doctoral students.
The boycott is long overdue. If it leads to a more critical and sceptical approach to ranking then it may do everybody a lot of good.
Sunday, April 19, 2020
THE's WUR 3.0 is on the way
Alert to readers. Some of this post covers ground I have been over before. See here, here and here. I plead guilty to self-plagiarism.
Times Higher Education (THE) is talking about a 3.0 version of its World University Rankings to be announced at this year's academic summit in Toronto and implemented in 2021, a timetable that may not survive the current virus crisis. I will discuss what is wrong with the rankings, what THE could do, and what it might do.
The magazine has achieved an enviable position in the university rankings industry. Global rankings produced by reliable university researchers with sensible methodologies, such as the CWTS Leiden Ranking, University Ranking by Academic Performance (Middle East Technical University) and the National Taiwan University Rankings are largely ignored by the media, celebrities and university administrators. In contrast, THE is almost always one of the Big Four rankings (the others are QS, US News, and Shanghai Ranking), the Big Three or the Big Two and sometimes the only global ranking that is discussed.
The exalted status of THE is remarkable considering that it has many defects. It seems that the prestigious name -- there are still people who think that it is the Times newspaper or part of it -- and skillful public relations campaigns replete with events, workshops, gala dinners and networking lunches have eroded the common sense and critical capacity of the education media and the administrators of the Ivy League, the Russell Group and their imitators.
There are few things more indicative of the inadequacy of the current leadership of Western higher education than their toleration of a ranking that puts Aswan University top of the world for research impact by virtue of its participation in the Gates-funded Global Burden of Disease Study and Anadolu University top for innovation because it reported its income from private online courses as research income from industry. Would they really accept that sort of thing from a master's thesis candidate? It is true that the "Sokal squared" hoax has shown that the capacity for critical thought has been seriously attenuated in the humanities and social sciences, but one would expect better from philosophers, physicists and engineers.
The THE world and regional rankings are distinctively flawed in several ways. First, a substantial amount of their data comes directly from institutions. Even if universities are 100% honest and transparent the probability that data will flow smoothly and accurately from branch campuses, research centres and far flung campuses through the committees tasked with data submission and on to the THE team is not very high.
THE has implemented an audit by PricewaterhouseCoopers (PwC) but that seems to be about "testing the key controls to capture and handle data, and a full reperformance of the calculation of the rankings" and does not extend to checking the validity of the data before it enters the mysterious machinery of the rankings. PwC state that this is a "limited assurance engagement."
Second, THE is unique among the well-known rankings in bundling eleven of its 13 indicators in three groups with composite scores. That drastically reduces the utility of the rankings since it is impossible to figure out whether, for example, an improvement for research results from an increase in the number of published papers, an increase in research income, a decline in the number of research and academic staff, a better score for research reputation, or some combination of these. Individual universities can gain access to more detailed information but that is not necessarily helpful to students or other stakeholders.
Third, the THE rankings give a substantial weighting to various input metrics. One of these is income which is measured by three separate indicators, total institutional income, research income, and research income from industry. Of the other world rankings only the Russian Round University Rankings do this.
There is of course some relationship between funding and productivity but it is far from absolute and universal. The Universitas 21 system rankings, for example, show that countries like Malaysia and Saudi Arabia have substantial resources but so far have achieved only a modest scientific output while Ireland has done very well in maintaining output despite a limited and declining resource base.
The established universities of the world seem to be quite happy with these income indicators which, whatever happens, are greatly to their advantage. If their overall score goes down this can be plausibly attributed to a decline in funding that can be used to demand money from national resources. At a time when austerity has threatened the well being of many vulnerable groups, with more suffering to come in the next few months, it is arguable that universities are not those most deserving of state funding.
Fourth, another problem arises from THE counting doctoral students in two indicators. It is difficult to see how the number of doctoral students or degrees can in itself add to the quality of undergraduate or master's teaching, and this could act to the detriment of liberal arts colleges like Williams or Harvey Mudd, which have an impressive record of producing employable graduates.
These indicators may also have the perverse consequence of forcing people who would benefit from a master's or post graduate diploma course into doctoral programs with high rates of non-completion.
Fifthly, the two stand alone indicators are very problematic. The industry income indicator purports to represent universities' contributions to innovation. An article by Alex Usher found that the indicator appeared to be based on very dubious data. See here for a reply by Phil Baty that is almost entirely tangential to the criticism. Even if the data were accurate it is a big stretch to claim that this is a valid measure of a university's contribution to innovation.
The citations indicator which is supposed to measure research impact, influence or quality is a disaster. Or it should be: the defects of this metric seem to have passed unnoticed everywhere it matters.
The original sin of the citations indicator goes back to the early days of the THE rankings after that unpleasant divorce from QS. THE used data from the ISI database, as it was then known, and in return agreed to give prominence to an indicator that was almost the same as the InCites platform that was a big-selling product.
The indicator is assigned a weighting of 30%, which is much higher than that given to publications and higher than the weighting given to citations by QS, Shanghai, US News or RUR. In fact this understates its importance. THE has a regional modification or country bonus that divides the impact score of a university by the square root of the impact score of the country where it is located. The effect of this is that the scores of universities in the top country remain unchanged but everybody else gets an increase, a big one for low-scoring countries, a smaller one for those scoring higher. Previously the bonus applied to the whole of the indicator but now it applies to 50%. Basically this means that universities are rewarded for being in a low-scoring country.
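As far as I can tell from the published methodology, the adjustment works roughly like the sketch below. This is a simplification: it assumes the country score is expressed as a fraction of the best-performing country's score, and it ignores the normalisation steps THE applies before and after.

```python
from math import sqrt

def regional_modification(university_score, country_score):
    """Sketch of the country bonus as described above: half of the citation
    indicator is divided by the square root of the country's score, here
    expressed as a fraction of the best-performing country's score (so the
    top country gets no boost). A simplification, not THE's actual code."""
    boosted_half = (university_score / sqrt(country_score)) / 2
    plain_half = university_score / 2
    return min(boosted_half + plain_half, 100)

# Same raw citation score, very different outcomes:
print(regional_modification(50, country_score=1.0))   # 50.0 -- top country, unchanged
print(regional_modification(50, country_score=0.25))  # 75.0 -- low-scoring country, boosted
```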
The reason originally given for this was that some countries lack the networking and funds to nurture citation-rich research. Apparently, such a problem has no relevance to international indicators. This was in fact probably an ad hoc way of getting round the massive gap between the world's elite and other universities with regard to citations, a gap much bigger than for most other metrics.
The effect of this was to give a big advantage to mediocre universities surrounded by low achieving peers. Combined with other defects it has produced big distortions in the indicator.
This indicator is overnormalised. Citation scores are based not on a simple count of citations but rather on a comparison with the world average of citations according to year of publication, type of publication, and academic field, over three hundred of them. A few years ago someone told THE that absolute counting of citations was a mortal sin and that seems to have become holy scripture. There is clearly a need to take account of disciplinary variations, such as the relative scarcity of citations in literary studies and philosophy and their proliferation in medical research and physics but the finer the analysis gets the more chance there is that outliers will exert a disproportionate effect on the impact score.
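A toy example, with invented cell averages rather than THE's actual benchmarks, shows why finer normalisation cells make the mean so fragile:

```python
# Toy field normalisation: each paper's citations are divided by the world
# average for its field/year/document-type cell. The cell averages here are
# invented; the real scheme uses hundreds of such cells.
def normalised_impact(papers, cell_average):
    ratios = [citations / cell_average[cell] for citations, cell in papers]
    return sum(ratios) / len(ratios)

cell_average = {("philosophy", 2018, "article"): 2.0}

papers = [(1, ("philosophy", 2018, "article"))] * 9 + \
         [(400, ("philosophy", 2018, "article"))]   # one outlier in a sparsely cited cell

print(normalised_impact(papers, cell_average))  # 20.45 -- a single paper dominates the mean
```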
Perhaps the biggest problem with the THE rankings is the failure to use fractional counting of citations. There is an increasing problem with papers with scores, hundreds, occasionally thousands of "authors", in particle physics, medicine and genetics. Such papers often attract thousands of citations partly because of their scientific importance, partly because many of their authors will find opportunities to cite themselves.
The result is that until 2014-15 a university with a modest contribution to a project like the Large Hadron Collider could get a massive score for citations, especially if its overall output of papers was not high and especially if it was located in a country where citations were generally low.
The 2014-15 THE world rankings included among the world's leaders for citations Tokyo Metropolitan University, Federico Santa Maria Technical University, Florida Institute of Technology and Bogazici University.
Then THE introduced some reforms. Papers with over a thousand authors were excluded from the citation count, the country bonus was halved, and the source of bibliometric data was switched from ISI to Scopus. This was disastrous for those universities that had over-invested in physics especially in Turkey, South Korea and France.
The next year THE started counting the mega-papers again but introduced a modified form of fractional counting. Papers with a thousand-plus authors were counted according to each institution's contribution to the paper, with a minimum of five per cent.
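As described, the rule amounts to something like the following sketch of the stated policy (not THE's actual code); note how a paper with 800 authors still earns full credit:

```python
def credited_citations(citations, institution_authors, total_authors):
    """Sketch of the modified fractional counting described above: papers with
    1000+ authors are credited in proportion to the institution's share of the
    author list, with a floor of 5%; smaller papers still count in full."""
    if total_authors >= 1000:
        share = max(institution_authors / total_authors, 0.05)
        return citations * share
    return citations

print(credited_citations(3000, institution_authors=2, total_authors=2500))  # 150.0 (5% floor)
print(credited_citations(3000, institution_authors=2, total_authors=800))   # 3000 -- full credit
```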
The effect of these changes was to replace physics privilege with medicine privilege. Fractional counting did not apply to papers with hundreds of authors but fewer than a thousand, and so a new batch of improbable universities started getting near-perfect scores for citations and began to break into the top five hundred or thousand in the world. Last year these included Aswan University, the Indian Institute of Technology Ropar, the University of Peradeniya, Anglia Ruskin University, the University of Reykjavik, and the University of Occupational and Environmental Health, Japan.
They did so because of participation in the Global Burden of Disease Study combined with a modest overall output of papers and/or the good fortune to be located in a country with a low impact score.
There is something else about the indicator that should be noted. THE includes self-citations and on a couple of occasions has said that this does not make any significant difference. Perhaps not in the aggregate, but there have been occasions when self-citers have in fact made a large difference to the scores of specific universities. In 2009 Alexandria University broke into the top 200 world universities by virtue of a self-citer and a few friends. In 2017 Veltech University was the third best university in India and the best in Asia for citations, all because of exactly one self-citing author. In 2018 the university had for some reason completely disappeared from the Asian rankings.
So here are some fairly obvious things that THE ought to do:
- change the structure of the rankings to give more prominence to publications and less to citations
- remove the income indicators or reduce their weighting
- replace the income from industry indicator with a count of patents preferably those accepted rather than filed
- in general, where possible replace self-submitted with third party data
- if postgraduate students are to be counted then count master's as well as doctoral students
- get rid of the country bonus which exaggerates the scores of mediocre or sub-mediocre institutions because they are in the poorly performing countries
- adopt a moderate form of normalisation with a dozen or a score of fields rather than the present 300+
- use full-scale fractional counting
- do not count self citations, even better do not count intra-institutional citations
- do not count secondary affiliations, although that is something that is more the responsibility of publishers
- introduce two or more measures of citations.
But what will THE actually do?
Duncan Ross, THE data director, has published a few articles setting out some talking points (here, here, here, here).
He suggests that in the citation indicator THE should take the 75th percentile as the benchmark rather than the mean when calculating field impact scores. If I understand it correctly this would reduce the extreme salience of outliers in this metric.
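A toy comparison, with invented citation counts, suggests why a percentile benchmark is more robust than the mean:

```python
import statistics

# Toy field of 20 papers with one mega-cited outlier (all numbers invented).
field_citations = [2] * 19 + [400]

mean_benchmark = statistics.mean(field_citations)               # 21.9
p75_benchmark = statistics.quantiles(field_citations, n=4)[2]   # 2.0

print(mean_benchmark, p75_benchmark)
# Normalising against the mean lets one freak paper drag up everyone's
# denominator; the 75th percentile barely moves, so outliers matter far less.
```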
It seems that a number of new citations measures are being considered with the proportion of most cited publications apparently getting the most favourable consideration. Unfortunately it seems that they are not going any further with fractional counting, supposedly because it will discourage collaboration.
Ross mentions changing the weighting of the indicators but does not seem enthusiastic about this. He also discusses the importance of measuring cross-disciplinary research.
THE is also considering supplementing the doctoral student measures with the proportion of doctoral students who eventually graduate. They are thinking about replacing institutional income with "a more precise measure," perhaps spending on teaching and teaching-related activities. That would probably not be a good idea. I can think of all sorts of ways in which institutions could massage the data so that in the end it would be as questionable as the current industry income indicator.
It seems likely that patents will replace income from industry as the proxy for innovation.
So it appears that there will be some progress in reforming the THE world rankings. Whether it will be enough remains to be seen.
Tuesday, April 14, 2020
Who's doing research on Covid-19?
This is crude and simple. I typed in Covid-19, searching the Article title, Abstract and Keywords fields.
The first and oldest item of 1,500 to appear was "The New Coronavirus, the Current King of China." by S A Plotkin in the Journal of the Pediatric Infectious Diseases Society. I wonder if there will be a sequel, "The Current King of the World."
The top universities for number of publications are:
1. Tongji Medical College
2. Huazhong University of Science and Technology
3. Chinese Academy of Medical Sciences and Peking Union Medical College
4. "School of Medicine"
5. London School of Hygiene and Tropical Medicine
6. Wuhan University
7. Chinese Academy of Sciences
8. Capital Medical University
9. Fudan University
10. National University of Singapore.
The top funding agencies are:
1. National Natural Sciences Foundation of China
2. National Basic Research Program of China
3. National Institutes of Health
4. Fundamental Research Funds for the Central Universities
5. Wellcome Trust
6. Chinese Academy of Sciences
7. Canadian Institutes of Health Research
8. National Science Foundation
9. Agence Nationale de la Recherche
10. Chinese Academy of Medical Research
Tuesday, March 10, 2020
University of California Riverside: Is it America's Fastest Rising University?
Not really.
It seems that one major function of rankings is to cover up the decline or stagnation of major western universities. An example is the University of California Riverside (UCR), a leading public institution.
Recently there was a tweet and a web page extolling UCR as "America's fastest rising university" on the strength of its ascent in four different rankings: Forbes 80 places in two years, US News 33 places in two years, Times Higher Education (THE) 83 places in two years and the Center for World University Rankings (CWUR) 41 places in one year.
This surprised me a bit because I was under the impression that the University of California system had been declining in the research-based rankings and that UCR had not done so well in the THE world rankings. So I had a quick look. In 2019-20 UCR was in the 251-300 band in the THE world rankings. Two years before that it was 198th and in 2010-11 it was 117th. I have little trust in THE but that is evidence of a serious decline by any standards.
But it seems that the tweeter was thinking about the THE/Wall Street Journal US Teaching Rankings. In 2019-20 Riverside was 189th there, in 2018-19 212th, in 2017-18 272nd, and in 2016-17 it was 364th.
That is definitely a substantial rise and it is rather impressive considering that this rise occurred while UCR was falling in THE's world rankings. Most of the rise occurred in the outcomes "pillar" and probably was a result of the introduction in 2018 of a new indicator that measured student debt.
UCR's rise had nothing to do with resources, environment or engagement. It is not possible to disentangle the components within THE's four pillars but it is a very plausible hypothesis that a good part of UCR's success is the result of a methodological change that introduced an element where this university was especially strong.
Another ranking where UCR does well is the Center for World University Rankings (CWUR), now published in the UAE. Last year UCR was 204th, in 2018-19 245th, and in 2017-18 218th.
The fall and rise of UCR over two years follows the weighting given to research. When the weighting was 25% UCR's position was 218th. When the research weighting was increased to 70% UCR fell to 245th. When it fell to 50% UCR rose to 204th. So, here again UCR's rank is dependent on methodological tweaking. It does well when the weighting for research is reduced and less well when it is increased.
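The mechanism is easy to illustrate with invented composite scores for a hypothetical UCR-like profile and a research-heavy peer; which one comes out ahead depends on the research weighting:

```python
# Invented scores for illustration only: a UCR-like profile that is stronger
# outside research, and a research-heavy peer institution.
def composite(scores, research_weight):
    return scores["research"] * research_weight + scores["other"] * (1 - research_weight)

ucr_like = {"research": 50, "other": 85}
peer     = {"research": 80, "other": 60}

for w in (0.25, 0.50, 0.70):
    print(w, composite(ucr_like, w), composite(peer, w))
# 0.25 -> 76.25 vs 65.0   (UCR-like profile ahead)
# 0.50 -> 67.5  vs 70.0
# 0.70 -> 60.5  vs 74.0   (research-heavy peer ahead)
```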
I assume that the US News (USN) rankings refer to America's Best Colleges, where UCR does very well. It was 91st last year among national universities and 1st for social mobility.
The rise of UCR in these rankings is also the result of methodological changes. In 2019 USN began to shift away from academic excellence to social mobility, which basically means admitting and graduating larger numbers of recognised and protected groups. The acceptance rate criterion has been scrapped and metrics related to social mobility such as the graduation rates of low income students have been introduced. UCR has not necessarily improved: what has happened is that the ranking has put more weight on things that it is good at and less on those where it performs less well.
The Forbes ranking also refers to changes announced in 2017 that "better align this list with what FORBES values most: entrepreneurship, success, impact and the consumer experience." It is likely that these had a favourable impact on UCR's performance here as well.
When it comes to international research based rankings over several years the story is a different one of steady decline. Starting with total publications in the publications indicator of the CWTS Leiden Ranking, UCR went from 270th in 2006-09 to 392nd in 2014-17. Much of this was due to the general decline of American universities but even within the US group there was a decline from 88th to 93rd.
The decline is starker if we look at a more rigorous indicator of quality, the proportion of publications among the top 1% most cited. In the same period UCR fell from 12th to 130th worldwide and from 11th to 59th in the USA.
Turning to the Scimago Institution Rankings which include patents and altmetrics, Riverside fell from 151st in 2011 to 228th in 2019. Among US institutions it fell from 70th in 2011 to 85th in 2019.
That is the situation with regard to research-based rankings. Moving on to rankings that include things related to teaching, UCR fell from 271st in the QS world rankings in 2017 to 454th in 2020. In the Round University Rankings it fell from 197th in 2010 to 231st in 2016 and then stopped participating from 2017 onward.
It seems fairly clear that UCR has been declining in indicators relating to research, especially research of the highest quality. It also performs poorly in those rankings that combine teaching with research and internationalisation metrics. Exactly why must wait for another post, but I strongly suspect that the underlying reason is the declining ability of incoming students and the retreat from meritocracy in graduate school admissions and faculty appointments and promotions.
Friday, February 28, 2020
Polish Universities in International Rankings
My short article on Polish universities and international rankings has just been published by Forum Akademickie. The article in Polish can be accessed here. Translation and editing by Piotr Kieracinski. The full journal issue is here.
Here is the English version.
Richard Holmes
Polish Universities and International Rankings
A Brief History of International University Rankings
After a false start with the Asiaweek rankings of 1999-2000, international university rankings took off in 2003 with the Academic Ranking of World Universities (ARWU), published by Shanghai Jiao Tong University and then by the Shanghai Ranking Consultancy.
In 2004 two new rankings appeared: the Ranking Web of Universities, better known as Webometrics, which originally measured only web activity, and the Times Higher Education Supplement (THES) - Quacquarelli Symonds (QS) World University Rankings, which emphasised research and also included faculty resources and internationalisation indicators.
Since then the number of rankings, metrics and data points has increased, prompting ranking consultant Daniel Guhr to talk about "vast data lakes". Rankings have become more complex and sophisticated and often use statistical techniques such as standardisation and field normalisation.
In addition to global rankings, specialist rankings of regions, subjects, and business schools have appeared. International rankings continue to have a bias towards research but some try to find a way of capturing data that might be relevant to teaching and learning or to university third missions such as sustainability, gender equity and open access. They have also become significant in shaping national higher education policy and institutional strategy.
Although the media usually talk about the big four rankings, or sometimes the big three or big two, there are now many more. The IREG Inventory of International Rankings includes 17 global rankings in addition to regional and specialised rankings and various spin-offs. Since the publication of the inventory more global rankings have appeared and no doubt there are more to come.
Media Perceptions of Rankings
It is unfortunate that the media and public perception of global rankings has little relation to reality. The Times Higher Education (THE) World University Rankings are by far the most prestigious but they have serious defects. They lack transparency, with eleven indicators bundled into three broad groups. They rely on subjective surveys and questionable data submitted by institutions. They are unbalanced, with a 30% weighting going to a citation indicator that can be influenced by a handful of papers in a multi-author international project and that has elevated a succession of little-known places to world research leadership. These include the University of Reykjavik, Babol Noshirvani University of Technology, Aswan University, and Anglia Ruskin University.
The problems of the THE world rankings are illustrated by the fate of a leading Polish university in recent editions. In the 2014-15 rankings the University of Warsaw was ranked 301-350 but in 2015-16 it fell to 501-600. This was entirely the result of a dramatic fall in the score for citations, and that was entirely the result of a methodological change. In 2015 THE stopped counting citations to papers with over a thousand "authors". This badly affected the University of Warsaw which, along with Warsaw University of Technology, had been contributing to the Large Hadron Collider project, a producer of many such papers. The University of Warsaw's decline in the rankings had nothing to do with any defect of its own. It was simply the result of THE's tweaking.
Although they receive little attention from the media, there are now several global rankings published by universities and research councils that include more universities, cover a broader range of indicators and are technically as good as or better than the Big Four. These include the National Taiwan University Rankings, University Ranking by Academic Performance published by Middle East Technical University, the Scimago Institution Rankings and the CWTS Leiden Ranking.
Polish Universities in Global Rankings
Turning to the current position of Polish universities in international rankings, there is a great deal of variation. There are 14 in the THE rankings, with 4 in the top 1000, but 410 in the Webometrics rankings, of which 10 are in the top 1000. The ranking with the best representation of Polish universities is Scimago, with 54 in the top 1000.
Of the "big four" rankings -- THE, QS, Shanghai, US News -- the best for analysing the current standing of the International Visibility Project (IntVP) universities is the US News Best Global Universities (BGU). THE and QS are unbalanced, with too much emphasis on a single indicator, citations and the academic survey respectively. The Shanghai rankings include Nobel and Fields awards, some of which are several decades old. It should be noted that BGU is an entirely research-based ranking.
The list below indicates the world rank of Polish universities in the latest US News BGU:
University of Warsaw 286
Jagiellonian University 343
Warsaw University of Technology 611
AGH University of Science and Technology 635
Adam Mickiewicz University 799
University of Wroclaw 833
Medical University of Wroclaw 926
Wroclaw University of Science and Technology 961
Nicolaus Copernicus University 984
Medical University of Gdansk 995
Medical University of Warsaw 1033
University of Silesia 1082
University of Gdansk 1096
University of Lodz 1119
Gdansk University of Technology 1148
Poznan University of Technology 1148
Lodz University of Technology 1194
Lodz Medical University 1203
Warsaw University of Life Sciences 1221
Pomeranian Medical University 1303
Poznan University of Medical Sciences 1312
Silesian University of Technology 1351
Krakow University of Technology 1363
University of Warmia 1363
Medical University Silesia 1399
Medical University of Lublin 1414
University of Rzeszow 1430
Technical University Czestochowa 1445
Poznan University of Life Sciences 1457
Wroclaw University of Life and Environmental Sciences 1465
Agricultural University of Lublin: no overall rank; 214 for agriculture
Medical University of Bialystok: no overall rank; 680 for clinical medicine
One thing that emerges from this list is that the Polish university system suffers from a serious handicap in this ranking and in others due to the existence of independent specialist universities of technology, business and medicine. Consolidation of small specialist institutions could bring about significant improvements, as has recently happened in France.
There is also some variation in the rank of the best performing Polish universities. In most rankings the top scoring Polish university is in the 300s or 400s. There are, however, some exceptions. The default indicator in the Leiden Ranking, total publications, has Jagiellonian University at 247, GreenMetric has Adam Mickiewicz University at 160, and the Moscow Three Missions University Rankings puts the University of Warsaw at 113. On the other hand, no Polish university gets higher than the 600-800 band in the THE world rankings.
The various rankings have very different methodologies and indicators. THE, for example, includes income in three indicators. The QS rankings give a combined weighting of 50% to reputation surveys. Scimago counts patents, and the Center for World University Rankings (CWUR), now based in the Arab Gulf, counts the achievements of alumni. It would be a good idea to look carefully at the content and format of all the rankings before using them for evaluation or benchmarking.
Polish Universities: Strengths and Weaknesses
Poland has certain advantages with regard to international rankings. It has an excellent secondary school system, as shown by above average performance in PISA and other international standardised tests. Current data indicates that it has adequate teaching resources, as shown by statistics for staff-student ratio. It has cultural and economic links to the East, with the Anglosphere and within the EU that are likely in the future to produce fruitful research partnerships and networks.
On the other hand, the evidence of current rankings is that Polish universities are relatively underfunded and that doctoral education is still relatively limited. Their international reputation for research is not very high, although their regional reputation is somewhat better.
One exception to the limited international visibility of Polish universities is a recent British film, The Last Passenger, in which a hijacked train is saved by a few heroes, one of whom has an engineering degree from Gdansk University of Technology.
Poland and the Rankings
It would be unwise for Poland, or indeed any country, to focus on a single ranking. Some rankings have changed their methodology and will probably continue to do so, and this might lead to unexpected rises or falls. THE has announced that there will be a new 3.0 version of the world rankings towards the end of this year.
Any university or university system wishing to engage with the rankings should be aware that they often favour certain types of institutions. The Shanghai rankings, for example, privilege medical research and totally ignore the arts and humanities. Scimago includes data about patents, which would give technological universities an advantage. THE's current methodology gives a massive privilege to participants in multi-contributor projects. QS uses several channels to obtain respondents for its academic and employer surveys. One of these is a list of potential respondents provided by the universities. Universities are very likely to nominate those who are likely to support them in the surveys.
Here are some guidelines for Polish universities as they seek to establish and extend their international presence.
First, improving international visibility will take time. Quick fixes such as recruiting highly cited adjunct faculty or taking part in high profile projects may be counterproductive, especially if there is an unannounced change in methodology.
Second, before launching a campaign to rise in the global rankings, some universities might consider regional or specialist rankings first, such as the THE Europe Teaching Rankings, the QS graduate employability ranking, the Indonesian GreenMetric rankings or business school rankings.
Third, universities should also consider the cost of taking part in the rankings. US News, QS and THE require universities to submit data and this can be time consuming, especially for THE, which asks for data in ten subjects. Many universities seem to need three or four staff dedicated to rankings.
Fourth, it would be wise to monitor all international rankings for data that can be used for internal evaluation or publicity.
Fifth, universities should match their profiles, strengths and weaknesses with the methodology of specific rankings. Universities with strengths in medical research might perform well in the Shanghai rankings.
Sixth, it is not a good idea to focus exclusively on any single ranking or to make any one ranking the standard of excellence.
Sunday, January 19, 2020
Boycotting the Shanghai Rankings?
John Fitzgerald of Swinburne University of Technology has written an article in the Journal of Political Risk that argues that Western universities should boycott, that is not participate in or refer to, the Shanghai rankings in order to show opposition to the current authoritarian trend in Chinese higher education.
There seems a bit of selective indignation at work here. China is hardly the only country in the world with authoritarian governments, ideologically docile universities, or crackdowns on dissidents. Nearly everywhere in Africa, most of the Middle East, Russia, much of Eastern Europe, and perhaps India would seem as guilty as China, if not more so.
American and other Western universities themselves are in danger of becoming one party institutions based on an obsessive hatred of Trump or Brexit, a pervasive cult of "diversity", political tests for admission, appointment and promotion, and periodic media or physical attacks on dissenters or those who associate with dissenters.
Perhaps academics should boycott the THE or other rankings to protest the treatment by Cambridge University of Noah Carl or Jordan Peterson?
One way of resisting the wave of repression, according to Professor Fitzgerald, is to "no longer reference the ARWU rankings or participate in the Shanghai Jiaotong rankings process which risks spreading the Chinese Communist Party's university model globally. Universities that continue to participate or reference the Shanghai rankings should be tasked by their faculty and alumni to explain why they are failing to uphold the principles of free inquiry and institutional autonomy as fiercely as Xi Jinping is undermining them."
It is hard to see what Fitzgerald means by not participating in the Shanghai rankings. The Academic Ranking of World Universities (ARWU) uses publicly available data from western sources, the Web of Science, Nature, Science, the Clarivate Analytics list of Highly Cited Researchers, and Nobel and Fields awards. Universities cannot avoid participating in them. They can denounce and condemn the rankings until their faces turn bright purple but they cannot opt out. They are ranked by ARWU whether they like it or not.
As for referencing, presumably citing the Shanghai rankings or celebrating university achievements there, Fitzgerald's proposals would seem self defeating. The rankings actually understate the achievements of leading Chinese universities. In the latest ARWU Tsinghua University and Peking University are ranked 43rd and 53rd. The QS World University Rankings puts them 16th and 22nd and the THE world rankings 23rd and 24th.
If anyone wanted to protest the rise of Chinese universities they should turn to the QS and THE rankings where they do well because of reputation, income (THE), and publications in high status journals. It is also possible to opt out of the THE rankings simply by not submitting data.
If oppressive policies did affect the quality of research produced by Chinese universities this would be more likely to show up in the Shanghai rankings through the departure of highly cited researchers or declining submissions to Nature or Science than in the THE or QS rankings where a decline would be obscured if reputation scores continued to hold steady.
Fitzgerald's proposals are pointless and self defeating and ascribe a greater influence to rankings than they actually have.
Thursday, January 16, 2020
The decline of standardised testing
Over the last few years there has been a trend in American higher education to reduce the significance of standardised tests -- SAT, ACT, GRE, GMAT, LSAT -- in university admissions. A large number of institutions have gone test optional, meaning that it is up to students whether or not they submit their scores. Those who do not submit will not be rejected but will be assessed by other criteria such as high school grades and ranks, recommendations, social awareness, grit, coping with adversity, leadership, sports, membership of protected groups, and so on.
Most test optional schools are small liberal arts colleges but recently they were joined by the University of Chicago, an elite university by any standards.
Going test optional has a number of advantages. Students with low test scores will be more likely to apply, and that will lower the percentage of applicants admitted, which will make the universities look more selective. It could also help with the rankings, which may at first sight seem a bit of a paradox. US News has declared that, for its America's Best Colleges rankings, up to 25% of applicants can withhold their SAT or ACT scores without the college being penalised.
It seems that if universities can arrange to admit 75% of applicants wholly or partly on the basis of their test scores and allow another quarter to be admitted because of a "holistic" assessment, then they may suffer a measurable fall in the average academic ability of their students, but not enough to get into trouble with the rankers or to undermine their reputation for academic excellence.
It seems likely that US News will continue to adjust its rankings to accommodate the test-optional trend. Recently, for example, the ranking of online graduate education courses lowered the threshold for full credit for quantitative and verbal GRE scores from 75% to 25% of admitted students. The justification is that "although many ranked programs with selective admissions made use of GRE scores in limited circumstances very often submitting these scores was optional or waived for applicants."
This will lead to a substantial number of students being admitted with significantly lower test scores, or without taking the tests at all, and so the gap between the most and least able students is likely to widen. Many of those admitted without submitting scores will suffer a serious blow to their self-respect as they go from being the academic superstars of their high school or undergraduate program to ranking at the bottom of any assessed test or assignment.
There will accordingly be pressure on colleges and graduate schools to relax grading standards, give credit for group work, allow students to repeat courses, mandate contextualised assessment policies, and hold instructors responsible for the performance of their students. Faculty who talk about the decline in standards or disparities in achievement will be disciplined and ostracised.
The significant thing about standardised tests is that they correlate quite highly with general intelligence or cognitive ability, and also with each other. They played a significant role in the growth of American higher education and research in the twentieth century, and in the transformation of the Ivy League from a collection of places for producing literate and well-behaved young gentlemen into intellectual powerhouses that contributed to the economic and scientific dominance of the US in the second half of the century.
They are also a good predictor of academic performance, although perhaps not quite as good by themselves as high school grades, which are influenced by conscientiousness and social conformity.
It now seems likely that there will be increasing pressure to get rid of standardised tests altogether. In California there is a court case in progress seeking to make it illegal to use tests for university admissions, and Carol Christ, Chancellor of the University of California (UC) Berkeley, has declared in favour of abolition.
Getting rid of tests will mean getting rid of an objective measure of students' intelligence and academic ability. Grades are, as noted above, a slightly better predictor overall of academic performance, but there are contexts where tests can add vital information to the admissions process. Grade inflation throughout US high schools is creating a large number of students with perfect or near-perfect grades but with huge differences in cognitive skills. Without tests there will be no way of distinguishing the truly capable from the diligently mediocre or the aggressively conformist.
If UC does stop using the SAT or ACT for admissions it is unlikely that it will institute a policy of open admissions, at least not yet. It is more probable that it will shift the criteria for selection to high school grades, teachers' recommendations, group membership, and unsupervised personal essays. The consequence of selection through inflated high school grades and other subjective measures will almost certainly be a significant decline in the average cognitive skills of students at currently selective universities.
American universities will probably become more representative of the ethnic, gender and racial structure of America or the world, more conscientious, more extroverted, more socially aware. Perhaps this will be compensation for the decline in cognitive ability.
It is unlikely that the levelling process will end there. In the years to come there will very probably be demands that universities stop using high school grades or admission essays or anything else that shows a social or racial gap. Studies will be cited showing that wealthy white parents help their children with homework or drive them to volunteering activities or pay for sports equipment or get professional advice about their diversity essays.
Ultimately there will be a situation where American universities see a noticeable decline in the academic and cognitive ability of students and graduates in comparison with China and the Chinese diaspora, Japan, Korea, Russia and Eastern Europe and maybe India. Almost certainly this will be attributed by educational experts to the stinginess of federal and state authorities.
Perhaps there will come another Sputnik moment when America realises that it has fallen behind its competitors. If so it will probably be too late.
Sunday, December 15, 2019
Sometimes Nice Guys Finish First for Something: the Case of Dartmouth College
There is a crisis approaching for the universities of the global North. A fundamental problem is that declining or stagnant birth rates are reducing the number of potential students, especially in North America, and that will eventually undermine their economic viability. See this article in Inside Higher Ed for the situation in the US.
The options seem to be limited. Universities could downsize, reducing the numbers of staff and students and, at elite US institutions, spending on country club facilities and an ever-expanding army of administrators. They could revise their missions by offering fewer graduate courses, especially in the humanities and social sciences, and more vocational programs.
There seems, however, to be little appetite at the moment for such measures. Many universities are trying to maintain income and size by recruiting from abroad. For a while it appeared that western universities would be saved by thousands of international, mainly Chinese, students. But now it looks like fewer Chinese will be coming and there seems to be no substitute in sight. European universities got excited about Middle Eastern refugees filling the empty seats in lecture halls but then it turned out that most lacked the linguistic and cognitive skills for higher education.
The problem is exacerbated by the general decline or flatlining of the cognitive skills of potential students, as measured by PISA scores or standardized tests. There have been various hypotheses about the cause: smart phones, too much screen time, immigration, dysgenic fertility, inadequate teaching methods, lack of funding, institutional racism and sexism, toxic Trumpism. But whatever the cause, there seems little hope of a recovery any time soon.
Business schools appear to be part of this trend. MBA students tend to be highly mobile and they are not limited to choosing, as many US undergraduates are, between a community college, the local state university and a struggling private college. Faced with competition from European and Asian schools and online courses, soaring costs and declining applications, many US business schools are at best treading water and at worst in serious danger of drowning.
Dartmouth College, a venerable Ivy League school, is no exception. Back in 2014 it reported the biggest drop in applications in 21 years. Although the college continues to hold its place in the US News Best Colleges rankings it has fallen in the Shanghai rankings, suggesting that it is failing to attract leading researchers as well as talented students.
Dartmouth's Tuck School of Business has suffered as much as or more than the rest of the institution. In 2011 it was first in the Economist's full-time MBA ranking and second in 2012; it then began a steady decline, reaching twelfth by 2019.
In 2018 Tuck tried to reverse the decline by adopting a new approach to admissions. It was no longer enough for Tuck students to be smart, accomplished, and aware. They had to be nice.
Back in my days in grammar school my English teacher would be outraged by the use of that word. But standards have changed.
How to measure niceness? By an essay and a referee's report. One does not have to be excessively cynical to see that there is obvious room for gaming and bias here. There is a large amount of writing and talking about coaching for standardized tests, but none about whether essays like these have any real authenticity or validity.
But perhaps I am being too cynical. Maybe Dartmouth's business school has done something right. The latest THE business and economics subject ranking puts Dartmouth 44th in the world, which is very creditable, ahead of Boston University, Zhejiang, Edinburgh, and Johns Hopkins.
With THE, whenever there is a surprisingly high overall score it is a good idea to check the citations indicator, which is supposedly a measure of research impact or influence. Sure enough, Dartmouth is second in the world for citations in business and economics, just behind Central South University in China and just ahead of Peter the Great St Petersburg Polytechnic University.
Could it be that all that niceness is somehow radiating out from the Tuck and causing researchers around the world to cite Dartmouth articles?
Sunday, November 10, 2019
When will Tsinghua Overtake Harvard?
One of the most interesting trends in higher education over the last few years is the rise of China and the relative decline of the USA.
Winston Churchill said that the empires of the future will be empires of the mind. If that is so, then this century will very likely be the age of Chinese hegemony. Chinese science is advancing faster than that of the USA on all or nearly all fronts, unless we count things like critical race theory or queer studies.
This is something that should show up in the global rankings if we track them over at least a few years. So, here is a comparison of the top two universities in the two countries according to indicators of research output and research quality over a decade.
Unfortunately, most international rankings are not very helpful in this respect. Few of the current ones provide data for a decade or more. QS and THE have seen frequent changes in methodology, and THE's citations indicator, although charmingly amusing, is not useful unless you think that Aswan University, Anglia Ruskin University, and the University of Peradeniya are world beaters for research impact. Two helpful rankings here are the Shanghai Academic Ranking of World Universities (ARWU) and the Leiden Ranking.
Let's compare the performance of Tsinghua University and Harvard on the Shanghai Ranking's indicator of research output: papers over a one-year period, excluding arts and humanities. The published scores are derived from the square roots of the raw counts, with the top scorer getting a score of 100.
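The scaling is simple enough to sketch. The counts below are made up, purely to show how the square root compresses differences between institutions:

from math import sqrt

def pub_score(papers, papers_of_top_scorer):
    # Square-root scaling: the top scorer gets 100, everyone else gets
    # 100 * sqrt(own count / top count).
    return 100 * sqrt(papers / papers_of_top_scorer)

# Hypothetical counts: a quarter of the leader's papers still yields half its score.
print(pub_score(10_000, 40_000))  # 50.0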
In 2009 Harvard's score was 100 while Tsinghua's was 55.8. In 2019 it was 100 for Harvard and 79.5 for Tsinghua. So the gap is closing by 2.37 points every year. At that rate it would take about nine years for Tsinghua to catch up, so look out for 2028.
Of course, this is quantity, not quality, so take a look at another indicator, Highly Cited Researchers. This is a moderately gameable metric, and I suspect that Shanghai might have to abandon it one day, but it captures the willingness and ability of universities to sponsor research of high quality. In 2009 Tsinghua's score was zero compared to Harvard's 100. In 2019 it was 37.4. If everything continues at the same rate, Tsinghua will overtake Harvard in another 17 years.
Looking at the default indicator in the Leiden Ranking, total publications, Tsinghua's output was 35% of Harvard's in 2007-10 and 56% in 2014-17. Working from that, Tsinghua would achieve parity in 2029-33, that is, in the rankings published around 2035.
Looking at a measure of research quality, publications in the top 10% of journals, Tsinghua was at 15% of Harvard in 2007-10 and 34% in 2014-17. From that, Tsinghua should reach parity in 2038-42, in the rankings published around 2044, assuming Leiden is still following its current methodology.
So it looks like Tsinghua will reach parity with Harvard in research output in a decade or a decade and a half, and in high-quality research in a decade and a half to two and a half decades.
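For anyone who wants to check or update these back-of-the-envelope projections, the arithmetic is nothing more than a linear extrapolation from two observations. A minimal sketch in Python, using the Shanghai scores quoted above:

def parity_year(year0, score0, year1, score1, target=100.0):
    # Points gained per year between the two observations, then the year
    # in which the target score would be reached at that rate.
    rate = (score1 - score0) / (year1 - year0)
    return year1 + (target - score1) / rate

print(round(parity_year(2009, 55.8, 2019, 79.5)))  # papers indicator: about 2028
print(round(parity_year(2009, 0.0, 2019, 37.4)))   # Highly Cited Researchers: about 2036

The same calculation, applied to the Leiden percentages with their publication windows, gives the later parity dates mentioned above.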
Friday, October 25, 2019
Using Webometrics to Rank University Systems
Recently there has been some interest in ranking higher education systems in addition to institutions or departments. See here and here. But both of these efforts, from Universitas 21 and QS, rank only 50 countries.
The Webometrics rankings attempt to cover every university in the world, or anything that might conceivably claim to be a university, institute, or college. The indicators comprise web activity and research output. So there is data here to create a simple and comprehensive ranking of countries. Below is the list of countries and territories ranked according to the world rank of their highest ranked university. If the Webometrics methodology remains unchanged, the list can be updated twice a year.
The table is not very surprising overall but it is worth noting that the leading Asian countries are already in the top ten and that Brazil and Mexico are not too far behind. The performance of Arab countries is not too impressive even if they are rich in oil.
It's a safe bet that the highest ranked Chinese university will rise steadily over the next few years followed by South Korea and Singapore, but probably not Hong Kong and Australia.
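For the curious, the method is easy to reproduce from any institution-level ranking: group universities by country and keep the best world rank in each. A minimal sketch in Python, with made-up data standing in for the actual Webometrics file:

# Hypothetical (country, world rank) pairs for a handful of universities.
universities = [
    ("USA", 1), ("USA", 2), ("UK", 7), ("UK", 9),
    ("Canada", 19), ("Switzerland", 32), ("China", 33),
]

# Best (lowest) world rank per country, then countries sorted by that rank.
best = {}
for country, world_rank in universities:
    best[country] = min(world_rank, best.get(country, world_rank))

for position, (country, world_rank) in enumerate(
        sorted(best.items(), key=lambda item: item[1]), start=1):
    print(position, country, world_rank)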
Rank | Country | Rank of highest ranked university
1 | USA | 1
2 | UK | 7
3 | Canada | 19
4 | Switzerland | 32
5 | China | 33
6 | Hong Kong | 45
7 | Australia | 46
8 | Singapore | 50
9 | Netherlands | 63
10 | Japan | 69
11 | Brazil | 74
12 | Denmark | 76
13 | Belgium | 78
14 | Finland | 87
15 | Norway | 93
16 | Germany | 97
17 | Sweden | 106
18 | Taiwan | 111
19 | South Korea | 116
20 | Italy | 120
21 | Spain | 133
22 | Mexico | 141
23 | Austria | 150
24 | New Zealand | 153
25 | Israel | 157
26 | Czech Republic | 204
27 | Portugal | 208
28 | Greece | 224
29 | Russia | 226
30 | Argentina | 228
31 | Ireland | 230
32 | South Africa | 274
33 | France | 292
34 | Chile | 323
35 | Malaysia | 352
36 | Argentina | 372
37 | Poland | 388
38 | Saudi Arabia | 415
39 | Iran | 417
40 | Estonia | 440
41 | Serbia | 464
42 | India | 471
43 | Turkey | 475
44 | Thailand | 513
45 | Iceland | 533
46 | Hungary | 563
47 | Egypt | 602
48 | Colombia | 614
49 | Croatia | 619
50 | Luxembourg | 631
51 | Puerto Rico | 649
52 | Belarus | 684
53 | Cyprus | 700
54 | Macau | 720
55 | Slovakia | 732
56 | Lithuania | 750
57 | Indonesia | 771
58 | Costa Rica | 844
59 | Malta | 866
60 | Romania | 881
61 | Bulgaria | 934
62 | Jamaica | 953
63 | Qatar | 958
64 | Peru | 971
65 | Kenya | 987
66 | Vietnam | 1013
67 | Slovenia | 1103
68 | Latvia | 1106
69 | Uganda | 1129
70 | Jordan | 1149
71 | UAE | 1158
72 | Philippines | 1199
73 | Ghana | 1209
74 | Nigeria | 1233
75 | Pakistan | 1269
76 | Ethiopia | 1314
77 | Oman | 1346
78 | Georgia | 1423
79 | Morocco | 1515
80 | North Macedonia | 1569
81 | Venezuela | 1593
82 | Ecuador | 1638
83 | Palestine | 1646
84 | Bosnia | 1669
85 | Kazakhstan | 1793
86 | Trinidad | 1794
87 | Iraq | 1804
88 | Brunei | 1829
89 | Fiji | 1831
90 | Bangladesh | 1895
91 | Tanzania | 1913
92 | Ukraine | 1977
93 | Sri Lanka | 1981
94 | Zimbabwe | 2014
95 | Algeria | 2061
96 | Cuba | 2134
97 | Bahrain | 2161
98 | Kuwait | 2200
99 | Mozambique | 2280
100 | Paraguay | 2297
101 | Mauritius | 2422
102 | Guatemala | 2458
103 | Uruguay | 2499
104 | Botswana | 2583
105 | Grenada | 2583
106 | Armenia | 2643
107 | Liechtenstein | 2761
108 | Montenegro | 2878
109 | Guam | 2900
110 | Sudan | 2936
111 | Bolivia | 2960
112 | Mongolia | 2962
113 | Benin | 2980
114 | Malawi | 3001
115 | Zambia | 3001
116 | Senegal | 3008
117 | Moldova | 3151
118 | Tunisia | 3198
119 | Rwanda | 3220
120 | Nepal | 3243
121 | Namibia | 3316
122 | Panama | 3391
123 | Cameroon | 3527
124 | Barbados | 3538
125 | Azerbaijan | 3573
126 | US Virgin Islands | 3579
127 | Syria | 3593
128 | Burkina Faso | 3634
129 | Dominica | 3679
130 | Honduras | 3892
131 | Uzbekistan | 4017
132 | Libya | 4040
133 | Yemen | 4126
134 | Faroe Islands | 4368
135 | Madagascar | 4372
136 | Togo | 4392
137 | Eswatini | 4428
138 | Laos | 4431
139 | Nicaragua | 4458
140 | El Salvador | 4542
141 | Kyrgyzstan | 4554
142 | French Polynesia | 4640
143 | Albania | 4735
144 | Monaco | 4842
145 | Dominican Republic | 4903
146 | Cambodia | 5060
147 | San Marino | 5107
148 | Papua New Guinea | 5205
149 | Greenland | 5378
150 | Afghanistan | 5676
151 | Lesotho | 5872
152 | Antigua | 6040
153 | Guyana | 6149
154 | Ivory Coast | 6306
155 | Anguilla | 6374
156 | Suriname | 6641
157 | Democratic Republic of the Congo | 7033
158 | American Samoa | 7213
159 | Myanmar | 7221
160 | Belize | 7497
161 | Micronesia | 7962
162 | Haiti | 8082
163 | Angola | 8091
164 | Bhutan | 8159
165 | Niger | 8384
166 | Sierra Leone | 8560
167 | Somalia | 10154
168 | St Kitts & Nevis | 10527
169 | Cape Verde | 10685
170 | Andorra | 10772
171 | Gambia | 11020
172 | Seychelles | 11235
173 | South Sudan | 12329
174 | Cayman Islands | 13011
175 | Samoa | 13132
176 | Bermuda | 13431
177 | British Virgin Islands | 13694
178 | Maldives | 13864
179 | Palau | 13864
180 | St Lucia | 13981
181 | Tajikistan | 14180
182 | Djibouti | 14186
183 | Central African Republic | 14433
184 | Northern Marianas | 14444
185 | Marshall Islands | 15827
186 | Gabon | 16002
187 | Aruba | 16347
188 | Solomon Islands | 17867
189 | Montserrat | 18103
190 | East Timor | 18433
191 | Guinea | 18588
192 | French Guiana | 18703
193 | Liberia | 19463
194 | Isle of Man | 20029
195 | Mali | 20172
196 | Mauritania | 22144
197 | Equatorial Guinea | 23382
198 | Niue | 23892
199 | Eritrea | 24481
200 | Turks & Caicos Islands | 27918