Thursday, September 20, 2018

Philosophy Department Will Ignore GRE Scores

The philosophy department at the University of Pennsylvania has taken a step away from fairness and objectivity in university admissions. It will no longer look at the GRE scores of applicants to its graduate programme. 

The department is good but not great. It is ranked 27th in the Leiter Report rankings and in the 101-150 band in the QS world subject rankings.

So how will students be selected without GRE scores? It seems it will be by letters of recommendation, undergraduate GPA, writing samples, and admission statements.

Letters of recommendation have very little predictive validity. The value of undergraduate grades has eroded in recent years and will very likely continue to do so. Admission essays and diversity statements say little about academic ability and a lot about political conformism.

The reasons for the move are not convincing. Paying for the GRE is supposed to be a burden on low-income students, but the cost is much less than Penn's exorbitant tuition fees. It is also claimed that the GRE and other standardised tests do not predict performance in graduate school. In fact, they are a reasonably good predictor of academic success, although they should not be used by themselves.

Then there is the claim that the GRE "sometimes" underpredicts the performance of minorities and women. No doubt it sometimes does, but then presumably it sometimes does not. Unless there is evidence that the underprediction is significant, and that it is greater than that of other indicators, this claim is meaningless.

What will be the result? The department will be able to admit students who "do not test well" but who can get good grades, something that is becoming less difficult at US colleges, or who can persuade letter writers at reputable schools that they will do well.

It is likely that more departments across the US will follow Penn's lead. American graduate programmes will slowly become less rigorous and less able to compete with the rising universities of Asia.








Sunday, September 09, 2018

Ranking Global Rankings: Information


Another indicator for ranking global rankings might be the amount of information that they contain. Here are 17 global rankings in the IREG Inventory ranked according to the number of indicators or groups of indicators for which scores or ranks are given. The median and the mode are both six.

The number for U-Multirank is perhaps misleading since data is not provided for all universities. 



Number of indicators or indicator groups with scores or ranks

Rank | Ranking                                         | Country of publisher | Number of indicators
1    | U-Multirank                                     | Germany              | 112
2    | -                                               | Russia               | 20
3    | CWTS Leiden Ranking                             | Netherlands          | 19
4    | -                                               | USA                  | 13
5    | -                                               | Taiwan               | 8
6    | CWUR University Rankings                        | UAE                  | 7
7=   | -                                               | UK                   | 6
7=   | Shanghai Ranking ARWU                           | China                | 6
7=   | UI GreenMetric Ranking                          | Indonesia            | 6
7=   | URAP University Ranking by Academic Performance | Turkey               | 6
11   | -                                               | UK                   | 5
12   | -                                               | Spain                | 4
13   | -                                               | Spain                | 3
14   | -                                               | UK                   | 2
15=  | -                                               | France               | 1
15=  | Reuters Top 100 Innovative Universities         | USA                  | 1
15=  | uniRank                                         | Australia            | 1



Monday, September 03, 2018

Ranking Global Rankings: Inclusion

The number of global university rankings continues to grow and it is becoming harder to keep track of them. Earlier this year IREG published an inventory of international rankings that included 17 global rankings. Here are those rankings in order of the number of institutions that they rank in the most recent edition.

Webometrics is the clear winner, followed by uniRank and SCImago. There are, of course, other indicators to think about and some of these will be covered later.






Number of institutions ranked

Rank | Ranking                                         | Country of publisher | Number ranked
1    | Webometrics                                     | Spain                | 28,077
2    | uniRank                                         | Australia            | 13,146
3    | SCImago                                         | Spain                | 5,637
4    | URAP University Ranking by Academic Performance | Turkey               | 2,500
5    | U-Multirank                                     | Germany              | 1,500
6    | -                                               | USA                  | 1,250
7    | THE World University Rankings                   | UK                   | 1,000+
8=   | Shanghai Ranking ARWU                           | China                | 1,000
8=   | CWUR University Rankings                        | UAE                  | 1,000
10   | QS World University Rankings                    | UK                   | 916
11   | CWTS Leiden Ranking                             | Netherlands          | 903
12   | -                                               | Taiwan               | 800
13   | -                                               | Russia               | 783
14   | UI GreenMetric Ranking                          | Indonesia            | 619
15   | -                                               | UK                   | 500
16   | -                                               | France               | 150
17   | Reuters Top 100 Innovative Universities         | USA                  | 100


Sunday, September 02, 2018

Ranking US Rankings

Forbes Magazine has an article by Willard Dix that ranks US ranking sites. The ranking is informal, without specified indicators, but the author does give us an idea of what he thinks a good ranking should do.

Here are the top five of thirteen:
1.  US News: America's Best Colleges
2.  Money magazine: Best Colleges Ranking
3.  Forbes: America's Top Colleges
4.  Kiplinger's Best College Values
5.  Washington Monthly: College Guide and Rankings.

Reading through the comments it is possible to get an idea of the criteria of a good ranking. Rankings should contain a lot of information; they should be comprehensive and include a large number of institutions; they should provide data that helps prospective students and stakeholders; they should have been published for several years; if they use surveys, these should have a lot of respondents; and they should have face validity (a list with a "revolutionary algorithm" that puts non-Ivy places at the top is in 13th place).





Friday, August 24, 2018

Why is Australia doing well in the Shanghai rankings?

I am feeling a bit embarrassed. In a recent post I wrote about the Shanghai Rankings (ARWU) being a bit boring (which is good) because university ranks usually do not change very much. But then I noticed that a couple of Australian universities did very well in the latest rankings. One of them, the Australian National University (ANU), has risen a spectacular (for ARWU) 31 places over last year. The Financial Review says that "[u]niversity scientific research has boosted the position of two Australian universities in a global ranking of higher education providers." 

The ranking is ARWU and the rise in the ranking is linked to the economic contribution of Australian universities, especially those in the Group of Eight.

So how well did Australian universities do? The top performer, as in previous years, is the University of Melbourne, which went up a spot to 38th place. Two other universities went up a lot in a very un-Shanghainese way: ANU, already mentioned, from 69th to 38th place, and the University of Sydney from 83rd to 68th.

The University of Queensland was unchanged in 55th place, while Monash fell from 78th to 91st and the University of Western Australia from 91st to 93rd.

How did ANU and Sydney do it? The ANU scores for Nobel and Fields awards were unchanged. Publications were up a bit and papers in Nature and Science down a bit.

What made the difference was the score for highly cited researchers, derived from lists kept by Clarivate Analytics, which rose from 15.4 to 23.5, a difference of 8.1 or, after weighting, 1.62 points of the overall score. The difference in total scores between 2017 and 2018 was 1.9 so those highly cited researchers made up most of the difference.
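The arithmetic can be checked in a few lines, using the 20% weighting that ARWU publishes for the highly cited researchers (HiCi) indicator and the scores quoted above:

```python
# Checking the contribution of the highly cited researchers (HiCi)
# indicator to ANU's overall ARWU score. HiCi carries a 20% weight in
# the published ARWU methodology; the scores are those quoted above.

HICI_WEIGHT = 0.20

hici_2017, hici_2018 = 15.4, 23.5
indicator_gain = hici_2018 - hici_2017        # 8.1
weighted_gain = indicator_gain * HICI_WEIGHT  # 1.62 points of overall score

overall_gain = 1.9  # ANU's total score rose by this much from 2017 to 2018
share = weighted_gain / overall_gain

print(f"HiCi supplied {weighted_gain:.2f} of a {overall_gain} point rise ({share:.0%})")
```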

In 2016 ANU had two researchers on the list, which was used for the 2017 rankings. One was also on the 2017 list, used in 2018. In 2017 there were six ANU highly cited researchers: one from the previous year, one who had moved from MIT, and four long-serving ANU researchers.

Let's be clear. ANU has not been handing out unusual contracts or poaching from other institutions. It has grown its own researchers and should be congratulated.

But using an indicator where a single researcher can lift a top 100 university seven or eight places is an invitation to perverse consequences. ARWU should consider whether it is time to explore other measures of research impact.

The improved scores for the University of Sydney resulted from an increase between 2016 and 2017 in the number of articles published in the Science Citation Index Expanded and the Social Science Citation Index.

Saturday, August 18, 2018

Who Cares About University Rankings?

A paper by Ludo Waltman and Nees Jan van Eck asks what users of the Leiden Ranking are interested in. There's some interesting stuff but for now I just want to look at where the users come from.

The top ten countries where visitors originate are:

1.  USA
2.  Australia
3.  Netherlands
4.  UK
5.  Turkey
6.  Iran
7.  South Korea
8.  France
9.  Germany
10. Denmark.

The authors consider the number of visitors from Australia, Turkey, Iran and South Korea to be "quite remarkable."

Let's look at other signs of interest in rankings. Here are the top countries for respondents to the 2018 QS academic survey:

1.  USA
2.  UK
3.  Malaysia
4= Australia
4= South Korea
4= Russia
7= Italy
7= Japan
9= Brazil
9= Canada

And here are the top ten countries for visitors to this blog:

1. USA
2. UK
3. Russia
4. France
5. Germany
6. Ukraine
7. Canada
8. Malaysia
9. Australia
10. Singapore.

The three countries on all three lists are UK, USA and Australia. The countries on two lists are South Korea, Russia, Malaysia, France, Germany and Canada.










https://www.cwts.nl/blog?article=n-r2s2a4&title=what-are-users-of-the-cwts-leiden-ranking-interested-in

http://rankingwatch.blogspot.com/2018/06/responses-to-qs.html

Saturday, August 11, 2018

Will THE do something about the citations indicator?


International university rankings can be a bit boring sometimes. It is difficult to get excited about the Shanghai rankings, especially at the upper end: Chicago down two places, Peking up one. There was a bit of excitement in 2014 when there was a switch to a new list of highly cited researchers and some universities went up and down a few places, or even a few dozen, but that seems over with now.

The Times Higher Education (THE) world rankings are always fun to read, especially the citations indicator, which since 2010 has proclaimed a succession of unlikely places as having an outsize influence on the world of research: Alexandria University, Hong Kong Baptist University, Bilkent University, Royal Holloway University of London, National Research University MEPhI Moscow, Tokyo Metropolitan University, Federico Santa Maria Technical University Chile, St George's University of London, Anglia Ruskin University Cambridge, Babol Noshirvani University of Technology Iran.

I wonder if the good and the great of the academic world ever feel uncomfortable about going to those prestigious THE summits while places like the above are deemed to be the equal, or the superior, of Chicago or Melbourne or Tsinghua for research impact. Do they even look at the indicator scores?

These remarkable results are not the product of deliberate cheating but of THE's methodology. First, research documents are divided into 300-plus fields, five document types, and five years of publication, and the world average number of citations (the mean) is then calculated for each combination of field, document type, and year. Altogether there are more than 8,000 "cells" with which the average of each university in the THE rankings is compared.

This means that if a university manages to get a few publications in a field where citations are typically low it could easily get a very high citations score. 

Added to this is a "regional modification" whereby the final citation impact score is divided by the square root of the score of the country in which the university is located. This results in most universities receiving an increased score, one which is very small for those in productive countries and very large for those in countries that generate few citations. The modification is now applied to half of the citations indicator score.
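The mechanics can be sketched in a few lines. This is an illustrative toy model, not THE's actual code: the cell names, citation counts, and country score are all invented, and for simplicity the regional modification is applied to the whole score rather than to half of it:

```python
from statistics import mean

# Illustrative toy model (not THE's actual code) of field-normalised
# citation impact with a "regional modification". Each paper is compared
# with the world mean for its cell (field x document type x year); the
# university's score is then divided by the square root of its country's
# score.

def normalised_impact(papers, world_means):
    """papers: list of (cell_id, citations); world_means: cell_id -> world mean."""
    ratios = [citations / world_means[cell] for cell, citations in papers]
    return mean(ratios)

def regional_modification(university_score, country_score):
    # Dividing by the square root of a country score below 1 inflates the
    # result, and inflates it most for low-citation countries.
    return university_score / country_score ** 0.5

# Invented world means: citations per paper are far lower in some fields.
world_means = {"math": 2.0, "oncology": 20.0}

# Two modestly cited papers in a low-citation field produce a big score:
papers = [("math", 10), ("math", 8)]
score = normalised_impact(papers, world_means)           # (5.0 + 4.0) / 2 = 4.5
print(score)
print(regional_modification(score, country_score=0.25))  # 4.5 / 0.5 = 9.0
```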

Then we have the problem of those famous kilo-author, mega-cited papers. These are papers with dozens, scores, or hundreds of participating institutions and similar numbers of authors and citations. Until 2015 THE treated every author as though they were the sole author of a paper, including those with thousands of authors. Then in 2015 they stopped counting papers with over a thousand authors, and in 2016 they introduced a modified fractional counting of citations for papers with over a thousand authors: citations were distributed proportionally among the authors, with a minimum allotment of five per cent.

There are problems with all of these procedures. Treating every author as the sole author meant that a few places could get massive citation counts from taking part in one or two projects, such as the CERN experiments or the global burden of disease study. On the other hand, excluding mega-papers is also not helpful, since it omits some of the most significant current research.

The simplest solution would be fractional counting all round: just dividing the number of citations of every paper by the number of contributors or contributing institutions. This is the default option of the Leiden Ranking and there seems no compelling reason why THE could not do so.
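The three counting schemes can be compared on a toy kilo-author paper. The numbers below are invented, and the real allocation rules are more involved than this sketch:

```python
# Toy comparison (invented numbers) of three ways of crediting a
# kilo-author paper to one of its contributors.

def full_count(citations, n_contributors):
    # Pre-2015 practice: every contributor is credited with all citations.
    return citations

def fractional_count(citations, n_contributors):
    # Leiden-style default: citations shared equally among contributors.
    return citations / n_contributors

def modified_fractional_count(citations, n_contributors, floor=0.05):
    # THE's post-2016 scheme as described above: a proportional share,
    # but never less than five per cent of the citations.
    return max(citations / n_contributors, citations * floor)

citations, contributors = 3000, 1500   # a CERN-style mega-paper
print(full_count(citations, contributors))                # 3000
print(fractional_count(citations, contributors))          # 2.0
print(modified_fractional_count(citations, contributors)) # 150.0 (the floor wins)
```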

There are some other issues that should be dealt with. One is the question of self-citation. This is probably not a widespread issue but it has caused problems on a couple of occasions.

Something else that THE might want to think about is the effect of the rise in the number of authors with multiple affiliations. So far only one university has recruited large numbers of adjunct staff whose main function seems to be listing the university as a secondary affiliation at the top of published papers, but there could be more in the future.

Of course, none of this would matter very much if the citations indicator were given a reasonable weighting of, say, five or ten per cent, but it has more weight than any other indicator; the next largest is the research reputation survey at 18%. A single mega-paper, or even a few strategically placed citations in a low-cited field, can have a huge impact on a university's overall score.

There are signs that THE is getting embarrassed by the bizarre effects of this indicator. Last year Phil Baty, THE's ranking editor, spoke about its quirky results.

Recently, Duncan Ross, data director at THE, has written about the possibility of a methodological change. He notes that currently the benchmark world score for each of the 8,000-plus cells is determined by the mean. He speculates about using the median instead. The problem with this is that a majority of papers are never cited, so the median for many of the cells is going to be zero. So he proposes, based on an analysis of the recent THE Latin American rankings, that the 75th percentile be used.
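The effect Ross describes is easy to see on a toy "cell". The citation counts below are invented, but they share the realistic feature that most papers are never cited at all:

```python
from statistics import mean, median, quantiles

# Toy citation counts for one field-year "cell" (invented, but with the
# realistic feature that most papers are uncited and one is cited heavily).
cell = [0, 0, 0, 0, 0, 0, 1, 2, 5, 40]

print(mean(cell))               # 4.8  - dragged upwards by the one big paper
print(median(cell))             # 0.0  - useless as a benchmark
print(quantiles(cell, n=4)[2])  # 2.75 - the 75th percentile, a usable non-zero benchmark
```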

Ross suggests that this would make the THE rankings more stable, especially the Latin American rankings where the threshold number of articles is quite low. 

It would also allow the inclusion of more universities that currently fall below the threshold. This, I suspect, is something that is likely to appeal to the THE management.

It is very good that THE appears willing to think about reforming the citations indicator. But a bit of tweaking will not be enough. 





Sunday, July 22, 2018

The flight from bright: Dartmouth wants nice MBA students

The retreat from intelligence as a qualification for entrance into American universities continues. We have already seen the University of Chicago join the ranks of test-optional colleges and it seems that for many years Harvard has been discriminating against prospective Asian students who supposedly lack the qualities of grit, humour, sensitivity, kindness, courage, and leadership that are necessary to study physics or do research in economics.

There has been a lot of indignation about the implication that Harvard should actually think that Asians were uniquely lacking in humour and grit and so on.

But even if Asians were lacking in these qualities, that is surely no reason to deny them admission to elite institutions if they have the ability to perform at the highest intellectual level. Sensitivity, kindness, a sense of humour and the like are no doubt desirable, but they are highly subjective, culture-specific, difficult to operationalise and almost impossible to assess with any degree of validity. They could also have a disparate impact on racial, gender and ethnic groups.

Now Dartmouth College is going down the same path. What do you need to get into the Tuck School of Business?

"True to the school’s long-held reputation for being applicant-friendly and transparent in its admissions process, the new, simplified criteria comprise four attributes reflective of successful Tuck students: smart, nice, accomplished, and aware."

I doubt that Dartmouth will be the only place to admit students because they are nice, or good at pretending to be nice or able to afford niceness trainers. And how will niceness be assessed?

There will be an essay: "Tuck students are nice, and invest generously in one another's success. Share an example of how you have helped someone else succeed. (500 words)."

Referees will be asked: "Tuck students are nice. Please comment on how the candidate interacts with others, including when the interaction is difficult or challenging."

Soon, no doubt, we will hear demands for the niceness of students to be included as an indicator in university rankings. There will be compulsory workshops on how to confront the nastiness within. Studies will show that niceness is an essential attribute for success in research, business, sport, war and journalism, and that it is something in which ciswhitestraightmales, especially those not differently abled, are desperately deficient.

And we are likely to see articles wondering why Asian universities are mysteriously overtaking the West in anything based on cognitive skills. 




How should rankings assess teaching and learning?

My article in University World News can be accessed here. Comments can be made at this blog.

Tuesday, July 17, 2018

Chicago goes test-optional

The University of Chicago has gone test-optional. Prospective students will no longer be required to submit their SAT or ACT scores when applying, although probably most will continue to do so.

Many colleges in the US have done this already, and candidates who choose not to submit test scores are admitted on the basis of high school grades, perceived personal attributes, recommendations, essays, extra-curricular activities and/or membership of a valued group. Most of these, but not all, appear to be small liberal arts colleges. Chicago is the first major US research university to do so but is unlikely to be the last.

A common justification is that dropping the test requirement allows universities to recruit students from disadvantaged or underrepresented groups who may not do well on standardised tests. Perhaps it does, but there is also a less altruistic reason. Going test-optional might help Chicago maintain or even improve its position in the US News rankings while allowing the overall academic ability of its students to slide.

If the students who choose not to submit test scores are those scoring below average, then the reported average test scores will rise, which will improve Chicago's standing in the rankings. Apparently it is US News policy not to penalise institutions as long as 75% of the incoming class submit their scores. Also, if the university gets more applicants, then the admission rate goes down and the university appears more selective. All in all, it looks like a win-win situation. But as more students are selected because they can produce a two-minute video, are members of a protected group, or voice support for current orthodoxies, overall academic quality will gradually drift downwards.
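The ranking arithmetic here is simple selection bias, easy to see with invented numbers:

```python
from statistics import mean

# Toy illustration (invented scores) of the test-optional effect: if the
# weakest applicants simply withhold their scores, the reported average
# rises even though the admitted class is unchanged.

all_scores = [1600, 1550, 1500, 1450, 1200, 1150]
reported = [s for s in all_scores if s >= 1400]  # low scorers opt out

print(mean(all_scores))  # the true average of the class
print(mean(reported))    # 1525 - the higher average that gets reported
```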


There are signs that higher education in the West is moving away from the objective standardised testing of academic ability. It is likely that those admitted because they are likeable or passionate, show leadership qualities or bring new perspectives to the classroom will find the cold realities of advanced physics or philosophy frustrating and will demand that standards be adjusted to accommodate them.

Meanwhile the long slow convergence of America and China will continue. China is now level with the US on many measures of research output, and parity in quality will probably come soon. If American schools abandon the rigorous selection of students, teachers and researchers, they are likely to fall behind.


Wednesday, July 11, 2018

Should Malaysian universities celebrate rising in the QS Rankings?



My article in the Kuala Lumpur New Straits Times can be accessed here. You can post comments at this blog.


Saturday, July 07, 2018

The THE European Teaching Rankings

On July 11th Times Higher Education (THE) will publish their new European university rankings. These are supposed to be about teaching and seem to give priority to students as consumers of higher education.

They are similar to THE's Japanese and US rankings with four "pillars": Engagement (five indicators derived from the European Student Survey), Resources (three indicators), Outcomes (three indicators) and Environment, which consists entirely of the gender ratio of faculty and students.

THE are presenting these rankings as an innovative pilot project, so they might contain interesting insights lacking in other international rankings. But it looks like THE will follow previous practice and only give scores for the four pillars and not for the component indicators. This would drastically reduce their value for students and other stakeholders, since it would be difficult or impossible to figure out exactly what has contributed to a high or a low score for any of the pillars.

Although the rankings claim to assess teaching, there is still a substantial research component here. Papers to staff ratio gets a weighting of 7.5%, and THE's survey of postgraduate teaching, which correlates very closely with the research survey, gets 10%.

What is missing here is any serious measure of the quality of students or graduates. This is the great omission of the current global ranking scene. QS have a survey of employers and CWUR counts the prizes won by university alumni. Neither of these is relevant for the great majority of institutions around the world.

The most valuable metrics in the US News national rankings are the test scores and high school standing of admitted students. The blunt reality is that employers and graduate and professional schools are interested in graduates' cognitive skills, subject knowledge, conscientiousness and, sadly and increasingly, willingness to conform; and the ability of universities to nurture these is closely related to students' performance on standardised tests and national exams. It is disappointing that THE have been unable to find a way of capturing the quality of students and graduates.

It is also odd that THE are able to supply data on only one aspect of institutional environment, that is gender ratio.

U-Multirank already covers some of the indicators included in the new rankings and has a reasonable coverage of European universities. Whether THE can do better will be seen on the eleventh.