Ranking Education Schools
US News and World Report, publisher of America's Best Colleges, is teaming up with the National Council on Teacher Quality to produce a rating of teacher preparation programs.
Many Education deans are strongly opposed. See here.
Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Sunday, February 13, 2011
We are all equal
I have come across an interesting article, "The equality of intelligence", in the philosopher's magazine by Nina Power. It is one of a series, "Ideas of the century" (I am not sure which one).
Power, whose dissertation is entitled From Theoretical Antihumanism to Practical Humanism: The Political Subject in Sartre, Althusser and Badiou and who is a senior lecturer at Roehampton University, refers to the work of Jacques Rancière,
"who never tires of repeating his assertion that equality is not just something to be fought for, but something to be presupposed, is, for me, one of the most important ideas of the past decade. Although Rancière begins the discussion of this idea in his 1987 text The Ignorant Schoolmaster, it is really only in the last ten years that others have taken up the idea and attempted to work out what it might mean for politics, art and philosophy. Equality may also be something one wishes for in a future to come, after fundamental shifts in the arrangement and order of society. But this is not Rancière’s point at all. Equality is not something to be achieved, but something to be presupposed, universally. Everyone is equally intelligent."Just in case you thought she was kidding:
"In principle then, there is no reason why a teacher is smarter than his or her student, or why educators shouldn’t be able to learn alongside pupils in a shared ignorance (coupled with the will to learn). The reason why we can relatively quickly understand complex arguments and formulae that have taken very clever people a long time to work out lends credence to Rancière’s insight that, at base, nothing is in principle impossible to understand and that everyone has the potential to understand anything."Power seems to be living in a different universe from those of us in the academic periphery. Perhaps she is actually pulling a Sokalian stunt but I suspect not. This sort of thing might be funny to many of us but it seems to be taken seriously in departments of education around the world. Just take a look at the model teaching philosophy statements found on the Internet.
Another example of her writing is Sarah Palin: Castration as Plenitude. Presumably that is potentially understandable by everybody.
Friday, February 11, 2011
More on Citations
A column in the THE by Phil Baty indicates that there might be some change in the research impact indicator in the forthcoming THE World University Rankings. It is good that THE is considering changes but I have a depressing feeling that Thomson Reuters, who collect the citations data, are going to have more weight in this matter than anyone or anything else.
Baty refers to a paper by Simon Pratt who manages the data for TR and THE.
The issue was brought up again this month in a paper to the RU11 group of 11 leading research universities in Japan. It was written by Simon Pratt, project manager for institutional research at Thomson Reuters, which supplies the data for THE’s World University Rankings.

Explaining why THE’s rankings normalise for citations data by discipline, Pratt highlights the extent of the differences. In molecular biology and genetics, there were more than 1.6 million citations for the 145,939 papers published between 2005 and 2009, he writes; in mathematics, there were just 211,268 citations for a similar number of papers (140,219) published in the same period.

Obviously, an institution with world-class work in mathematics would be severely penalised by any system that did not reflect such differences in citations volume.

This is correct but perhaps we should also consider whether the number of citations to papers in genetics is telling us something about the value that societies place on genetics rather than on mathematics and perhaps that is something that should not be ignored.
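For scale, the figures Pratt quotes work out to roughly the following per-paper rates; this is a back-of-the-envelope calculation, not Thomson Reuters' normalisation procedure:

```python
# Citations-per-paper rates implied by the 2005-2009 figures quoted above.
fields = {
    "molecular biology and genetics": (1_600_000, 145_939),
    "mathematics": (211_268, 140_219),
}
for name, (citations, papers) in fields.items():
    print(f"{name}: {citations / papers:.1f} citations per paper")
# molecular biology and genetics: 11.0 citations per paper
# mathematics: 1.5 citations per paper
```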
Also, in the real world are there many universities that are excellent in a single field, defined as narrowly as theoretical physics or applied mathematics, while being mediocre or worse in everything else? Anyone who thinks that Alexandria is the fourth best university in the world for research impact because of its uncontested excellence in mathematics should take a look here.
There are also problems with normalising by region. Precisely what the regions are for the purposes of this indicator is not stated. If Africa is a region, does this mean that Alexandria got another boost, one denied to other Middle Eastern universities? Is Istanbul in Europe and Bilkent in Asia? Does Singapore get an extra weighting because of the poor performance of its Southeastern neighbours?
There are two other aspects of the normalisation that are not foregrounded in the article. First, TR apparently use normalisation by year. In some disciplines it is rare for a paper to be cited within a year of publication. In others it is commonplace. An article classified as being in a low-citation field would get a massive boost if, in addition, it had a few citations within months of publication.
Remember also that the scores represent averages. A small number of total publications means an immense advantage for a university that has a few highly cited articles in low-cited fields and is located in a normally unproductive region. Alexandria's remarkable success was due to the convergence of four favourable factors: credit for publishing in a low-citation sub-discipline, the frequent citation of recently published papers, being located in a continent whose scholars are not generally noticed and, finally, the selfless cooperation of hundreds of faculty who graciously refrained from sending papers to ISI-indexed journals.
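A toy calculation, using invented papers and benchmarks rather than TR's actual data, shows how averaging normalised scores rewards a small denominator:

```python
# Each paper is (citations, world average citations for its field and year).
# Normalised impact per paper = citations / world average;
# a university's score is the mean of this ratio over all of its indexed papers.

def impact_score(papers):
    return sum(cites / benchmark for cites, benchmark in papers) / len(papers)

hot_papers = [(10, 0.5), (8, 0.5), (6, 0.5)]        # recent papers in a low-citation field
small_university = hot_papers + [(0, 2.0)] * 7       # 10 indexed papers in total
large_university = hot_papers + [(0, 2.0)] * 97      # 100 indexed papers in total

print(impact_score(small_university))   # 4.8  - the hot papers dominate the average
print(impact_score(large_university))   # 0.48 - the same papers, diluted by a larger output
```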
Alexandria University may not be open for the rest of this year and may not take part in the second THE WUR exercise. One wonders though how many universities around the world could benefit from these four factors and how many are getting ready to submit data to Thomson Reuters.
Monday, February 07, 2011
Training for Academics
The bureaucratisation of higher education continues relentlessly. Times Higher Education reports on moves to make all UK academics undergo compulsory training. This is not a totally useless idea: a bit of training in teaching methodology would do no harm at all for all those unprepared graduate assistants, part-timers and new Ph Ds that make up an increasing proportion of the work force in European and American universities.
But the higher education establishment has more than this in mind.
Plans to revise the UK Professional Standards Framework were published by the HEA in November after the Browne Review called for teaching qualifications to be made compulsory for new academics.
The framework, which was first published in 2006, is used to accredit universities' teaching-development activities, but the HEA has admitted that many staff do not see it as "relevant" to their career progression.
Under the HEA's proposals, the updated framework says that in future, all staff on academic probation will have to complete an HEA-accredited teaching programme, such as a postgraduate certificate in higher education. Postgraduates who teach would also have to take an HEA-accredited course.
A "sector-wide profile" on the number of staff who have reached each level of the framework would be published by the HEA annually.A comment by "agreed" indicates just what is likely to happen.
Meanwhile, training courses would have to meet more detailed requirements.
I did one of these courses a couple of years ago. I learnt nothing from the "content" that I couldn't have learnt in a fraction of the time by reading a book. The bulk of the course was an attempt to compel all lecturers to adopt fashionable models of teaching with no regard to the need for students to learn content. The example set by the lecturers on the course was appalling: ill-prepared, dogmatic, and lacking in substance. A failure to connect with the "students" and a generally patronising tone was just one of the weaknesses. Weeks of potentially productive time were taken up by jumping through hoops and preparing assignments. This is not an isolated case; I know of several other such courses in other institutions that were equally shambolic. I'm all for improving the quality of teaching, but this is nonsensical. The only real benefit was the collegial relations with academics from other departments forged through common bonds of disgust and mockery aimed at this ridiculous enterprise (presumably designed to justify the continued employment of failed academics from other disciplines given the role of teaching the rest of us how to teach).
Thursday, February 03, 2011
Comparing Rankings 2
Number of Indicators
A ranking that contained only a single indicator would not be very interesting. Providing that the indicators are actually measuring different things, rankings with many indicators would contain more information. On the other hand, the more indicators there are the more likely it is that some will be redundant.
At the moment, the THE World University Rankings are in first place with 13 indicators and Paris Mines is last with only one. We should note, however, that the THE indicators are combined into 5 super-indicators and scores are given only for the latter.
So we have the following order.
1. THE World University Rankings: 13 indicators (scores are given for only 5 indicator groups)
2. HEEACT: 8 indicators
3= Academic Ranking of World Universities (Shanghai): 6 indicators
3= QS World University Rankings: 6 indicators
5. Leiden: 5 (strictly speaking, 5 separate rankings)
6= Webometrics: 4
6= Scimago Institutions Ranking: 4 (1 used for ranking)
8. Paris Mines Tech: 1
Thursday, January 27, 2011
All Ears
Times Higher Education and Thomson Reuters are considering changes to their ranking methodology. It seems that the research impact indicator (citations) will figure prominently in their considerations. Phil Baty writes:
In a consultation document circulated to the platform group, Thomson Reuters suggests a range of changes for 2011-12.
A key element of the 2010-11 rankings was a "research influence" indicator, which looked at the number of citations for each paper published by an institution. It drew on some 25 million citations from 5 million articles published over five years, and the data were normalised to reflect variations in citation volume between disciplines.
Thomson Reuters and THE are now consulting on ways to moderate the effect of rare, exceptionally highly cited papers, which could boost the performance of a university with a low publication volume.
One option would be to increase the minimum publication threshold for inclusion in the rankings, which in 2010 was 50 papers a year.
Feedback is also sought on modifications to citation data reflecting different regions' citation behaviour.
Thomson Reuters said that the modifications had allowed "smaller institutions with good but not outstanding impact in low-cited countries" to benefit.
It would be very wise to do something drastic about the citations indicator. According to last year's rankings, Alexandria University is the fourth best university in the world for research impact, Hong Kong Baptist University is second in Asia, Ecole Normale Superieure Paris best in Europe with Royal Holloway University of London fourth, University of California Santa Cruz fourth in the USA and the University of Adelaide best in Australia.
If anyone would like to justify these results they are welcome to post a comment.
I would like to make these suggestions for modifying the citations indicator.
Do not count self-citations, citations to the same journal in which a paper is published or citations to the same university. This would reduce, although not completely eliminate, manipulation of the citation system. If this is not done there will be massive self-citation and citation of friends and colleagues. It might even be possible to implement a measure of net citation by deducting citations from an institution from the citations to it, thus reducing the effect of tacit citation agreements.
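A minimal sketch of the kind of filtering proposed here, assuming hypothetical citation records rather than Thomson Reuters' actual data structures:

```python
# Keep only citations that are not self-citations, not from the same journal
# and not from the same university as the cited paper (record fields are hypothetical).
def countable_citations(paper, citing_items):
    kept = 0
    for citing in citing_items:
        if set(citing["authors"]) & set(paper["authors"]):
            continue  # self-citation (shared author)
        if citing["journal"] == paper["journal"]:
            continue  # citation from the journal the paper appeared in
        if citing["university"] == paper["university"]:
            continue  # citation from the same institution
        kept += 1
    return kept
```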
Normalisation by subject field is probably going to stay. It is reasonable that some consideration should be given to scholars who work in fields where citations are delayed and infrequent. However, it should be recognised that the purpose of this procedure is to identify pockets of excellence, and research institutions are not built around a few pockets or even a single one. There are many ways of measuring research impact and this is just one of them. Others that might be used include total citations, citations per faculty, citations per research income and the h-index.
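Of those alternatives, the h-index is the simplest to illustrate; a standard computation from a list of per-paper citation counts would look something like this:

```python
def h_index(citation_counts):
    # h is the largest number such that h papers each have at least h citations
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # 3: three papers have 3+ citations, but not four with 4+
```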
Normalisation by year is especially problematical and should be dropped. It means that a handful of citations to an article classified as being in a low-citation discipline in the same year could dramatically multiply the score for this indicator. It also introduces an element of potential instability. Even if the methodology remains completely unchanged this year, Alexandria and Bilkent and others are going to drop scores of places as papers go on receiving citations but get less value from them as the benchmark number rises.
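An invented example of that instability: the same paper, scored at three successive ranking exercises against a rising world-average benchmark for its field and publication year.

```python
# One paper's citation count and the rising world-average benchmark for its
# field and publication year, at three successive ranking exercises (invented figures).
citations = {2010: 10, 2011: 14, 2012: 15}
benchmark = {2010: 0.5, 2011: 4.0, 2012: 7.5}

for year in citations:
    print(year, citations[year] / benchmark[year])
# 2010 20.0  -> a huge normalised score against a tiny first-year benchmark
# 2011 3.5   -> the paper keeps being cited but its normalised value collapses
# 2012 2.0
```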
Raising the threshold of number of publications might not be a good idea. It is certainly true that Leiden University have a threshold of 400 publications a year but Leiden is measuring only research impact while THE and TR are measuring a variety of indicators. There are already too many blank spaces in these rankings and their credibility will be further undermined if universities are not assessed on an indicator with such a large weighting.
Tuesday, January 25, 2011
More Rankings
With acknowledgements to Registrarism, here is news about more new rankings.
Universities are being ranked according to the popularity of their Twitter accounts. According to an article in the Chronicle of Higher Education:
Stanford earned a Klout score of 70, with Syracuse University, Harvard University, and the University of Wisconsin at Madison all following with a score of 64. The top 10 is rounded out by University of California at Berkeley, Butler University, Temple University, Tufts University, the University of Minnesota, the University of Texas at Austin, and Marquette University.
Another is the Green Metric Ranking of World Universities, compiled by Universitas Indonesia. The criteria are green statistics, energy and climate change, waste, water and transportation.
The top five are:
1. Berkeley (Surprise!)
2. Nottingham
3. York (Canada)
4. Northeastern
5. Cornell
Universiti Putra Malaysia is 6th and Universitas Indonesia 15th.
Cambridge and Harvard
After the break with THE, QS decided to continue the old methodology of the 2004-2009 rankings. At least, that is what they said. It was therefore surprising to see that, according to data provided by QS, there were in fact a number of noticeable rises and falls between 2009 and 2010 although nothing like as much as in previous years.
For example the University of Munich fell from 66th place to 98th place, the Free University of Berlin from 70th to 94th and Stockholm University from 168th to 215th while University College Dublin rose from 114th to 89th and Wurzburg from 309th to 215th.
But perhaps the most remarkable news was that Cambridge replaced Harvard as the world's best university. In every other ranking Harvard is well ahead.
So how did it happen? According to Martin Ince, “Harvard has taken more students since the last rankings were compiled without an equivalent increase in the number of academics.”
In other words, Harvard should have had a lower faculty-student ratio and therefore a lower score for this indicator. This in fact happened: Harvard's score went from 98 to 97.
Ince also says that there was an "improvement in staffing levels" at Cambridge, presumably meaning that there was an increase in the number of faculty relative to the number of students. Between 2009 and 2010 Cambridge's score for the student-faculty indicator remained the same at 100, which is consistent with Ince's claim.
In addition to this, there was a "significant growth in the number of citations per faculty member" for Cambridge. It is not impossible that the number of citations racked up by Cambridge rose relative to Harvard, but the QS indicator counts citations over a five-year period, so even a substantial increase in publications or citations would take a few years to have an equivalent effect on this indicator. Also note that this indicator is citations per faculty, and it appears that the number of faculty at Cambridge has gone up relative to Harvard. So we would expect any increase in citations to be cancelled out by a similar increase in faculty.
It looks a little odd then that for this indicator the Cambridge score rose from 89 to 93, four points, which is worth 0.8 in the weighted total score. That, by the way, was the difference between Harvard and Cambridge in 2009.
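The arithmetic behind that 0.8 is simple, assuming the 20 per cent weighting that QS gives to citations per faculty:

```python
weight = 0.20                      # assumed QS weighting for citations per faculty
cambridge_2009, cambridge_2010 = 89, 93
print((cambridge_2010 - cambridge_2009) * weight)  # 0.8 added to the overall weighted score
```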
The oddity is compounded when we look at other high-ranking universities. Between 2009 and 2010 Leiden's score for citations per faculty rose from 97 to 99, Emory's from 90 to 95, Oxford's from 80 to 84 and Florida's from 70 to 75.
It would at first sight appear plausible that if Harvard, the top scorer in both years, did worse on this indicator then everybody or nearly everybody else would do better. But if we look at universities further down the table, we find the opposite. Between 2009 and 2010 for this indicator Bochum fell from 43 to 34, Ghent from 43 to 37, Belfast from 44 to 35 and so on.
Could it be that there was some subtle and unannounced change in the method by which the raw scores were transformed into indicator scores? Is it just a coincidence that the change was sufficient to erase the difference between Harvard and Cambridge?
http://www.wiziq.com/tutorial/90743-QS-World-University_Rankings-top-500
Thursday, January 20, 2011
Comparing Rankings
International university rankings are now proliferating in much the same way that American rankings have multiplied over the last few decades although so far there is no global equivalent to top party schools or best colleges for squirrels.
It is now time for a cursory review and comparison of the major international rankings. I will omit recent rankings, those that look as though they may not be repeated or those that provide insufficient information about methodology.
The list is as follows:
Academic Ranking of World Universities
Higher Education Evaluation and Accreditation Council of Taiwan
International Professional Ranking of Higher Education Institutions (Paris Mines Tech)
Leiden Ranking
QS World University rankings
Scimago Institutions Ranking
THE World University Rankings
Webometrics Ranking of World Universities
The first attribute to be considered is simply the number of universities ranked. A ranking might have an impeccable methodology and analyse a score of indicators with the strictest attention to current bibliometric theory and statistical technique. If, however, it only ranks a few hundred universities it is of no use to those interested in the thousands left outside the elite of the ranked.
I am counting the number of universities in published rankings. Here the winner is clearly Webometrics, followed by Scimago.
Webometrics 12,300
Scimago 1,955
QS WUR 616
ARWU 500
HEEACT 500
Leiden 500
THE WUR 400
Paris Mines 376
Monday, January 17, 2011
Shanghai Ranks Macedonia
I am not sure how accurate the following report is. The whole of Macedonia has never had a Nobel or Fields award winner or an ISI highly cited researcher, has published fewer articles than Loughborough University and has no articles in Nature or Science. It is difficult to see just what a team of seven from Shanghai would evaluate, especially since ARWU is reluctant to get involved with teaching quality or publications in the arts and humanities. Still, it is perhaps indicative that a European country has turned to China to evaluate its universities.
"Shanghai Jiao Tong University, which analyzes the top universities in the world on quality of faculty, research output quality of education and performance, has been selected to evaluate the public and private institutions for higher education in Macedonia, Minister of Education and Science Nikola Todorov told reporters on Sunday.
The ranking team included the Shanghai University Director, Executive and six members of the University's Center, Todorov said, pointing out that Macedonia is to be the first country from the region to be part of the Academic Ranking of World Universities (ARWU), commonly known as the Shanghai ranking.
"The Shanghai ranking list is the most relevant in the world, and being part of it is a matter of prestige. We shall be honored our institutions for higher education to be evaluated by this university. This is going to be a revolution in the education sector, as for the first time we are offered an opportunity to see where we stand in regard to the quality," Todorov said"
Sunday, January 16, 2011
Full QS Rankings 2010
QS have published full details, including indicator scores, of the top 400 universities in their 2010 rankings. In the transparency stakes this brings them level with THE, who have an iPhone/iPad app that provides these details for the main indicators but not the sub-indicators.
Thursday, January 13, 2011
Microsoft Academic Search
Microsoft has developed a computer science research ranking. Organisations, mainly but not entirely universities, are ranked according to number of citations and there is also data on publications and the h-index.
The top five in the world are Stanford, MIT, Berkeley, Carnegie-Mellon and Microsoft. Harvard is seventh and Cambridge 18th.
Top regional universities are:
Africa -- Cape Town
Asia and Oceania -- Tel Aviv
Europe -- Cambridge
North America -- Stanford
South America -- Universidade de Sao Paulo
Monday, January 10, 2011
The Disposable Academic
An article in the Economist (print edition, 18-31/12/2010, 146-8) analyses the plight of many of the world's Ph Ds. Many can expect nothing more than a succession of miserable post-doc fellowships, short-term contracts or part-time jobs teaching remedial or matriculation classes. And those are the lucky ones who actually get their diploma.
It seems that the financial return for a Ph D is only marginally higher than that for a master's. Since there are undoubtedly variations by institution and discipline, it follows that for many joining a doctoral program is a losing proposition in every way.
One wonders whether countries like South Africa and some in Southeast Asia are creating future problems in the drive to boost the production of Ph Ds.
Friday, January 07, 2011
Value for Money
An article by Richard Vedder describes how the publication of data by Texas A and M University shows enormous variation in the cost of faculty members per student taught.
I recently asked my student research assistant to explore this data by choosing, more or less at random, 40 professors of the university's main campus at College Station — including one highly paid professor with a very modest teaching load in each department and one instructor who is modestly paid but teaches many students. The findings were startling, even for a veteran professor like myself.
The 20 high-paid professors made, on average, over $200,000 each, totaling a little over $5 million annually to the university. These professors collectively taught 125 students last year, or roughly $40,000 per student. Since a typical student takes about 10 courses a year, the average cost of educating a student exclusively with this group of professors would be about $400,000, excluding other costs beyond faculty salaries.
There are of course questions to be asked about whether the data included the supervision of dissertations and the difficulty of the courses taught. Even so, the results deserve close scrutiny and might even be a model for some sort of international comparison.
Tuesday, January 04, 2011
Dumbing Down of University Grades
An article in the London Daily Telegraph shows that the number of first and upper second class degrees awarded by British universities has risen steadily over the last few decades. Their value to employers as an indicator of student quality has accordingly diminished.
David Barrett reports that:
The latest data shows that the criteria for awarding degrees has changed dramatically - despite complaints from many universities that grade inflation at A-level has made it hard for them to select candidates.
Traditionally, first class honours have been awarded sparingly to students who show exceptional depth of knowledge and originality.
But the new figures add further weight to a report by MPs last year which found that "inconsistency in standards is rife" and accused vice-chancellors of "defensive complacency".
We might note that the THE-QS rankings until 2009 and the QS rankings of last year have probably done quite a lot to encourage complacency by consistently overrating British universities especially Oxbridge and the London colleges.
Thursday, December 23, 2010
Top US Colleges by Salary
PayScale has published its ranking of US colleges by mid-career median salary. The top five are:
1. Harvey Mudd College
2. Princeton
3. Dartmouth
4. Harvard
5. Caltech
The top schools in various categories are:
Engineering: Harvey Mudd
Ivy League: Princeton
Liberal Arts: Harvey Mudd
Party Colleges: Union College, NY
Private Research Universities: Princeton
State Universities: Colorado School of Mines
Saturday, December 04, 2010
Can 25 Million Citations be Wrong?
Perhaps not but a few hundred might.
University World News has an article by Phil Baty, deputy editor of Times Higher Education, that discusses the recent THE World University Rankings. He is mainly concerned with the teaching component of the rankings and I hope to discuss this in a little while. However, there are some remarks about the citation component that are worth commenting on. He says:
"We look at research in a number of different ways, examining research reputation, income and research volume (through publication in leading academic journals indexed by Thomson Reuters). But we give the highest weighting to an indicator of 'research influence', measured by the number of times a university's published research is cited by academics around the globe.First, normalisation of data means that the number of citations is compared to a benchmark derived from the world average number of citations for a subject area. A large number of citations might mean that an article has been warmly received. It might equally well mean that the article was in a field where articles are typically cited a lot. Comparing simple numbers of citations in a field like literary studies to those in medical research would be unfair to the former since citations there are relatively scarce. So part of the reason for Alexandria University's remarkable success in the recent THE WUR was not just the number of citations of the papers of Mohamed El Naschie but also that he was publishing in a field with a low frequency of citations. Had he published papers on medicine nobody would have noticed.
We looked at more than 25 million citations over a five-year period from more than five million articles. All the data were normalised to reflect variations in citation volume between different subject areas.
This indicator has proved controversial, as it has shaken up the established order, giving high scores to some smaller institutions with clear pockets of research excellence, often at the expense of the larger research-intensive universities.
We make no apology for recognising quality over quantity, but we concede that our decision to openly include in the tables the two or three extreme statistical outliers, in the interests of transparency, has given some fuel for criticism, and has given us some food for thought for next year's table."
It is also very likely -- although I cannot recall seeing direct confirmation -- that Thomson Reuters were benchmarking by year so that a university would score more for a citation to a recently cited article than one to an article published four years ago.
In the case of Alexandria and other universities that scored unexpectedly well we are not talking about millions of citations. We are talking about dozens of papers and hundreds of citations the effect of which has been enormously magnified because the papers were in a low-citation field and were cited within months of publication.
Remember also that we are talking about averages. This means that a university will get a higher score the smaller the number of papers that are published in ISI-indexed journals. Alexandria did not do so well just because El Naschie published a lot and was cited a lot. It also did well because overall it published few articles. Had Alexandria researchers published more then its score would have been correspondingly lower.
Perhaps El Naschie constitutes a clear pocket of excellence, although that is not entirely uncontroversial. But he is a clear pocket of excellence who only became visible because of the modest achievement of the rest of the university. Conversely, there are probably many modestly cited researchers in Europe and the USA who might welcome a move to a university in Asia or Latin America where a few papers and citations in a low cited discipline would blossom into such a pocket.
Is Alexandria one of two or three anomalies? There are in fact many more anomalies, perhaps not quite so obvious, and this can be seen by comparing scores for research impact with other rankings of research output and impact such as HEEACT and Scimago, or with the scores for research in the THE rankings themselves. It would also be interesting if THE released the results of the academic reputational survey.
Consider what would happen if we had a couple of universities that were generally similar, with the same income, staff-student ratio and so on. One however had published two or three times the number of ISI-indexed articles as the other. Both had a few researchers who had been cited more frequently than is usual for their discipline. Under the current system, the first university would get a much lower score than the second. Can this really be considered a preference for quality over quantity? Only if we think that publishing in ISI journals adds to quantity but does not indicate quality.
I hope that food for thought means radical revision of the citations indicator.
A minimal list of changes would include adding more markers of research impact, removing self citations and citations to the same university and the same journal and combining the score for the various disciplinary fields.
If this can be done then the THE rankings may become what was promised.
Friday, December 03, 2010
Is there a Future for Citations?
simplification administrative has some caustic comments on the role of self-citation and reciprocal citation in the remarkable performance of Alexandria University in the 2010 THE rankings.
The title, 'Bibliometry -- already broken', is perhaps unduly pessimistic but THE and Thomson Reuters are going to have to move quickly if they are to rescue their rankings. An obvious remedy would include removing self-citations and intra-university and intra-journal citations from the count.
Monday, November 29, 2010
Priorities
An article by William Patrick Leonard in the Korea Herald discusses how this year's rankings by Shanghai Jiao Tong University, THE and QS, whatever their faults, indicate a long term shift in academic excellence from the USA and Europe to China and other parts of Asia.
"All three release their findings as the school year begins. Each employs a similar blend of indicators, which purportedly measure the relative quality of the institutions surveyed. All emphasize institutional reputation expressed in the quality and quantity of faculty publications, peer assessments, faculty/student ratios, budgets and other input quality measures..
These rankings are clearly flawed; for example, it is not evident that the volume of scholarly publications or peer assessments reflects quality in the classroom. Nevertheless, the rankings show that other fast-growing countries are willing to apply their resources to higher education, just as the United States has been doing for years"
He then observes how in the US sport seems to take precedence over education.
"Yet, instead of strengthening our academic programming, some are planning costly recreational diversions. In May, the National Football Foundation & College Hall of Fame announced that “six new college football teams are set to take the field for the first time this season with 11 more programs set to launch between 2011 and 2013.” Such an announcement is simultaneously sad and humorous. The resources spent to implement and subsequently prop up these programs could be used to improve technology, science, and engineering programs. Sadly, some institutions have opted for stadiums over instructional infrastructure."
Saturday, November 20, 2010
Comment on the New York Times Article
This is from Paul Wouters at CWTS, Leiden University
"However, the reason for this high position is the performance of exactly one (1) academic: Mohamed El Naschie, who published 323 articles in the Elsevier journal Chaos, Solitons and Fractals of which he is the founding editor. His articles frequently cite other articles in the same journal by the same author. On many indicators, the Alexandrian university does not score very high, but on the number of citations indicator the university scores 99.8 which puts it as the 4th most highly cited in the world. This result clearly does not make any sense at all. Apparently, the methodology used by the THES is not only problematic because it puts a high weight on surveys and perceived reputation. It is also problematic because the way the THES survey counts citations and makes them comparable across fields (in technical terms, the way these counts are normalized) is not able to filter out these forms of self-promotion by self-citations. In other words: the way the THES uses citation analysis does not meet one of the requirements of sound indicators: robustness against simple forms of manipulation."
BTW, to be a bit pedantic, it's not THES any more.
The Influence of Rankings
From an announcement about merit scholarships from The Islamic Development Bank.
"The successful candidate must secure admission to one universities listed in the Times Higher Education Supplement (THES)."
That obviously needs updating but does it mean Alexandria but not Texas?
Wednesday, November 17, 2010
Correction
A previous post, "Debate, Anyone" contained some data about comparative rates of self-citation among various univeristies. The methodology used was not appropriate (calcuting the number of articles that contained self-citations as a percentage of the sum of citations). I am recalculating although the relative level of self-citation among universities is most unlikely to be affected.
A previous post, "Debate, Anyone" contained some data about comparative rates of self-citation among various univeristies. The methodology used was not appropriate (calcuting the number of articles that contained self-citations as a percentage of the sum of citations). I am recalculating although the relative level of self-citation among universities is most unlikely to be affected.
Article in New York Times
The New York Times has a long article, which is being quoted globally, about the THE World University Rankings. So far it has been cited in newspapers in Italy, Spain, France and Egypt, with no doubt more to come.
Sunday, November 14, 2010
Another Ranking
An organisation called Eduroute has produced a ranking of universities by "web quality". The idea sounds interesting but we need to be given some more information. The top five are:
1. Harvard
2. MIT
3. Cornell
4. Stanford
5. UC Berkeley
After that there are some surprises, with National Taiwan University in 7th place, the University of the Basque Country 16th, Sofia University 44th and Yildiz Technical University 57th.
The structure of these rankings is as follows:
Volume 20% "The volume of information published is measure by a set of commands that run on the major search engines. "
Online scientific information 10% "Eduroute measures this aspect through the search engines which specialize in publishing researches and scholarly articles and which search a university's website for all available publications."
Links quantity 30%: "Here Eduroute measures the number of incoming links whether these links are from academic or nonacademic websites."
Quality of links and contents 40%: "it was of great importance to measure this aspect of any website in order to reflect the true size of a university's website on the internet and to measure the degree in which the university is concerned with the quality of content it provides on its website."
This is rather vague and, since only the rank order is published, there is nothing to explain the high positions of the University of the Basque Country, Yildiz Technical University and so on.
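As a reading aid only, here is a minimal sketch of how a composite of this kind would normally be assembled from the stated weights. The assumption that Eduroute uses a simple weighted sum, and all of the component scores below, are mine; only the weights come from the description above.

# Hypothetical weighted composite using Eduroute's stated weights.
# The assumption of a simple weighted sum and the example scores are invented.
WEIGHTS = {
    "volume": 0.20,
    "online_scientific_information": 0.10,
    "links_quantity": 0.30,
    "quality_of_links_and_contents": 0.40,
}

def composite_score(components):
    # Weighted sum of component scores, each assumed to be on a 0-100 scale.
    return sum(WEIGHTS[name] * value for name, value in components.items())

example_university = {
    "volume": 85,
    "online_scientific_information": 70,
    "links_quantity": 90,
    "quality_of_links_and_contents": 60,
}

print(composite_score(example_university))  # 75.0

Even with the weights published, a composite like this cannot be checked from the outside unless the component scores themselves are released, which is the point made above.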
The location and personnel of Eduroute also remain mysterious. I did, however, receive this message:
"Eduroute is an organization involved in determining rankings for universities. We pride ourselves in offering people with a true collection of information that will assist them when it comes to classifying universities and their rankings.
When coming up with rankings for universities we put into consideration several parameters in order to come up with as accurate a conclusion as possible. This methodology is frequently evaluated and improved on to obtain a solid benchmark that can cast a true reflection of rankings for universities. Eduroute focuses on studying universities’ websites. We believe that the support and investment a university inputs into its website is proportional to the degree of interaction of the website and its users (students, staff and lecturers). The volume and content of the university’s website is analysed while also putting into consideration the traffic flow to the website. The number of external links leading to the university’s website is also a key factor as it is a reflection of how popular the site is. Such parameters and more are useful while determining the rankings for universities. Educational institutions also have the opportunity of registering with us so as to be included in the rankings.
At Eduroute we put all our energies in ensuring the rankings for universities we offer are as accurate as possible. We believe this information is of vital importance both to the general public and to the universities and as such we offer a professional service that satisfies both parties.
Regards,
May Attia
Project Manager
www.eduroute.info"
Friday, November 12, 2010
Article by Philip Altbach
Inside Higher Education has a substantial and perceptive article on international university rankings by Philip Altbach.
Towards the end there is a round-up of the current season. Some quotations:
On QS
Forty percent of the QS rankings are based on a reputational survey. This probably accounts for the significant variability in the QS rankings over the years. Whether the QS rankings should be taken seriously by the higher education community is questionable.
On ARWU
Some of AWRU’s criteria clearly privilege older, prestigious Western universities — particularly those that have produced or can attract Nobel prizewinners. The universities tend to pay high salaries and have excellent laboratories and libraries. The various indexes used also heavily rely on top peer-reviewed journals in English, again giving an advantage to the universities that house editorial offices and key reviewers. Nonetheless, AWRU’s consistency, clarity of purpose, and transparency are significant advantages.
On THE
Some of the rankings are clearly inaccurate. Why do Bilkent University in Turkey and the Hong Kong Baptist University rank ahead of Michigan State University, the University of Stockholm, or Leiden University in Holland? Why is Alexandria University ranked at all in the top 200? These anomalies, and others, simply do not pass the "smell test." Let it be hoped that these, and no doubt other, problems can be worked out.