Saturday, March 26, 2011
Growth of Academic Publications: Southwest Asia, 2009-2010
One of several surprises in last year's THE rankings was the absence of any Israeli university from the Top 200. QS had three in its top 200 and, as noted earlier on this blog, over a fifth of Israeli universities were in the Shanghai 500, a higher proportion than for any other country. It seems that in the case of at least two universities, Tel Aviv and the Hebrew University of Jerusalem, a failure of communication meant that data were not submitted to Thomson Reuters, which collects and analyses data for THE.
The high quality of Israeli universities might seem rather surprising since Israeli secondary school students perform poorly on international tests of scholastic attainment and the national average IQ is mediocre. It could be that part of the reason for the strong Israeli academic performance is the Psychometric Entrance Test for university admission that measures quantitative and verbal reasoning and also includes an English test. The contrast with the trend in the US, Europe and elsewhere towards holistic assessment, credit for leadership, community involvement, overcoming adversity and being from the right post code area is striking.
Even so, Israeli scientific supremacy in the Middle East is looking precarious. Already the annual production of academic papers in Israel has been exceeded by Iran and Turkey.
Meanwhile, the total number of papers produced in Israel is shrinking while that of Iran and Turkey continues to grow at a respectable rate. The fastest growth in Southwest Asia comes from Saudi Arabia and the smaller Gulf states of Qatar and Bahrain.
Countries ranked by percentage increase in publications in the ISI Science, Social Science and Arts and Humanities indexes and Conference Proceedings between 2009 and 2010. (total 2010 publications in brackets)
1. Saudi Arabia 35% (3924)
2. Qatar 31% (453)
3. Syria 14% (333)
4. Bahrain 13% (184)
5. Palestine 9% (24)
6. UAE 6% (303)
7. Turkey 5% (26835)
8. Lebanon 4% (2058)
9. Iran 4% (21047)
10. Oman 4% (494)
11. Jordan 1% (1637)
12. Iraq -3% (333)
13. Israel -4% (17719)
14. Yemen -8% (125)
15. Kuwait -13% (759)
(data collected 23/3/11)
This of course may not say very much about the quality of research. A glance at the ISI list of highly cited researchers shows that Israel is ahead, for the moment at least, with 50 compared to 29 for Saudi Arabia and one each for Turkey and Iran.
Thursday, March 24, 2011
Growth in Academic Publications: Southeast Asia 2009-2010
Countries ranked by percentage increase in publications in the ISI Science, Social Science and Arts and Humanities indexes and Conference Proceedings between 2009 and 2010. (total 2010 publications in brackets)
1. Malaysia 31% (8603)
2. Laos 30% (96)
3. Indonesia 30% (1631)
4. Brunei 16% (88)
5. Papua New Guinea 5% (67)
6. Vietnam 5% (1247)
7. Singapore 4% (11900)
8. Thailand 2% (2248)
9. Timor Leste 0% (4)
10. Cambodia -5% (158)
11. Myanmar -12% (78)
Singapore is still the dominant research power in Southeast Asia but Malaysia and Indonesia, admittedly with much larger populations, are closing fast. Thailand is growing very slowly and Myanmar is shrinking.
(data collected 23/3/11)
Tuesday, March 22, 2011
Comparing Rankings 3: Omissions
The big problem with the Asiaweek rankings of 1999-2000 was that they relied on data submitted by universities. This meant that if enough were dissatisfied they could effectively sabotage the rankings by withholding information, which is in fact what happened.
The THES-QS rankings, and since 2010 the QS rankings, avoided this problem by ranking universities whether they liked it or not. Nonetheless, there were a few omissions in the early years: Lancaster, Essex, Royal Holloway, University of London, and the SUNY campuses at Binghamton, Buffalo and Albany.
In 2010 THE decided that they would not rank universities that did not submit data, a principled decision but one that has its dangers. Too many conscientious objectors (or maybe poor losers) and the rankings would begin to lose face validity.
When the THE rankings came out last year, there were some noticeable absentees, among them the Chinese University of Hong Kong, the University of Queensland, Tel Aviv University, the Hebrew University of Jerusalem, the University of Texas at Austin, the Catholic University of Louvain, Fudan University, Rochester, Calgary, the Indian Institutes of Technology and Science and Sciences Po Paris.
As Danny Byrne pointed out in University World News, Texas at Austin and Moscow State University were in the top 100 in the Reputation Rankings but not in the THE World University Rankings. Producing a reputation-only ranking without input from the institutions could be a smart move for THE.
Monday, March 21, 2011
QS comments on the THE Reputation Ranking
In University World News, Danny Byrne from QS comments on the new THE reputation ranking.
So why has THE decided to launch a world ranking based entirely on institutional reputation? Is it for the benefit of institutions like Moscow State University, which did not appear in THE's original top 200 but now appears 33rd in the world?
The data on which the new reputational ranking is based has been available for six months and comprised 34.5% of the world university rankings published by THE in September 2010.
But this is the first time the magazine has allowed anyone to view this data in isolation. Allowing users to access the data six months ago may have attracted less attention, but it would perhaps have been less confusing for prospective students.
The order of the universities in the reputational rankings differs from the THE's overall ranking. But no new insights have been offered and nothing has changed. This plays into the hands of those who are sceptical about university rankings.
Wednesday, March 16, 2011
Worth Reading
Ellen Hazelkorn, 'Questions Abound as the College-Rankings Race Goes Global' in Chronicle of Higher Education
"It is amazing that more than two decades after U.S. News & World Report first published its special issue on "America's Best Colleges," and almost a decade since Shanghai Jiao Tong University first published the Academic Ranking of World Universities, rankings continue to dominate the attention of university leaders. Indeed, the range of people watching them now includes politicians, students, parents, businesses, and donors. Simply put, rankings have caught the imagination of the public and have insinuated their way into public discourse and almost every level of government. There are even iPhone applications to help individuals and colleges calculate their ranks.
More than 50 country-specific rankings and 10 global rankings are available today, including the European Union's new U-Multirank, due this year. What started as small-scale, nationally focused guides for students and parents has become a global business that heavily influences higher education and has repercussions well beyond academe."
Tuesday, March 15, 2011
Bright Ideas Department
This is from today's Guardian:
The coalition is considering a Soviet-style central intervention policy to effectively fine individual universities if they impose unreasonable tuition fees next year.
Vince Cable, the business secretary whose department is responsible for universities, and David Willetts, the universities minister, are looking at allowing colleges that charge a modest fee to expand and constraining those that are charging too much.
The government, through the Higher Education Funding Council, sets the grant and numbers for each university and has the power to fine a university as much as £3,000 per student if it over-recruits in a single year.
Ministers are looking at cutting funding from universities that unreasonably charge the maximum £9,000 fee from 2012-13. They admit it is likely most universities will charge well over £8,000 a year.
One minister said: "A form of dramatic centralisation is under active consideration - a form of Gosplan if you like," a reference to the Russian state planning committee set up in the 1920s.
Next bright idea? A Gulag for recalcitrant vice-chancellors? Re-education camps for those who don't take their teaching philosophy statements seriously enough?
Saturday, March 12, 2011
Going Global Hong Kong 2011
Speeches about rankings by Martin Davidson, British Council, Phil Baty, THE, John Molony, QS, and others can be seen here.
Thursday, March 10, 2011
A Bit More on the THE Reputation Rankings
There is a brief article in the Guardian with a lot of comments.
Incidentally, I don't see Alexandria, Hong Kong Baptist and Bilkent Universities in the top 100 for reputation despite the outstanding work that gave them high scores for research impact in the 2010 THE WUR. Perhaps I'm not looking hard enough.
The THE Reputation Rankings
Times Higher Education have constructed a reputation ranking from the data collected for last year's World University Rankings. There is a weighting of two thirds for research and one third for postgraduate teaching. The top five are:
1. Harvard
2. MIT
3. Cambridge
4. UC Berkeley
5. Stanford
Scores are given only for the top fifty universities. Then another fifty are sorted in bands of ten without scores. Evidently, the number of responses favouring universities outside the top 100 was so small that it was not worth listing.
This means that the THE reputational survey reveals significant differences between Harvard and MIT or between Cambridge and Oxford but it would be of no help to those wondering whether to study or work at the University of Cape Town or the University of Kwazulu-Natal or Trinity College Dublin or University College Dublin.
The scores for research reputation (top fifty for total reputation scores only) show a moderate correlation with the THE citations indicator (.422) and, perhaps surprisingly, a higher correlation with the citations per faculty score on the QS World University Rankings of 2010 (.538).
Looking at the QS academic survey, which asked only about research, we can see that there was an insignificant correlation of .213 between the QS survey scores and the score for citations per faculty in the QS rankings (THE reputation ranking top 50 only). However, there was a higher correlation of .422 between the QS survey and the THE citations indicator, the same as that between the THE research reputation scores and the THE citations indicator.
Comparing the two research surveys with a third party, the citations indicator in the Scimago 2010 rankings, the THE research reputation survey did better with a correlation of .438 compared to an insignificant .188 for the QS academic survey.
This seems to suggest that the THE reputational survey does a better job of differentiating between the world's elite universities. But once we leave the top 100 it is perhaps less helpful and there may still be a role for the QS rankings.
Wednesday, March 09, 2011
The Second Wave
It seems that another wave of rankings is coming. The new edition of America's best graduate schools will be out soon, QS will be releasing detailed subject rankings and, according to Bloomberg Businessweek, THE's ranking by reputation is imminent. It seems that the academic anglosphere dominates when reputation alone is considered.
Tuesday, March 08, 2011
Comment on the Paris Ranking
Ben Wildavsky in the Chronicle of Higher Education says:
The Mines ParisTech ranking is an explicitly chauvinistic exercise, born of French unhappiness with the dismal showing of its universities in influential surveys such as the Academic Ranking of World Universities created at Shanghai Jiao Tong University in 2003. When designing the Mines ParisTech ranking, with a view to influencing the architects of the Shanghai methodology, the college says in the FAQ section of its survey results, “we believed it was useful to highlight the good results of French institutions at a time when the Shanghai ranking was widely and is still widely discussed, and not always to the advantage of our own schools and universities.” What’s more, it goes on, “these results constitute a genuine communication tool at an international level, both for the recruitment of foreign students as well as among foreign companies which are not always very familiar with our education system.” Given the genesis of the ranking, it doesn’t seem too surprising that three French institutions made it into this year’s top 10 — École Polytechnique and École Nationale d’Administration joined HEC Paris — while Mines ParisTech itself placed 21st in the world.
Sunday, March 06, 2011
The Paris rankings
The fifth edition of the Professional Ranking of World Universities from Mines ParisTech has just been published. It is based on a single indicator: the number of alumni among the CEOs of the world's largest companies. Here are the top ten:
1. Harvard
2. Tokyo
3. Keio
4. HEC, France
5. Kyoto
5. Oxford
7. Ecole Polytechnique
8. Waseda
9. ENA
10. Seoul National University
Saturday, February 26, 2011
Surveys and Citations
I have just finished calculating the correlation between the scores for the academic survey and citations per faculty in the 2010 QS World University Rankings.
Since the survey asked about research and since citations are supposed to be a robust indicator of research excellence we would expect a high correlation between the two.
It is in fact .391, which is on the low side. There could be valid reasons why it is so low. Citations, by definition, must follow publication which follows research which in turn is preceded by proposals and a variety of bureaucratic procedures. A flurry of citations might be indicative of the quality of research begun a decade ago. The responses to the survey might, on the other hand, be based on the first signs of research excellence long before the citations start rolling in.
Still, the correlation does not seem high enough. At first glance one would suspect that the survey is faulty but it could be that citations do not mean very much any more as a measure of excellence.
It would be very interesting to calculate the correlation between the score for research reputation on the Times Higher Education WUR and its citation indicator.
We would expect the THE survey to be more valid since the basic qualification for inclusion in the survey is being the corresponding author of an article included in the ISI indexes, whereas for QS it is signing up for a journal published by World Scientific. But it can no longer be assumed that authorship of any article means very much. Does it always require more initiative and interest to get on the list of co-authors than to sign up for an online subscription?
It should also be noted that there is an overlap between the two surveys as both are supplemented with arts and humanities respondents from the Mardev mailing lists.
I have calculated the correlation between the citations indicator (normalised average citations per paper) in the THE 2010 rankings and the research indicator -- volume (4.5% of the total score), income (6%) and reputation (19.5%).
This is .562, quite a bit better than the QS correlation. However, the research indicator combines a survey with other data.
It would be very interesting if THE and/or Thomson Reuters released the scores of the individual components of the research indicator.
Wednesday, February 23, 2011
Reputation, reputation, reputation!
As the world (or some of it) waits for the ranking survey forms to appear in its mail boxes, both THE and QS are promoting their surveys.
According to Phil Baty of THE:
"But in our consultation with the sector, there was strong support for the continued use of reputation information in the world rankings. Some 79 per cent of respondents to a survey by our rankings data provider Thomson Reuters rated reputation as a “must have” or “nice to have” measure. We operate in a global market where reputation clearly matters."
He then indicates several ways in which the THE survey is an improvement over the THE-QS, now QS, survey.
"We received a record 13,388 usable responses in just three months, making the survey the biggest of its kind in the world.
We promised a transparent approach. The methodology and survey instrument were published in full and this week, the thousands of academics who took part in the survey were sent a detailed report on the respondent profile. It makes reassuring reading:
• Responses were received from 131 countries"
It would, however, be interesting if the number of respondents from each country were indicated. There are some people who wonder whether THE's sampling technique means that Singapore got the lion's share of responses in Southeast Asia.
Also, will THE publish the scores for the reputation surveys? At the moment they are bundled in with the other teaching and research indicators. What is the correlation between the score for research reputation and the citations indicator? Is there any sign that Alexandria, Bilkent or Hong Kong Baptist University have reputations that match their scores for research impact?
Meanwhile QS also has an item on its survey. They find that there is a similar demand for data on reputations.
"An impressive 79% of respondents, voted reputation for research as one of their top three criteria, with 60% choosing international profile of faculty, essentially another indicator of international reputation for research. This is in stark contrast to the 26% and 30% that prioiritised citations as a key measure.
Furthermore, when breaking these results out by broad faculty area, we can see consistent support across disciplines for the reputation measure but a marked dip in support for citations as a measure amongst respondents in the Arts & Humanities area – which tends to be the area least recognized by traditional measures of research output."
Comment on Internationalisation
International Focus, the newsletter of the UK HE International Unit, has an article by Jane Knight on myths of internationalisation. The second myth is:
"Myth two rests on a belief that the more international a university is – in terms of students, faculty, curriculum, research, agreements, network memberships – the better its reputation is.
This is tied to the false notion that a strong international reputation is a proxy for quality. Cases of questionable admission and exit standards for universities highly dependent on the revenue and ‘brand equity’ of international students are concrete evidence that internationalisation does not always translate into improved quality or high standards.
This myth is further complicated by the quest for higher rankings on a global or regional league table such as the Times Higher Education or Academic World Ranking of Universities (AWRU). It is highly questionable whether the league tables accurately measure the internationality of a university and more importantly whether the international dimension is always a robust indicator of quality."
Also, it is much easier to be international in Switzerland or Singapore than in Central China or the Midwest of the US.
Tuesday, February 22, 2011
Penn State Law School
Malcolm Gladwell has an article in the current New Yorker about the US News and World Report college rankings. There is quite a lot there that I would like to discuss in another post. For the moment, I will just comment on an anecdote about the appearance of a non-existent law school in a ranking.
Gladwell describes how Thomas Brennan, who edits a well-known ranking of law schools, once sent out a questionnaire asking other lawyers to rank law schools and found that Penn State was, as Brennan is quoted as recalling, ranked around fifth. This was strange since there was no law school at Penn State until quite recently (1997 or 2000 according to different sources).
This immediately struck me as odd since I remember a similar story about the Princeton Law School, which does not exist and which was also supposed to have made its appearance in a ranking.
The Princeton story is very probably apocryphal and might have begun with a comment by the dean of New York University Law School in the Dartmouth Law Journal that Princeton would appear in the top twenty law schools if a questionnaire asked about it.
This story was plausible since it was an apparent example of the halo effect with Princeton's general excellence being reflected in the perception of a school that did not exist.
The problem with Brennan's account as retold by Gladwell, which does not appear to be supported by documentary evidence, is that it requires that many lawyers should not only have mistakenly thought that Penn State had a law school (getting mixed up with the University of Pennsylvania?) but should also have been in error about the general quality of the university. Penn State is nowhere near being a top-ten or even a top-fifty school.
Could this be another academic legend?
Sunday, February 20, 2011
Impact Assessment
The use of citations as a measure of research quality was highlighted by the remarkable performance of Alexandria University, Bilkent University, Hong Kong Baptist University and others in the 2010 Times Higher Education World University Rankings. As THE and Thomson Reuters review their methodology, perhaps they could take note of this post in Francis' World Inside Out, which refers to a paper by Arnold and Fowler.
'“Goodhart’s law warns us that “when a measure becomes a target, it ceases to be a good measure.” The impact factor has moved in recent years from an obscure bibliometric indicator to become the chief quantitative measure of the quality of a journal, its research papers, the researchers who wrote those papers and even the institution they work in. The impact factor for a journal in a given year is calculated by ISI (Thomson Reuters) as the average number of citations in that year to the articles the journal published in the preceding two years. It is widely used by researchers deciding where to publish and what to read, and by tenure and promotion committees laboring under the assumption that publication in a higher impact-factor journal represents better work. However, it has been widely criticized on a variety of grounds (it does not determine a paper’s quality, it is a crude and flawed statistic, etc.). Impact factor manipulation can take numerous forms. Let us follow Douglas N. Arnold and Kristine K. Fowler, “Nefarious Numbers,” Notices of the AMS 58: 434-437, March 2011 [ArXiv, 1 Oct 2010].
Editors can manipulate the impact factor by means of the following practices: (1) “canny editors cultivate a cadre of regulars who can be relied upon to boost the measured quality of the journal by citing themselves and each other shamelessly;” (2) “authors of manuscripts under review often were asked or required by editors to cite other papers from the journal; this practice borders on extortion, even when posed as a suggestion;” and (3) “editors raise their journals’ impact factors by publishing review items with large numbers of citations to the journal.” “The havoc these unscientific practices wreak upon the scientific literature has raised occasional alarms. A counterexample should confirm the need for alarm.” '
Looking East
Shanghai is planning to persuade two Ivy League schools, Cornell and Columbia, to set up branch campuses there. The city already has a branch of New York University.
Would anybody like to make a prediction when a new Oxford or Cambridge college will be established in Shanghai (or Singapore or Hong Kong)?
Or when an entire American university will move to China?
More Dumbing Down
DePaul University will make it optional for applicants to submit SAT or ACT scores. Instead, they can write short essays that demonstrate non-cognitive traits such as "commitment to service", "leadership" and "ability to meet long-term goals".
The university says:
'"Admissions officers have often said that you can't measure heart," said Jon Boeckenstedt, associate vice president for enrollment management. "This, in some sense, is an attempt to measure that heart."
Mr. Boeckenstedt expects the change to encourage applicants with high grade-point averages but relatively low ACT and SAT scores to apply—be they low-income students, underrepresented minorities, or otherwise. Moreover, he and his colleagues believe the new admissions option will allow them to better select applicants who are most likely to succeed—and graduate.'
DePaul's administrators are being extremely naive if they think that these attributes cannot be easily coached or faked. Bluntly, how much effort does it take to teach a student what to say in one of these essays compared with squeezing out a few more points on the SAT?
Wednesday, February 16, 2011
Another US News Ranking
This one is about the schools where congressmen received their bachelor's degrees.
Here are the top 10. What might be more interesting is the party affiliation of the congressmen. D = Democrat, R = Republican, I = Independent.
1. Harvard D 13, R 2
2. Stanford D 9, R 2
3. Yale D 8, R 1, I 1
4. UCLA D 6, R 3
5= Georgetown D 5, R 2
5= Florida D 2, R 5
5= Georgia D 1, R 6
5= Wisconsin-Madison D 6, R 1
9. North Carolina-Chapel Hill D 5, R 1
10= Brigham Young R 5
10= George Washington D 2, R 5
10= Louisiana State D 1, R 4
10= Berkeley D 4, R 1
10= Missouri D 4, R 1
10= Tennessee D 2, R 3
The Fortune 500
The US News has produced a ranking of US universities according to the number of degrees awarded to the CEOs of the Fortune 500, the largest American corporations by gross revenue.
Here are the top five.
1. Harvard
2. Columbia
3. University of Pennsylvania
4. University of Wisconsin-Madison
5. Dartmouth College
Sunday, February 13, 2011
Ranking Education Schools
The US News and World Report, publishers of America's Best Colleges, are teaming up with the National Council on Teacher Quality to produce a rating of teacher preparation programs.
Many Education deans are strongly opposed. See here.
We are all equal
I have come across an interesting article, "The equality of intelligence", by Nina Power in The Philosophers' Magazine. It is one of a series, "Ideas of the century" (I am not sure which one).
Power, whose dissertation is entitled From Theoretical Antihumanism to Practical Humanism: The Political Subject in Sartre, Althusser and Badiou and who is a senior lecturer at Roehampton University, refers to the work of Jacques Rancière,
"who never tires of repeating his assertion that equality is not just something to be fought for, but something to be presupposed, is, for me, one of the most important ideas of the past decade. Although Rancière begins the discussion of this idea in his 1987 text The Ignorant Schoolmaster, it is really only in the last ten years that others have taken up the idea and attempted to work out what it might mean for politics, art and philosophy. Equality may also be something one wishes for in a future to come, after fundamental shifts in the arrangement and order of society. But this is not Rancière’s point at all. Equality is not something to be achieved, but something to be presupposed, universally. Everyone is equally intelligent."Just in case you thought she was kidding:
"In principle then, there is no reason why a teacher is smarter than his or her student, or why educators shouldn’t be able to learn alongside pupils in a shared ignorance (coupled with the will to learn). The reason why we can relatively quickly understand complex arguments and formulae that have taken very clever people a long time to work out lends credence to Rancière’s insight that, at base, nothing is in principle impossible to understand and that everyone has the potential to understand anything."Power seems to be living in a different universe from those of us in the academic periphery. Perhaps she is actually pulling a Sokalian stunt but I suspect not. This sort of thing might be funny to many of us but it seems to be taken seriously in departments of education around the world. Just take a look at the model teaching philosophy statements found on the Internet.
Another example of her writing is Sarah Palin: Castration as Plenitude. Presumably that is potentially understandable by everybody.
Friday, February 11, 2011
More on Citations
A column in the THE by Phil Baty indicates that there might be some change in the research impact indicator in the forthcoming THE World University Rankings. It is good that THE is considering changes but I have a depressing feeling that Thomson Reuters, who collect the citations data, are going to have more weight in this matter than anyone or anything else.
Baty refers to a paper by Simon Pratt who manages the data for TR and THE.
The issue was brought up again this month in a paper to the RU11 group of 11 leading research universities in Japan. It was written by Simon Pratt, project manager for institutional research at Thomson Reuters, which supplies the data for THE’s World University Rankings.
Explaining why THE’s rankings normalise for citations data by discipline, Pratt highlights the extent of the differences. In molecular biology and genetics, there were more than 1.6 million citations for the 145,939 papers published between 2005 and 2009, he writes; in mathematics, there were just 211,268 citations for a similar number of papers (140,219) published in the same period.
Obviously, an institution with world-class work in mathematics would be severely penalised by any system that did not reflect such differences in citations volume.
This is correct but perhaps we should also consider whether the number of citations to papers in genetics is telling us something about the value that societies place on genetics rather than on mathematics, and perhaps that is something that should not be ignored.
Also, in the real world are there many universities that are excellent in a single field, defined as narrowly as theoretical physics or applied mathematics, while being mediocre or worse in everything else? Anyone who thinks that Alexandria is the fourth best university in the world for research impact because of its uncontested excellence in mathematics should take a look here.
There are also problems with normalising by region. Precisely what the regions are for the purposes of this indicator is not stated. If Africa is a region, does this mean that Alexandria got another boost, one denied to other Middle Eastern universities? Is Istanbul in Europe and Bilkent in Asia? Does Singapore get an extra weighting because of the poor performance of its Southeastern neighbours?
There are two other aspects of the normalisation that are not foregrounded in the article. First, TR apparently use normalisation by year. In some disciplines it is rare for a paper to be cited within a year of publication. In others it is commonplace. An article that is classified as being in a low citation field would get a massive boost if in addition it had a few citations within months of publication.
Remember also that the scores represent averages. A small number of total publications means an immense advantage for a university that has a few highly cited articles in low citation fields and is located in a normally unproductive region. Alexandria's remarkable success was due to the convergence of four favourable factors: credit for publishing in a low citation sub-discipline, the frequent citation of recently published papers, being located in a continent whose scholars are not generally noticed and, finally, the selfless cooperation of hundreds of faculty who graciously refrained from sending papers to ISI indexed journals.
Alexandria University may not be open for the rest of this year and may not take part in the second THE WUR exercise. One wonders though how many universities around the world could benefit from these four factors and how many are getting ready to submit data to Thomson Reuters.
Monday, February 07, 2011
Training for Academics
The bureaucratisation of higher education continues relentlessly. Times Higher Education reports on moves to make all UK academics undergo compulsory training. This is not a totally useless idea: a bit of training in teaching methodology would do no harm at all for the unprepared graduate assistants, part-timers and new PhDs who make up an increasing proportion of the workforce in European and American universities.
But the higher education establishment has more than this in mind.
Plans to revise the UK Professional Standards Framework were published by the HEA in November after the Browne Review called for teaching qualifications to be made compulsory for new academics.
The framework, which was first published in 2006, is used to accredit universities' teaching-development activities, but the HEA has admitted that many staff do not see it as "relevant" to their career progression.
Under the HEA's proposals, the updated framework says that in future, all staff on academic probation will have to complete an HEA-accredited teaching programme, such as a postgraduate certificate in higher education. Postgraduates who teach would also have to take an HEA-accredited course.
A "sector-wide profile" on the number of staff who have reached each level of the framework would be published by the HEA annually.A comment by "agreed" indicates just what is likely to happen.
Meanwhile, training courses would have to meet more detailed requirements.
I did one of these courses a couple of years ago. I learnt nothing from the "content" that I couldn't have learnt in a fraction of the time by reading a book. The bulk of the course was an attempt to compel all lecturers to adopt fashionable models of teaching with no regard to the need for students to learn content. The example set by the lecturers on the course was appalling: ill prepared, dogmatic, and lacking in substance. A failure to connect with the "students" and a generally patronising tone was just one of the weaknesses. Weeks of potentially productive time were taken up by jumping through hoops and preparing assignments. This is not an isolated case; I know of several other such courses in other institutions that were equally shambolic. I'm all for improving the quality of teaching, but this is nonsensical. The only real benefit was the collegial relations with academics from other departments, forged through common bonds of disgust and mockery aimed at this ridiculous enterprise (presumably designed to justify the continued employment of failed academics from other disciplines, given the role of teaching the rest of us how to teach).