Thursday, April 14, 2011

Art Imitates Life

Which of these texts is an April Fool joke? How can you tell? Links will be posted in a few days.

TEXT 1
The Federal Intelligence Service discovered a Ponzi scheme of academic citations led by an unemployed particle physicist. A house search conducted in Berlin last week revealed material documenting the planning and administration of a profitable business trading citations for travel reimbursement.

According to the Federal Intelligence Service, the tip came from researchers at the University of Michigan, Ann Arbor, who were analyzing the structure of citation networks in the academic community. In late 2010, their analysis pointed towards an exponentially growing cluster originating from a previously unconnected researcher based in Germany's capital. A member of the Ann Arbor group, who wishes to remain unnamed, inquired about the biography of the young genius, named Al Bert, who was sparking so much activity. The researcher easily found Dr. Bert scheduled for an unusual number of seminars in locations all over the world, sometimes more than four per week. However, upon contacting the respective institutions, nobody could remember the seminars, which, according to Prof. Dr. Dr. Hubert at The Advanced Institute, is "not at all unusual." The network researcher from Ann Arbor suspected Dr. Bert of being a fictitious person and notified the university whose email address Dr. Bert was still using.

It turned out that Dr. Bert is not a fictitious person. Dr. Bert graduated in 2006, but his contract at the university ran out in 2008. After this, colleagues lost sight of him. He applied for unemployment benefits in October 2008. As the Federal Intelligence Service reported this Wednesday, he later founded an agency called 'High Impact' (the website has since been taken down) that offered to boost a paper's citation count. A user registered with an almost finished, but not yet published, paper and agreed to pay EUR 10 to Dr. Bert's agency for each citation the paper received above the author's average citation count at the time of registration. The user also agreed to cite 5 papers the agency would name. A registered user would also earn EUR 10 for each new paper recruited, possibly their own.

This rapidly created a growing network of researchers citing each other's papers and encouraged the authors to produce new papers, certain they would become well cited. Within only a few months, the network had spread from physics to other research fields. With each citation, Dr. Bert earned an income. The algorithm he used to assign citations also ensured that his own works became top-cited. Yet with many researchers suddenly holding papers cited several hundred times above their previous average, their fees ran into the thousands. On several occasions Dr. Bert would suggest they invite him for a seminar at their institution and schedule it in a non-existent room. He would then claim reimbursement for a fraudulent self-printed boarding pass, illegible due to an allegedly malfunctioning printer.

Names of researchers subscribed to Dr. Bert's agency were not accessible at the time of writing.


TEXT 2

The Case of IJNSNS

The field of applied mathematics provides an illuminating case in which we can study such impact-factor distortion. For the last several years, the International Journal of Nonlinear Sciences and Numerical Simulation (IJNSNS) has dominated the impact-factor charts in the “Mathematics, Applied” category. It took first place in each year 2006, 2007, 2008, and 2009, generally by a wide margin, and came in second in 2005. However, as we shall see, a more careful look indicates that IJNSNS is nowhere near the top of its field. Thus we set out to understand the origin of its large impact factor.


In 2008, the year we shall consider in most detail, IJNSNS had an impact factor of 8.91, easily the highest among the 175 journals in the applied math category in ISI’s Journal Citation Reports (JCR). As controls, we will also look at the two journals in the category with the second and third highest impact factors, Communications on Pure and Applied Mathematics (CPAM) and SIAM Review (SIREV), with 2008 impact factors of 3.69 and 2.80, respectively. CPAM is closely associated with the Courant Institute of Mathematical Sciences, and SIREV is the flagship journal of the Society for Industrial and Applied Mathematics (SIAM).  Both journals have a reputation for excellence.


Evaluation based on expert judgment is the best alternative to citation-based measures for journals. Though not without potential problems of its own, a careful rating by experts is likely to provide a much more accurate and holistic guide to journal quality than the impact factor or similar metrics. In mathematics, as in many fields, researchers are widely in agreement about which are the best journals in their specialties. The Australian Research Council recently released such an evaluation, listing quality ratings for over 20,000 peer-reviewed journals across disciplines. The list was developed through an extensive review process involving learned academies (such as the Australian Academy of Science), disciplinary bodies (such as the Australian Mathematical Society), and many researchers and expert reviewers.11 This rating is being used in 2010 for the Excellence in Research Australia assessment initiative and is referred to as the ERA 2010 Journal List. The assigned quality rating, which is intended to represent “the overall quality of the journal,” is one of four values: 

• A*: one of the best in its field or subfield
• A: very high quality
• B: solid, though not outstanding, reputation
• C: does not meet the criteria of the higher tiers.

The ERA list included all but five of the 175 journals assigned a 2008 impact factor by JCR in the category “Mathematics, Applied”. Figure 1 shows the impact factors for journals in each of the four rating tiers. We see that, as a proxy for expert opinion, the impact factor does rather poorly. There are many examples of journals with a higher impact factor than other journals that are one, two, and even three rating tiers higher. The red line is drawn so that 20% of the A* journals are below it; it is notable that 51% of the A journals have an impact factor above that level, as do 23% of the B journals and even 17% of those in the C category. The most extreme outlier is IJNSNS, which, despite its relatively astronomical impact factor, is not in the first or second but, rather, third tier.

The ERA rating assigned its highest score, A*, to 25 journals. Most of the journals with the highest impact factors are here, including CPAM and SIREV, but, of the top 10 journals by impact factor, two were assigned an A, and only IJNSNS was assigned a B. There were 53 A-rated journals and 69 B-rated journals altogether. If IJNSNS were assumed to be the best of the B journals, there would be 78 journals with higher ERA ratings, whereas if it were the worst, its ranking would fall to 147. In short, the ERA ratings suggest that IJNSNS is not only not the top applied math journal but also that its rank should be somewhere in the range 75–150. This remarkable mismatch between reputation and impact factor needs an explanation.

Makings of a High Impact Factor

A first step to understanding IJNSNS’s high impact factor is to look at how many authors contributed substantially to the counted citations and who they were. The top-citing author to IJNSNS in 2008 was the journal’s editor-in-chief, Ji-Huan He, who cited the journal (within the two-year window) 243 times. The second top citer, D. D. Ganji, with 114 cites, is also a member of the editorial board, as is the third, regional editor Mohamed El Naschie, with 58 cites. Together these three account for 29% of the citations counted toward the impact factor.

For comparison, the top three citers to SIREV contributed only 7, 4, and 4 citations, respectively, accounting for less than 12% of the counted citations, and none of these authors is involved in editing the journal. For CPAM the top three citers (9, 8, and 8) contributed about 7% of the citations and, again, were not on the editorial board.
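A rough back-of-the-envelope check on those percentages: the per-author counts come from the passage above, while the implied totals are inferred from the quoted shares rather than reported directly, so treat them as approximations.

```python
# Back-of-the-envelope check of the citer-concentration figures quoted above.
# Per-author counts are from the passage; the implied totals are inferred, not reported values.

ijnsns_top3 = [243, 114, 58]            # He, Ganji, El Naschie
ijnsns_share = 0.29                     # "29% of the citations counted toward the impact factor"
implied_total = sum(ijnsns_top3) / ijnsns_share
print(f"IJNSNS top-3 citations: {sum(ijnsns_top3)}")
print(f"Implied total counted citations: about {implied_total:.0f}")   # roughly 1,400

sirev_top3 = [7, 4, 4]                  # "less than 12% of the counted citations"
print(f"SIREV implied total counted citations: at least {sum(sirev_top3) / 0.12:.0f}")
```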

Another significant phenomenon is the extent to which citations to IJNSNS are concentrated within the two-year window used in the impact factor calculation. Our analysis of 2008 citations to articles published since 2000 shows that 16% of the citations to CPAM fell within that two-year window and only 8% of those to SIREV did; in contrast, 71.5% of the 2008 citations to IJNSNS fell within the two-year window. In Table 1, we show the 2008 impact factors for the three journals, as well as a modified impact factor, which gives the average number of citations in 2008 to articles the journals published not in 2006 and 2007 but in the preceding six years. Since the cited half-life (the time it takes to generate half of all the eventual citations to an article) for applied mathematics is nearly 10 years,12 this measure is at least as reasonable as the impact factor. It is also independent of the impact factor, unlike JCR’s 5-Year Impact Factor, as its time period does not overlap with the two years the impact factor targets.

Table 1. 2008 impact factors computed with the usual two-preceding-years window, and with a window going back eight years but neglecting the two immediately preceding.

             2008 impact factor         Modified 2008 “impact factor”
             (normal 2006–07 window)    (2000–05 window)
IJNSNS       8.91                       1.27
CPAM         3.69                       3.46
SIREV        2.80                       10.4
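A minimal Python sketch of the windowed calculation behind Table 1: citations received in the census year to articles published in a given window, divided by the number of articles published in that window. The citation and article counts below are invented placeholders for illustration, not figures from the study.

```python
# Minimal sketch of a windowed "impact factor".
# The numbers below are illustrative placeholders, not data from the article.

def windowed_impact_factor(citations_by_pub_year, articles_by_pub_year, window):
    """Average citations in the census year to articles published in `window` years."""
    cites = sum(citations_by_pub_year.get(y, 0) for y in window)
    items = sum(articles_by_pub_year.get(y, 0) for y in window)
    return cites / items if items else 0.0

# Hypothetical journal: 2008 citations broken down by the cited article's publication year.
citations_2008 = {2000: 5, 2001: 6, 2002: 8, 2003: 9, 2004: 11, 2005: 12,
                  2006: 150, 2007: 160}
articles = {y: 40 for y in range(2000, 2008)}    # 40 articles per year

standard = windowed_impact_factor(citations_2008, articles, [2006, 2007])       # JCR-style window
modified = windowed_impact_factor(citations_2008, articles, range(2000, 2006))  # Table 1's 2000-05 window
print(f"standard 2008 IF: {standard:.2f}, modified IF: {modified:.2f}")
```

A journal whose 2008 citations cluster heavily in 2006–07 scores far better on the first measure than on the second, which is exactly the pattern the table shows for IJNSNS.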
Note that the impact factor of IJNSNS drops precipitously, by a factor of seven, when we consider a different citation window. By contrast, the impact factor of CPAM stays about the same, and that of SIREV increases markedly. One may simply note that, in distinction to the controls, the citations made to IJNSNS in 2008 greatly favor articles published in precisely the two years that are used to calculate the impact factor. Further striking insights arise when we examine the high-citing journals rather than high-citing authors. The counting of journal self-citations in the impact factor is frequently criticized, and indeed it does come into play in this case. In 2008 IJNSNS supplied 102, or 7%, of its own impact-factor citations.

The corresponding numbers are 1 citation (0.8%) for SIREV and 8 citations (2.4%) for CPAM. The disparity in other recent years is similarly large or larger. However, it was Journal of Physics: Conference Series that provided the greatest number of IJNSNS citations. A single issue of that journal provided 294 citations to IJNSNS in the impact factor window, accounting for more than 20% of its impact factor. What was this issue? It was the proceedings of a conference organized by IJNSNS editor-in-chief He at his home university. He was responsible for the peer review of the issue. The second top-citing journal for IJNSNS was Topological Methods in Nonlinear Analysis, which contributed 206 citations (14%), again with all citations coming from a single issue. This was a special issue with Ji-Huan He as the guest editor; his co-editor, Lan Xu, is also on the IJNSNS editorial board. J.-H. He himself contributed a brief article to the special issue, consisting of three pages of text and thirty references. Of these, twenty were citations to IJNSNS within the impact-factor window. The remaining ten consisted of eight citations to He and two to Xu.

Continuing down the list of IJNSNS high-citing journals, another similar circumstance comes to light: 50 citations from a single issue of the Journal of Polymer Engineering (which, like IJNSNS, is published by Freund), guest edited by the same pair, Ji-Huan He and Lan Xu. However, third place is held by the journal Chaos, Solitons and Fractals, with 154 citations spread over numerous issues. These are again citations that may be viewed as subject to editorial influence or control. In 2008 Ji-Huan He served on the editorial board of CS&F, and its editor-in-chief was Mohamed El Naschie, who was also a coeditor of IJNSNS. In a highly publicized case, the entire editorial board of CS&F was recently replaced, but El Naschie remained coeditor of IJNSNS.

Many other citations to IJNSNS came from papers published in journals for which He served as editor, such as Zeitschrift für Naturforschung A, which provided forty citations; there are too many others to list here, since He serves in an editorial capacity on more than twenty journals (and has just been named editor-in-chief of four more journals from the newly formed Asian Academic Publishers). Yet more citations came from papers authored by IJNSNS editors other than He. All told, the aggregation of such editor-connected citations, which are time-consuming to detect, accounts for more than 70% of all the citations contributing to the IJNSNS impact factor.

Tuesday, April 12, 2011

QS Engineering Rankings

QS have started to publish detailed subject rankings based on citations per paper over five years and on their surveys of academics and employers. The first of these covers engineering. There are five subfields: Computer Science and Information Systems, Chemical Engineering, Civil and Structural Engineering, Electrical and Electronic Engineering, and Mechanical, Aeronautical and Manufacturing Engineering.

For Civil and Structural Engineering the weighting is 50% for the academic survey, 30% for the employers' survey and 20% for citations per paper. For the other subfields it is 40%, 30% and 30% respectively.
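For readers who want to see how such a weighting works in practice, here is a minimal sketch. Only the weights come from the scheme QS describes; the function name and the example indicator scores are my own invention for illustration.

```python
# Minimal sketch of combining indicator scores with QS's stated weights.
# The example scores are invented; only the weights come from the scheme described above.

def qs_subject_score(academic, employer, citations, civil_and_structural=False):
    """Weighted total of three 0-100 indicator scores."""
    if civil_and_structural:
        weights = (0.5, 0.3, 0.2)   # academic survey, employer survey, citations per paper
    else:
        weights = (0.4, 0.3, 0.3)
    return weights[0] * academic + weights[1] * employer + weights[2] * citations

# A hypothetical university scoring 90 / 80 / 60 on the three indicators:
print(qs_subject_score(90, 80, 60))                               # 78.0 for most fields
print(qs_subject_score(90, 80, 60, civil_and_structural=True))    # 81.0 for civil and structural
```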

MIT, not surprisingly, is top in each of the five engineering fields that are ranked. In general, the upper levels of these rankings seem reasonable. However, a look at the details, especially in the bottom half (places 100-200), raises some questions.

One basic problem is that as QS make finer distinctions, they have to rely on smaller sets of data. There were 285 respondents to the academic survey for chemical engineering and 394 for civil and structural engineering. For the employer survey there were 836 respondents for computer science. Each respondent to the academic survey was allowed to nominate up to 40 universities, but the number nominated was usually much lower than this. Around the 151-200 level the number of responses would surely have been very low. Similarly, the number of papers counted in each field varied considerably, from 43,222 in civil and structural engineering to 514,95 in electrical and electronic engineering. We should therefore be rather sceptical about these rankings.

It is also noticeable that there is a reasonably high correlation between the scores for the academic survey and the employer survey: for electrical engineering it is .682, for chemical engineering .695, for civil engineering .695, and for computer science .722.

But there is no correlation at all between the citations per paper indicator and the surveys. For electrical engineering the correlation is .064 between citations and the academic survey and -.004 between citations and the employer survey. The pattern is the same for the other subfields. None of these correlations is statistically significant.
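The calculation behind these figures is straightforward Pearson correlation with a significance test. A minimal sketch follows; the score lists are made up, since the point is only to show the method, not to reproduce the QS data.

```python
# Minimal sketch of the correlation (and significance) calculations reported above.
# The score lists are invented; the real calculation would use QS's indicator columns.

from scipy.stats import pearsonr

academic_survey     = [100, 92, 88, 85, 80, 76, 70, 66, 60, 55]
employer_survey     = [ 98, 90, 85, 80, 82, 70, 72, 60, 62, 50]
citations_per_paper = [ 40, 90, 35, 70, 20, 95, 30, 85, 25, 60]

r_surveys, p_surveys = pearsonr(academic_survey, employer_survey)
r_cites,   p_cites   = pearsonr(academic_survey, citations_per_paper)

print(f"academic vs employer:  r = {r_surveys:.3f}, p = {p_surveys:.3f}")
print(f"academic vs citations: r = {r_cites:.3f}, p = {p_cites:.3f}")
# A p-value above 0.05 is what "not statistically significant" means above.
```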

Looking at the top universities for the three indicators, we see the same familiar places in each of the subfields according to the surveys: MIT, Stanford, Cambridge, Berkeley, Oxford, Harvard, Imperial College London, Melbourne, Caltech.

But looking at the top scorers for citations per paper, we find a much more varied and unfamiliar array of institutions: New York University, Wageningen, Dartmouth College, Notre Dame, Aalborg, Athens, Lund, Uppsala, Drexel, Tufts, IIT Roorkee, University of Washington, Rice, University of Massachusetts.

The agreement of employers and academics about the quality of engineering programs, even though the two surveys refer to different things, research and graduate employability, suggests that the surveys are moderately accurate, at least for the top hundred or so.

However, the complete lack of correlation between the citations indicator and the surveys needs to be explained. It could be that citations have identified up-and-coming superstars. Perhaps the number of papers in the various subfields is so low that the indicator does not mean very much. Or perhaps citations have been so manipulated in recent years -- see the case of Alexandria University -- that they are no longer a robust indicator of quality.

Monday, April 04, 2011

First They Came For the Fire Buffs, Then They Came for the Science Nerds....

An interesting aspect of the 2009 court case brought by firemen in New Haven, Connecticut, was the not very subtle disdain shown by members of the American academic elite towards the pretensions of those who thought that firefighting required a degree of knowledge and intelligence. The aggrieved firemen had been denied promotion because white firemen did better on the test than their African American and Hispanic colleagues. See here for an insightful account.

An article by Nicole Allan and Emily Bazelon (a graduate of Yale Law School, granddaughter of a judge of the US Court of Appeals, former law clerk for a judge of the US Court of Appeals, and Senior Research Scholar in Law and Truman Capote Fellow for Creative Writing and Law at Yale Law School) reported, without noticeable irony, complaints about the unfairness of the test that passed too many white firefighters: it favored "fire buffs" (enthusiasts, according to the dictionary) who read firefighting manuals in their spare time or who came from families with lots of firefighters.

One wonders whether Bazelon ever asked herself whether she had derived an unfair advantage from her family background, or felt guilty because she had read books about law when she did not have to.

The article concluded:

If New Haven could start over, maybe it could also admit outright that it has more deserving firefighters than it has rewards. The city could come up with a measure for who is qualified for the promotions, rather than who is somehow best. And then it could choose from that pool by lottery. That might not exactly be fair, either. But it would recognize that sometimes there may be no such thing.


That has in fact been done in Chicago.

Meanwhile, admission to North American graduate and professional schools has followed the example of the cities of the US by becoming increasingly less selective, as intelligence and knowledge are downgraded and admission depends increasingly on vague and unverifiable personality traits.

Educational Testing Service, producers of the Graduate Record Exam, have now introduced a Personal Potential Index that will supplement (and perhaps eventually replace?) the use of the GRE and undergraduate grades for admissions to graduate school.

Basically, this new tool involves applicants nominating five evaluators who will provide assessments of "Knowledge and Creativity; Communication Skills; Teamwork; Resilience; Planning and Organization; and Ethics and Integrity". I can see Newton, Darwin, Einstein and James Watson all  tripping up on at least one of these.

The latest development is that the Medical College Admission Test will do away with its writing test because it does not add much information beyond undergraduate grades. It will be replaced with a section on "behavioural and social sciences principles."

It seems that the point of this is to increase the number of minorities in medical schools, although it is not clear why it is assumed that they will do better at answering questions about the social sciences and critical thinking than at writing an essay and doing verbal analysis.

More changes may be coming soon. Already there are pilot projects in which schools are "doing brief interviews of applicants involving various ethical and social scenarios to learn more about would-be students".

It seems that these developments are a response to criticism of the MCAT from organisations like The National Center for Fair and Open Testing:


Robert Schaeffer, public education director of the center, said that the MCAT has been viewed as encouraging "memorization and regurgitation" and is "better at identifying science nerds than candidates who would become capable physicians well-equipped to serve their patients." The changes being proposed appear to be "responding directly" to these critiques, he said.



According to Wikipedia, "Nerd is a term that refers to a social perception of a person who avidly pursues intellectual activities, technical or scientific endeavors, esoteric knowledge, or other obscure interests, rather than engaging in more social or conventional activities."

Will the time come when the likes of Emily Bazelon are denied promotion or appointment because of their inappropriate buffiness or avid pursuit of intellectual activity? Not to worry. They could probably get jobs as firefighters somewhere.

Here is a prediction. As American universities increasingly select students and faculty because they are communicative, culturally sensitive, resilient and so on, while cleansing themselves of all those buffs and nerds, China, Korea and a few other countries will catch up and then overtake them, first in scientific output and then in quality.

Sunday, April 03, 2011

The View from Hong Kong

University World News has an article by Kevin Downing of the City University of Hong Kong. It begins:

Are Asian institutions finally coming out of the shadow cast by their Western counterparts? At the 2010 World Universities Forum in Davos, a theme was China's increasing public investment in higher education at a time when reductions in public funding are being seen in Europe and North America. China is not alone in Asia in increasing public investment in higher education, with similar structured and significant investment evident in Singapore, South Korea and Taiwan.


While in many ways this investment is not at all surprising and merely reflects the continued rise of Asia as a centre of global economic power, it nonetheless raises some interesting questions in relation to the potential benefits of rankings for Asian institutions.


Interest in rankings in Asian higher education is undoubtedly high and the introduction of the QS Asian University Rankings in 2009 served to reinforce this. The publication of ranking lists is now greeted with a mixture of trepidation and relief by many university presidents and is often followed by intense questioning from media that are interested to know what lies behind a particular rise or fall on the global or regional stage.

Friday, April 01, 2011

Best Grad Schools

The US News Graduate School Rankings were published on March 15th. Here are the top universities in various subject areas.

Business: Stanford

Education: Vanderbilt

Engineering:  MIT

Law:  Yale

Medical: Harvard

Biology: Stanford

Chemistry: Caltech, MIT, UC Berkeley

Computer Science: Carnegie Mellon, MIT, Stanford, UC Berkeley

Earth Sciences: Caltech, MIT

Mathematics: MIT

Physics: Caltech, Harvard, MIT, Stanford

Statistics: Stanford

Library and Information Studies: Illinois at Urbana-Champaign

Criminology: Maryland -- College Park

Economics: Harvard, MIT, Princeton, Chicago

English: UC Berkeley

History: Princeton

Political Science: Harvard, Princeton, Stanford

Psychology: Stanford, UC Berkeley

Sociology: UC Berkeley

Public Affairs: Syracuse

Fine Arts: Rhode Island School of Design

Sunday, March 27, 2011

Say It Loud

Phil Baty has an article in THE, based on a speech in Hong Kong entitled 'Say it loud: I'm a ranker and I'm proud'. Very interesting but personally I prefer the James Brown version.

Saturday, March 26, 2011

Growth of Academic Publications: Southwest Asia, 2009-2010

One of several surprises in last year's THE rankings was the absence of any Israeli university from the Top 200. QS had three and, as noted earlier on this blog, over a fifth of Israeli universities were in the Shanghai 500, a higher proportion than for any other country. It seems that in the case of at least two universities, Tel Aviv and the Hebrew University of Jerusalem, there was a failure of communication that meant data were not submitted to Thomson Reuters, who collect and analyse data for THE.

The high quality of Israeli universities might seem rather surprising, since Israeli secondary school students perform poorly on international tests of scholastic attainment and the national average IQ is mediocre. Part of the reason for the strong Israeli academic performance may be the Psychometric Entrance Test for university admission, which measures quantitative and verbal reasoning and also includes an English test. The contrast with the trend in the US, Europe and elsewhere towards holistic assessment, credit for leadership, community involvement, overcoming adversity and being from the right postcode is striking.

Even so, Israeli scientific supremacy in the Middle East is looking precarious. Already the annual production of academic papers in Israel has been exceeded by Iran and Turkey.

Meanwhile, the total number of papers produced in Israel is shrinking while those of Iran and Turkey continue to grow at a respectable rate. The fastest growth in Southwest Asia comes from Saudi Arabia and the smaller Gulf states of Qatar and Bahrain.

Countries ranked by percentage increase in publications in the ISI Science, Social Science and Arts and Humanities indexes and Conference Proceedings between 2009 and 2010. (total 2010 publications in brackets)





 1.  Saudi Arabia    35%   (3924)
 2.  Qatar           31%   (453)
 3.  Syria           14%   (333)
 4.  Bahrain         13%   (184)
 5.  Palestine        9%   (24)
 6.  UAE              6%   (303)
 7.  Turkey           5%   (26835)
 8.  Lebanon          4%   (2058)
 9.  Iran             4%   (21047)
10.  Oman             4%   (494)
11.  Jordan           1%   (1637)
12.  Iraq            -3%   (333)
13.  Israel          -4%   (17719)
14.  Yemen           -8%   (125)
15.  Kuwait         -13%   (759)

(data collected 23/3/11)
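A note on the arithmetic: the percentages are simple year-on-year growth rates, so the 2009 totals (not shown in the post) can be recovered approximately from the figures above. A minimal sketch, assuming that is indeed how the percentages were computed and allowing for rounding:

```python
# Year-on-year growth: (pubs_2010 - pubs_2009) / pubs_2009 * 100.
# The 2009 counts are not given in the post; they are back-derived here on the assumption
# that the percentages were computed this way, so they are approximate.

table = {"Saudi Arabia": (35, 3924), "Qatar": (31, 453), "Turkey": (5, 26835),
         "Iran": (4, 21047), "Israel": (-4, 17719)}

for country, (growth_pct, pubs_2010) in table.items():
    pubs_2009 = pubs_2010 / (1 + growth_pct / 100)
    print(f"{country}: 2010 = {pubs_2010}, implied 2009 ~ {pubs_2009:.0f}")
```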


This of course may not say very much about the quality of research. A glance at the ISI list of highly cited researchers shows that Israel is ahead, for the moment at least, with 50 compared to 29 for Saudi Arabia and one each for Turkey and Iran.

Thursday, March 24, 2011

Growth in Academic Publications: Southeast Asia 2009-2010

Countries ranked by percentage increase in publications in the ISI Science, Social Science and Arts and Humanities indexes and Conference Proceedings between 2009 and 2010. (total 2010 publications in brackets)

 1.  Malaysia            31%   (8603)
 2.  Laos                30%   (96)
 3.  Indonesia           30%   (1631)
 4.  Brunei              16%   (88)
 5.  Papua New Guinea     5%   (67)
 6.  Vietnam              5%   (1247)
 7.  Singapore            4%   (11900)
 8.  Thailand             2%   (2248)
 9.  Timor Leste          0%   (4)
10.  Cambodia            -5%   (158)
11.  Myanmar            -12%   (78)

Singapore is still the dominant research power in Southeast Asia but Malaysia and Indonesia, admittedly with much larger populations, are closing fast. Thailand is growing very slowly and Myanmar is shrinking.

(data collected 23/3/11)

Tuesday, March 22, 2011

Comparing Rankings 3: Omissions

The big problem with the Asiaweek rankings of 1999-2000 was that they relied on data submitted by universities. This meant that if enough were dissatisfied they could effectively sabotage the rankings by withholding information, which is in fact what happened.

The THES-QS rankings, and since 2010 the QS rankings, avoided this problem by ranking universities whether they liked it or not. Nonetheless, there were a few omissions in the early years: Lancaster, Essex, Royal Holloway, University of London, and the SUNY campuses at Binghamton, Buffalo and Albany.

In 2010 THE decided that they would not rank universities that did not submit data, a principled decision but one that has its dangers. Too many conscientious objectors (or maybe poor losers) and the rankings would begin to lose face validity.

When the THE rankings came out last year, there were some noticeable absentees, among them the Chinese University of Hong Kong, the University of Queensland, Tel Aviv University, the Hebrew University of Jerusalem, the University of Texas at Austin, the Catholic University of Louvain, Fudan University, Rochester, Calgary, the Indian Institutes of Technology and Science and Sciences Po Paris.

As Danny Byrne pointed out in University World News, Texas at Austin and Moscow State University were in the top 100 in the Reputation Rankings but not in the THE World University Rankings. Producing a reputation-only ranking without input from the institutions could be a smart move for THE.

Monday, March 21, 2011

QS comments on the THE Reputation Ranking

In University World News, Danny Byrne from QS comments on the new THE reputation ranking.

So why has THE decided to launch a world ranking based entirely on institutional reputation? Is it for the benefit of institutions like Moscow State University, which did not appear in THE's original top 200 but now appears 33rd in the world?

The data on which the new reputational ranking is based has been available for six months and comprised 34.5% of the world university rankings published by THE in September 2010.

But this is the first time the magazine has allowed anyone to view this data in isolation. Allowing users to access the data six months ago may have attracted less attention, but it would perhaps have been less confusing for prospective students.

The order of the universities in the reputational rankings differs from the THE's overall ranking. But no new insights have been offered and nothing has changed. This plays into the hands of those who are sceptical about university rankings.

Wednesday, March 16, 2011

Worth Reading


Ellen Hazelkorn, 'Questions Abound as the College-Rankings Race Goes Global' in Chronicle of Higher Education

"It is amazing that more than two decades after U.S. News & World Report first published its special issue on "America's Best Colleges," and almost a decade since Shanghai Jiao Tong University first published the Academic Ranking of World Universities, rankings continue to dominate the attention of university leaders. Indeed, the range of people watching them now includes politicians, students, parents, businesses, and donors. Simply put, rankings have caught the imagination of the public and have insinuated their way into public discourse and almost every level of government. There are even iPhone applications to help individuals and colleges calculate their ranks.

More than 50 country-specific rankings and 10 global rankings are available today, including the European Union's new U-Multirank, due this year. What started as small-scale, nationally focused guides for students and parents has become a global business that heavily influences higher education and has repercussions well beyond academe."

Tuesday, March 15, 2011

Bright Ideas Department

This is from today's Guardian:
The coalition is considering a Soviet-style central intervention policy to effectively fine individual universities if they impose unreasonable tuition fees next year.


Vince Cable, the business secretary whose department is responsible for universities, and David Willetts, the universities minister, are looking at allowing colleges that charge a modest fee to expand and constraining those that are charging too much.
The government, through the Higher Education Funding Council, sets the grant and numbers for each university and has the power to fine a university as much as £3,000 per student if it over-recruits in a single year.


Ministers are looking at cutting funding from universities that unreasonably charge the maximum £9,000 fee from 2012-13. They admit it is likely most universities will charge well over £8,000 a year.


One minister said: "A form of dramatic centralisation is under active consideration - a form of Gosplan if you like," a reference to the Russian state planning committee set up in the 1920s.
Next bright idea?  A Gulag for recalcitrant vice-chancellors? Re-education camps for those who don't take their teaching philosophy statements seriously enough?

Saturday, March 12, 2011

Going Global Hong Kong 2011

Speeches about rankings by Martin Davidson (British Council), Phil Baty (THE), John Molony (QS) and others can be seen here.

Thursday, March 10, 2011

A Bit More on the THE Reputation Rankings

There is a brief article in the Guardian with a lot of comments.

Incidentally, I don't see Alexandria, Hong Kong Baptist and Bilkent Universities in the top 100 for reputation despite the outstanding work that gave them high scores for research impact in the 2010 THE WUR. Perhaps I'm not looking hard enough.
The THE Reputation Rankings

Times Higher Education have constructed a reputation ranking from the data collected for last year's World University Rankings. There is a weighting of two thirds for research and one third for postgraduate teaching. The top five are:

1.  Harvard
2.  MIT
3.  Cambridge
4.  UC Berkeley
5.  Stanford

Scores are given only for the top fifty universities. Then another fifty are sorted in bands of ten without scores. Evidently, the number of responses favouring universities outside the top 100 was so small that it was not worth listing.

This means that the THE reputational survey reveals significant differences between Harvard and MIT or between Cambridge and Oxford but it would be of no help to those wondering whether to study or work at the University of Cape Town or the University of Kwazulu-Natal or Trinity College Dublin or University College Dublin.

The scores for research reputation (available only for the top fifty by overall reputation score) show a moderate correlation with the THE citations indicator (.422) and, perhaps surprisingly, a higher correlation with the citations per faculty score in the QS World University Rankings of 2010 (.538).

Looking at the QS academic survey, which asked only about research, we can see that there was an insignificant correlation of .213 between the QS scores and the score for citations per faculty in the QS rankings (for the THE reputation ranking top 50 only). However, there was a higher correlation of .422 between the QS survey and the THE citations indicator, the same as that between the THE research reputation scores and the THE citations indicator.

Comparing the two research surveys with a third party, the citations indicator in the Scimago 2010 rankings, the THE research reputation survey did better with a correlation of .438 compared to an insignificant .188 for the QS academic survey.

This seems to suggest that the THE reputational survey does a better job of differentiating between the world's elite universities. But once we leave the top 100 it is perhaps less helpful, and there may still be a role for the QS rankings.

Wednesday, March 09, 2011

The Second Wave

It seems that another wave of rankings is coming. The new edition of America's Best Graduate Schools will be out soon, QS will be releasing detailed subject rankings and, according to Bloomberg Businessweek, THE's ranking by reputation is imminent. It appears that the academic anglosphere dominates when reputation alone is considered.

Tuesday, March 08, 2011

Comment on the Paris Ranking

 Ben Wildavsky in the Chronicle of Higher Education says:

The Mines ParisTech ranking is an explicitly chauvinistic exercise, born of French unhappiness with the dismal showing of its universities in influential surveys such as the Academic Ranking of World Universities created at Shanghai Jiao Tong University in 2003. When designing the Mines ParisTech ranking, with a view to influencing the architects of the Shanghai methodology, the college says in the FAQ section of its survey results, “we believed it was useful to highlight the good results of French institutions at a time when the Shanghai ranking was widely and is still widely discussed, and not always to the advantage of our own schools and universities.” What’s more, it goes on, “these results constitute a genuine communication tool at an international level, both for the recruitment of foreign students as well as among foreign companies which are not always very familiar with our education system.” Given the genesis of the ranking, it doesn’t seem too surprising that three French institutions made it into this year’s top 10 — École Polytechnique and École Nationale d’Administration joined HEC Paris — while Mines ParisTech itself placed 21st in the world.

Sunday, March 06, 2011

The Paris rankings

The fifth edition of the Professional Ranking of World Universities from Mines ParisTech has just been published. It is based on a single indicator: the number of alumni among the chief executives of the world's largest corporations. Here are the top ten:

1.    Harvard
2.    Tokyo
3.    Keio
4.    HEC, France
5.    Kyoto
5.    Oxford
7.    Ecole Polytechnique
8.    Waseda
9.    ENA
10.  Seoul National University

Saturday, February 26, 2011

Surveys and Citations

I have just finished calculating the correlation between the scores for the academic survey and citations per faculty in the 2010 QS World University Rankings.

Since the survey asked about research and since citations are supposed to be a robust indicator of research excellence we would expect a high correlation between the two.

It is in fact .391, which is on the low side. There could be valid reasons why it is so low. Citations, by definition, must follow publication which follows research which in turn is preceded by proposals and a variety of bureaucratic procedures. A flurry of citations might be indicative of the quality of research begun a decade ago. The responses to the survey might, on the other hand, be based on the first signs of research  excellence long before the citations start rolling in.

Still, the correlation does not seem high enough. At first glance one would suspect that the survey is faulty but it could be that citations do not mean very much any more as a measure of excellence.

It would be very interesting to calculate the correlation between the score for research reputation on the Times Higher Education WUR and its citation indicator.

We would expect the THE survey to be more valid, since the basic qualification for inclusion in the survey is being the corresponding author of an article included in the ISI indexes, whereas for QS it is signing up for a journal published by World Scientific. But it can no longer be assumed that authorship of any article means very much. Does it always require more initiative and interest to get on the list of co-authors than to sign up for an online subscription?

It should also be noted that there is an overlap between the two surveys as both are supplemented with arts and humanities respondents from the Mardev mailing lists.

I have calculated the correlation between the citations indicator (normalised average citations per paper) in the THE 2010 rankings and the research indicator, which combines volume (4.5% of the total score), income (6%) and reputation (19.5%).

This is .562, quite a bit better than the QS correlation. However, the research indicator combines a survey with other data.

It would be very interesting if THE and/or Thomson Reuters released the scores of the individual components of the research indicator.

Wednesday, February 23, 2011

Reputation, reputation, reputation!

As the world (or some of it) waits for the ranking survey forms to appear in its mail boxes, both THE and QS are promoting their surveys.

According to Phil Baty of THE:

"But in our consultation with the sector, there was strong support for the continued use of reputation information in the world rankings. Some 79 per cent of respondents to a survey by our rankings data provider Thomson Reuters rated reputation as a “must have” or “nice to have” measure. We operate in a global market where reputation clearly matters."

He then indicates several ways in which the THE survey is an improvement over the THE-QS, now QS, survey.

"We received a record 13,388 usable responses in just three months, making the survey the biggest of its kind in the world.


We promised a transparent approach. The methodology and survey instrument were published in full and this week, the thousands of academics who took part in the survey were sent a detailed report on the respondent profile. It makes reassuring reading:


• Responses were received from 131 countries"

It would, however, be interesting if the number of respondents from each country were indicated. There are some people who wonder whether THE's sampling technique means that Singapore got the lion's share of responses in Southeast Asia.

Also, will THE publish the scores for the reputation surveys? At the moment they are bundled in with the other teaching and research indicators. What is the correlation between the score for research reputation and the citations indicator? Is there any sign that Alexandria, Bilkent or Hong Kong Baptist University have reputations that match their scores for research impact?

Meanwhile QS also has an item on its survey. They find that there is a similar demand for data on reputations.

"An impressive 79% of respondents, voted reputation for research as one of their top three criteria, with 60% choosing international profile of faculty, essentially another indicator of international reputation for research. This is in stark contrast to the 26% and 30% that prioiritised citations as a key measure.



Furthermore, when breaking these results out by broad faculty area, we can see consistent support across disciplines for the reputation measure but a marked dip in support for citations as a measure amongst respondents in the Arts & Humanities area – which tends to be the area least recognized by traditional measures of research output."
Comment on Internationalisation

International Focus, the newsletter of the UK HE International Unit, has an article by Jane Knight on the myths of internationalisation. The second myth is:


"Myth two rests on a belief that the more international a university is – in terms of students, faculty, curriculum, research, agreements, network memberships – the better its reputation is.



This is tied to the false notion that a strong international reputation is a proxy for quality. Cases of questionable admission and exit standards for universities highly dependent on the revenue and ‘brand equity’ of international students are concrete evidence that internationalisation does not always translate into improved quality or high standards.


This myth is further complicated by the quest for higher rankings on a global or regional league table such as the Times Higher Education or Academic World Ranking of Universities (AWRU). It is highly questionable whether the league tables accurately measure the internationality of a university and more importantly whether the international dimension is always a robust indicator of quality."
 
Also, it is much easier to be international in Switzerland or Singapore than in Central China or the Midwest of the US.

Tuesday, February 22, 2011

Penn State Law School

Malcolm Gladwell has an article in the current New Yorker about the US News and World Report college rankings. There is quite a lot there that I would like to discuss in another post. For the moment, I will just comment on an anecdote about the appearance of a non-existent law school in a ranking.

Gladwell describes how Thomas Brennan, who edits a well-known ranking of law schools, once sent out a questionnaire to other lawyers asking them to rank law schools and found that Penn State was, as Brennan is quoted as recalling, ranked around fifth. This was strange since there was no law school at Penn State until quite recently (1997 or 2000, according to different sources).

This immediately struck me as odd since I remember a similar story about the Princeton Law School, which does not exist and which was also supposed to have made its appearance in a ranking.
The Princeton story is very probably apocryphal and might have begun with a comment by the dean of New York University Law School in the Dartmouth Law Journal that Princeton would appear in the top twenty law schools if a questionnaire asked about it.

This story was plausible since it was an apparent example of the halo effect with Princeton's general excellence being reflected in the perception of a school that did not exist.

The problem with Brennan's account as retold by Gladwell, which does not appear to be supported by documentary evidence, is that it requires many lawyers not only to have mistakenly thought that Penn State had a law school (getting mixed up with the University of Pennsylvania?) but also to have been in error about the general quality of the university. Penn State is nowhere near being a top ten or even a top fifty school.

Could this be another academic legend?

Sunday, February 20, 2011

 Impact Assessment

The use of citations as a measure of research quality was highlighted by the remarkable performance of Alexandria University, Bilkent University, Hong Kong Baptist University and others in the 2010 Times Higher Education World University Rankings. As THE and Thomson Reuters review their methodology, perhaps they could take note of this post in Francis' World Inside Out, that refers to a paper by Arnold and Fowler.

'“Goodhart’s law warns us that “when a measure becomes a target, it ceases to be a good measure.” The impact factor has moved in recent years from an obscure bibliometric indicator to become the chief quantitative measure of the quality of a journal, its research papers, the researchers who wrote those papers and even the institution they work in. The impact factor for a journal in a given year is calculated by ISI (Thomson Reuters) as the average number of citations in that year to the articles the journal published in the preceding two years. It is widely used by researchers deciding where to publish and what to read, by tenure and promotion committees laboring under the assumption that publication in a higher impact-factor journal represents better work. However, it has been widely criticized on a variety of grounds (it does not determine a paper’s quality, it is a crude and flawed statistic, etc.). Impact factor manipulation can take numerous forms. Let us follow Douglas N. Arnold and Kristine K. Fowler, “Nefarious Numbers,” Notices of the AMS 58: 434-437, March 2011 [ArXiv, 1 Oct 2010].



Editors can manipulate the impact factor by means of the following practices: (1) “canny editors cultivate a cadre of regulars who can be relied upon to boost the measured quality of the journal by citing themselves and each other shamelessly;” (2) “authors of manuscripts under review often were asked or required by editors to cite other papers from the journal; this practice borders on extortion, even when posed as a suggestion;” and (3) “editors raise their journals’ impact factors by publishing review items with large numbers of citations to the journal.” “The havoc these unscientific practices wreak upon the scientific literature has raised occasional alarms. A counterexample should confirm the need for alarm.” '
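Restating the definition quoted above symbolically (a sketch of the standard two-year formula only; ISI's operational rules about which items count as citable are not captured here):

$$
\mathrm{IF}_{y}(J) \;=\; \frac{C_{y}\bigl(\text{articles published in } J \text{ in years } y-1 \text{ and } y-2\bigr)}{N_{y-1}(J) + N_{y-2}(J)},
$$

where \(C_{y}\) counts citations received in year \(y\) and \(N_{y-1}(J)\), \(N_{y-2}(J)\) count the articles the journal published in the two preceding years. Both the numerator and the denominator are open to the editorial practices described in the quotation.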



Looking East

Shanghai is planning to persuade two Ivy League schools, Cornell and Columbia, to set up branch campuses there. They already have a branch of New York University.

Would anybody like to make a prediction when a new Oxford or Cambridge college will be established in Shanghai (or Singapore or Hong Kong)?

Or when an entire American university will move to China?
More Dumbing Down

DePaul University will make it optional for applicants to submit SAT or ACT scores. Instead they can write short essays that demonstrate non-cognitive traits such as "commitment to service", "leadership" and "ability to meet long term goals".

The university says:

'"Admissions officers have often said that you can't measure heart," said Jon Boeckenstedt, associate vice president for enrollment management. "This, in some sense, is an attempt to measure that heart."



Mr. Boeckenstedt expects the change to encourage applicants with high grade-point averages but relatively low ACT and SAT scores to apply—be they low-income students, underrepresented minorities, or otherwise. Moreover, he and his colleagues believe the new admissions option will allow them to better select applicants who are most likely to succeed—and graduate.'

DePaul's administrators are being extremely naive if they think that these attributes cannot be easily coached or faked. Bluntly, how much effort does it take to teach a student what to say in one of these essays compared to squeezing out a few more points on the SAT?

Wednesday, February 16, 2011

Another US News Ranking

This one is about the schools where congressmen received their bachelor's degrees.

Here are the top 10. What might be more interesting is the party affiliation of the congressmen. D = Democrat, R = Republican, I = Independent.

1.    Harvard                          D 13, R 2
2.    Stanford                         D 9, R 2
3.    Yale                             D 8, R 1, I 1
4.    UCLA                             D 6, R 3
5=    Georgetown                       D 5, R 2
5=    Florida                          D 2, R 5
5=    Georgia                          D 1, R 6
5=    Wisconsin - Madison              D 6, R 1
9.    North Carolina -- Chapel Hill    D 5, R 1
10=   Brigham Young                    R 5
10=   George Washington                D 2, R 5
10=   Louisiana State                  D 1, R 4
10=   Berkeley                         D 4, R 1
10=   Missouri                         D 4, R 1
10=   Tennessee                        D 2, R 3
The Fortune 500

The US News has produced a ranking of US universities according to the number of degrees awarded to the CEOs of the Fortune 500, the largest American corporations according to gross revenue.

Here are the top five.

1.  Harvard
2.  Columbia
3.  University of Pennsylvania
4.  University of Wisconsin -- Madison
5.  Dartmouth College

Sunday, February 13, 2011

Ranking Education Schools

The US News and World Report, publishers of America's Best Colleges, are teaming up with the National Council on Teacher Quality to produce a rating of teacher preparation programs.

Many Education deans are strongly  opposed. See here.
We are all equal

I have come across an interesting article, "The equality of intelligence", by Nina Power in The Philosophers' Magazine. It is one of a series, "Ideas of the century" (I am not sure which one).

Power, whose dissertation is entitled From Theoretical Antihumanism to Practical Humanism: The Political Subject in Sartre, Althusser and Badiou and who is a senior lecturer at Roehampton University, refers to the work of Jacques Rancière,


"who never tires of repeating his assertion that equality is not just something to be fought for, but something to be presupposed, is, for me, one of the most important ideas of the past decade. Although Rancière begins the discussion of this idea in his 1987 text The Ignorant Schoolmaster, it is really only in the last ten years that others have taken up the idea and attempted to work out what it might mean for politics, art and philosophy. Equality may also be something one wishes for in a future to come, after fundamental shifts in the arrangement and order of society. But this is not Rancière’s point at all. Equality is not something to be achieved, but something to be presupposed, universally. Everyone is equally intelligent."
Just in case you thought she was kidding:

"In principle then, there is no reason why a teacher is smarter than his or her student, or why educators shouldn’t be able to learn alongside pupils in a shared ignorance (coupled with the will to learn). The reason why we can relatively quickly understand complex arguments and formulae that have taken very clever people a long time to work out lends credence to Rancière’s insight that, at base, nothing is in principle impossible to understand and that everyone has the potential to understand anything."


Power seems to be living in a different universe from those of us in the academic periphery. Perhaps she is actually pulling a Sokalian stunt but I suspect not. This sort of thing might be funny to many of us but it seems to be taken seriously in departments of education around the world. Just take a look at the model teaching philosophy statements found on the Internet.

Another example of her writing is Sarah Palin: Castration as Plenitude. Presumably that is  potentially understandable by everybody.

Friday, February 11, 2011

More on Citations

A column in the THE by Phil Baty indicates that there might be some change in the research impact indicator in the forthcoming THE World University Rankings. It is good that THE is considering changes but I have a depressing feeling that  Thomson Reuters, who collect the citations data, are going to have more weight in this matter than anyone or anything else.

Baty refers to a paper by Simon Pratt who manages the data for TR and THE.
The issue was brought up again this month in a paper to the RU11 group of 11 leading research universities in Japan. It was written by Simon Pratt, project manager for institutional research at Thomson Reuters, which supplies the data for THE’s World University Rankings.


Explaining why THE’s rankings normalise for citations data by discipline, Pratt highlights the extent of the differences. In molecular biology and genetics, there were more than 1.6 million citations for the 145,939 papers published between 2005 and 2009, he writes; in mathematics, there were just 211,268 citations for a similar number of papers (140,219) published in the same period.


Obviously, an institution with world-class work in mathematics would be severely penalised by any system that did not reflect such differences in citations volume.
This is correct, but perhaps we should also consider whether the number of citations to papers in genetics is telling us something about the value that societies place on genetics rather than on mathematics, and perhaps that is something that should not be ignored.


Also, in the real world are there many universities that are excellent in a single field, defined as narrowly as theoretical physics or applied mathematics, while being mediocre or worse in everything else? Anyone who thinks that Alexandria is the fourth best university in the world for research impact because of its uncontested excellence in mathematics should take a look here.

There are also problems with normalising by region. Precisely what the regions are for the purposes of this indicator is not stated. If Africa is a region, does this mean that Alexandria got another boost, one denied to other Middle Eastern universities? Is Istanbul in Europe and Bilkent in Asia? Does Singapore get an extra weighting because of the poor performance of its Southeastern neighbours?

There are two other aspects of the normalisation that are not foregrounded in the article. First, TR apparently use normalisation by year. In some disciplines it is rare for a paper to be cited within a year of publication. In others it is commonplace. An article that is classified as being in a low-citation field would get a massive boost if, in addition, it had a few citations within months of publication.

Remember also that the scores represent averages. A small number of total publications means an immense advantage for a university that has a few highly cited articles in low-citation fields and is located in a normally unproductive region. Alexandria's remarkable success was due to the convergence of four favourable factors: credit for publishing in a low-citation sub-discipline, the frequent citation of recently published papers, being located in a continent whose scholars are not generally noticed, and, finally, the selfless cooperation of hundreds of faculty who graciously refrained from sending papers to ISI-indexed journals.
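To see how the averaging can behave, here is a minimal sketch of a field- and year-normalised citation score of the kind described: each paper's citations are divided by a world baseline for its field and publication year, and the institution's score is the mean of those ratios. The baselines and the tiny portfolio are invented to illustrate the small-denominator effect; this is not Thomson Reuters' actual procedure.

```python
# Minimal sketch of a field- and year-normalised citation average.
# Baselines and papers are invented; this illustrates the small-denominator effect,
# not Thomson Reuters' actual method.

world_baseline = {                      # hypothetical expected citations per paper
    ("genetics", 2007): 11.0,
    ("mathematics", 2007): 1.5,
    ("mathematics", 2010): 0.2,         # very recent papers in a low-citation field
}

def normalised_impact(papers):
    """papers: list of (field, year, citations). Returns the mean of cites / baseline."""
    ratios = [cites / world_baseline[(field, year)] for field, year, cites in papers]
    return sum(ratios) / len(ratios)

# A tiny portfolio: two ordinary papers plus one 2010 mathematics paper
# that picked up a handful of citations within months of publication.
small_university = [("genetics", 2007, 10), ("mathematics", 2007, 1), ("mathematics", 2010, 6)]
print(f"normalised impact: {normalised_impact(small_university):.1f}")  # dominated by one paper
```

With only three papers, the single early-cited mathematics paper pushes the average to around 10 times the world norm, which is the kind of distortion discussed above.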

Alexandria University may not be open for the rest of this year and may not take part in the second THE WUR exercise. One wonders though how many universities around the world could benefit from these four factors and how many are getting ready to submit data to Thomson Reuters.

Monday, February 07, 2011

Training for Academics

The bureaucratisation of higher education continues relentlessly. Times Higher Education reports on moves to make all UK academics undergo compulsory training. This is not a totally useless idea: a bit of training in teaching methodology would do no harm at all for the unprepared graduate assistants, part-timers and new PhDs who make up an increasing proportion of the workforce in European and American universities.

But the higher education establishment has more than this in mind.



Plans to revise the UK Professional Standards Framework were published by the HEA in November after the Browne Review called for teaching qualifications to be made compulsory for new academics.
The framework, which was first published in 2006, is used to accredit universities' teaching-development activities, but the HEA has admitted that many staff do not see it as "relevant" to their career progression.
Under the HEA's proposals, the updated framework says that in future, all staff on academic probation will have to complete an HEA-accredited teaching programme, such as a postgraduate certificate in higher education. Postgraduates who teach would also have to take an HEA-accredited course.
A "sector-wide profile" on the number of staff who have reached each level of the framework would be published by the HEA annually.


Meanwhile, training courses would have to meet more detailed requirements.

A comment by "agreed" indicates just what is likely to happen.
 
I did one of these courses a couple of years ago. I learnt nothing from the "content" that I couldn't have learnt in a fraction of the time by reading a book. The bulk of the course was an attempt to compel all lecturers to adopt fashionable models of teaching with no regard to the need for students to learn content. The example set by the lecturers on the course was appalling: ill-prepared, dogmatic, and lacking in substance. A failure to connect with the "students" and a generally patronising tone was just one of the weaknesses. Weeks of potentially productive time were taken up by jumping through hoops and preparing assignments. This is not an isolated case; I know of several other such courses in other institutions that were equally shambolic. I'm all for improving the quality of teaching, but this is nonsensical. The only real benefit was the collegial relations with academics from other departments forged through common bonds of disgust and mockery aimed at this ridiculous enterprise (presumably designed to justify the continued employment of failed academics from other disciplines, given the role of teaching the rest of us how to teach).

Thursday, February 03, 2011

Comparing Rankings 2

Number of Indicators

A ranking that contained only a single indicator would not be very interesting. Providing that the indicators are actually measuring different things, rankings with many indicators would contain more information. On the other hand, the more indicators there are the more likely it is that some will be redundant.

At the moment, the THE World University Rankings are in first place with 13 indicators and Paris Mines is last with only one. We should note, however, that the THE indicators are combined into 5 super-indicators and scores are given only for the latter.

So we have the following order.

1. THE World University Rankings: 13 indicators (scores are given for only 5 indicator groups)

2. HEEACT: 8 indicators

3= Academic Ranking of World Universities (Shanghai): 6 indicators

3= QS World University Rankings: 6 indicators

5. Leiden: 5 indicators (strictly speaking, 5 separate rankings)

6= Webometrics: 4 indicators

6= Scimago Institutions Ranking: 4 indicators (1 used for ranking)

8. Paris Mines Tech: 1 indicator

Thursday, January 27, 2011

All Ears

Times Higher Education and Thomson Reuters are considering changes to their ranking methodology. It seems that the research impact indicator (citations) will figure prominently in their deliberations. Phil Baty writes:

In a consultation document circulated to the platform group, Thomson Reuters suggests a range of changes for 2011-12.


A key element of the 2010-11 rankings was a "research influence" indicator, which looked at the number of citations for each paper published by an institution. It drew on some 25 million citations from 5 million articles published over five years, and the data were normalised to reflect variations in citation volume between disciplines.


Thomson Reuters and THE are now consulting on ways to moderate the effect of rare, exceptionally highly cited papers, which could boost the performance of a university with a low publication volume.


One option would be to increase the minimum publication threshold for inclusion in the rankings, which in 2010 was 50 papers a year.


Feedback is also sought on modifications to citation data reflecting different regions' citation behaviour.


Thomson Reuters said that the modifications had allowed "smaller institutions with good but not outstanding impact in low-cited countries" to benefit.
It would be very wise to do something drastic about the citations indicator. According to last year's rankings, Alexandria University is the fourth best university in the world for research impact, Hong Kong Baptist University is second in Asia, the Ecole Normale Superieure Paris is best in Europe with Royal Holloway, University of London fourth, the University of California Santa Cruz is fourth in the USA and the University of Adelaide is best in Australia.

If anyone would like to justify these results they are welcome to post a comment.

I would like to make these suggestions for modifying the citations indicator.

Do not count self-citations, citations to the same journal in which a paper is published, or citations from within the same university. This would reduce, although not completely eliminate, manipulation of the citation system. If this is not done there will be massive self-citation and citation of friends and colleagues. It might even be possible to implement a measure of net citation by deducting the citations an institution gives from the citations it receives, thus reducing the effect of tacit citation agreements.
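A minimal sketch of what such filtering might look like, assuming a simple list of citation records; the field names and the two example records are hypothetical, not drawn from any actual database.

# Hypothetical citation records.
citations = [
    # author, journal and institutional self-citation: should not count
    {"citing_authors": {"A. Smith"}, "cited_authors": {"A. Smith"},
     "citing_journal": "J1", "cited_journal": "J1",
     "citing_institution": "U1", "cited_institution": "U1"},
    # independent citation: should count
    {"citing_authors": {"B. Jones"}, "cited_authors": {"A. Smith"},
     "citing_journal": "J2", "cited_journal": "J1",
     "citing_institution": "U2", "cited_institution": "U1"},
]

def countable(c):
    # Exclude author self-citations, same-journal citations and
    # same-institution citations.
    return not (c["citing_authors"] & c["cited_authors"]
                or c["citing_journal"] == c["cited_journal"]
                or c["citing_institution"] == c["cited_institution"])

print(sum(countable(c) for c in citations))  # 1: only the second record counts

def net_citations(institution, records):
    # Citations received minus citations given, one possible way of damping
    # tacit citation agreements between institutions.
    received = sum(1 for c in records if c["cited_institution"] == institution)
    given = sum(1 for c in records if c["citing_institution"] == institution)
    return received - given

print(net_citations("U1", citations))  # 1 (received 2, gave 1)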

Normalisation by subject field is probably going to stay. It is reasonable that some consideration should be given to scholars who work in fields where citations are delayed and infrequent. However, it should be recognised that the purpose of this procedure is to identify pockets of excellence, and research institutions are not built around a few pockets or even a single one. There are many ways of measuring research impact and this is just one of them. Others that might be used include total citations, citations per faculty, citations per unit of research income and the h-index.

Normalisation by year is especially problematic and should be dropped. It means that a handful of citations to an article classified as being in a low-citation discipline, received in the same year it was published, could dramatically multiply the score for this indicator. It also introduces an element of potential instability. Even if the methodology remains completely unchanged this year, Alexandria and Bilkent and others are going to drop scores of places as their papers go on receiving citations but get less value from them as the benchmark number rises.
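A toy illustration of that instability, with invented benchmark figures:

# Invented benchmarks: expected citations for a paper in a low-citation
# field, by number of years since publication.
benchmark_by_age = {0: 0.5, 1: 2.0, 2: 4.0}

# The paper keeps accumulating citations.
citations_by_age = {0: 3, 1: 5, 2: 7}

for age in sorted(benchmark_by_age):
    score = citations_by_age[age] / benchmark_by_age[age]
    print(f"year {age}: normalised score {score:.2f}")
# year 0: 6.00, year 1: 2.50, year 2: 1.75. The score falls even though
# citations keep rising, because the benchmark rises faster.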

Raising the threshold for the number of publications might not be a good idea. It is certainly true that Leiden University has a threshold of 400 publications a year, but Leiden is measuring only research impact while THE and TR are measuring a variety of indicators. There are already too many blank spaces in these rankings, and their credibility will be further undermined if universities are not assessed on an indicator with such a large weighting.

Tuesday, January 25, 2011

More Rankings

With acknowledgements to Registrarism, here is news about more new rankings.

Universities are being ranked according to the popularity of their Twitter accounts. According to an article in the Chronicle of Higher Education:

Stanford earned a Klout score of 70, with Syracuse University, Harvard University, and the University of Wisconsin at Madison all following with a score of 64.


The top 10 is rounded out by University of California at Berkeley, Butler University, Temple University, Tufts University, the University of Minnesota, the University of Texas at Austin, and Marquette University.
Another is the Green Metric Ranking of World Universities, compiled by Universitas Indonesia. The criteria are green statistics, energy and climate change, waste, water and transportation.

The top five are:

1.  Berkeley (Surprise!)
2.  Nottingham
3.  York (Canada)
4.  Northeastern
5.  Cornell

Universiti Putra Malaysia is 6th and Universitas Indonesia 15th.

Cambridge and Harvard

After the break with THE, QS decided to continue with the old methodology of the 2004-2009 rankings. At least, that is what they said. It was therefore surprising to see that, according to data provided by QS, there were in fact a number of noticeable rises and falls between 2009 and 2010, although nothing like as large as in previous years.


For example the University of Munich fell from 66th place to 98th place, the Free University of Berlin from 70th to 94th and Stockholm University from 168th to 215th while University College Dublin rose from 114th to 89th and Wurzburg from 309th to 215th.

But perhaps the most remarkable news was that Cambridge replaced Harvard as the world's best university. In every other ranking Harvard is well ahead.

So how did it happen? According to Martin Ince, “Harvard has taken more students since the last rankings were compiled without an equivalent increase in the number of academics.”

In other words, there should have been a lower faculty-to-student ratio and therefore a lower score for this indicator. This is in fact what happened: Harvard's score went from 98 to 97.

Ince also says that there was an "improvement in staffing levels" at Cambridge, presumably meaning that there was an increase in the number of faculty relative to the number of students. Between 2009 and 2010 Cambridge's score for the student-faculty indicator remained the same at 100, which is consistent with Ince's claim.

In addition to this, there was a "significant growth in the number of citations per faculty member" for Cambridge. It is not impossible that the number of citations racked up by Cambridge had risen relative to Harvard, but the QS indicator counts citations over a five-year period, so even a substantial increase in publications or citations would take a few years to have an equivalent effect on this indicator. Also note that this indicator is citations per faculty, and it appears that the number of faculty at Cambridge has gone up relative to Harvard. So we would expect any increase in citations to be cancelled out by a similar increase in faculty.

It looks a little odd, then, that for this indicator the Cambridge score rose from 89 to 93, four points, which is worth 0.8 in the weighted total score. That, by the way, was the difference between Harvard and Cambridge in 2009.
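For the record, the arithmetic behind that 0.8, assuming the 20 per cent weighting that QS attaches to citations per faculty:

citations_weight = 0.20            # QS weighting for the citations-per-faculty indicator
cambridge_gain = 93 - 89           # indicator points gained between 2009 and 2010
print(cambridge_gain * citations_weight)   # 0.8 weighted points, the 2009 Harvard-Cambridge gap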

The oddity is compounded when we look at other high-ranking universities. Between 2009 and 2010 Leiden's score for citations per faculty rose from 97 to 99, Emory's from 90 to 95, Oxford's from 80 to 84 and Florida's from 70 to 75.

It would at first sight appear plausible that if Harvard, the top scorer in both years, did worse on this indicator then everybody or nearly everybody else would do better. But if we look at universities further down the table, we find the opposite. Between 2009 and 2010 for this indicator Bochum fell from 43 to 34, Ghent from 43 to 37, Belfast from 44 to 35 and so on.

Could it be that there was some subtle and unannounced change in the method by which the raw scores were transformed into indicator scores? Is it just a coincidence that the change was sufficient to erase the difference between Harvard and Cambridge?

http://www.wiziq.com/tutorial/90743-QS-World-University_Rankings-top-500

Thursday, January 20, 2011

Comparing Rankings

International university rankings are now proliferating in much the same way that American rankings have multiplied over the last few decades although so far there is no global equivalent to top party schools or best colleges for squirrels.

It is now time for a cursory review and comparison of the major international rankings. I will omit recent rankings, those that look as though they may not be repeated or those that provide insufficient information about methodology.

The list is as follows:

Academic Ranking of World Universities

Higher Education Evaluation and Accreditation Council of Taiwan

International Professional Ranking of Higher Education Institutions (Paris Mines Tech)

Leiden Ranking

QS World University rankings

Scimago Institutions Ranking

THE World University Rankings

Webometrics Ranking of World Universities

The first attribute to be considered is simply the number of universities ranked. A ranking might have an impeccable methodology and analyse a score of indicators with the strictest attention to current bibliometric theory and statistical technique. If, however, it only ranks a few hundred universities, it is of no use to those interested in the thousands left outside the elite of the ranked.

I am counting the number of universities in published rankings. Here the winner is clearly Webometrics, followed by Scimago.

Webometrics 12,300

Scimago 1,955

QS WUR 616

ARWU 500

HEEACT 500
 
Leiden 500

THE WUR 400

Paris Mines 376

Monday, January 17, 2011

Shanghai Ranks Macedonia

I am not sure how accurate the following report is. The whole of Macedonia has never had a Nobel or Fields award winner or an ISI highly cited researcher, has published fewer articles than Loughborough University and has no articles in Nature or Science. It is difficult to see just what a team of seven from Shanghai would evaluate, especially since ARWU is reluctant to get involved with teaching quality or publications in the arts and humanities. Still, it is perhaps indicative that a European country has turned to China to evaluate its universities.

"Shanghai Jiao Tong University, which analyzes the top universities in the world on quality of faculty, research output quality of education and performance, has been selected to evaluate the public and private institutions for higher education in Macedonia, Minister of Education and Science Nikola Todorov told reporters on Sunday.
The ranking team included the Shanghai University Director, Executive and six members of the University's Center, Todorov said, pointing out that Macedonia is to be the first country from the region to be part of the the Academic Ranking of World Universities (ARWU), commonly known as the Shanghai ranking.


"The Shanghai ranking list is the most relevant in the world, and being part of it is a matter of prestige. We shall be honored our institutions for higher education to be evaluated by this university. This is going to be a revolution in the education sector, as for the first time we are offered an opportunity to see where we stand in regard to the quality," Todorov said"





Sunday, January 16, 2011

Full QS Rankings 2010

QS have published full details, including indicator scores, of the top 400 universities in their 2010 rankings. In the transparency stakes this brings them level with THE, who have an iPhone/iPad app that provides these details for the main indicators but not the sub-indicators.

Thursday, January 13, 2011

Microsoft Academic Search

Microsoft has developed a computer science research ranking. Organisations, mainly but not entirely universities, are ranked according to the number of citations, and there is also data on publications and the h-index.

The top five in the world are Stanford, MIT, Berkeley, Carnegie-Mellon and Microsoft. Harvard is seventh and Cambridge 18th.
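For reference, the h-index is the largest number h such that h of an author's (or institution's) papers have at least h citations each. A minimal sketch:

def h_index(citation_counts):
    # Largest h such that at least h papers have h or more citations.
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers have at least 4 citations each
print(h_index([100, 2, 1]))        # 2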

Top regional universities are:
Africa -- Cape Town
Asia and Oceania -- Tel Aviv
Europe -- Cambridge
North America --  Stanford
South America --  Universidade de Sao Paulo

Monday, January 10, 2011

The Disposable Academic

An article in the Economist (print edition, 18-31/12/2010, 146-8) analyses the plight of many of the world's PhDs. Many can expect nothing more than a succession of miserable post-doc fellowships, short-term contracts or part-time jobs teaching remedial or matriculation classes. And those are the lucky ones who actually get their diploma.

It seems that the financial return for a PhD is only marginally higher than that for a master's. Since there are undoubtedly variations by institution and discipline, it follows that for many, joining a doctoral program is a losing proposition in every way.

One wonders whether countries like South Africa and some in Southeast Asia are creating future problems in the drive to boost the production of PhDs.

Friday, January 07, 2011

Value for Money

An article by Richard Vedder describes how the publication of data by Texas A&M University shows enormous variation in the cost of faculty members per student taught.

I recently asked my student research assistant to explore this data by choosing, more or less at random, 40 professors of the university's main campus at College Station — including one highly paid professor with a very modest teaching load in each department and one instructor who is modestly paid but teaches many students. The findings were startling, even for a veteran professor like myself.


The 20 high-paid professors made, on average, over $200,000 each, totaling a little over $5 million annually to the university. These professors collectively taught 125 students last year, or roughly $40,000 per student. Since a typical student takes about 10 courses a year, the average cost of educating a student exclusively with this group of professors would be about $400,000, excluding other costs beyond faculty salaries.
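Vedder's back-of-envelope arithmetic, reproduced with the figures given in the excerpt:

total_salaries = 5000000      # the 20 highly paid professors: "a little over $5 million"
students_taught = 125         # students they collectively taught last year
courses_per_year = 10         # a typical student's annual course load

cost_per_student_course = total_salaries / students_taught
print(cost_per_student_course)                     # 40000.0 dollars per student taught
print(cost_per_student_course * courses_per_year)  # 400000.0 dollars to educate one student for a year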
There are, of course, questions to be asked about whether the data included the supervision of dissertations and the difficulty of the courses taught. Even so, the results deserve close scrutiny and might even be a model for some sort of international comparison.

Tuesday, January 04, 2011

Dumbing Down of University Grades

An article in the London Daily Telegraph shows that the number of first and upper second class degrees awarded by British universities has risen steadily over the last few decades. Their value to employers as an indicator of student quality has accordingly diminished.

David Barrett reports that:


The latest data shows that the criteria for awarding degrees has changed dramatically - despite complaints from many universities that grade inflation at A-level has made it hard for them to select candidates.

Traditionally, first class honours have been awarded sparingly to students who show exceptional depth of knowledge and originality.


But the new figures add further weight to a report by MPs last year which found that "inconsistency in standards is rife" and accused vice-chancellors of "defensive complacency".

We might note that the THE-QS rankings until 2009, and the QS rankings of last year, have probably done quite a lot to encourage complacency by consistently overrating British universities, especially Oxbridge and the London colleges.