This is perhaps the best that is technically and commercially feasible at the moment, but it is far from satisfactory. Some of these publications deal with the most basic questions about the nature of physical reality, and it is a serious distortion not to include them in the ranking methodology. There have been complaints about this. Pavel Krokovny's comment was noted in a previous post, while Mete Yeyisoglu argues that:
"Fractional counting is the ultimate solution. I wish you could have worked it out to use fractional counting for the 2015-16 rankings.
The current interim approach you came up with is objectionable.
Why 1,000 authors? How was the limit set? What about 999 authored-articles?
Although the institution I work for will probably benefit from this interim approach, I think you should have kept the same old methodology until you come up with an ultimate solution.
This year's interim fluctuation will adversely affect the image of university rankings."
Baty provides a reasonable answer to the question of why the cut-off point is 1,000 authors.
But there is a fundamental issue developing here that goes beyond ranking procedure. The concept of authorship for a philosophy paper written entirely by a single person, or a sociological study from a small research team, is very different from that of the huge multinational, capital- and labour-intensive publications in which the number of collaborating institutions exceeds the number of paragraphs and there are more authors than sentences.
Fractional counting does seem to be the only fair and sensible way forward and it is now apparently on THE's agenda although they have still not committed themselves.
The objection could be raised that while the current THE system gives a huge reward to even the least significant contributing institution, fractional counting would give major research universities insufficient credit for their role in important research projects.
A long-term solution might be to draw a distinction between the contributors to and the authors of the mega papers. For most publications there would be no need to draw such a distinction, but for those with some sort of input from dozens, hundreds or thousands of people it might be feasible to allot half the credit to all those who had anything to do with the project and the other half to those who meet the standard criteria of authorship. There would no doubt be a lot of politicking about who gets the credit, but that would be nothing new.
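As a rough illustration only, a split along these lines could be computed as below. The institution names and the half-and-half weighting are assumptions made for the sake of the example, not anything THE has proposed in detail; ordinary fractional counting is just the special case where the contributor and author lists coincide.

```python
# A minimal sketch of the contributor/author split suggested above.
# Names and weights are purely illustrative, not THE's actual method.

def split_credit(contributor_institutions, author_institutions):
    """Allot half of one paper's credit equally among all contributing
    institutions and the other half equally among the institutions that
    meet standard authorship criteria."""
    credit = {}
    contrib_share = 0.5 / len(contributor_institutions)
    author_share = 0.5 / len(author_institutions)
    for inst in contributor_institutions:
        credit[inst] = credit.get(inst, 0.0) + contrib_share
    for inst in author_institutions:
        credit[inst] = credit.get(inst, 0.0) + author_share
    return credit

# With plain fractional counting, 2,932 equal authors would each receive
# 1/2932 of the credit, roughly 0.034%, as in Duncan Ross's example quoted below.
print(split_credit(["University A", "University B"], ["University A"]))
# -> {'University A': 0.75, 'University B': 0.25}
```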
Duncan Ross, the new Data and Analytics Director at THE, seems to be thinking along these lines.
"In the longer term there are one technical and one structural approach that would be viable. The technical approach is to use a fractional counting approach (2932 authors? Well you each get 0.034% of the credit). The structural approach is more of a long term solution: to persuade the academic community to adopt metadata that adequately explains the relationship of individuals to the paper that they are ‘authoring’. Unfortunately I’m not holding my breath on that one."The counting of citations to mega papers is not the only problem with the THE citations indicator. Another is the practice of giving a boost to universities in underperforming countries. Another item by Phil Baty quotes this justification from Thomson Reuters, THE's former data partner.
“The concept of the regional modification is to overcome the differences between publication and citation behaviour between different countries and regions. For example some regions will have English as their primary language and all the publications will be in English, this will give them an advantage over a region that publishes some of its papers in other languages (because non-English publications will have a limited audience of readers and therefore a limited ability to be cited). There are also factors to consider such as the size of the research network in that region, the ability of its researchers and academics to network at conferences and the local research, evaluation and funding policies that may influence publishing practice.”
THE now appear to agree that this is indefensible in the long run and hope that a more inclusive academic survey and the shift to Scopus, with broader coverage than the Web of Science, will lead to this adjustment being phased out.
It is a bit odd that TR and THE should have introduced income, in three separate indicators, and international outlook, in another three, as markers of excellence, but then included a regional modification to compensate for limited funding and international contacts.
THE are to be congratulated for having put fractional counting and phasing out the regional modification on their agenda. Let's hope it doesn't take too long.
While we are on the topic, there are some more things about the citation indicator to think about. First, to repeat a couple of points mentioned in the earlier post.
- Reducing the number of fields or doing away with normalisation by year of citation. The more boxes into which any given citation can be dropped, the greater the chance of statistical anomalies when a cluster of citations meets a low world average of citations for that particular combination of citation year, publication year and field (300 in Scopus?). A sketch illustrating the point follows this list.
- Reducing the weighting for this indicator. Perhaps citations per paper normalised by field is a useful instrument for comparing the quality of research of MIT, Caltech, Harvard and the like, but it might be of little value when comparing the research performance of Panjab University and IIT Bombay, or Istanbul University and Bogazici.
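The sketch below illustrates why fine-grained normalisation cells invite anomalies. The field names and world-average figures are invented for the example; the real calculation uses the database's own subject scheme and baselines.

```python
# An illustrative sketch of field- and year-normalised citation impact.
# World averages below are made up; a sparse cell with a very low baseline
# lets a modest cluster of citations produce an outsized score.

world_average = {
    ("Nuclear physics", 2013): 9.4,
    ("Philosophy", 2013): 1.1,   # sparse cell, low baseline
}

def normalised_impact(papers):
    """Divide each paper's citations by the world average for its field
    and publication year, then average the resulting scores."""
    scores = [cites / world_average[(field, year)]
              for cites, field, year in papers]
    return sum(scores) / len(scores)

print(normalised_impact([(12, "Philosophy", 2013)]))        # ~10.9
print(normalised_impact([(12, "Nuclear physics", 2013)]))   # ~1.3
```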
Some other things THE could think about.
- Adding a measure of overall research impact, perhaps simply by counting citations. At the very least, stop calling field- and year-normalised, regionally modified citations per paper a measure of research impact. Call it research quality or something like that.
- Doing something about secondary affiliations. So far this seems to have been a problem mainly for the Highly Cited Researchers indicator in the Shanghai ARWU, but it may not be very long before more universities realise that a few million dollars for adjunct faculty could have a disproportionate impact on publication and citation counts.
- Also, perhaps THE should consider excluding self-citations (or even citations from within the same institution, although that would obviously be technically difficult); a rough sketch follows this list. Self-citation caused a problem in 2010 when Dr El Naschie's diligent citation of himself and a few friends lifted Alexandria University to fourth place in the world for research impact. Something similar might happen again now that THE are using a larger and less selective database.
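A filter along these lines could be applied before counting. The record structure here is an assumption made for illustration; real bibliographic data would need careful author-name disambiguation, and excluding same-institution citations would require the much harder step of affiliation matching.

```python
# A minimal sketch of stripping self-citations before counting.

def count_external_citations(paper_authors, citing_papers):
    """Count only citations from papers that share no author with the cited paper."""
    cited = set(paper_authors)
    return sum(1 for citing in citing_papers
               if cited.isdisjoint(citing["authors"]))

citations = [
    {"authors": {"El Naschie"}},    # self-citation, excluded
    {"authors": {"A. N. Other"}},   # counted
]
print(count_external_citations({"El Naschie"}, citations))  # -> 1
```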