Sunday, January 13, 2013

A Bit More on the THE Citations Indicator

I have already posted on the citations (research influence) indicator in the Times Higher Education World University Rankings and how it can allow a few papers to have a disproportionate impact. But there are other features of this indicator that affect its stability and can produce large changes even if there is no change in methodology.

This indicator has a weighting of 30 percent. The next most heavily weighted indicator is the research reputation survey which carries a weighting of 18 percent and is combined with number of publications (6 percent) and research income (6 percent) to produce a weighting of 30 percent for research: volume, income and reputation.

It might be argued that the citations indicator accounts for only 30 percent of the total weighting, so that anomalously high scores given to obscure or mediocre institutions for citations would be balanced or diluted by scores on the other indicators, which have a weighting of 70 percent.

The problem with this is that the scores for the citations indicator are often substantially higher than the scores for other indicators, especially in the 300-400 region of the rankings, so that the impact of this indicator is correspondingly increased. Full data can be found on the 2012-13 iPhone app.


For example, the University of Ferrara in 378th place, with a score of 58.5 for citations, has a total score of 31.3, so that nearly 60% of its total score comes from the citations indicator. King Mongkut's University of Technology, Thonburi, in 389th place has a score of 68.4 for citations but its total score is 30.3, so that two thirds of its total score comes from citations. Southern Methodist University in 375th place gets 67.3 for citations, which after weighting comes close to providing two thirds of its overall score of 31.6. For these universities a proportional change in the final processed score for citations would have a greater impact than a similar change in any of the other indicators.

Looking at the bottom 25 universities in the top 400, in eight cases the citations indicator provides half or more of the total score and in 22 cases it provides a third or more. Thus, the indicator could have more impact on total scores than its weighting of 30 percent would suggest.
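The arithmetic behind these figures is simple. Here is a minimal sketch, using the published indicator and overall scores quoted above as approximate inputs and assuming, as THE describes, that the overall score is a weighted sum with citations carrying 30 percent.

    # A rough check of the shares quoted above, assuming the overall score is
    # a weighted sum of indicator scores with citations weighted at 30 percent.
    CITATIONS_WEIGHT = 0.30

    universities = {
        # name: (citations score, overall score)
        "Ferrara": (58.5, 31.3),
        "King Mongkut's (Thonburi)": (68.4, 30.3),
        "Southern Methodist": (67.3, 31.6),
    }

    for name, (citations, overall) in universities.items():
        contribution = CITATIONS_WEIGHT * citations   # weighted points from citations
        share = contribution / overall                # fraction of the total score
        print(f"{name}: {contribution:.1f} of {overall} points ({share:.0%})")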
 


It is also noticeable that the mean score for citations of the THE top 400 universities is much higher than that for research, about 65 compared to about 41. This disparity is especially large as we reach the 200s and the 300s.

So we find Southern Methodist University has a score of 67.3 for citations but 9.0 for research. Then the University of Ferrara has a score of 58.5 for citations and 13.0 for research. King Mongkut's University of Technology, Thonburi, has a score of 68.4 for citations and 10.2 for research.


One reason why the scores for the citations indicator are so high is the "regional modification" introduced by Thomson Reuters in 2011. To simplify, this means that the number of citations to a university in a given year and a given field is divided by the square root of the average number of citations in that field and year for all universities in the country.

So if a university in country A receives 100 citations in a certain year of publication and a certain field, and the average impact for that year and field for all universities in the country is 100, then the university will get a score of 10 (100 divided by 10, the square root of 100). If a university in country B receives 10 citations in the same year and the same field, but the average impact for all universities in its country is 1, then the citations score for that field and year would also be 10 (10 divided by 1).
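As a minimal sketch, following the simplified description above rather than Thomson Reuters' full procedure, the calculation for the two hypothetical universities looks like this:

    from math import sqrt

    def regionally_modified_score(citations, national_average):
        """Citations divided by the square root of the national average
        number of citations for the same field and year (the simplified
        'regional modification' described above)."""
        return citations / sqrt(national_average)

    # Country A: 100 citations against a national average of 100 -> 100 / 10 = 10
    print(regionally_modified_score(100, 100))   # 10.0

    # Country B: 10 citations against a national average of 1 -> 10 / 1 = 10
    print(regionally_modified_score(10, 1))      # 10.0

The two universities end up with the same score even though one received ten times as many citations, which is exactly the point at issue.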

This drastically reduces the gap for citations between countries that produce many citations and those that produce few. Thomson Reuters justify this by saying that in some countries it is easier to get grants, to travel and to join the networks that lead to citations and international collaborations than in others. The problem is that it can produce some rather dubious results.

Let us consider the University of Vigo and the University of Surrey. Vigo has an overall score of 30.7 and is in 383rd place. Surrey is just behind with a score of 30.5 and is in 388th place.

But, with the exception of citations, Surrey is well ahead of Vigo on everything: teaching (31.1 to 19.4), international outlook (80.4 to 26.9), industry income (45.4 to 36.6) and research (25.0 to 9.5).

Surrey, however, does comparatively badly for citations, with a score of only 21.6. It did contribute to the massively cited 2008 Review of Particle Physics, but the university has too many publications for this review to have much effect. Vigo, on the other hand, has a score of 63.7 for citations, which may be because of a much-cited paper containing a new algorithm for genetic analysis, but also presumably because it received, along with other Spanish and Portuguese-speaking universities, a boost from the regional modification.

There are several problems with this modification. First, it can contribute another element of instability. If we observe that a university's score for citations has declined, it could be because its citations have decreased overall or in key fields, or because a much-cited paper has slipped out of the five-year period of assessment. It could also be that the number of publications has increased without a corresponding increase in citations.

Applying the regional modification could mean that a university's score is also affected by fluctuations in the impact of the country's universities as a whole. If there were an increase in citations or a reduction in publications nationally, this would reduce the citations score of a particular university, since the university's score would be divided by the square root of a larger number.

This could lead to the odd situation where stringent austerity measures lead to the emigration of talented researchers and eventually a fall in citations, yet some universities in the country improve because they are being compared with a smaller national average.

The second problem is that it can lead to misleading comparisons. It would be a mistake to conclude that Vigo is a better university than Surrey, or about the same, or even that its research influence is more significant. What has happened is that Vigo is further ahead of the Spanish average than Surrey is ahead of the British.

Another problematic feature of the citations indicator is that its relationship with the research indicator is rather modest. Consider that 18 of the 30 points for research come from the reputation survey, whose respondents are drawn from researchers whose publications are in the ISI databases, while the citations indicator counts citations to precisely those papers. Another 6 percent goes to research income, which we would expect to have some relationship with the quality of research.

Yet the correlation between the scores for research and citations for the top 400 universities is a modest .409, which calls into question the validity of one or both of the indicators.
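For readers who want to check a figure like this for themselves, a minimal sketch of the calculation is below. The paired scores are hypothetical stand-ins, since the full 400-university dataset is only available through the iPhone app; the .409 figure quoted above is from that full dataset.

    from statistics import correlation  # Python 3.10+

    # Hypothetical paired (research, citations) scores for a handful of universities.
    research_scores  = [9.0, 13.0, 10.2, 25.0, 95.0, 60.0]
    citations_scores = [67.3, 58.5, 68.4, 21.6, 99.0, 70.0]

    # Pearson's r; across the full top 400 the figure reported above is about .409.
    r = correlation(research_scores, citations_scores)
    print(f"Pearson r = {r:.3f}")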

A further problem is that this indicator only counts the impact of papers that make it into an ISI database. A university where most of the faculty do not publish in ISI-indexed journals would do no worse than one with many publications but not many citations, or not many citations in lightly cited fields.

To conclude, the current construction of the citations indicator has the potential to produce anomalous results and to introduce a significant degree of instability into the rankings.

Sunday, January 06, 2013

QS Stars Again

The New York Times has an article by Don Guttenplan on the QS Stars ratings which award universities one to five stars according to eight criteria, some of which are already assessed by the QS World University Rankings and some of which are not. The criteria are teaching quality, innovation and knowledge transfer, research quality, specialist subject, graduate employability, third mission, infrastructure and internationalisation. 

The article has comments from rankings experts Ellen Hazelkorn, Simon Marginson and Andrew Oswald.

The QS Stars system does raise issues about commercial motivations and conflicts of interest. Nonetheless, it has to be admitted that it fills a gap in the current complex of international rankings. The Shanghai, QS and Times Higher Education rankings may be able to distinguish between Harvard and Cornell or Oxford and Manchester, but they rank only a fraction of the world's universities. There are national ranking and rating systems, but so far anyone wishing to compare middling universities in different countries has had very little information available.

There is, however, the problem that making distinctions among little-known and mediocre universities, a disproportionate number of which are located in Indonesia, means a loss of discrimination at or near the top. The National University of Ireland Galway, Ohio State University and Queensland University of Technology get the same five stars as Cambridge and King's College London.

QS Stars has the potential to offer a broader assessment of university quality, but it would be better for everybody if it were kept completely separate from the QS World University Rankings.


Sunday, December 23, 2012

The URAP Ranking


Another ranking that has been neglected is the University Ranking by Academic Performance [www.urapcenter.org], started by the Middle East Technical University in 2009. It has six indicators: number of articles (21%), citations (21%), total documents (10%), journal impact total (18%), journal citation impact total (15%) and international collaboration (10%).

A distinctive feature is that these rankings provide data for 2,000 universities, many more than the current big three.

The top ten are:

1.  Harvard
2.  Toronto
3.  Johns Hopkins
4.  Stanford
5.  UC Berkeley
6.  Michigan Ann Arbor
7.  Oxford
8.  Washington Seattle
9.  UCLA
10. Tokyo

These rankings definitely favour size over quality, as shown by the strong performance of Toronto and Johns Hopkins and the lowly positions of Caltech in 51st place and Princeton in 95th. Still, they could be very helpful for countries with few institutions in the major rankings.

Saturday, December 15, 2012

The Taiwan Rankings

It is unfortunate that the "big three" of the international ranking scene -- ARWU (Shanghai), THE and QS -- receive a disproportionate amount of public attention while several research-based rankings are largely ignored. Among them is the National Taiwan University Ranking, which until this year was run by the Higher Education Evaluation and Accreditation Council of Taiwan.

The rankings, which are based on the ISI databases, assign a weighting of 25% to research productivity (number of articles over the last 11 years, number of articles in the current year), 35% to research impact (number of citations over the last 11 years, number of citations in the current year, average number of citations over the last 11 years) and 40% to research excellence (h-index over the last 2 years, number of highly cited papers, number of articles in the current year in highly cited journals).
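As a minimal sketch of how weights like these combine into a single ranking score (the group scores here are hypothetical, and the published methodology assigns its own sub-weights within each group):

    # Hypothetical composite calculation using the 25/35/40 group weights above.
    weights = {
        "research_productivity": 0.25,
        "research_impact": 0.35,
        "research_excellence": 0.40,
    }

    group_scores = {  # illustrative scores on a 0-100 scale
        "research_productivity": 70.0,
        "research_impact": 55.0,
        "research_excellence": 62.0,
    }

    composite = sum(weights[k] * group_scores[k] for k in weights)
    print(f"Composite score: {composite:.2f}")   # 0.25*70 + 0.35*55 + 0.40*62 = 61.55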

Rankings by field and subject are also available.

There is no attempt to assess teaching or student quality and publications in the arts and humanities are not counted.

These rankings are a valuable supplement to the Shanghai ARWU. The presentation of data over 11-year and one-year periods allows quick comparisons of changes over a decade.

Here are the top ten.

1. Harvard
2. Johns Hopkins
3. Stanford
4. University of Washington at Seattle
5. UCLA
6. University of Michigan Ann Arbor
7. Toronto
8. University of California Berkeley
9. Oxford
10. MIT

High-flyers in other rankings do not do especially well here. Princeton is 52nd, Caltech 34th, Yale 19th and Cambridge 15th, most probably because they are relatively small or have strengths in the humanities.

Sunday, November 18, 2012

Article in University World News

The article is titled "Ranking’s research impact indicator is skewed".

Saturday, November 03, 2012


Apology

In a recent article in University World News I made a claim that Times Higher Education in their recent World University Rankings had introduced a methodological change that substantially affected the overall ranking scores. I acknowledge that this claim was without factual foundation. I withdraw the claim and apologise without reservation to Phil Baty and Times Higher Education.

Saturday, October 27, 2012

More on MEPhI

Right after putting up the post on Moscow State Engineering Physics Institute and its "achievement" in getting the maximum score for research impact in the latest THE - TR World University Rankings, I found this exchange on Facebook.  See my comments at the end.

  • Valery Adzhiev So, the best university in the world in the "citation" (i.e. "research influence") category is Moscow State Engineering Physics Institute with maximum '100' score. This is remarkable achivement by any standards. At the same time it scored in "research" just 10.6 (out of 100) which is very, very low result. How on earth that can be?
  • Times Higher Education World University Rankings Hi Valery,

    Regarding MEPHI’s high citation impact, there are two causes: Firstly they have a couple of extremely highly cited papers out of a very low volume of papers. The two extremely highly cited papers are skewing what would ordinarily be a very good normalized citation impact to an even higher level.

    We also apply "regional modification" to the Normalized Citation Impact. This is an adjustment that we make to take into account the different citation cultures of each country (because of things like language and research policy). In the case of Russia, because the underlying citation impact of the country is low it means that Russian universities get a bit of a boost for the Normalized Citation Impact.

    MEPHI is right on the boundary for meeting the minimum requirement for the THE World University Rankings, and for this reason was excluded from the rankings in previous years. There is still a big concern with the number of papers being so low and I think we may see MEPHI’s citation impact change considerably over time as the effect of the above mentioned 2 papers go out of the system (although there will probably be new ones come in).

    Hope this helps to explain things.
    THE
  • Valery Adzhiev Thanks for your prompt reply. Unfortunately, the closer look at that case only adds rather awkward questions. "a couple of extremely highly cited papers are actually not "papers": they are biannual volumes titled "The Review of Particle Physics" that ...See More
  • Valery Adzhiev I continue. There are more than 200 authors (in fact, they are "editors") from more than 100 organisation from all over the world, who produce those volumes. Look: just one of them happened to be affiliated with MEPhI - and that rather modest fact (tha...See More
  • Valery Adzhiev Sorry, another addition: I'd just want to repeat that my point is not concerned only with MEPhI - Am talking about your methodology. Look at the "citation score" of some other universities. Royal Holloway, University of London having justt 27.7 in "res...See More
  • Alvin See Great observations, Valery.
  • Times Higher Education World University Rankings Hi Valery,

    Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for ...See More
  • Andrei Rostovtsev This is in fact rather philosofical point. There are also a number of very scandalous papers with definitively negative scientific impact, but making a lot of noise around. Those have also high contribution to the citation score, but negative impact t...See More

    It is true that two extremely highly cited publications combined with a low total number of publications skewed the results, but what is equally or perhaps more important is that these citations occurred within a year or two of publication, when citations tend to be relatively infrequent compared to later years. The 2010 publication is a biennial review, like the 2008 publication, that will be cited copiously for two years, after which it will no doubt be superseded by the 2012 edition.

    Also, we should note that in the ISI Web of Science, the 2008 publication is classified as "physics, multidisciplinary". Papers listed as multidisciplinary generally get relatively few citations, so if the publication was compared to other multidisciplinary papers it would get an even larger weighting.
    Valery has an excellent point when he notes that these publications have over 100 authors or contributors each (I am not sure whether they are actual researchers or administrators). Why, then, did all the other contributors not boost their institutions' scores to similar heights? Partly because they were not in Russia and therefore did not get the regional weighting, but also because they were publishing many more papers overall than MEPhI.

    So basically, A. Romaniouk, who contributed 1/173rd of one publication, was considered as having more research impact than hundreds of researchers at Harvard, MIT, Caltech etc. producing hundreds of papers cited hundreds of times. Sorry, but is this a ranking of research quality or a lottery?

    The worst part of THE's reply is this:

    Thanks again for your thorough analysis. The citation score is one of 13 indicators within what is a balanced and comprehensive system. Everything is put in place to ensure a balanced overall result, and we put our methodology up online for all to see (and indeed scrutinise, which everyone is entitled to do).

    We welcome feedback, are constantly developing our system, and will definitely take your comments on board.

    The system is not balanced. Citations have a weighting of 30%, much more than any other indicator. Even the research reputation survey has a weighting of only 18%. And to describe as comprehensive an indicator which allows a fraction of one or two publications to surpass massive amounts of original and influential research is really plumbing the depths of absurdity.

    I am just about to finish comparing the scores for research and research impact for the top 400 universities. There is a statistically significant correlation but it is quite modest. When research reputation, volume of publications and research income show such a modest correlation with research impact, it is time to ask whether there is a serious problem with this indicator.

    Here is some advice for THE and TR.

    • First, and surely very obvious, if you are going to use field normalisation then calculate the score for discipline groups (natural sciences, social sciences and so on) and aggregate the scores. So give MEPhI a 100 for physical or natural sciences if you think they deserve it, but not for the arts and humanities.
    • Second, and also obvious, introduce fractional counting, that is, dividing the number of citations by the number of authors of the cited paper (a sketch of this follows the list).
    • Do not count citations to summaries, reviews or compilations of research.
    • Do not count citations of commercial material about computer programs. This would reduce the very high and implausible score for Gottingen which is derived from a single publication.
    • Do not assess research impact with only one indicator. See the Leiden ranking for the many ways of rating research.
    • Consider whether it is appropriate to have a regional weighting. This is after all an international ranking.
    • Reduce the weighting for this indicator.
    • Do not count self-citations. Better yet, do not count citations from researchers at the same university.
    • Strictly enforce your rule about not including single-subject institutions in the general rankings.
    • Increase the threshold number of publications for inclusion in the rankings from two hundred to four hundred.
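    To illustrate the second suggestion, here is a minimal sketch of fractional counting. The numbers are illustrative, loosely based on the Review of Particle Physics example discussed in this post, and are not taken from Thomson Reuters' data.

        # Whole counting credits an institution with every citation to a paper it
        # appears on; fractional counting divides each paper's citations by its
        # number of authors before crediting them.
        papers = [
            # (citations, number of authors)
            (1278, 173),   # one massively cited review with ~173 contributors
            (12, 3),       # more typical papers
            (4, 2),
        ]

        whole = sum(citations for citations, _ in papers)
        fractional = sum(citations / authors for citations, authors in papers)

        print(f"Whole counting:      {whole} citations")           # 1294
        print(f"Fractional counting: {fractional:.1f} citations")  # about 13.4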


Friday, October 26, 2012

Stellenbosch

Credit where it is due. Times Higher Education has now put Stellenbosch University in Africa.
Dancing in the Street in Moscow

"Jubilant crowds poured into the streets of Moscow when it was announced that Moscow State Engineering Physics Institute had been declared to be the joint top university in the world, along with Rice University in Texas, for research impact".

Just kidding about the celebrations.

But the Times Higher Education - Thomson Reuters World University Rankings have given the "Moscow State Engineering Physics Institute" a score of 100 for research impact, which is measured by the number of citations per paper normalised by field, year of publication and country.

There are a couple of odd things about this.

First, "Moscow State Engineering Physics Institute " was reorganised in 2009 and its official title is now  National Research Nuclear University MEPhI. It still seems to be normal to refer to MEPhI or Moscow State Engineering Physics Institute so I will not argue about this. But I wonder if there has been some confusion in TR's data collection.

Second, THE says that institutions are not ranked if they teach only a single narrow subject. Does the institution teach more than just physics?

So how did MEPhI do it? The answer seems to be a couple of massively cited review articles. The first was by C. Amsler et (many, many) alia in Physics Letters B of September 2008, entitled Review of Particle Physics. It was cited 1278 times in 2009 and 1627 times in 2010 according to the Web of Science, and even more according to Google Scholar.

Here is the abstract.

"Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions., plus 2778 new measurements from 645 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors., probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, top quark, muon anomalous magnetic moment, extra dimensions, particle detectors, cosmic background radiation, dark matter, cosmological parameters, and big bang cosmology".

I have not counted the number of authors, but there are 113 institutional affiliations, of which MEPhI is 84th.

The second paper is by K. Nakamura et alia. It is also entitled Review of Particle Physics and was published in the Journal of Physics G: Nuclear and Particle Physics in July 2010. It was cited 1240 times in 2011. This is the abstract.
 
"This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2158 new measurements from 551 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on neutrino mass, mixing, and oscillations, QCD, top quark, CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, fragmentation functions, particle detectors for accelerator and non-accelerator physics, magnetic monopoles, cosmological parameters, and big bang cosmology".

There are 119 affiliations, of which MEPhI is 91st.

Let me stress that there is nothing improper here. It is normal for papers in the physical sciences to include summaries or reviews of research at the beginning of a literature review. I also assume that the similarity in the wording of the abstracts would be considered appropriate standardisation within the discipline rather than plagiarism.

TR's method counts the number of citations of a paper compared to the average for that field, in that year, in that country. MEPhI would not get very much credit for a publication in physics, which is a quite highly cited discipline, but it would get some for being in Russia, where citations in English are relatively sparse, and a massive boost for exceeding the average for citations within one or two years of publication many times over.

There is one other factor. MEPhI was only one of more than 100 institutions contributing to each of these papers but it got such an unusually massive score because its citations, which were magnified by region and period of publication, were divided by a comparatively small number of publications.
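A minimal sketch of that arithmetic, with made-up numbers standing in for the field, year and country baselines, shows how a single outlier dominates when the denominator is small:

    def mean_normalized_impact(papers):
        """Average of (citations / expected citations) across an institution's papers.
        The 'expected' figure stands in for the field/year/country baseline."""
        return sum(cites / expected for cites, expected in papers) / len(papers)

    # A small institution: one review cited 1278 times against a low baseline,
    # plus a handful of ordinary papers.
    small_institution = [(1278, 5), (3, 5), (2, 5), (1, 5)]

    # A large institution: hundreds of respectably cited papers, no extreme outlier.
    large_institution = [(20, 5)] * 500

    print(mean_normalized_impact(small_institution))   # about 64.2
    print(mean_normalized_impact(large_institution))   # 4.0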

This is not as bad as Alexandria University being declared the fourth best university for research impact in 2010. MEPhI is a genuinely excellent institution which Alexandria, despite a solitary Nobel laureate and an historic library, was not.  But does it really deserve to be number one for research impact or even in the top 100? TR's methods are in need of very thorough revision.

And I haven't heard about any celebrations in Houston either.

Tuesday, October 09, 2012

What are the latest THE rankings telling us?


 This post has been taken down pending some fact checking.

Saturday, October 06, 2012

What Happened in Massachusetts?

The University of Massachusetts has fallen from 64th place in the Times Higher Education World University Rankings to 72nd. Was this the result of savage spending cuts, the flight of international students or the rise of Asian universities?

Maybe, but there is perhaps another, less interesting reason.

Take a look at the THE reputation rankings for 2011 and 2012. The University of Massachusetts has fallen from 19th position in 2011 to 39th in 2012. How did that happen when there is no other remotely comparable change in the reputation rankings?

The obvious answer is that THE, like QS before them, are sometimes a bit hazy about American state university systems. They most probably counted votes for all five campuses of the university system last year but only the flagship campus at Amherst this year. The decline in the reputation score produced a smaller fall in the overall score. If there is another explanation, I would be happy to hear it.
Update on My Job Application to Thomson Reuters

Looking at the Times Higher Education Reputation Rankings for 2012, I have just noticed that number 31 has gone missing, leaving 49 universities in the rankings.

Maybe not very important, but if you go around telling everybody how sophisticated and robust you are, a little bit more is expected.
My Job Application to Thomson Reuters

Looking at the 2011 Times Higher Education Reputation Ranking, prepared by Thomson Reuters, purveyors of robust and dynamic datasets, I noticed that there are only 48 universities listed (there are supposed to be 50) because numbers 25 and 34 have been omitted.



Friday, October 05, 2012

Observation on the THE Ranking

There will be some comments on the latest Times Higher Education World University Rankings over the next few days.

For the moment, I would just like to point to one noticeable feature of the rankings. The scores have improved across the board.

In 2011 the top university (Caltech) had an overall score of 94.8. This year it was 95.5.

In 2011 the 50th ranked university had a score of 64.9. This year it was 69.4.

In 2011 the 100th ranked university had a score of 53.7. This year it was 57.5 for the universities jointly ranked 99th.

In 2011 the 150th ranked university had a score of 46.7. This year it was 51.6.

In 2011 the 200th ranked university had a score of 41.4. This year it was 46.2.

The overall score is a combination of 13 different indicators, all of which are benchmarked against the highest scorer in each category, which receives a score of 100. Even if universities throughout the world were spending more money, improving staff-student ratios, producing more articles, generating more citations and so on, this would not in itself raise everybody's, or nearly everybody's, score.
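A minimal sketch of that kind of benchmarking (THE and TR's actual standardisation is more elaborate, but the point holds for any purely relative scaling) shows why an across-the-board improvement should leave the scores unchanged:

    def benchmark(raw_values):
        """Scale raw indicator values so that the top scorer gets 100."""
        top = max(raw_values)
        return [round(100 * v / top, 1) for v in raw_values]

    raw_2011 = [500, 400, 300, 200]
    raw_2012 = [v * 1.2 for v in raw_2011]   # every university improves by 20 percent

    print(benchmark(raw_2011))   # [100.0, 80.0, 60.0, 40.0]
    print(benchmark(raw_2012))   # identical: a uniform improvement changes nothing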

There are no methodological changes this year that might explain what happened.

Wednesday, October 03, 2012

Dumbing Down Watch

Is any comment needed?

Students starting university for the first time this autumn will be given a detailed breakdown of their academic achievements, exam results, extra-curricular activities and work placements, it was revealed.

More than half of universities in Britain will issue the new "Higher Education Achievement Report", with plans for others to adopt it in the future.

University leaders said the document would initially list students’ overarching degree classification.

But Prof Sir Robert Burgess, vice-chancellor of Leicester University and chairman of a working group set up to drive the reforms, said it was hoped that first, second and third-class degrees would eventually be phased out altogether.
University Migration Alert

At the time of posting, the new THE rankings listed Stellenbosch University in South Africa as one of the top European universities.

Remind me to apply for a job with Thomson Reuters sometime.

THE Rankings Out

The Times Higher Education World University Rankings 2012 are out. The top ten are:

1.  Caltech  (same as last year)
2.  Oxford  (up 2 places)
3.  Stanford (down 1)
4.  Harvard  (down 2)
5.  MIT (up 2)
6.  Princeton  (down 1)
7.  Cambridge (down 1)
8.  Imperial College London (same)
9.  Berkeley (up 1)

At the top, the most important change is that Oxford has moved up two places to replace Harvard in the number two spot.

Monday, October 01, 2012

Well,  they would, wouldn't they?

Times Higher Education has published the results of a survey by IDP, a student recruitment agency:

The international student recruitment agency IDP asked globally mobile students which of the university ranking systems they were aware of. The Times Higher Education World University Rankings attracted more responses than any other ranking - some 67 per cent. This was some way ahead of any others. Rankings produced by the careers information company Quacquarelli Symonds (QS) garnered 50 per cent of responses, and the Shanghai Academic Rankings of World Universities (ARWU) received 15.8 per cent. Asked which of the global rankings they had used when choosing which institution to study at, 49 per cent of students named the THE World University Rankings, compared to 37 per cent who named QS and 6.7 per cent who named the ARWU and the Webometrics ranking published by the Spanish Cybermetrics Lab, a research group of the Consejo Superior de Investigaciones Científicas (CSIC).


At the top of the page is a banner about IDP being proudly associated with the THE rankings. Also, IDP, which is in the student recruitment trade, is a direct competitor of QS.

The data could be interpreted differently. More respondents were aware of the THE rankings and had not used them than knew of the QS rankings and had not used them.

Saturday, September 29, 2012

Forgive me for being pedantic...

My respect for American conservatism took a deep plunge when I read this in an otherwise enjoyable review by Matthew Walther of Kingsley Amis's Lucky Jim:
Its eponymous hero, Jim Dixon, is a junior lecturer in history at an undistinguished Welsh college. Dixon’s pleasures are simple: he smokes a carefully allotted number of cigarettes each day and drinks a rather less measured amount of beer most nights at pubs. His single goal is to coast successfully through his two-year probation period and become a permanent faculty member in the history department.


It is well known, or ought to be, that the institution in the novel was based on University College, Leicester, which is a long way from Wales. The bit about the "Honours class over the road", a reference to the Welford Road municipal cemetery, is a dead giveaway.

Walther can be forgiven, though, since he reminded me of this description of Lucky Jim's history article:

“It was a perfect title, in that it crystallized the article’s niggling mindlessness, its funereal parade of yawn-enforcing facts, the pseudo-light it threw upon non-problems. Dixon had read, or begun to read, dozens like it, but his own seemed worse than most in its air of being convinced of its own usefulness and significance.”

Dumbing Down Watch

The New York Fire Department has announced the results of a new entrance exam. The passmark of 70 was reached by 95.72% of applicants. Previous tests had been thrown out because insufficient numbers of African-Americans and Hispanics were able to pass.

The new exam appears to be extremely easy and seems to assume that firefighting is a job that requires minimal intelligence. Effectively, the new policy for the New York Fire Department is to select at random from those able to get themselves to a testing center and answer questions that should pose no challenge to the average junior high school student.

The change was in response to the directives of Judge Nicholas Garaufis, a graduate of Columbia Law School, which would seem to be as lacking in diversity as the NYFD. One suspects that the judge's disdain for the skills and knowledge, not to mention physical courage, of the firefighters is rooted in blatant class prejudice.

When is someone going to file a disparate impact suit against the top law schools?