Sunday, February 03, 2013

Article in the Chronicle of Higher Education

The Chronicle of Higher Education has an article by Debra Houry on university rankings. She makes some pertinent comments, although her recommendations at the end are either impractical or likely to make things worse.

She points out that several American colleges have been found to have submitted inflated data to the US News and World Report in order to boost their standing in the rankings and notes that "there is an inherent conflict of interest in asking those who are most invested in the rankings to self-report data."

This is true, and even more so of international rankings. One reason the Shanghai rankings are more credible than those produced by QS and Times Higher Education is that they rely entirely on reasonably accessible public data. Using information provided by institutions is a risky business that, among other things, could lead to universities refusing to cooperate, something that ended the promising Asiaweek rankings in 2001.

She then argues that measures of student quality such as high school class rank and SAT scores should be abandoned because they "discourage colleges from selecting a diverse student body. An institution that begins accepting more African-American students or students from low-income families—two groups that have among the lowest SAT scores, according to the College Board—might see its ranking drop because the average SAT score of its freshmen has gone down."

True, but on the other hand, an institution that puts more emphasis on standardized test scores might rise in the rankings and might also increase its intake of Asian students and so become more diverse. Are Asian students less diverse than African-Americans? They are certainly likely to be far more varied in terms of mother tongue, political opinions or religious affiliation.

She also points out that it is now a bit late to count printed books in the law school rankings and wonders about using RateMyProfessors to assess teaching quality.

Then there is a familiar criticism of the QS Stars rating system.

Professor Houry also makes the common complaint that the rankings do not capture unique features of institutions such as "a program called Living-Learning Communities, which gives upperclassmen at Emory incentives to live on campus and participate in residential learning. But you would never learn about that from the ranking formulas."

The problem is that a lot of people are interested in how smart graduates are, how much research, if any, faculty are doing, or how much money is flowing in. But seriously, what is so interesting about upperclassmen living on campus? In any case, if this is unique, would you expect any measure to "capture" it?

Finally, she concludes: "ranking organizations should develop more-meaningful measures around diversity of students, job placement, acceptance into professional schools, faculty membership in national academies, and student engagement. Instead of being assigned a numerical rank, institutions should be grouped by tiers and categories of programs. The last thing students want is to be seen as a number. Colleges shouldn't want that, either."

But all of these raise more problems than they solve. If we really want diversity of students, shouldn't we be counting conservative students or evangelical Christians? Job placement raises the possibility, already found in law school rankings, of counting graduates employed in phony temporary jobs or glorified slave labor (internships). Membership in national academies? A bit elitist, perhaps?
