Friday, May 11, 2007

More about Student-Faculty Ratios

I have just discovered a very good site by Ben Wilbrink, Prestatie-indicatoren (indicator systems). He starts off with "Een fantastisch document voor de kick-off" ("a fantastic document for the kick-off"), referring to a monograph by Sharon L. Nichols and David C. Berliner (2005), The Inevitable Corruption of Indicators and Educators Through High-Stakes Testing. Education Policy Studies Laboratory, Arizona State University (PDF, 180 pp.).

The summary of this study reports that:

"This research provides lengthy proof of a principle of social science known as Campbell's law: "The more any quantitative social indicator is used for social decisionmaking, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." "

This insight might well be applied to current university ranking systems. We have seen, for example, some US universities making it optional for applicants to submit their SAT results. It is predictable that good scores will be submitted to admissions officers, but not bad ones. Universities will then find that the average scores of their applicants will rise and therefore so will their scores on rankings that include SAT data.

I would like to propose a new law, an inversion of Gresham's. Good scores drive out bad.

Wilbrink has some good comments on the THES-QS rankings but I would like to focus on what he says about the student-faculty ratio.

"The faculty/student score (20%): The scores in this rubric are remarkable, to say the least. I do not think the student/staff ratio is less reliable than the other indicators, yet the relation to the world rank score seems to be nil. The first place is for (13) Duke, the second for (4=) Yale, the third for (67) Eindhoven University of Technology. Watch who have not made it here in the top twenty: Cambridge is 27th, Oxford 31st, Harvard 37th, Stanford 119, Berkeley 158. This is one more illustration that universities fiercely competing for prestige (see Brewer et al.) tend to let their students pay at least part of the bill.
"We measure teaching by the classic criterion of staff-to-student ratio." Now this is asking for trouble, as Ince is well aware of. Who is a student, who is a teacher? In the medieval universities these were activities, not persons. Is it much different nowadays? How much? ...


Every administration will creatively fill out the THES/QS forms asking them for the figures on students and teachers, this much is absolutely certain. If only because they will be convinced other administrations will do so. Ince does not mention any counter-measure, hopefully the THES/QS people have a secret plan to detect fraudulent data."

It is possible to test whether Wilbrink's remarks are applicable to the student-faculty scores in the 2006 THES-QS rankings. THES have published a table of student-faculty ratios at British universities from the University and College Union, derived from data from the Higher Education Statistics Agency (HESA). These figures include further education students and exclude research-only staff. They can be compared with the data in the THES-QS rankings.


In 2006 QS reported that the top scorer for student-faculty ratio was Duke. Looking at QS's website we find that this represents a ratio of 3.48 students per faculty member. Cross-checking shows that QS used the data on their site to construct the scores in the 2006 rankings. Thus, the site reports that Harvard had 3,997 faculty and 24,648 students, a ratio of 6.17 students per faculty member; ICL 3,090 faculty and 12,185 students, a ratio of 3.94; Peking 5,381 faculty and 26,972 students, a ratio of 5.01; Cambridge 3,886 faculty and 21,290 students, a ratio of 5.48. These ratios yielded scores of 56, 88, 69 and 64 on the student-faculty component of the 2006 rankings.
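The arithmetic above can be reproduced directly. A quick sketch (the student and faculty counts are those quoted from the QS site, not independently verified):

```python
# Figures as reported on the QS website: (students, faculty).
qs_data = {
    "Harvard": (24648, 3997),
    "ICL": (12185, 3090),
    "Peking": (26972, 5381),
    "Cambridge": (21290, 3886),
}

for name, (students, faculty) in qs_data.items():
    # Student-faculty ratio: students divided by faculty members.
    ratio = students / faculty
    print(f"{name}: {ratio:.2f} students per faculty member")
```

Running this reproduces the ratios cited above (6.17, 3.94, 5.01 and 5.48), confirming that the published scores were built from the data on QS's own site.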


Now we can compare the QS data with those from HESA for the period 2005-06. Presumably, this represents the period covered in the rankings. If Wilbrink is correct, then we would expect the ratios in the rankings to be much lower and more favourable than those provided by HESA.

That in fact is the case. Seven British universities have lower ratios in the HESA statistics. These are Cranfield, Lancaster, Warwick, Belfast, Swansea, Strathclyde and Goldsmiths College. In 35 cases the THES-QS ratio was much better. The most noticeable differences were ICL, 3.94 and 9.9; Cambridge, 5.48 and 12.30; Oxford, 5.70 and 11.9; LSE, 6.57 and 13; Swansea, 8.49 and 15.1; and Edinburgh, 8.29 and 14.
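The pattern for the six universities with the largest gaps can be tabulated from the pairs quoted above. A minimal sketch (these are only the six named cases; the overall averages reported later in this post cover the full set of universities compared, whose individual figures are not reproduced here):

```python
# (QS ratio, HESA ratio) for the six universities with the
# most noticeable differences, as listed in the text.
pairs = {
    "ICL": (3.94, 9.9),
    "Cambridge": (5.48, 12.30),
    "Oxford": (5.70, 11.9),
    "LSE": (6.57, 13.0),
    "Swansea": (8.49, 15.1),
    "Edinburgh": (8.29, 14.0),
}

for name, (qs, hesa) in pairs.items():
    # A positive difference means the QS figure is the more flattering one.
    print(f"{name}: QS {qs} vs HESA {hesa} (HESA higher by {hesa - qs:.2f})")
```

In every one of these cases the QS ratio is the lower, more favourable figure, which is exactly the direction Wilbrink's argument predicts.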

It is possible that the differences are the result of different but consistent and principled conventions. Thus one set of data might specifically include people excluded by the other. The HESA data, for example, include further education students, presumably meaning non-degree students, but the THES-QS data apparently do not. This would not, however, seem to account for much of the difference between the two sets of data for places like Oxford and LSE.

Both HESA and QS claim not to count staff engaged only in research.

It is possible, then, that the data provided by universities to QS have been massaged a bit to give favourable scores. I suspect that this does not amount to deliberate lying. It is probably more a case of choosing the most beneficial option whenever there is any ambiguity.

Overall, the average ratio derived from the QS data is much lower: 11.37, compared to 14.63 for HESA.
