Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Tuesday, August 21, 2007
According to the Princeton Review they are:
1. Whitman College (Walla Walla, Washington)
2. Brown
3. Clemson University (South Carolina)
4. Princeton
5. Stanford
6. Tulsa
7. The College of New Jersey
8. Bowdoin (Maine)
9. Yale
10. Thomas Aquinas College (California)
I am sure that PR's methods could be argued about, but it is striking that four of the universities on this list are also in the top ten of the most selective universities.
The Princeton Review has come out with a variety of rankings. One of them ranks US colleges by how hard they are to get into, which many would think is a good proxy for general quality. Here are the top ten.
1. Harvard
2. Princeton
3. MIT
4. Yale
5. Stanford
6. Brown
7. Columbia
8. Pennsylvania
9. Washington University in St Louis
10. Caltech
Monday, August 20, 2007
The US News and World Report rankings of American colleges are out. A full report is here.
Briefly this is how they are produced:
"To rank colleges and universities, U.S. News first assigns schools to a group of their peers, based on the basic categories developed by the Carnegie Foundation for the Advancement of Teaching in 2006. Those in the National Universities group are the 262 American universities (164 public and 98 private) that offer a wide range of undergraduate majors as well as master's and doctoral degrees; many strongly emphasize research.
In each category, data on up to 15 indicators of academic quality are gathered from each school and tabulated. Schools are ranked within categories by their total weighted score."
The top ten are:
1. Princeton
2. Harvard
3. Yale
4. Stanford
5. Pennsylvania
6. Caltech
7. MIT
8. Duke
9. Columbia
9. Chicago
Sunday, August 19, 2007
The full rankings, released earlier this year, are here.
Below are the top twenty.
1. Harvard
2. Berkeley
3. Princeton
4. Cambridge
5. Caltech
6. MIT
7. Stanford
8. Tokyo
9. UCLA
10. Oxford
11. Cornell
12. Columbia
13. Chicago
14. Colorado -- Boulder
15. ETH Zurich
16. Kyoto
17. Wisconsin -- Madison
18. UC Santa Barbara
19. UC San Diego
20. Illinois -- Urbana-Champaign
Friday, August 17, 2007
Shanghai Jiao Tong University has also released, earlier this year, research rankings in broad subject areas. First, here are the top twenty for the social sciences. The full ranking is here. These rankings should be taken with a fair bit of salt since they are heavily biased towards economics and business studies. Credit is given for Nobel prizes, but in the social sciences these are awarded only for economics, while psychology and psychiatry are excluded. There are two categories of highly cited researchers, Social Sciences -- general and Economics/Business.
It would, I think, be better to refer to this as an economics and business ranking.
1. Harvard
2. Chicago
3. Stanford
4. Columbia
5. Berkeley
6. MIT
7. Princeton
8. Pennsylvania
9. Yale
10. Michigan -- Ann Arbor
11. New York Univ
12. Minnesota -- Twin Cities
13. Carnegie-Mellon
14. UCLA
15. Northwestern
16. Cambridge
17. Duke
18. Maryland -- College Park
19. Texas -- Austin
19. Wisconsin -- Madison
Note that there is only one non-US university in the top twenty, Cambridge at number 16. The best Asian university is the Hebrew University in Jerusalem at 40. The best Australian university is ANU at 77-104. There is no mainland Chinese university in the top 100. This is dramatically different from the picture shown by the THES peer review in 2006.
Thursday, August 16, 2007
The London Times (not the Times Higher Education Supplement) has just produced its Good University Guide for British Universities. It is based on eight criteria: student satisfaction, research quality, student staff ratio, services and facilities spend, entry standards, completion, good honours and graduate prospects.
Here are the top ten.
1. Oxford
2. Cambridge
3. Imperial College London
4. London School of Economics
5. St Andrews
6. University College London
7. Warwick
8. Bristol
9. Durham
10. King's College London
I shall try and comment later but for the moment it's worth pointing out that there are some spectacular rises by King's College, Exeter and City University. This immediately raises questions about the stability of the methods and the validity of the data.
Wednesday, August 15, 2007
Shanghai Jiao Tong University has just released its 2007 Academic Ranking of World Universities. The top 100 can be found here. The top 500 are here.
I shall add a few comments in a day or so. Meanwhile here are the top 20.
1 Harvard Univ
2 Stanford Univ
3 Univ California - Berkeley
4 Univ Cambridge
5 Massachusetts Inst Tech (MIT)
6 California Inst Tech
7 Columbia Univ
8 Princeton Univ
9 Univ Chicago
10 Univ Oxford
11 Yale Univ
12 Cornell Univ
13 Univ California - Los Angeles
14 Univ California - San Diego
15 Univ Pennsylvania
16 Univ Washington - Seattle
17 Univ Wisconsin - Madison
18 Univ California - San Francisco
19 Johns Hopkins Univ
20 Tokyo Univ
Tuesday, August 14, 2007
There is a web site, College Ranking Service, that is produced by "a non-profit organization dedicated to providing rankings of colleges in a manner suitable for students, university leaders, and tuition paying parents."
The home page says:
"We take our rankings seriously. Each college is painstakingly analyzed, as if under a microscope, for its flaws and degree of polish. The rankings found on www.rankyourcollege.com represent thousands of hours of research, and are updated annually or at the discretion of the Director.
The Board of the College Ranking Service, composed of Nobel Prize Winners and Captains of Industry, remains anonymous to ensure the integrity of the rankings.
The Director is also anonymous, however, rest assured that he is a prominent member of the academy and a professor of the highest regard at one of the most prestigious universities in the world."
There is also a disclaimer: "There is no such thing as the "College Ranking Service." But the hyperbole and baloney contained in this web site are not that different from equally silly, but maddeningly serious college ranking publications and web sites offered by the media.
It is a sham and a scam to try to rank the quality of universities like sports franchises. Media publications that do this should be laughed out of existence. They simply measure wealth ("The Classic Method" on this web site), which is something that is at best obtusely related to quality.
Regardless of their lack of validity, media-based college rankings are having a negative influence on higher education. Tuition paying parents and their children are swayed by the false prestige these rankings imply. The push to get into a "top ten" school has created added pressure on students to stuff their high school years with lofty sounding, but often meaningless accomplishments. It has been partly responsible for the rise of a college application industry that provides services (like SAT prep classes and college application consulting) of dubious worth."
CRS also describes its methodology:
"In the course of developing our methodology, we found that our rankings had unique properties. First, we noted a phenomenon well known in particle physics, but unheard of heretofore in ranking systems: a college, like a subatomic particle, could be two or more places at once. In other words, individual colleges could have multiple rankings!
Second, we noted the well known and by now passe Heisenberg phenomenon in our rankings: our rankings were influenced by our evaluation. The more we looked at them in great detail, the more variability we saw. Finally, we found a butterfly effect: small perturbations in our extensive data base resulted in significant changes in our rankings.
The combined influences of these phenomena we term the Kanoeddel effect, in honor of the Director's mother's Passover matzah balls, which even though they were made at the same time, had a wide range in density (from that of cotton balls to that of granite pebbles). In Yiddish, the word for "matzah ball" is "kanoeddel."
Because of the Kanoeddel effect, we note that our rankings are not static. Hitting the refresh button on your web browser will cause the Mighty Max to recompute the rankings, resulting in a slightly different order."
In the Guide to the World's Top Universities, we find a perfect example of the first property, with the Technical University of Munich occupying two different places in the rankings and also, in one case, being located in Dortmund. The butterfly effect is illustrated perfectly by the data entry or transfer error that led to an incorrect figure for student-faculty ratio for every university in the Guide.
Sunday, August 12, 2007
This blog was originally supposed to be about university rankings in general but is in danger of turning into a catalogue of THES and QS errors. I shall try to move on to more varied topics in the future, but here is an elaboration of an earlier post on the student-faculty ratios in the book Guide to the World's Top Universities, published by QS Quacquarelli Symonds Ltd. (QS) and written by Nunzio Quacquarelli, a director of that company, and John O'Leary and Martin Ince, former and current THES editors.
In the first column below I have arranged the universities in the THES-QS rankings in alphabetical order. The middle column consists of the student faculty ratio included in the 2006 World University Rankings published in the THES (top 200) and on the topuniversities website. The figure is derived from converting the scores out of 100 in the rankings to ratios and cross-checking with QS's figures for faculty and students at topuniversities. The right-hand column contains the student faculty ratio in the Guide's directory and in the profile of the top 100 universities.
For any given university the two figures are completely different. But take the Guide figure and go down three rows in the rankings column, and you will find an identical or almost identical number.
Thus Aachen has a ratio of 14.7 in the Guide. Go down three rows and you will find that, in the rankings and at topuniversities, Aberystwyth has a ratio of 14.7.
So, presumably, what happened is that someone pasting data between files slipped three rows. This simple mistake has resulted in over 500 errors in the Guide. A short sketch after the table shows how such an offset can be detected.
University | ranking | Guide |
Aachen | 12.4 | 14.7 |
Aarhus | 10.5 | 24.1 |
Aberdeen | 10.5 | 14.8 |
Aberystwyth | 14.7 | 15.1 |
Adelaide | 24.9 | 18.1 |
Adolfo Ibanez | 14.8 | 20.5 |
Airlangga | 15.1 | 12.3 |
Alabama | 18.1 | 10.4 |
Alberta | 20.5 | 13.1 |
Amsterdam | 12.4 | 16.1 |
Antwerp | 10.4 | 24.3 |
Aoyama Gakuin | 13.1 | 13.6 |
Aristotelian | 16.1 | 15.4 |
Arizona State | 24.3 | 14.3 |
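For readers who want to check this kind of pasting error for themselves, here is a minimal sketch in Python. It looks for the constant row offset that best aligns two columns of ratios; the function name, the tolerance and the hard-coding of the table values are my own choices for illustration, not anything published by QS.

```python
# Sketch: find the row offset that best aligns two columns of ratios.
# The two lists reproduce the "ranking" and "Guide" columns of the table above.
def best_offset(rankings, guide, max_shift=5, tol=0.2):
    """Return the shift k for which guide[i] most often matches rankings[i + k]."""
    best_k, best_hits = 0, -1
    for k in range(max_shift + 1):
        hits = sum(
            abs(guide[i] - rankings[i + k]) <= tol
            for i in range(len(guide) - k)
        )
        if hits > best_hits:
            best_k, best_hits = k, hits
    return best_k, best_hits

rankings = [12.4, 10.5, 10.5, 14.7, 24.9, 14.8, 15.1, 18.1, 20.5, 12.4, 10.4, 13.1, 16.1, 24.3]
guide    = [14.7, 24.1, 14.8, 15.1, 18.1, 20.5, 12.3, 10.4, 13.1, 16.1, 24.3, 13.6, 15.4, 14.3]

shift, hits = best_offset(rankings, guide)
print(f"Best shift: {shift} rows ({hits} near-matches)")
```

On these fourteen rows the best alignment is a shift of three rows, which is exactly the slip described above.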
I have written to Ben Sowter, director of research at QS, and to the authors at Martin Ince's address. So far the only response is an automated message indicating that the latter is away.
Friday, August 10, 2007
Another site worth reading on law school rankings is MoneyLaw. There is an interesting post on the US News and World Report's correction policy. It seems that if USNWR makes a mistake the rankings are corrected, but if the school is responsible, the underlying data are corrected while the rankings are not.
MoneyLaw comments:
"I wouldn't call that grossly unfair. Academic research would perhaps demand more attention to setting the record straight, granted. But USN&WR's rankings hardly constitute academic research."
I would add that this policy seems dramatically better than that of most other rankings.
State universities and colleges have come up with a plan to publish essential information on their web sites.
The National Association of State Universities and Land-Grant Colleges and the American Association of State Colleges and Universities are, according to the Wall Street Journal,
" designing a template for college Web sites that, for those that opt to use it, shows in standard format: (1) details about admission rates, costs and graduation rates to make comparisons simple; (2) results from surveys of students designed to measure satisfaction and engagement, and (3) results of tests given to a representative sample of students to gauge not how smart they were when they arrived, but how much they learned about writing, analysis and problem-solving between freshman and senior years.
The last one is the biggie. Participating schools will use one of three tests to gauge the performance of students with similar entering SAT scores at tasks that any college grad ought to be able to handle. One test, the Collegiate Learning Assessment, gives students some circumstance and a variety of information about it, and asks for short essays (no multiple choice) on solving a problem or analyzing a scenario. Under the state schools' proposed grading scale, 70% of the schools will report that students did "as expected," given their SATs. An additional 15% will report they did better or much better than expected, and 15% will report students did worse or much worse than expected."
This seems like a good idea. It could even go some way towards making commercial rankings redundant.
Saturday, August 04, 2007
I have just noticed, via Wikipedia, a world university ranking by the Research Centre for China Science Evaluation at Wuhan University that seems to be based on current research productivity. Since the website does not have an English version, it is not possible to say very much about it at the moment. According to Wikipedia it "is based on Essential Science Indicators (ESI), which provides data of journal article publication counts and citation frequencies in over 11,000 journals around the world in 22 research fields". If anyone can look at the website and tell me what the research fields are and what period is covered I'd be grateful.
I noticed some errors in the rankings. Laval University is listed as being in France rather than Canada, York as being in the US rather than Canada, and Bern as being in Sweden rather than Switzerland. Ljubljana is listed as being in "Jugoslavia", which is a few years out of date.
If the rankers have assessed a broad range of subjects, if they have looked at a recent period, and if their methods are valid, they may have produced a ranking of research achievement that is more current than the Shanghai index, which includes decades-old Nobel and Fields prize winners. The ranking gives low positions to Cambridge and Oxford, confirming suspicions that their high rating by THES-QS is unjustified. Princeton and Yale (strengths in the humanities?) have relatively low places. So do Chicago (strength in the social sciences?) and Caltech.
There are some more surprises. Texas is at number 2. Maybe this represents a genuine advance or perhaps the presence of a large medical school has something to do with it. "Univ Washington" is at number 3. This most probably means Washington University in St Louis. Before getting too excited about this result I would like to be sure that there has been no confusion with The University of Washington, Washington State University and George Washington University.
Here is the top 20 along with the scores. Harvard at the top gets 100. The full ranking can be found here.
1. Harvard 100
2. Texas 87.49
3. "Univ Washington" 72.39
4. Stanford 71.91
5. Johns Hopkins 71.45
6. UC Berkeley 70.76
7. UCLA 70.38
8. Michigan 69.11
9. MIT 68.62
10. Toronto 66.90
11. Wisconsin 64.83
12. Columbia 64.71
13. UC San Diego 64.54
14. Pennsylvania 64.42
15. Cambridge 62.93
16. Minnesota 62.80
17. Yale 62.20
18. Cornell 62.19
19. UC San Francisco 61.52
20. Duke 60.60
Friday, August 03, 2007
QS Quacquarelli Symonds seems to have bad luck with international students in Malaysia. Their topuniversities site has a piece on "Study Abroad in Malaysia" which states
"On the back of its enduring economic and industrial boom, Malaysia is trying hard to position itself as the Asian destination of choice for international students seeking to study abroad, and with some success. Currently there are around 50,000 students from 100 countries in Malaysian tertiary education, forming 20-30% of the student body - and the country wants to promote a multicultural image that reflects the country itself. "
The total number of registered students in tertiary education in Malaysia is in fact about 732,000. International students therefore make up well under ten per cent of tertiary students.
According to the Kuala Lumpur New Straits Times, the vice-chancellor of Universiti Malaya has said that the university's international ranking "should not be a target. Instead, UM's main aim was to produce quality work, she added".
Wednesday, August 01, 2007
Another blog worth looking at is Agoraphilia which, among other things, has posts on the US News and World Report law school rankings. One of them deals with the University of Florida's receiving an erroneous and over-favourable rating from USN&WR, apparently because it reported LSAT scores and GPAs only for the fall 2005 intake and did not include those for the spring intake.
What most impresses me about this is that the Dean and Associate Dean of Florida's law school and Robert J. Morse, Director of Data Research at USN&WR, have replied promptly and at length to questions about ranking methods.
Friday, July 27, 2007
This is a very good page produced by Boston College with links to sites and articles on university rankings. For a start take a look at 'Playing with Numbers' by Nicholas Thompson.
Wednesday, July 25, 2007
One of the more interesting elements in the Guide to the World's Top Universities by John O'Leary, Nunzio Quacquarelli and Martin Ince, published by QS Quacquarelli Symonds at the end of 2006, is the information about student faculty ratio provided in the directory of over 500 universities and the profiles of the world's top 100 universities.
These are, even at first sight, not plausible: 590.30 students per faculty at Pretoria, 43.30 at Colorado State University, 18.10 at Harvard, 3.50 at Dublin Institute of Technology.
Scepticism is increased when the Guide's data for student-faculty ratio is correlated with that derived from the scores out of 100 for this measure in the 2006 rankings and cross-checked with the data on individual universities on QS's topuniversities site. The correlation for 517 universities is negligible at .057 and not statistically significant (2-tailed p = .195).
Comparing the two sets of data on student-faculty ratio for the British universities in the rankings shows that the problem is with the information in the Guide, not that in the rankings. The rankings data correlate highly with the ratios provided by the Higher Education Statistics Agency (HESA: see earlier post) (.712, sig = .000) and with those taken from the web site williseemytutor (.812, sig = .000). There is no significant correlation between the data in the Guide and the HESA data (.133, sig = .389) or the williseemytutor data (.179, sig = .250).
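For anyone who wants to reproduce this kind of check without SPSS, here is a minimal sketch using Python and SciPy. The numbers are placeholders standing in for the two columns of ratios; the real 517-university data set is not reproduced here.

```python
# Sketch: Pearson correlation between two sets of student-faculty ratios
# for the same universities, analogous to the SPSS check described above.
from scipy.stats import pearsonr

# Placeholder ratios only, one pair per university.
rankings_ratios = [3.5, 3.9, 5.0, 5.5, 6.2, 9.9, 12.4, 14.7, 18.1, 24.9]
guide_ratios    = [14.7, 24.1, 3.8, 18.1, 4.1, 15.4, 43.3, 5.9, 13.6, 16.1]

r, p = pearsonr(rankings_ratios, guide_ratios)
print(f"r = {r:.3f}, two-tailed p = {p:.3f}")
```

A negligible r with a large p, as reported above for the real data (.057, p = .195), is what you would expect if the two columns were not measuring the same thing.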
So, where did the Guide's student faculty data come from?
First, here are the most favourable student faculty ratios calculated from the scores in the rankings (they can be cross-checked at the topuniversities site) and rounded to the first decimal place.
Duke 3.5
Yale 3.7
Eindhoven University of Technology 3.8
Rochester 3.8
London Imperial College 3.9
Paris Sciences Po 4.0
Tsing Hua 4.1
Emory 4.1
Geneva 4.3
Vanderbilt 4.3
Now, here are the most favourable ratios given in the Guide.
Dublin Institute of Technology 3.5
Wollongong 3.7
Ecole Polytechnique 3.8
Rio de Janeiro 3.8
Ljubljana 3.9
Oulu 4.0
Trento 4.1
Edinburgh 4.1
Fudan 4.3
Utrecht 4.3
Notice that the ratio of 3.5 is assigned to Duke University in the rankings and to Dublin IT in the Guide. If the universities are arranged alphabetically these two would be in adjacent rows. Likewise, the other scores listed above are assigned to universities that would be next to each other, or nearly so, in an alphabetical listing.
Next are the least favourable ratios derived from the rankings data.
Pune 580
Delhi 316
Tor Vergata 53
Bologna 51
Cairo 49
Concordia 42
Now the ratios in the Guide.
Pretoria 590
De La Salle 319
RMIT 53
Bilkent 51
Bucharest 49
Colorado 42
Notice again that, except for Tor Vergata and RMIT, the ratio in the two data sets is shared by universities that are close or next to each other alphabetically.
The conclusion is unavoidable. When the Guide was being prepared somebody created a new file and made a mistake, going down one or two or a few rows and inserting the rankings data in the wrong rows. So, every university in the Guide's directory acquired a new and erroneous student faculty ratio.
Since this piece of information is the one most likely to interest future undergraduate students, this is not a trivial error.
Is this error any less serious than QS's getting the two North Carolina business schools mixed up?
Sunday, July 22, 2007
The third Asia Pacific Professional Leaders in Education conference was held in Hong Kong recently. The conference was organised by QS Quacquarelli Symonds (QS), consultants for the THES rankings, and a substantial part of the proceedings seems to have been concerned with international university rankings. There is a report by Karen Chapman in the Kuala Lumpur Star. There are hints that the methods of the THES-QS rankings may be revised and improved this year. The QS head of research, Ben Sowter, has referred to a revision of the questionnaires and to an audit and validation of information. Perhaps the deficiencies of previous rankings will be corrected.
There is also a reference to a presentation by John O'Leary, former editor of the THES, who is reported as saying that
“Peer review is the centrepiece of the rankings as that is the way academic value is measured.”
The second part of this sentence is correct but conventional peer review in scientific and academic research is totally different from the survey that is the centrepiece of the THES rankings.
Peer review means that research is scrutinised by researchers who have been recognised as authorities in a narrowly defined research field. However, inclusion in the THES-QS survey of academic opinion has so far required no more expertise than the ability to sign on to the mailing list of World Scientific, a Singapore-based academic publisher. Those who are surveyed by QS are, in effect, allowed to give their opinions about subjects of which they may know absolutely nothing. Possibly, the reference to redesigning the survey means that it will become more like a genuine peer review.
It cannot be stressed too strongly or repeated too often that, on the basis of the information released so far by QS, the THES-QS survey is not a peer review.
There is an excellent post by Eric Beerkens at Beerkens' Blog reporting on an article by Wendy Nelson Espeland and Michael Sauder in the American Journal of Sociology. The article, 'Rankings and reactivity: How public measures recreate social worlds', describes how the law school rankings of the US News and World Report affect the behaviour of students, university administrators and others.
Beerkens argues that international university rankings also have several consequences
1. Rankings affect external audiences. Trivial differences between institutions may lead to large differences in the quality and quantity of applicants.
2. Rankings may amplify differences in reputations. If researchers or administrators are asked to assess universities of which they have no knowledge they are likely to rely on the results of previous rankings.
3. Resources such as grants may be distributed on the basis of rankings.
4. Universities will give up objectives that are not measured in rankings and try to become more like those that achieve high scores.
Saturday, July 21, 2007
There is a Spanish-language blog on university rankings and other academic matters by Alejandro Pisanty that is well worth looking at.
Tuesday, July 17, 2007
Matt Rayner has posted an interesting question on the QS topuniversities site. He has noticed that in the Guide to the World's Top Universities, published by QS, Cambridge is supposed to have a student faculty ratio of 18.9 and a score of 64 for this part of the 2006 World Rankings while Glasgow, with an almost identical ratio of 18.8, gets a score of 35.
As already noted, this anomaly is not confined to Cambridge and Glasgow. The student faculty ratios provided in the data about individual universities in the Guide are completely different from those given in the rankings.
There is in fact no significant relationship, as a quick correlation done by SPSS will show, between the two sets of data.
It will be even more interesting to see when and how QS reply to Matt's question.
Sunday, May 13, 2007
The Varsitarian, the newspaper of the University of Santo Tomas (UST) in the Philippines, has reported complaints about the data used for the university in the THES-QS rankings.
The complaint appears to be valid although the newspaper makes several errors about the rankings.
Alberto Laurito, assistant to the rector for planning and development at UST, has claimed that QS got the number of students wrong. The consultants reported 11,764 students whereas the correct number is 32,971. The university's figure seems to be correct: an article by Guzman and Torres in the Asia Pacific Education Review reports 32,322 students in 2002-3. However, QS's deflating of the student numbers, if it were the only mistake, would work to UST's advantage in a number of ways. Firstly, fewer students mean fewer students per faculty, if the number of faculty is constant, and hence a lower, and therefore better, student-faculty ratio and a higher score on that component of the rankings. Secondly, if the number of international students is the same, fewer students overall means a higher percentage of international students.
However, this is not QS's only error. They report that UST has 524 faculty, making a student-faculty ratio of 22.45. According to the article, in 2002-3 UST had 1,500 faculty. With 32,322 students, this would mean a student-faculty ratio of 21.55. QS has made two errors and they have pretty much cancelled each other out.
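The arithmetic is simple enough to check in a couple of lines; the figures are the ones quoted above.

```python
# Student-faculty ratios implied by the two sets of figures quoted above.
print(round(11764 / 524, 2))    # QS's figures: 22.45 students per faculty
print(round(32322 / 1500, 2))   # Guzman and Torres's figures: 21.55 students per faculty
```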
Laurito then complained:
“that THES-QS research on peer review was also irregular, considering that it was worth 40 per cent of the entire survey when only 1,600 universities turned in their responses or about one per cent of the 190,000 needed”
The low response rate does of course invalidate the “peer review” but it was individual academics who were surveyed, not universities.
Laurito then points out that UST got a zero for research citations:
“The score is obtained through a research collation database maintained by Thomson, an information-based solutions provider, called Essential Science Indicators (ESI). For every citation given to a university researcher or professor, the university would acquire a point.”
The procedure is not like this at all. Laurito continues:
“Based also on the survey, UST received the lowest grade on international outlook (meaning UST has no international students or faculty) when the University actually has seven international professors and 300 international students.”
Again, not quite. UST gets a score from QS of 3.6 for international faculty and 0.6 for international students, representing 12 international faculty members and 47 international students.
Laurito has got the wrong end of several sticks but the basic point still remains that QS got the data for students, faculty and international students wrong.
The newspaper then quotes Laurito as saying:
“We were told by the research representative (of THES-QS) that the data they used were personally given to them by a University personnel, but they were not able to present who or from what office it came from”
If Laurito is reported correctly and if this is what the “research representative” told him, there is something very strange here.
If QS have a documentary record of an e-mail or a phone call to UST how could the record not indicate the person or office involved?
If they do not, how can QS be sure that the information came from an official university source or that there was any contact at all?
Friday, May 11, 2007
I have just discovered a very good site by Ben Wilbrink, Prestatie-indicatoren (performance indicators). He starts off with "Een fantastisch document voor de kick-off" ("a fantastic document for the kick-off"), referring to a monograph by Sharon L. Nichols and David C. Berliner (2005), The Inevitable Corruption of Indicators and Educators Through High-Stakes Testing, Education Policy Studies Laboratory, Arizona State University (PDF, 180 pp.).
The summary of this study reports that:
"This research provides lengthy proof of a principle of social science known as Campbell's law: "The more any quantitative social indicator is used for social decisionmaking, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." "
This insight might well be applied to current university ranking systems. We have seen, for example, some US universities making it optional for applicants to submit their SAT results. It is predictable that good scores will be submitted to admissions officers, but not bad ones. Universities will then find that the average scores of their applicants will rise and therefore so will their scores on rankings that include SAT data.
I would like to propose a new law, an inversion of Gresham's. Good scores drive out bad.
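As an illustration of the mechanism rather than a claim about any particular university, here is a small sketch of how reporting only the better scores inflates the average that feeds into a ranking. The scores and the submission threshold are invented for the example.

```python
# Sketch: optional score submission as a selection effect (invented numbers).
applicant_sats = [1050, 1120, 1190, 1240, 1300, 1360, 1410, 1470, 1530]

# Suppose applicants submit only scores they consider good, say 1300 and above.
submitted = [s for s in applicant_sats if s >= 1300]

true_mean = sum(applicant_sats) / len(applicant_sats)
reported_mean = sum(submitted) / len(submitted)

print(f"Mean of all applicants' scores: {true_mean:.0f}")     # about 1297
print(f"Mean of submitted scores only:  {reported_mean:.0f}")  # 1414
# The reported mean rises even though no applicant has become any smarter.
```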
Wilbrink has some good comments on the THES-QS rankings but I would like to focus on what he says about the student-faculty ratio.
"The faculty/student score (20%)The scores in this rubric are remarkable, to say the least. I do not think the student/staff ratio is less reliable than the other indicators, yet the relation to the world rank score seems to be nil. The first place is for (13) Duke, the second for (4=) Yale, the third for (67) Eindhoven University of Technology. Watch who have not made it here in the top twenty: Cambridge is 27th, Oxford 31st, Harvard 37th, Stanford 119, Berkeley 158. This is one more illustration that universities fiercely competing for prestige (see Brewer et al.) tend to let their students pay at least part of the bill.
"We measure teaching by the classic criterion of staff-to-student ratio." Now this is asking for trouble, as Ince is well aware of. Who is a student, who is a teacher? In the medieval universities these were activities, not persons. Is it much different nowadays? How much? ...
Every administration will creatively fill out the THES/QS forms asking them for the figures on students and teachers, this much is absolutely certain. If only because they will be convinced other administrations will do so. Ince does not mention any counter-measure, hopefully the THES/QS people have a secret plan to detect fraudulent data."
It is possible to test whether Wilbrink's remarks are applicable to the student-faculty scores in the 2006 THES-QS rankings. THES have published a table of student-faculty ratios at British universities from the University and College Union that is derived from data from the Higher Education Statistics Agency (HESA). These include further education students and exclude research-only staff. These results can be compared with the data in the THES-QS rankings.
In 2006 QS reported that the top scorer for student-faculty ratio was Duke. Looking at QS's website we find that this represents a ratio of 3.48 students per faculty. Cross-checking shows that QS used the data on their site to construct the scores in the 2006 rankings. Thus, the site reports that Harvard had 3,997 faculty and 24,648 students, a ratio of 6.17 students per faculty, ICL 3,090 faculty and 12,185 students, a ratio of 3.94, Peking 5,381 faculty and 26,972 students, a ratio of 5.01, and Cambridge 3,886 faculty and 21,290 students, a ratio of 5.48. These ratios yielded scores of 56, 88, 69 and 64 on the student-faculty component of the 2006 rankings.
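QS has not, as far as I know, published the formula it uses to turn ratios into scores, but a simple scaling of the form score = 100 x (best ratio / university's ratio) reproduces the four published scores from the figures just quoted. The sketch below is therefore an inference from the published numbers, not a statement of QS's actual method.

```python
# Inferred scaling: score = 100 * (best ratio / university's ratio), rounded.
# Faculty and student numbers are those quoted above from topuniversities.
best_ratio = 3.48  # Duke, the top scorer on this measure in 2006

universities = {
    "Harvard":   24648 / 3997,   # about 6.17 students per faculty
    "ICL":       12185 / 3090,   # about 3.94
    "Peking":    26972 / 5381,   # about 5.01
    "Cambridge": 21290 / 3886,   # about 5.48
}

for name, ratio in universities.items():
    score = round(100 * best_ratio / ratio)
    print(f"{name}: ratio {ratio:.2f} -> inferred score {score}")
# Prints 56, 88, 69 and 64, matching the published 2006 scores.
```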
Now we can compare the QS data with those from HESA for the period 2005-06. Presumably, this represents the period covered in the rankings. If Wilbrink is correct, then we would expect the ratios in the rankings to be much lower and more favourable than those provided by HESA.
That in fact is the case. Seven British universities have lower ratios in the HESA statistics. These are Cranfield, Lancaster, Warwick, Belfast, Swansea, Strathclyde and Goldsmith's College. In 35 cases the THES-QS ratio was much better. The most noticeable differences were ICL, 3.95 and 9.9, Cambridge, 5.48 and 12.30, Oxford, 5.70 and 11.9, LSE, 6.57 and 13, Swansea, 8.49 and 15.1, and Edinburgh, 8.29 and 14.
It is possible that the differences are the result of different but consistent and principled conventions. Thus one set of data might specifically include people excluded by the other. The HESA data, for example, includes further education students, presumably meaning non-degree students, but the THES-QS data apparently does not. This would not, however, seem to make much of a difference between the two sets of data for places like Oxford and LSE.
Both HESA and QS claim not to count staff engaged only in research.
It is possible, then, that the data provided by universities to QS has been massaged a bit to give favourable scores. I suspect that this does not amount to deliberate lying. It is probably more a case of choosing the most beneficial option whenever there is any ambiguity.
Overall, the ratios provided by QS are much lower: 11.37 compared with 14.63.
Wednesday, May 09, 2007
A blog by MBA student Shawn Snyder remarks:
"So CNN recently published its "Top 50 Business Schools to get Hired in 2007" and I was glad to see Maryland's Smith school listed, but I was confused to see the George Washington University right above Smith. After all, by their own ranking the GW grads had one less job offer and starting salary almost $10,000 lower. Umm, maybe recruiters think that George Washington is a better deal because they can snag grads for cheap, but from a business student perspective (the people reading the rankings) wouldn't Smith be the better choice? And why wouldn't it rank higher? Business rankings are crap in my opinion....and yet I still read all of them as if it matters. Maybe I have the problem."
And there is a comment by Dave:
" I too noticed some discrepancies in the ratings on CNN.com. Specifically, UNC Kenan-Flagler is not in the top 50! I dug a bit deeper and looked at the data from topmba.com - the website where the list came from - and found some startling errors. UNC KFBS average salary is listed as $76k when the actual average is $89k! I wrote a letter to TopMBA.com and found that not only did they screw up the salaries, but they did not distinguish between University of North Carolina and North Carolina State U in the recruiter rankings! It's really incredible the garbage that these people are allowed to print. What ever happened to 'trust but verify'?"
There is an interesting post at Accepted Admissions Almanac about the QS-Kenan-Flagler affair. The writer remarks:
"It's safe to say that this mess is a nightmare for QS, CNNMoney, and Fortune. Providing and publishing rankings so sloppily slapped together is beneath criticism for an industry that even when the data is accurate has more than its share of critics and is deserving of skepticism. The CNNMoney/QS fiasco is about as bad as it gets for rankings."
I am afraid that it gets very much worse for QS. They have made errors as bad as this in the compilation of the THES-QS World University rankings -- a response rate of less than 1 per cent to an online survey, counting ethnic minority students in Malaysia as international students, renaming Peking University Beijing University, boosting Duke University's score for student-faculty ratio by counting undergraduates as faculty and so on.
But nobody seems to mind very much when it comes to the THES rankings. Is it something about the brand name?
The post concludes with a very appropriate comment:
"When accurate, unlike the removed QS/CNNMoney version, they are sources of information. Sometimes valuable information. Databanks. I use the data, and so should you. If you want to know the average salaries of graduates from particular schools or their average entering test scores, the rankings will have that information compiled in one place. Like a library, they are sources of information. They are not an excuse for decision-making; using them mindlessly could be the equivalent of a lobotomy. And an expensive one at that."