Discussion and analysis of international university rankings and topics related to the quality of higher education. Anyone wishing to contact Richard Holmes without worrying about ending up in comments can go to rjholmes2000@yahoo.com
Friday, November 09, 2007
This year QS has introduced several "methodological enhancements" into the THES-QS rankings. One is the use of Z-scores. Basically, this means that the mean for all universities is deducted from the raw score and the result is then divided by the standard deviation. In effect, the score represents not an absolute number but how far each university is from the average. One consequence of using Z-scores is that differences at the very top are reduced.
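The arithmetic can be sketched in a few lines of Python. The raw scores below are invented, and the final rescaling (50 plus ten times the z-score, capped at 0 and 100) is my own assumption for illustration; QS has not published its exact rescaling formula:

```python
import statistics

# Hypothetical raw survey scores -- not QS's actual data.
raw = [900, 870, 850, 400, 350, 300, 250, 200, 150, 120, 100, 80]

mean = statistics.mean(raw)
sd = statistics.pstdev(raw)  # population standard deviation

def to_score(x):
    z = (x - mean) / sd                          # distance from the mean in SD units
    return max(0, min(100, round(50 + 10 * z)))  # assumed rescaling, capped at 0-100

print([to_score(x) for x in raw])
```

Note how the three invented front-runners, fifty raw points apart, end up within a point of each other once everything is expressed as distance from the average: that is the narrowing at the top described above.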
In principle this is not a bad idea and other rankers do it but it has produced some odd results in this case.
In the survey of academic opinion, for example, the following universities all get a maximum score of 100: Harvard, Cambridge, Oxford, Yale, Caltech, MIT, Columbia, McGill, Australian National University, Stanford, Cornell, Berkeley, Melbourne, British Columbia, National University of Singapore, Peking and Toronto.
Do THES and QS really expect us to believe that Melbourne, British Columbia and Peking are just as good at research as Harvard? Especially since Harvard is far ahead on every one of the subject rankings?
THES has a headline about fine tuning revealing distinctions. Really?
The National University of Singapore is among the best in Asia and has always been ranked highly by THES-QS. This year, however, it has fallen from 19th to 33rd.
THES suggest that Malaysian and Singaporean universities have suffered because the "peer review" no longer allows respondents to pick their own institutions. This would not, however, seem to apply to NUS, which got the maximum score of 100 (along with Oxford, Harvard and Caltech) on the survey -- and I wonder whether it applies to Malaysian universities either. What happened was that NUS scored very poorly on the faculty-student section.
It got 100 for the "peer review", international faculty and international students, 93 for the recruiter review, 84 for citations per faculty and 34 for faculty-student ratio.
NUS has a self-reported ratio of about 17 students per faculty member. Peking reports about 10, but QS gives it a score of 98, almost the same as Caltech at 100 with its well-known ratio of about three.
There is something about this that needs some explanation.
The Kuala Lumpur New Straits Times has a report on the performance of Malaysian universities in the latest THES-QS rankings.
Malaysian universities are on a slippery slope. None of them made it to the top 200 placing in the Times Higher Education Supplement (THES)-Quacquarelli Symonds (QS) World University Rankings this year.
This poor showing comes on the back of a recent government survey of local public universities which found that none deserved a place in the outstanding category. Last year, Universiti Kebangsaan Malaysia and Universiti Malaya made it to the top 200 in the THES-QS rankings. UKM ranked 185th, up from the 289th spot in 2005, beating well-known universities like University of Minnesota in the United States and University of Reading, Britain. This year, it has fallen to 309th. Similarly UM, which was ranked among the world's top 100 universities three years ago, was in 169th position in 2005 and tied with University of Reading in the 192nd spot last year. It has dropped to the 246th spot. Universiti Sains Malaysia has fallen to 307 from 277 last year.
UKM and UM vice-chancellors attributed their fall to the new methodology used to calculate rankings this year. "Even the National University of Singapore (NUS) has dropped to the 33rd spot when it was always within the top 10," Universiti Malaya vice-chancellor Datuk Rafiah Salim said. "The way I look at it, smaller countries like Malaysia are bound to lose out as THES has introduced new criteria which is peer review and has changed the citation and list of publications." Rafiah said with more than 3,000 universities getting ranked by THES annually, Malaysian universities had to improve if they wanted to remain on top of the list. "If we want to compete with some of the top universities in the world, first we have to be in the same league. Right now, we are not. One way to overcome that is through adequate funding." She said NUS received annual funding of S$1.2 billion (RM2.7 billion) a year compared to UM's RM400 million annual budget.
There is no mention of Universiti Putra Malaysia or Universiti Teknologi Malaysia both of which were on the list of universities sent out by QS this year.
It is impossible to be sure until the full data is released but I suspect that the "decline" of Malaysian universities has nothing to do with any real change but with QS preventing survey respondents from voting for their own institutions this year.
Thursday, November 08, 2007
The THES-QS Top 200 universities list is available here.
There is a press release here.
Some Highlights
The two Malaysian universities, UM and USM, are out of the top 200. Most probably this is because of new procedures for the "peer review".
Berkeley, National University of Singapore, Peking (well done QS for getting the name right), and LSE have fallen dramatically.
The IITs and IIMs are out of the top 200, maybe out of the rankings altogether.
Two Brazilian universities have risen dramatically.
Changes such as these could not possibly result from real changes but are most likely the consequence of "methodological enhancements", errors or the correction of errors.
The Sydney Morning Herald reports on the THES-QS rankings. Macquarie has fallen from 82 to 168.
AUSTRALIAN universities have slipped in one of the most respected world rankings. The most dramatic drop was suffered by Macquarie University - jeopardising a $100,000 bonus for its vice-chancellor, Steven Schwartz.
His bonus depends on improving Macquarie University's ranking in the Australian sector, but it has plummeted from 82 to 168 in the Times Higher Education Supplement's annual survey, released in Britain overnight. It has dropped from seventh to ninth among local universities.
Professor Schwartz, an American academic who had previously been head of Brunel University in Britain, replaced Di Yerbury in a messy coup last year. There was a bitter dispute between Macquarie and Professor Yerbury over ownership of paintings and other material she had accumulated over 19 years.
We will have to wait until the online results are available but the fall of Macquarie and perhaps of Steven Schwartz may have something to do with a reported change in the percentage of international faculty or possibly the introduction of z scores in the rankings. Last year Macquarie held top place for international faculty but QS did not reveal how they got the information and Macquarie did not confirm what the correct number was. Given the money at stake, it would not be totally astonishing if the 2006 figure for international faculty had been massaged a little bit.
The Economic Times of India has a report on the THES-QS rankings:
Three Latin American universities make it to the world’s top 200, while even Africa makes a debut, with Cape Town ranked at 200. IIMs and IITs are not universities.
According to Martin Ince, who compiles and edits the survey, “The 2007 THES-QS World University Rankings are the most rigorous and complete so far. They show that the US and the UK model of independent universities supported with significant state funding produces great results.”
UK universities are closing in on their American counterparts, with University College, London, making it into the top 10 for the first time, and Imperial College, London, moving up from 9th to 5th this year. Chicago too, is a first time entrant into the top 10.
While the top 10 list is still restricted to US and UK universities, with the addition of the Netherlands 12 countries are featured in the top 50, compared to 11 in 2006.
Universities of Tokyo, Hong Kong, Kyoto, National University of Singapore, Peking, Chinese University of Hong Kong, Tsinghua and Osaka lead Asian higher education, all featuring in the top 50. The top 100 sees the number of Asian universities increase to 13 (12 in 2006), while the number of European Universities has dropped to 35 (41 in 2006).
North America strengthened its tally to 43 Universities (37 in 2006). McGill tops in Canada, and a number of universities from New Zealand and Australia have also joined the top 50 list.
The increasing trend in internationalisation is also borne out by the fact that 143 of the top 200 universities reported an increase in their percentage of international faculty to total faculty, while 137 of the top 200 universities reported an increase in their percentage of international students to total students.
The last comment is rather interesting. Is this genuine internationalisation or simply a manipulation of data provided by universities?
Education Guardian
BBC
Chronicle of Higher Education
Beerkens Blog has the top 100. Here are the top 20.
Rank | Name | Country |
1 | HARVARD University | United States |
2= | University of CAMBRIDGE | United Kingdom |
2= | YALE University | United States |
2= | University of OXFORD | United Kingdom |
5 | Imperial College LONDON | United Kingdom |
6 | PRINCETON University | United States |
7= | CALIFORNIA Institute of Technology (Caltech) | United States |
7= | University of CHICAGO | United States |
9 | UCL (University College LONDON) | United Kingdom |
10 | MASSACHUSETTS Institute of Technology (MIT) | United States |
11 | COLUMBIA University | United States |
12 | MCGILL University | Canada |
13 | DUKE University | United States |
14 | University of PENNSYLVANIA | United States |
15 | JOHNS HOPKINS University | United States |
16 | AUSTRALIAN National University | Australia |
17 | University of TOKYO | Japan |
18 | University of HONG KONG | Hong Kong |
19 | STANFORD University | United States |
20= | CORNELL University | United States |
20= | CARNEGIE MELLON University | United States |
The Canadian newspaper The Gazette has a report on the performance of Canadian universities in the 2007 rankings. McGill has risen from 21st place to 12th. The Gazette reports that:
McGill University is the cream of Canadian schools, the best public university in North America and ranks 12th among the world's top 200 universities, according to a prestigious global survey.
Released today, the Times Higher Education Supplement has McGill bounding up from last year's 21st place showing based on such factors as emphasis on science programs, the strong contingent of international students and faculty, student/faculty ratios, and publications by faculty and graduate researchers. Harvard placed first on the list, while Oxford, Cambridge and Yale tied for second spot.
The report puts McGill ahead of such research-intensive powerhouses as Duke, Johns Hopkins, Stanford and Cornell. Findings are based on a combination of facts and opinions, with more than 5,000 academics around the world invited to rate a given institution. A key change in methodology this year made it impossible for professors to rate their own school.
"I'm really thrilled," said McGill principal Heather Munroe-Blum, who sees the results as a vindication of McGill's disciplined approach to academic planning, targeted hiring of 800 new professors and efforts to enhance both research and the undergraduate experience.
There are even more spectacular rises by Montreal (181 to 108), Queens (176 to 88), Waterloo (204 to 112), Western Ontario (215 to 126) and Simon Fraser (266 to 166).
Changes like this are most unlikely to be produced by real improvements by the universities concerned. Either the methodological changes introduced by QS are having a greater impact than expected or some serious errors have occurred.
Also, if the statement about 5,000 academics originated from QS, does this mean that this year QS sent out only 5,000 e-mails instead of nearly 200,000, as they claimed they did last year, or that they received 5,000 forms, or that they counted 5,000?
Three British universities in the THES-QS Top Five
Bits and pieces about the THES-QS 2007 rankings are appearing in online newspapers. Here is a quotation from the London Times:
Cambridge and Oxford are the second best universities in the world according to the latest rankings, and British universities are closing the gap with those in the United States.
Oxford and Cambridge share the number two spot with Yale, with Harvard ranked number one in the latest league tables from The Times Higher Education Supplement.
The findings will bring cheer to the higher education sector in Britain at a time of growing concern among vice-chancellors and employers that British universities will lose students to better-financed institutions abroad and that business will then follow them with jobs and investment.
The commercial implications of the rankings are made very clear:
Professor Rick Trainor, the president of Universities UK, representing vice-chancellors, added: “Our competitors are increasingly marketing themselves more aggressively so it is vital that the UK remains among the foremost destinations for international students and staff.”
Harvard, whose endowment of $35 billion (£16.6 billion) is roughly equal to the combined annual funding for all English universities, tops the table, but its lead over its closest rivals has fallen from 3.2 to 2.4 points. Nunzio Quacquarelli, the managing director of QS, the careers and education group that compiled the rankings, said: “In an environment of increasing student mobility, the UK is putting itself forward as a top choice for students worldwide. They are taking a closer look at the quality of faculty, international diversity and, of course, to the education they will receive.”
A detailed analysis will have to wait until the component scores are available but the continued closing of the gap between Oxbridge and Harvard and the rise of University College London from 25th to 9th and Imperial College London from 8th to 5th are rather suspicious.
The top ten are
1 Harvard University US
2 University of Cambridge UK
2 University of Oxford UK
2 Yale University US
5 Imperial College, London UK
6 Princeton University US
7 California Institute of Technology (Caltech) US
7 University of Chicago US
9 University College London (UCL) UK
10 Massachusetts Institute of Technology (MIT) US
Tuesday, November 06, 2007
Changes in the THES-QS Rankings
QS Quacquarelli Symonds have announced that the 2007 World Universities Rankings will be published on November 9th and that there will be a number of changes.
Firstly, QS will not allow respondents to their academic survey to vote for their own institutions. I am not sure how this could be enforced if QS send out over a quarter of a million e-mails to World Scientific subscribers, but it would in principle appear to be a sensible change. However, this in itself will not affect other, more serious problems with the “peer review”, such as its marked regional bias and a suspiciously and unbelievably low response rate.
The second change is that QS will now use Scopus rather than the Web of Science for data about citations. This will favour universities outside the English-speaking world, although I suspect that the difference will not be very great.
Thirdly, QS will give Full Time Equivalent (FTE) counts for numbers of students and faculty rather than headcounts. This would eliminate some of the worst errors in previous rankings such as those relating to Ecole Polytechnique and Ecole Normale Superieure. However, there could be problems if the procedure is not applied consistently. QS say that where an FTE number has not been supplied, one will be calculated from the relationship between headcount and FTE numbers at other institutions in the same country or region.
This raises questions about the country or region that is used for benchmarking and whether QS will indicate how the ratio between headcount and FTE is derived. Also, it seems rather dangerous to allow universities to submit their own data.
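As a sketch of the imputation QS describes, with all figures invented: a missing FTE count is estimated from the aggregate FTE-to-headcount ratio at peer institutions in the same country or region.

```python
# Hypothetical (headcount, FTE) pairs for peer institutions in one country.
peers = {
    "University A": (30000, 24000),
    "University B": (20000, 17000),
    "University C": (15000, 12750),
}

# Aggregate FTE-to-headcount ratio across the benchmark group.
ratio = sum(fte for _, fte in peers.values()) / sum(hc for hc, _ in peers.values())

# A university that reported only a headcount gets an estimated FTE.
missing_headcount = 25000
estimated_fte = round(missing_headcount * ratio)
print(round(ratio, 3), estimated_fte)
```

The estimate obviously depends on which institutions define the benchmark ratio, which is exactly why the choice of country or region matters.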
Finally, QS will calculate z scores for all components. Basically a z score is calculated by subtracting the population mean from the raw score and then dividing by the standard deviation. The effect of this will be to flatten the curves for each component and to ensure that similar changes will have similar effects on each section of the ranking.
QS are to be commended for introducing these changes, provided that they are implemented transparently and competently. There is no point in calculating z-scores if you enter the data for every university in the wrong row, as someone did for the student-faculty ratio in QS’s book Guide to the World’s Top Universities, and so create hundreds of errors.
I have a further reservation. QS seems to have done nothing about using a database for the “peer review” that is provided by an Asian-based and Asian-orientated publishing company, about explaining how they could get an unprecedentedly low response rate without filtering the data in some way, or about giving a large weighting to such an obviously biased and suspect set of data. It will be interesting to recalculate the scores to see what they look like without the peer review.
The combined effect of these changes is likely to be that some universities outside the top 100 may go down several places, even though nothing has really changed, leading to anguished debates about declining standards.
Wednesday, October 17, 2007
This is from Plagiarism in Colleges in USA, a page by Donald B. Sandler. The similarities between the two cases of "citation infraction" by doctoral candidates, M. Jamil Hanifi and Glenn (or Glendal) Poshard, are striking but the difference in the fate of the two dissertation submitters is glaring. I wonder if the faculty panel who produced the report on Glenn Poshard's dissertation could read this without blushing.
Hanifi
M. Jamil Hanifi plagiarized material from a book and an essay in his doctoral dissertation at Southern Illinois University in 1969. Hanifi later published "his" dissertation in a book, of which "three of the nine substantive chapters ... were plagiarized." The author of the essay discovered the plagiarism in 1976, the author of the book discovered the plagiarism in 1977. Southern Illinois University learned of the plagiarism in 1981. At that time, Hanifi was a professor of anthropology at Northern Illinois University, who was being considered as a new chairman of the department. Tersely summarizing a long recital in the court's opinion, Hanifi was given the choice of resigning or being fired, Hanifi chose to resign. Hanifi then filed litigation that alleged that his resignation had been coerced. Hanifi v. Board of Regents, 1994 WL 871887 (Ill.Ct.Cl. 1994).
The court said the following regarding plagiarism:
- John LaTourette, the current president of Northern Illinois University, who was the vice-president and provost of that university in 1981, acknowledged that plagiarism is "probably the most serious charge against a faculty member that one could imagine." The president of the university in 1981, William Monat, similarly acknowledged that plagiarism is "probably one of the greatest offenses that can occur in the academic community." Mr. Hanifi, himself, has written to others and admitted during his testimony that plagiarism involves "a complete lapse in professional judgment, moral sense and respect for academic ethics," "a most serious violation with dishonor, shame and guilt," "unethical conduct," "dishonorable and unprofessional conduct," and "dishonorable act and reprehensible and condemnable," "a violation of basic scholarly activity and serious misconduct," "a despicable act and a serious mistake." Mr. Hanifi acknowledged that the plagiarism is not erasable.
- Id. at *2.
- The court concluded that Hanifi had failed to prove that his resignation had been coerced. Note the court's final sentence about the bad character of a plagiarist:
- From a thorough review of the evidence in this case, we find that the Claimant has failed to prove that his resignation was involuntary, coerced or the product of duress. The testimony of Claimant and Respondent's witnesses is at loggerheads. To believe Claimant's testimony as to coercion, duress and involuntariness, we would have to disbelieve numerous other witnesses and find some grand conspiracy among the top officials at Northern Illinois University to injure Claimant, which would include mass perjury. Claimant has presented no compelling evidence to corroborate his testimony and therefore in light of the credible testimony disputing his claim, we find his testimony incredible. Frankly, we do not believe this admitted plagiarizer when he claims his will was overcome and he did not know what he was doing.
- Id. at *6.
"a complete lapse in professional judgment, moral sense and respect for academic ethics," "a most serious violation with dishonor, shame and guilt," "unethical conduct," "dishonorable and unprofessional conduct," and "dishonorable act and reprehensible and condemnable," "a violation of basic scholarly activity and serious misconduct," "a despicable act and a serious mistake."
Compare these words with President Poshard's.
Sunday, October 14, 2007
Inside Higher Ed reports that Southern Illinois University (SIU) president Glenn Poshard has been cleared of deliberate plagiarism. It seems that a panel appointed by the university found that the president did not intentionally plagiarise his doctoral dissertation awarded by SIU in 1984.
The seven-person committee of senior faculty, whose report on Poshard was unveiled Thursday, recommended that the university take no action against the president. It calls for the dissertation to be withdrawn from the university library and be replaced with a corrected copy prepared by Poshard, and for the president to write a statement that expands on why his errors occurred and speaks more broadly about the “culture of integrity” at the university. (The panel noted that allegations of Poshard plagiarizing his master’s thesis follow the same pattern as those in the dissertation, so it chose to focus on the latter.) Still, the report is far from a ringing endorsement of Poshard’s past work. The committee notes that there are many cases in the dissertation in which “the words of others are present in a continuous flow with Student Poshard’s own words, so that readers cannot distinguish between those sources.” Given the modern-day definition of plagiarism at Southern Illinois’s Graduate School as “representing the work of another as one’s own work,” the report says the allegations against Poshard would be “sufficiently supported,” were it not for the historical context of the case.
But that context is vital, the report notes. At the time when Poshard was a graduate student at Southern Illinois, the graduate school’s student handbook lacked a definition of plagiarism. The panel found that Poshard had used an “informal style” of citing sources that was commonly embraced by other graduate students. Faculty members advising him on the dissertation approved the style then, and no one asked him to clarify anything at the time of submission, the report finds.
The mistakes were most likely products of “carelessness” and fall into the category of “inadvertent plagiarism,” according to the committee.
Poshard has denied the allegations of intentional wrongdoing but left open the possibility that he made accidental errors. During a meeting with the faculty panel, the president said his dissertation committee had no qualms with his style of citation, which often included scant inclusion of quotation marks.
“Even though the Review Committee says these mistakes were unintentional and inadvertent, they are my mistakes. And I take full responsibility for them,” he said at a news conference Thursday. “They are not the fault of my committee, my department, my college or my university.” He added that “whether one wants to argue whether what I did constitutes plagiarism depends on how you feel about me.”
The main points that emerge from the panel's findings would seem to be that:
- a large part of Poshard's literature review consisted of bits of other people's writings some of which were cited but not quoted and some of which were neither quoted nor cited
- but, while corrections should be made, Poshard is excused because it was common for graduate students at SIU to use an "informal style" of citation (does that mean anything other than copying?) with the approval of their instructors and supervisors, because the graduate student handbook did not have a definition of plagiarism and because his dissertation committee did not find anything wrong with his citation style.
I hope that what follows is an accurate summary of the committee's findings. Poshard committed technical errors but was not seriously at fault because graduate students at SIU routinely constructed dissertations by copying pieces from other works.
I hope that somebody will examine a sample of dissertations from the Southern Illinois Education Faculty and find out whether the faculty has in fact been condoning mass plagiarism over the years.
It also might be an idea to take a close look at the dissertations and see whether SIU also had an "informal style of data collection".
This report is a remarkable document. If taken seriously it ought to make graduates of SIU over the last two and a half decades unemployable and call its accreditation into question.
Sunday, September 30, 2007
The Leiden Rankings
The Centre for Science and Technology Studies at the University of Leiden has developed a ranking system based on bibliometrics, that is, on measures of the quantity and impact of academic publications. The Centre has presented results for the 100 largest European universities (including Israel).
The rankings use a multiple-indicator approach, using several methods of presenting the same data. The Centre explains:
The latter point is very important: on the basis of the same data and the same technical and methodological starting points, different types of impact-indicators can be constructed, for instance one focusing entirely on impact, and another in which also scale (size of the institution) is taken in to account. Rankings based on these different indicators are not the same, although they originate from exactly the same data. Moreover, rankings are strongly influenced by the size-threshold used to define the set of universities for which the ranking is calculated. This is very clearly illustrated by comparison of the top-100 versus the top-50 European universities.
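The point about different indicators built from the same data is easy to demonstrate. The figures below are invented, but they show how a size-dependent measure (total citations) and a size-independent one (citations per paper) can reverse an ordering:

```python
# Toy publication data -- all figures invented.
unis = {
    "Big U":   {"papers": 10000, "citations": 60000},
    "Mid U":   {"papers": 4000,  "citations": 30000},
    "Small U": {"papers": 1000,  "citations": 9000},
}

# Size-dependent ranking: sheer volume of citations.
by_total = sorted(unis, key=lambda u: unis[u]["citations"], reverse=True)

# Size-independent ranking: citations per paper.
by_cpp = sorted(unis, key=lambda u: unis[u]["citations"] / unis[u]["papers"],
                reverse=True)

print(by_total)  # Big U leads on volume
print(by_cpp)    # Small U leads at 9.0 citations per paper
```

The same three institutions and the same data produce two opposite orderings, which is precisely why the Centre warns against reading any single ranking as "the" result.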
The Centre provides four different Rankings. The top ten in the yellow list, number of publications, is as follows.
1. Cambridge
2. University College London
3. Oxford
4. Imperial College London
5. Munich
6. Pierre and Marie Curie (Paris 6)
7. Milan
8. Utrecht
9. Catholic University Leuven (Flemish)
10. Manchester
The green ranking is by "size-independent, field-normalized average impact".
1. Oxford
2. Cambridge
3. ETH Zurich
4. Lausanne
5. Geneva
6. Edinburgh
7. University College London
8. Erasmus Rotterdam
9. Imperial College London
10. Basel
The blue ranking is by citations per publication
1. Lausanne
2. Geneva
3. Oxford
4. Basel
5. Karolinska Institute Sweden
6. Erasmus Rotterdam
7. University College London
8. Cambridge
9. Edinburgh
10. Zurich
The orange ranking is by "size-independent, field-normalized average impact".
1. Cambridge
2. Oxford
3. University College London
4. Imperial college London
5. Utrecht
6. Helsinki
7. ETH Zurich
8. Catholic University of Leuven (Flemish)
9. Munich
10. Karolinska
See the Centre's web site for an explanation of the technical terminology.
What is interesting about these rankings is that continental European universities are closing in on Oxford, Cambridge and the London colleges and, when it comes to citations of academic publications, are beginning to move ahead.
As a measure of current research performance these rankings are superior to Shanghai Jiao Tong which counts long-dead Nobel prize winners.
Thursday, September 27, 2007
The Journal of Blacks in Higher Education has published a ranking of 26 competitive American universities according to the numbers and success of African American students and faculty.
Unlike other ranking efforts in the field of higher education, our statistics, without exception, are highly quantitative. This is in sharp contrast to highly impressionistic institutional rankings such as those compiled by U.S. News & World Report in which 25 percent or more of the total ranking score is derived from subjective surveys of university reputations as determined by presidents, provosts, and deans of admissions at other institutions.
All JBHE data is obtained from our own in-house surveys of the colleges and universities as well as from government sources. Each year JBHE surveys university and college admissions offices to obtain data on applicants, acceptances, first-year enrollments, and black student yield. On a regular basis we also survey deans of faculty at these universities for statistical information on their numbers and percentages of black faculty and black tenured faculty.
While one may disagree over what measuring factors are most important, the data we collect is broad-based, solid, quantifiable, and not subject to dispute.
5. Vanderbilt
6. North Carolina at Chapel Hill
7. Georgetown
8. Harvard
9. Virginia
10. Brown
Tuesday, September 25, 2007
Imperial College London has announced that the Principal of its Faculty of Medicine has been appointed to a joint position as Principal of the Faculty of Medicine and Chief Executive of the Imperial College Healthcare NHS Trust.
The report notes that Imperial College "is ranked fourth in the world for biomedical research in the Times Higher Education Supplement world university rankings".
It does not note that, according to the THES's own data on citations per paper for medical research, the college is ranked 28th. Nor that according to the Shanghai Jiao Tong rankings it is ranked 25th for clinical medicine and pharmacy.
Nor that the survey that produced the fourth-place ranking used a database provided by a publishing company with close ties to Imperial, and that the data on research was collected by a company headed by a former faculty member.
Monday, September 24, 2007
A short article in the Sydney Morning Herald has some useful comments about university rankings:
Shanghai's Jiao Tong University and Britain's The Times Higher Education Supplement produce the two most well-known global rankings. These two rankings produce different results, which is not surprising since they use different measures. The Shanghai rankings assess research performance whereas the Times supplement looks at employer opinions and the number of international students (among other measures).
There is no doubt that rankings affect the behaviour of potential applicants. When a famous US university fell two places in the rankings, it had a 5 per cent fall in student applications.
Rankings also affect the behaviour of institutions. For example, if universities are ranked according to the number of first-class honours they award, they may decide to give more firsts just to climb up the rankings.
Recognising the futility of summarising a complex institution such as a university with a single number, some rankings, such as that of Canada's Maclean's magazine, describe universities using a variety of criteria.
The Maclean's approach seems closer to reality. Universities differ and students are all individuals. Some students may prefer a university with good sporting facilities and extensive offerings in the fine arts; others may be looking for night classes and low fees.
In each case the question that students should be trying to answer is not which university is the greatest but which university is best for them.
Double Standards Watch
Several newspapers, including the Observer, have published reports of a sensational discovery by Bernard Lamb, Reader in Genetics at Imperial College London. He has found that British students are not very good at spelling and even make more mistakes than students from Singapore.
Some of the errors include 'sun' instead of 'son', 'sewn' instead of 'sown' and 'rouge' instead of 'rogue'.
Shocking, isn’t it? Especially the last one.
Incidentally, it would be unwise to assume nowadays that students from Singapore are not native speakers of English.
Wednesday, September 19, 2007
There is a report in USA Today about a survey of historical knowledge among American undergraduates. Apparently they don't know much when they arrive and don't know much more when they leave. What is also interesting is the ranking of the various institutions. Here is the percentage of correct answers among seniors.
Harvard 69.56
Yale 65.85
private secular non-Ivy universities 64.10
non-Ivy universities 60.1
Cornell 56.95
Protestant universities 56.60
state flagship universities 54.40
Catholic universities 48.30
state non-flagship universities 47.40
Eastern Conn State 46.49
St Thomas University 32.5
Tuesday, September 18, 2007
This is a new site. The idea is that students around the world evaluate their universities, which are then ranked. At the moment it seems largely a British affair but if it ever takes off it could become interesting. There are obvious issues about multiple voting and so on that will have to be dealt with.
Anyway here are the top five.
1. University of East Anglia
2. Korea Advanced Institute of Science and Technology
3. Leeds Metropolitan University
4. University of Kent
5. Oxford University
Monday, September 17, 2007
In earlier posts I referred to the apparent oddity of administrators, parents, teachers, and researchers campaigning against the use of standardized tests like the SAT, GRE or LSAT in selection for universities and employment. Such people almost certainly rose to high positions in academia and elsewhere largely or partly through their performance on such tests, which are essentially measures of mental ability. Why then should they deny to others, including perhaps their own children, the opportunities that they themselves have enjoyed?
Similarly, the campaign against selection for secondary education in Britain was led by academics and supported by broad sections of the educated middle class who had benefited disproportionately from meritocratic education. I can remember my surprise on learning that the first area in the United Kingdom to abolish selection and send all secondary school students, except those whose parents could afford private education, to comprehensive schools was not a socialist stronghold in South Wales or Tyneside but the solidly suburban middle class county of Hertfordshire. Today the one place in the United Kingdom where selective secondary education survives is the thoroughly proletarian province of Northern Ireland.
Looking around the net recently I found a remarkable and widespread dislike for the standardized testing of intelligence in education. Some objections are not entirely unjustified but others seem to be dramatically off the mark. There are, for example, many criticisms of the American SAT aptitude tests but the one that stands out is that they are socially and economically biased towards those who can afford expensive coaching programmes. This was a reason given by the head of Sarah Lawrence College in New York, whose tuition fees are close to those of the Ivy League.
But nobody, as far as I know, has seriously suggested that we admit everybody to exactly the university they want or give them exactly the job they want. Nor is anyone proposing to introduce admission or appointment by wholesale lottery. One way or another, in any conceivable world, there will be some sort of selection for secondary schooling and even more so for higher education. If it is not done on the basis of measurable cognitive skills then it will be done some other way. This might be through tests of achievement in academic subjects, personality traits, interviews, recommendations of secondary school teachers and counsellors, personal essays or tests of political loyalty.
The problem – and for some it might not really be a problem at all – is that any feasible alternative to the SAT is ultimately far more dependent on parental ability to pay, or perhaps somebody's ability to pay. So it would seem strange that the opponents of the SAT should in effect be advocating methods of selection that would work against the best interests of their own children, who would surely have inherited their parents' intelligence, and deny them the opportunities for success and prosperity that those parents enjoyed.
No doubt many academics are childless and many are unmarried. But surely there are enough who would have some concern for their children. The desire to provide for the future welfare of one's children is among the most fundamental impulses in human nature and history. Virtually every society goes through a stage of hereditary monarchy, and even republics like North Korea and the USA show a strong inclination to presidential filialism. That American and European academics and other beneficiaries of selection should be so lacking in such a basic and widespread human impulse seems very odd. Why should they be immune to something so constant and universal?
The question becomes even more pointed if we consider that well-educated liberal middle class parents in North America and Europe do in fact go to immense efforts to provide their children with the attributes that appear to be valued by prestigious schools and universities. Immense effort is expended to get children into good schools and universities, even good kindergartens, to transport them to resume-compliant volunteering and athletics, and to provide a variety of tuition and counselling. We even see the middle class Marxists (probably post- or neo- by now) of the Labour Party gritting their teeth and showing up for Sunday services to get their children into a church school.
Why, then, are so many in the middle classes so hostile to standardized tests that would allow their children, at little cost, to have access to valuable educational and occupational opportunities, while driving themselves to the edge of bankruptcy and exhaustion to provide their children with the skills and socialization needed to gain entrance to elite institutions and to succeed academically and socially?
There is, in fact, more to the story than this and that is something called regression to the mean.
Basically, this involves the simple and obvious principle that extremes are more unusual than mediocrity. There are fewer very tall people and very short people than people of medium height. There are fewer very intelligent and very unintelligent people than people of average intelligence, and so on for almost any human attribute you can think of.
Another simple and obvious principle is that if any trait is wholly or largely inherited genetically, then parents who have that trait to an extreme degree will have it more strongly than their offspring, who in turn will have it more strongly than the general population. To illustrate, very tall people will have children who are taller than average but not as tall as their parents. Harvard graduates will occasionally have children who get into Harvard, but more will be only bright enough for Duke, Cornell or a middling liberal arts college.
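The arithmetic behind regression to the mean is easy to demonstrate with a toy simulation. The sketch below assumes a standardized trait (mean 0, standard deviation 1) and a purely illustrative parent-child correlation of 0.6; the figure is not an empirical estimate of heritability, just a number chosen to show the effect.

```python
import random

def simulate_regression(correlation=0.6, n=100_000, seed=42):
    """Simulate parent and child trait scores on a standardized scale.

    Each child's score is a weighted mix of the parent's score and
    independent noise, so the parent-child correlation equals
    `correlation` while child scores keep a standard deviation of ~1.
    The correlation value is illustrative, not an empirical estimate.
    """
    rng = random.Random(seed)
    r = correlation
    noise_sd = (1 - r * r) ** 0.5  # keeps child variance at ~1
    pairs = []
    for _ in range(n):
        parent = rng.gauss(0, 1)
        child = r * parent + rng.gauss(0, noise_sd)
        pairs.append((parent, child))

    # Look only at exceptional parents: more than 2 sd above the mean
    # (roughly the top 2 per cent of the population).
    elite = [(p, c) for p, c in pairs if p > 2]
    mean_parent = sum(p for p, _ in elite) / len(elite)
    mean_child = sum(c for _, c in elite) / len(elite)
    return mean_parent, mean_child

mp, mc = simulate_regression()
print(f"elite parents' mean score:   {mp:.2f}")
print(f"their children's mean score: {mc:.2f}")
```

The children of the exceptional parents come out well above the population average but well below their parents, which is exactly the pattern described above: Harvard-calibre parents, Duke-calibre children.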
Or, to go off at a bit of a tangent, musical, literary and athletic ability are significantly inherited. But supreme achievers rarely pass all their abilities to their children. Zak Starkey, Ziggy Marley, Julian Lennon, Nancy Sinatra, Martin Amis, Auberon Waugh, Marvis Frazier, and all those Bachs are better than average but certainly very inferior to their fathers.
Backing up a little, let's go through these points. The SAT measures intelligence quite well. So do the GRE and the LSAT. Intelligence is the single most important factor in academic and career success. Many of the people who control American higher education today and hold dominant positions in the corporate and government hierarchies do so in large part because standardized tests could identify and measure their intelligence.
Intelligence is overwhelmingly hereditary, perhaps (excluding environmental trauma) almost entirely so. But precisely because of this, persons who are extremely intelligent will never be able to pass all of their advantage to their children.
I suspect that for many professors of education, university administrators and so on there must have been traumatic moments when they realised that their children were never going to be quite as clever as they were. Perhaps it was trouble with maths or science in high school, or perhaps the results of a trial SAT. It would be very tempting to conclude that standardized tests were an inadequate basis for judging intelligence, that it could not be defined anyway, that there were different kinds of intelligence, that other qualities were required to succeed in life, that holistic assessment could find hidden reserves of ability.
Also, no doubt, we would find such people investing large amounts of time and money in tuition classes, summer camps, courses in writing admission essays and so on. Meeting the opaque and unstable demands of a holistic admissions process ultimately demands more time and money than swotting for a few tests.
It is likely that more and more colleges and universities will follow Sarah Lawrence College in dropping the SAT and replacing it with expensive holistic selection. The cognitive elite will thus, to some extent, succeed in passing on its social advantages to its offspring. But there will be a price: the cognitive elite will become an elite of sensitivity, personality and political correctness.
The current war against academic selection and standardized testing, and the drive for holistic and alternative admissions, school-based assessment and coursework in place of exams, is then in part an attempt by a newly arrived elite to trade in its intellectual superiority for extreme and expensive socialization.
There is nothing unusual about this. Throughout history new elites have tried to ensure the prosperity of their offspring by creating status systems that favour those who benefit from elaborate training and socialization. A case in point would be the uncouth entrepreneurs of the industrial revolution who turned their children into expensively educated gentlemen and ladies.
I would like to propose a general law of social development. Any group that rises to power and affluence through a quality that is substantially heritable will endeavour to change the social system to ensure that its children succeed it despite the remorseless logic of regression to the mean.
There are no doubt other forces that contribute to the loathing for standardized tests, such as the need to compensate African-Americans for generations of discrimination. But the war on testing and for holistic selection is in large part a device for the perpetuation of class privilege. There is nothing progressive about it.
The University of Michigan business school is not in this year's Forbes rankings because it declined to participate in the survey. Forbes did not make a mistake.
Forbes MBA Survey: Why Is Ross Unranked?
You may have noticed that Ross is not included in this year's Forbes ranking of full-time MBA programs. There's a simple reason for that: We declined to participate in the survey. Unlike surveys managed by such publications as BusinessWeek, The Wall Street Journal, and The Financial Times, the Forbes survey is concerned almost exclusively with a financial calculation of return-on-investment. We are certainly aware of the financial commitment made by our students. We also recognize that ROI is a factor to consider when evaluating an MBA program. However, we don't believe that an ROI calculation, by itself, is the most meaningful way to measure the full value of an MBA program. A world-class education is more than an economic transaction. The excellence of our full-time MBA program manifests itself in many ways: the distinctiveness of our action-based learning programs; the quality of instruction; the breadth and depth of our research; the esteem in which top recruiters hold our graduates; and, of course, the talent and commitment of our students and alumni.
Fair enough. I'm still wondering about the Economist though.
The recent Forbes ranking of business schools has the Broad School of Business at Michigan State University at number 19 in the US but does not mention the University of Michigan in the top 100.
The Economist has the Stephen M. Ross School at the University of Michigan in 9th place for the US but does not mention Michigan State University in the world's top 100.
Is it possible that somebody has got the two Michigan schools mixed up?
I notice that the Financial Times Global MBA rankings have the University of Michigan in 19th place and Michigan State University in 38th. Could it be that both Forbes and the Economist have made a mistake?
This one is from the Economist. The top 10 US schools are
1. Dartmouth
2. Stanford
3. Chicago
4. Northwestern
5. Harvard
6. New York University
7. University of Michigan
8. Berkeley
9. Columbia
10. Virginia
The top 5 outside the US are:
1. IESE (Spain)
2. IMD (Switzerland)
3. Cambridge
4. Henley
5. IE (Spain)
There are some noticeable differences between these and the Forbes rankings. Forbes puts the University of Pennsylvania in 5th place for the US but the Economist has it in 12th. The Marriott School of Business at Brigham Young University is 18th in the US in Forbes but does not appear in the Economist.
It is possible that this is because the Economist rankings measure a much broader range of criteria including, for example, the number of staff with PhDs and the number of foreign students.