Thursday, August 11, 2016

Value Added Ranking


There has been a lot of talk about ranking universities by factors other than the usual mix of contributions to research and innovation, reputation surveys and inputs such as spending, teaching resources or student quality.

The emerging idea is that universities should be assessed according to their ability to teach students or to inculcate desirable skills or attributes.

Much of this is powered by the growing awareness that American and European secondary schools are failing to produce sufficient numbers of students with the ability to undertake and complete anything that could realistically be called a university education. It is unlikely that this is the fault of the schools. The unavoidable verdict of recent research is that the problem with schools has very little to do with institutional racism, a lack of grit, resilience or whatever the current X factor may be, or the failure to adopt Finnish, Chinese or Singaporean teaching methods. It is simply that students entering the school system are on average less intelligent than they used to be, and those leaving are consequently also less intelligent.

There is now a market for rankings that will measure the quality of universities not by their resources, wealth or research output but by their ability to add value to students and to prepare them for employment or to enable them to complete their courses.

This could, however, lead to massively perverse consequences. If universities are assessed according to the percentage of entrants who graduate within a certain period, or by their graduates' employability, then there could be a temptation to dilute graduation requirements.

Nevertheless, the idea of adding value is one that is clearly becoming more popular. It can be seen in the attempt to introduce a national rating system in the US and in the UK to use the proposed Teaching Excellence Framework (TEF) to rank universities.

One UK ranking that includes a value added measure is the Guardian University Guide. It comprises eight indicators, three of which measure student satisfaction. Other indicators are staff-student ratio and spending per student. There is also a measure of student outcomes, that is graduate-level employment or entry into a postgraduate course within six months; one of student quality, measured by A-level qualifications; and one of value added, that is the difference between students' entry-level exam results and their eventual degree results.
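
As a rough illustration only (not the Guardian's actual methodology, which is more elaborate), a value added measure of this kind can be sketched as the average gain between students' entry positions and their eventual degree results. All names and numbers here are invented:

```python
# Crude sketch of a "value added" style calculation: compare entry position
# with eventual degree outcome. Hypothetical, not the Guardian's method.
def cohort_value_added(entry_percentiles, degree_percentiles):
    """Mean gain (degree-result percentile minus entry percentile) across a cohort."""
    gains = [d - e for e, d in zip(entry_percentiles, degree_percentiles)]
    return sum(gains) / len(gains)

# Three hypothetical students: entry percentile -> degree-result percentile
print(cohort_value_added([30, 50, 70], [55, 60, 75]))  # mean gain of about 13.3
```

A positive score means students, on average, finished higher than their entry results would suggest; the Guardian's version instead compares actual degree outcomes with outcomes predicted from entry tariff.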

It is therefore possible to get a rough idea of what factors might actually produce positive student outcomes.

The overall ranking for 2015-16 starts by being quite conventional with the top three places going to Cambridge, Oxford and St Andrews. Some might be surprised by Exeter in 9th place and Loughborough in 11th, ahead of LSE and UCL.

Measuring student quality by exam scores produces unsurprising results at the top. Cambridge is first followed by Oxford and Imperial. For staff student ratio the top three are UCL, Oxford and SOAS and for spending per student Oxford, Cambridge and the University of the Arts London.

For student satisfaction with courses, Bath, Keele and UEA are in the lead while Oxford is 5th and Cambridge 12th. It's when we look at the Value Added that we find some really unusual results. The top three are Gloucester, Edinburgh and Abertay.

After plugging the indicator scores into an SPSS file we can calculate the correlations between the desired outcome, that is graduate level employment or postgraduate study and a variety of possible associated factors.

Here in descending order are the correlations with career prospects:

average entry tariff: .820
student staff ratio: .647
spending per student: .569
satisfaction with course: .559
satisfaction with teaching: .531
value added: .335
satisfaction with feedback: -.171
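
For readers without SPSS, the same correlation step can be reproduced in a few lines of Python with pandas. The figures below are invented placeholders for five universities, not the Guardian data:

```python
# Sketch of the correlation step in Python rather than SPSS.
# All indicator scores below are made up purely for illustration.
import pandas as pd

df = pd.DataFrame({
    "career_prospects":  [82, 75, 70, 64, 58],
    "avg_entry_tariff":  [210, 190, 165, 140, 120],
    "spend_per_student": [9.5, 8.0, 6.5, 7.0, 5.5],
    "value_added":       [5.2, 6.8, 4.9, 7.1, 6.0],
})

# Pearson correlation of each indicator with the outcome column
correlations = df.corr(numeric_only=True)["career_prospects"].drop("career_prospects")
print(correlations.sort_values(ascending=False))
```

With the real indicator scores in place of the placeholders, the sorted output would reproduce the list above.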

It would seem that if you want to know which university is best for career prospects then the most important piece of data is the average academic ability of the students. The student staff ratio and money spent are also significant as is satisfaction with courses and teaching. 

The correlation between value added and career prospects is much lower and only modest.

The universities were divided into thirds according to average entry tariff. In the top third of universities there was a strong correlation between career prospects and average entry level tariff, .628, and a modest one with spending, .355. Nothing else was associated with career success.

In the middle third the factor most associated with career prospects was course satisfaction, .498, followed by average entry tariff, .449, staff student ratio, .436, and satisfaction with teaching, .362. Satisfaction with feedback and value added were insignificant.

However, for the least selective third of universities, the picture was rather different. The factor most strongly associated with career success was satisfaction with feedback, .493, followed by value added, .479, course satisfaction, .470, satisfaction with teaching, .439, and average entry tariff, .401. The relationship with spending and staff student ratio was insignificant.
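
The split-into-thirds analysis can be sketched the same way: rank the institutions by entry tariff, cut them into tertiles, and correlate within each group. The data here are random placeholders, so the resulting correlations are meaningless; only the mechanics are the point:

```python
# Placeholder sketch of the tertile analysis; values are random, not Guardian data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 119  # an assumed, roughly Guardian-sized number of institutions
df = pd.DataFrame({
    "avg_entry_tariff": rng.uniform(100, 220, n),
    "career_prospects": rng.uniform(40, 90, n),
    "value_added": rng.uniform(3, 8, n),
})

# Cut into thirds by entry tariff, then correlate within each third
df["tier"] = pd.qcut(df["avg_entry_tariff"], 3, labels=["bottom", "middle", "top"])
by_tier = df.groupby("tier", observed=True)[["career_prospects", "value_added"]].corr()
print(by_tier)
```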

The evidence of the Guardian rankings is that value added would only be of interest to students at or applying to the least selective third of UK universities. For the rest it is of no importance. It is debatable whether it is worth making it the centre of a new set of rankings.


Worth Reading 7

Searching for the Gold Standard: The Times Higher Education World University Rankings, 2010-2014
Richard Holmes
ABSTRACT
This paper analyses the global university rankings introduced by Times Higher Education (THE) in partnership with Thomson Reuters in 2010 after the magazine ended its association with its former data provider Quacquarelli Symonds. The distinctive features of the new rankings included a new procedure for determining the choice and weighting of the various indicators, new criteria for inclusion in and exclusion from the rankings, a revised academic reputation survey, the introduction of an indicator that attempted to measure innovation, the addition of a third measure of internationalization, the use of several indicators related to teaching, the bundling of indicators into groups, and, most significantly, the employment of a very distinctive measure of research impact with an unprecedentedly large weighting. The rankings met with little enthusiasm in 2010 but by 2014 were regarded with some favour by administrators and policy makers despite the reservations and criticism of informed observers and the unusual scores produced by the citations indicator. In 2014, THE announced that the partnership would come to an end and that the magazine would collect its own data. There were some changes in 2015 but the basic structure established in 2010 and 2011 remained intact.
Forthcoming in Asian Journal of University Education, December 2015. Prepublication copy can be accessed here.

Wednesday, August 03, 2016

Ghost Writers




An article by Chris Havergal in Times Higher Education reports on research by Lisa Lines, a lecturer at the University of New South Wales, published in Teaching in Higher Education (behind a paywall), which suggests that the output of ghost-written student essays is probably greater than expected.

The researcher had ordered undergraduate and master's essays in history and then had them marked by "leading academics." Of the 13 undergraduate essays only two received a failing grade, while six of the 13 master's essays failed and seven passed.

Lines says that the quality of the purchased essays was surprisingly high.

Possibly. Or you could say that the standard of marking was surprisingly low. Note that this was at a university that is in the top 150 in the world according to ARWU.

Havergal quotes Lines as saying:

“It is clear that this type of cheating is virtually undetectable by academics when students take precautions against being caught,” she concludes.

“This fact, coupled with the study’s findings that the quality of essays available for purchase is sufficient to receive a passing grade or better, reveals a very troubling situation for universities and poses a real threat to academic integrity and standards, and public perceptions of these.”

The problem lies not with dishonest students or crooked essay writers but with corrupt selection practices that admit academically incompetent students and a dysfunctional employment system. If you have students who cannot write and intelligent graduates who cannot find work, then ghost writing is inevitable.



Monday, July 25, 2016

Looks Like THE isn't Big in Japan Anymore





It has taken a long time but it seems that Japanese universities are getting a little irritated about the Times Higher Education (THE) World and Asian University Rankings.

I have commented on the THE Asian rankings here, here and here.

According to the Nikkei Asian Review,

  Research University 11, a consortium of Japan's top 11 universities, issued a statement earlier this month that the Times Higher Education ranking should not be used to determine national policy or as an achievement indicator.
Another umbrella group, the Research University Network of Japan, which includes universities and research institutions, has opposed the ranking every year Japanese universities have taken big tumbles.
and

So achieving a higher ranking does not necessarily correlate with providing better educations and research opportunities.
For some universities, there is another worry -- politics. The Japanese government in 2013 said it would aim to ensure that Japanese universities rank among the world's top 100 over the following decade. Now, Japanese universities are required to develop specific strategies to help the government reach this "revitalization" goal.