Thursday, June 15, 2017

The Abuse and Use of Rankings

International university rankings have become a substantial industry since the first appearance of the Shanghai rankings (Academic Ranking of World Universities, or ARWU) back in 2003. The various rankings are now watched closely by governments and the media, and for some students they play a significant role in choosing a university. They have become a factor in national higher education policies and an important element in the race to enter and dominate the lucrative transnational higher education market. In Malaysia a local newspaper, Utusan Malaysia, recently devoted a full page to the latest QS world rankings, including a half page of congratulations from the Malaysian Qualifications Agency to nine universities that are part of a state-backed export drive.

Reaction to international rankings often goes to one of two extremes: either outright rejection or uncritical praise, sometimes descending into grovelling flattery that would make Uriah Heep ashamed (the "revered" QS rankings, Phil Baty "a thought leader"). The problem with the first, which is certainly very understandable, is that it is unrealistic. If every international ranking suddenly stopped publication we would just have, as we did before, an informal ranking system based largely on reputation, stereotypes and prejudice.

On the other hand, many academics and bureaucrats find rankings very useful. It is striking that university administrators, the media and national governments have been so tolerant of some of the absurdities that Times Higher Education (THE) has announced in recent years. Recently, THE's Asian rankings had Veltech University as the third best university in India and the best in Asia for research impact, the result of exactly one researcher assiduously citing himself. This passed almost unnoticed in the Indian press and seems to have aroused no great interest among Indian academics beyond a couple of blog posts. Equally, when Universiti Tunku Abdul Rahman (UTAR), a private Malaysian university, was declared the second best university in the country and the best for research impact, on the strength of a single researcher's participation in a high-profile global medical project, there was no apparent response from anyone.

International rankings have also become a weapon in the drive by universities to maintain or increase their access to public funds. British and Irish universities often complain that their fall in the rankings is all the fault of the government for not providing enough money. Almost any result in the better known rankings can be used to prop up the narrative of western universities starved of funds and of international researchers and students.

Neither of these two views is really valid. Rankings can tell us a great deal about the way that higher education and research are going. The early Shanghai rankings indicated that China was a long way behind the West and that research in continental Europe was inferior to that in the USA. A recent analysis by Nature Index shows that American research output is declining and that the decline is concentrated in diverse Democrat-voting states such as California, Massachusetts, Illinois and New York.

But if university rankings are useful, they are not equally so, and neither are the various indicators from which they are constructed.

Ranking indicators that rely on self-submitted information should be mistrusted. Even if everybody concerned is fanatically honest, there are many ways in which data can be manipulated, massaged, refined, defined and redefined, analysed and distorted as it makes its way from branch campuses, affiliated colleges and research institutes through central administration to the number-munching programs of the rankers.

Then, of course, there are the questionable validation processes within the ranking organisations. There was a much publicised case concerning Trinity College Dublin, where for two years in a row the rankers missed errors of orders of magnitude in the data submitted for three income indicators.

Any metric that measures inputs rather than outputs should be approached with caution, including THE's measures of income, which amount to a total weighting of 10.75%. THE and QS both have indicators that count staff resources. It is interesting to have this sort of information, but there is no guarantee that having loads of money or staff will lead to quality, whether of research, teaching or anything else.

Reputation survey data is also problematic. It is obviously subjective, although that is not necessarily a bad thing, and everything depends on the distribution of responses between countries, disciplines, subjects and levels of seniority. Take a look at the latest QS rankings and the percentages of survey respondents from various countries:

Canada has 3.5% of survey respondents and China has 1.7%.
Australia has 4% and Russia 4.2%.
Kazakhstan has 2.1% and India 2.3%.

There ought to be a sensible middle road between rejecting rankings altogether and passively accepting the errors, anomalies and biases of the popular rankers.

Universities and governments should abide by a self-denying ordinance and reject ranking results that challenge common sense or contradict accepted national rankings. I remember a few years ago someone at Duke University saying that they were puzzled why the THES-QS rankings put the school in first place for faculty-student ratio when this contradicted data in the US News rankings. Few, if any, major universities or higher education ministers seem to have done anything like this lately.

It would also be a good idea if universities and governments stopped looking at rankings holistically and started setting targets according to specific indicators. High-flying research universities could refer to the Leiden Ranking, the Nature Index or the Nature and Science publications indicator in ARWU. New universities could target a place in the Excellence indicator of the Webometrics rankings, which lists 5,777 institutions as having some sort of research presence.

As for the teaching mission, the most directly relevant indicators are the QS employer survey in the world rankings, the QS Graduate Employability Rankings, and the Global University Employability Ranking published by THE.

Governments and universities would be well advised not to get too excited about a strong performance in the rankings. What the rankings have given, the rankings can take away.