Feb 03 2011 | Posted by SSSandy

Irrational numbers

The 2010 world university league tables have brought to the fore long-held concerns about their integrity, veracity and credibility.

The catalyst was the Times Higher Education World University Rankings changing the methodology it had used until 2009. Compare the 2010 results of its table with those of the QS World University Rankings and Shanghai Jiao Tong University's Academic Ranking of World Universities: the same institutions can appear in very different positions in different lists. In some cases it seems an alphabetical ranking would furnish more rational results. At least in part, these tables now depart so far from common sense that they become absurd.

Results are highly dependent on the methodology and indicators employed, yet we still lack consistent measures of research quality, and especially of teaching quality. The varied use and interpretation of indicators such as citations, staff-student ratios and external opinion can cause serious methodological anomalies.

A more fundamental challenge is how one measures the basic requirements of higher education and research, such as engaged enquiry, scholarly progression, choice in education pathways and international linkages.

Major world universities have evolved over decades or even centuries. They incorporate diverse philosophies, values, cultures, national characters and ambitions. There are small, private, discipline-focused research institutions as well as large, public, multidisciplinary research and teaching communities, with integrated sporting and cultural strengths. On this rich diversity, rankers impose “one size fits all” templates that spit out lists from 1 to 200, and beyond.

There is also an inherent confusion in any league table that imposes qualitative assessments of research and teaching across all disciplines in health, science, the arts, humanities and social sciences.

Both the concept and the methodology of the rankings are flawed, as are the definitions, the indicators, the data provided and, inevitably, the conclusions. It is difficult to see how improving the data will make any difference when the fundamental principles and questions are wrong, and dangerously so if the results push institutions towards uniformity.

Rankings have become an influential annual focus for academics, students, parents and even for industry and government. Added to that are growing commercial dimensions, in which some rankers “invite” universities to submit their own confidential intellectual property and charge them to buy back analysis of their own data. There are even businesses that “manage” rankings submissions.

On the positive side, the rankings' prominence attests to the high priority of education and research in society, with a readiness among universities to accept quality improvement, excellence, best practice and competition. There is a risk, however, that some in the sector will allow the rankings to lead their own strategies, rather than vice versa.

We do not claim that all rankings are devoid of value, but there is little common ground between the rankers and the ranked. The rankers are unlikely to go away, or to ignore the potential returns on their investment. One option would be for leading universities to boycott the rankings. Another would be to work together to achieve rigorous criteria, transparency in analysis, a reduction in “qualitative” surveys of teaching and service, and clear notes and disclaimers where needed. But principles and methodologies must first be resolved and agreed, or cooperation is unlikely.

How could the system be improved? A first step would be to group “like with like” in universities, disciplines, ambitions and international commitment, with respect for diversity and national characteristics. An honest ranking system, perhaps using bands or cohorts rather than a facile linear list, would offer more robust, rational results. Rankings should never be used as the sole basis for decision-making, but the intelligent choices of students, parents and stakeholders would be better informed by rankings reality – not rankings roulette.

Postscript:

John Hearn is Deputy Vice-Chancellor (International), University of Sydney, and Chief Executive of the Worldwide Universities Network. Alan Robson is Vice-Chancellor of the University of Western Australia, and Chairman of the Worldwide Universities Network.

This letter appeared in Times Higher Education, 3 February 2011.