Note: This article was published originally in the now-discontinued NZME journal Education Central on 13 August 2019
Whenever a new set of university rankings comes out, our universities get busy boasting while the news media comb through the data for possible stories.
Rankings arise from people’s understandable interest in comparative information and from the hunger for competition. Rankings are like sports – pitting institutions against each other in a competition for prestige.
Rankings are big business. The most widely read rankings – the QS and the Times Higher Education (THE) rankings – boil down lots of complex performance data to a single number that determines where the university sits in a global league table. Because they create a hierarchy, these rankings attract attention. The rankings companies market their products to international students as a source of advice on where to study. In some cases – such as QS – rankings companies use their rankings as a means of marketing their commercial advisory and other services to institutions. Some, like the CWUR, based in the UAE, may even influence governments as they choose where they want their scholarship students to study. The stakes are high. University leaders believe that they have little choice – participate or perish.
Privately, university vice-chancellors everywhere criticise the robustness of rankings. Because of the competitive nature of rankings, because of their apparent simplicity, because they purport to measure quality or value, because the public has come to expect to know how each university is ranked, because every university plays the same game, for all those reasons, the results of each ranking release represent marketing currency to universities. In New Zealand, opportunistic university marketing directors milk rankings data for good news stories. There are examples everywhere on university websites – even on the side of buses!
Ranking the rankings
There are many, many ranking systems. A 2015 stocktake found that 35 countries had systems that rated and ranked their universities, with some of the 35 having multiple ranking systems. Most of these ranking systems had been created by media outlets but some were produced by government agencies.
On top of that are the international ranking systems. There are ten main systems jostling for attention. In addition, the two best known and most influential systems, QS and THE, have spun off a range of subsidiary rankings – such as the best universities in each subject, the best universities under 50 years of age, the best Latin American universities, the best universities by reputation, rankings of graduate employability … ranking institutions on every conceivable measure.
Universities are complex, multi-faceted organisations. It’s hard to find a single measure that sums up quality. So all ten international ranking systems rely on a basket of indicators. Most weight those indicators and then add the results to create a single composite score. It’s that single composite score that gives a ranking system its impact; without a single composite score, the public can’t answer the question of whether institution A is “better than” institution B. But it’s the creation of that single composite score that creates one of the main flaws of these ranking systems – change the weightings and you change the overall score. Those weightings are subjective – they are based on the opinions of experts. Choose another expert and the rankings change. Ranking systems like the Leiden system and U-Multirank don’t create a single score and so escape that criticism. But as a result, they also escape the public radar. Being of high integrity comes at the cost of relevance and impact.
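To make the weighting point concrete, here is a toy illustration in Python. All of the institutions, indicator scores and weighting schemes below are invented for the example – they do not come from any real ranking system:

```python
# Toy illustration of composite scoring: the same indicator data,
# weighted two different ways, produces two different league tables.
# All names, scores and weights are invented for this example.
indicators = {
    "Uni A": {"research": 90, "teaching": 60, "reputation": 70},
    "Uni B": {"research": 70, "teaching": 85, "reputation": 75},
}

def composite(scores, weights):
    """Weighted sum of indicator scores – the single composite score."""
    return sum(scores[name] * weight for name, weight in weights.items())

# One hypothetical expert panel weights research heavily ...
research_heavy = {"research": 0.6, "teaching": 0.2, "reputation": 0.2}
# ... another weights teaching heavily.
teaching_heavy = {"research": 0.2, "teaching": 0.6, "reputation": 0.2}

for label, weights in [("research-heavy", research_heavy),
                       ("teaching-heavy", teaching_heavy)]:
    league_table = sorted(indicators,
                          key=lambda uni: composite(indicators[uni], weights),
                          reverse=True)
    print(label, league_table)
```

With these numbers, Uni A tops the research-heavy table and Uni B tops the teaching-heavy one – nothing about either institution has changed, only the experts’ choice of weights.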
Some ranking systems – again, QS and THE are offenders – use surveys to round out the picture, to get data on the factors that are hard to quantify. But surveys just compound the problem. Surveys measure the opinions of respondents about the institutions that they know; surveys give data on perceptions and opinions, rather than actual quality or performance. They tend to reinforce the existing status hierarchy. They discriminate against small institutions and those located away from the largest population concentrations. They are entirely subjective. They cannot accurately measure quality.
In a paper published in the European Journal of Education in 2013, University of London researcher Simon Marginson assessed the integrity of six of the major ranking systems against eight criteria – including factors such as materiality, objectivity, the extent to which they encourage performance improvement, transparency … His analysis rated QS and THE as the poorest of the six systems, scoring them both “weak” on the majority of the eight criteria (while he scored each of Leiden, U-Multirank and SCImago as “strong” on the majority of the criteria).
Some commentators rail against the league table format, because a league table conveys the impression that institutions can move up, possibly breaking into the upper reaches. But in practice, the “big rich old universities have such built-in advantages that they can never be shifted from the top”.
Other researchers have criticised the obsession of university leaders with rankings. That obsession can distract universities from their core mission and encourage them to give greater priority to factors that will boost rankings. It is claimed that universities that pursue prestige and reputation through rankings do so at the expense of their students or national priorities.
So what is the value of all this ranking effort?
As I have pointed out in an earlier discussion, the rankings industry produces masses of performance data, most of it publicly available. The trick is to look closely enough at how the measures are constructed to work out what is and isn’t meaningful – what the data can be used for and what it can’t.
For instance, if you are interested in comparing graduate employability between universities, avoid the QS employability ranking (which uses reputation survey data, a count of the alma maters of 30,000 holders of top jobs around the world, plus the number of research connections between the university and private sector firms, while the actual employment rate contributes only 10% of the score) or the THE employability ranking (which makes it all but impossible to discover its methodology). Instead, go to the Ministry of Education/TEC data on actual employment rates to compare the graduate employability performance of the New Zealand universities. An international comparison of graduate employability of individual universities? I doubt that could be done in a robust or meaningful way.
If you want to see how NZ universities compare with each other on research performance, use the PBRF data. But if you want an international comparison, the best place to go is the CWTS Leiden Ranking data. That is robust.
It is surely tempting for university marketing and PR staff to use ranking data to boost their reputation. It sounds good. It plays to the public expectation. But while the message has the simplicity that the marketers want, it will nearly always turn out to be simplistic.
So how do we look at quality in universities?
That question really sums up the problem: there is a trade-off between the simplicity and the integrity of information on performance and quality.
The academic audit process, managed by the Academic Quality Agency (AQA), looks closely, deeply and intensively at institutional quality. It leads to complex reports that are independent, evidence-based and of high integrity, but also long and discursive – not the kind of simple and punchy messaging that marketers want.
Earlier this year, the AQA created a new audit framework for its next cycle (the sixth) of university quality audits. That framework looks at how well the university serves students – by examining the processes that underpin teaching, learning, curriculum, leadership, student support … This framework creates an agenda for the assessors who conduct the audits. It is based on a set of 30 quality statements that look at how quality is embedded in the processes of the university. It defines the focus of universities in their self-review of their quality. And it defines what assessors should expect of the evidence presented to them by universities under review.
This new framework represents a step forward for the AQA. My expectation is that the reports, built around those 30 points of assessment, will be more coherent and more useful than their predecessors. The primary audience for AQA’s audit reports is the university’s council and management team; the audit can and should provide a quality improvement agenda for the institution’s leadership. But this new audit framework will also help government agencies and analysts to understand how, and how well, a university manages its quality.
But even these sharper AQA reports are likely to make the adman’s eyes glaze over. Independent, yes. Useful, yes. A great thing, yes. But too many words for PR. Too internally focused. Altogether too much nuance.
So where do we go to look at institutional quality performance?
People – government, commentators, educationists, advisors, students – want and need to compare institutions’ performance and quality.
Those AQA quality reports – like the corresponding NZQA reports on wānanga, polytechnics and other providers – may be long and dense, but they carry masses of important information on quality. Those reports, in combination with the mass of robust performance information available on government websites, give a good sense of how institutions perform. It’s a shame that, in 2013, the TEC discontinued its excellent annual series of performance reports, which consolidated much of the data on each institution and put that data into context. Now there’s a lot of information available, but it’s a stretch to join up these different views of quality and performance. As for international comparisons … there is plenty of rankings data on each of the universities. But we all need to use it with care.