What if there were no university rankings?
In 2012, at a San Francisco meeting of the American Society for Cell Biology, a group of editors and publishers signed a declaration to challenge the way Journal Impact Factors were being misused to compare the quality of research across disciplines.
The declaration, known as DORA and now an organisation, turned 10 last week, celebrating how a sector working together to push back against inappropriate metrics for journals and research can support the development of meaningful assessment.
What would happen if universities built upon the success of DORA and went further? What if, together, they opted out of participating in university rankings?
Where could we be in 10 years if we replaced the easy-to-measure metrics that ranking companies choose, which often serve their own commercial ends, with measures that matter for what diverse universities are trying to achieve, based on data the universities themselves believe is valid?
Opting out of rankings, or getting rid of them altogether, appears to be a periodic contemplation in chancelleries, tea rooms, committee meetings and working groups across many Australian universities.
We typically dismiss the idea as impossible. It is seen as a route to institutional bankruptcy and to the loss of international students and research funding. Does it have to be?
There has been a strong and growing movement in leading US universities to opt out of the US News and World Report rankings.
This started within disciplines such as law, whose schools had reached the end of their tether as victims of this particular ranking's ambiguous methodology and lack of rigour in process.
Why should university culture be subject to the business model of a publishing outlet clinging to revenue and relevance by creating news about its own rankings?
At a recent ASU+GSV Summit, the CEO of a global EdTech company revealed their disappointment at having failed, some time ago, to buy US News and World Report with the sole intention of closing it down to do away with its rankings, while realising of course that many more would remain.
There may be simpler ways of achieving the same outcome.
I think we have a great and growing need for strong, locally relevant, and context-appropriate higher education press and content services.
The sector has so much news and so many ideas to share beyond a publisher’s own attempts to create rankings for just about everything, just about everywhere, with the purpose of getting universities to pay to subscribe and enter.
It will be interesting to see how the growing abstentions from US News and World Report play out more globally and across other ranking systems.
But what would we do without them?
What would it mean for universities pursuing their own sense of purpose and, as with DORA, prioritising locally relevant issues?
They could create their own missions and goals that might be more difficult to measure and compare but more important to aim for and achieve.
Imagine the goal of transforming more lives than any other institution. This might be so much harder to measure and sell as a benchmarking service, but so much more valuable to pursue and achieve.
The adages “you inspect what you expect” and “what gets measured gets managed” have so many implications when external benchmarked rankings, largely of outdated research outcomes, dominate our assessments of universities.
If your university has a mission and purpose other than being historically good at research, why not opt out of rankings and choose another way to measure and market yourself?
Think about how it might change so much of what gets valued, done and promoted in your institution.
What would it mean for staff working in universities, if they could pursue what they believed in and valued, rather than what a ranking company values?
This plays out in many ways in university culture. Leaders and strategies appear to say just about everything is important.
But for many, the outcomes that lead to improvements in rankings remain what gets valued most, whether in the Vice Chancellor’s regular all-staff email or in what most people perceive to be the basis for academic promotion decisions.
If we really do mean it when we say we value access, opportunity, inclusion, industry engagement, and collaboration, then why don’t we reflect that in actions rather than only words? The gap certainly makes for dissonance and mixed messages in cultural symbols and in how people behave.
Imagine a university where the Vice Chancellor says that collaboration, internally and with other universities in the state, is the priority, and that teamwork is valued.
Then imagine a local university partner approaches an aspiring professor in the Engineering School with a problem. The aspiring professor has been told that teamwork, collaboration and serving industry is the priority.
How many universities say that but create cultures where staff first seek to persuade the partner to reimagine the problem in terms of the expertise they themselves hold?
If they can’t, will they pass the problem and the contact on to a fellow aspiring professor in the current promotion round whose expertise better fits the problem?
Will they hand over the relationship and the problem to the Business Faculty, or to another university in town where a better solution is available? How closely do rhetorical culture and actual practice and behaviour align? And how much of the gap is down to the influence of rankings?
Finally, what would it mean for students and their employers if everyone in the university focused on the skills needed and achieved, more than on rankings?
Rankings are metrics of quality that mean little for the quality of the student experience. The correlation between university rankings and measures of student experience is spurious at best and inverse at worst.
The only reason students use rankings to choose between universities is that they have no other guide to work with. That may well be our fault.
We do have internal and national measures and benchmarks of student experience, industry and community engagement and other important and diverse goals that universities believe in and set themselves.
But we have failed to prioritise these in our individual marketing efforts and cultural development programs. We do not develop our own comparative schemes that would allow customers to compare institutions.
We might need to invest more effort in doing so to regain control from private companies trading on university reputations, on students’ lack of information to guide their choices, and on the vagaries of university KPIs.
In Singapore, with its radical national approach to institutional planning and development, a different kind of university has emerged, with a social purpose of serving skills and learning needs.
This is the story of the Singapore Institute of Technology (SIT), which you can hear on the HEDx podcast here. Imagine if more global universities like SIT opted out of rankings and worked together towards replacing them.
Imagine what our Australian Universities Accord could do to encourage this in Australia by fostering more diverse and differentiated universities.
First published in Campus Review on 22nd May 2023.
Emeritus Professor Martin Betts, Co-Founder of HEDx