Johnny Rich, PUSH
March 03, 2020 11:26 (CET)
With an estimated 25,000 universities around the world and a virtually infinite number of variables that students could consider about each of them, it’s tempting to think that anything that simplifies the choice must be helpful. But what is lost in the process?
By letting the compilers of a ranking decide what is and isn't important, you not only adopt priorities that may not be your own; you also risk ignoring what really does matter to you.
It could be argued, though, that ranking compilers are experts drawing out the true indicators of whatever makes a university the best. There are two obvious problems with that notion.
First, there is no such thing as a ‘best’ university. Just as there is no best meal, no best clothing and no best home, it depends what you like and what you want. What course? What student experience? What outcomes?
Second, even if it were possible for these self-appointed ‘experts’ to come up with an idea of a university that’s best for everyone, most university rankings use criteria that rate what can be measured, not what is relevant. This exaggerates the importance of universities’ research activities, particularly their highly cited research (which tends to mean scientific research published in English). Research in subjects that the student isn’t studying is less important to most students’ daily lives than the teaching they’ll receive (which is inherently harder to measure), the costs they’ll face (which don’t figure in university rankings) or the life they’ll lead.
But surely students don’t take university rankings that seriously? They’re merely a guide to prompt a deeper, more rational search. It’s hard to prove whether that’s the case, and even if you accept it on faith, you still have to balance it against the harm rankings do: steering students towards unsuitable high-performers and devaluing universities that might suit their personal needs better (not least because rankings rarely cover more than a tiny proportion of all universities anyway).
These are choices that change lives and cost tens of thousands of dollars. Getting them wrong can ruin prospects, finances and hopes. So how do rankings get away with providing advice that, if it were about people’s health, would be banned in most countries? We are all unconsciously biased against recognising that we might have made better choices and, without comparisons of how our lives might otherwise have gone, it’s hard to prove otherwise. It’s particularly hard to pin the blame on the influence of a ranking, but, if they’re not influential, why are they so popular?
There is a solution to all this. In a free world, rankings cannot be banned, so instead they must get better: more honest about what they can and can’t do; more transparent about how they do it; and – most importantly – they must surrender the power to rank to the students themselves. They must be comparison tools, not pop charts.
This is why I agreed to write this blog for U-Multirank. It’s not perfect. It doesn’t cover every university in the world, but it covers more than most ranking systems. It doesn’t feature every variable, but it has more than the leading rankings, and it is transparent about the proxies it uses, how it collects its data and the methodology by which it presents them. Most importantly, U-Multirank gives the power to rank – to decide what’s important – to the individual user.
This approach may not make the process of choosing the right university as easy as simply being told what’s ‘best’, but I hope it makes it better. I hope it prompts as many questions as it answers. Most of all, I hope students recognise that such an important decision is worth making properly.