Andreas Schleicher, Director for the Directorate of Education and Skills, OECD
October 1, 2019 09:41 AM
University degrees have become the entrance ticket to modern societies. Never before have those with advanced qualifications had the life chances they enjoy today, and never before have those who missed out on a good education paid the price they pay for that today. I know, there are always those who argue that the share of young people getting into universities is becoming too high. But they are usually talking about other people’s children. And in the last century, they might have argued that too many children were getting into high school.
The evidence is clear. On average across OECD countries, a man with at least a bachelor’s degree earns over USD 300 000 more over his working life than one with only a high-school diploma, even after subtracting what he pays for his studies and forgoes in earnings while studying. Taxpayers, too, gain over USD 200 000 per graduate beyond what they invest in universities. And despite the rapid rise in the number of graduates, there has been no decline in their relative pay, in sharp contrast to the fortunes of those with poor qualifications.
As a result, universities’ traditional mission of inducting a small minority into research has given way to providing up to half of our populations with advanced knowledge and skills. That shift is mirrored in the rapid expansion of higher education and the establishment of more diverse types of institution around the globe. There are now over 18 000 higher education institutions offering at least a post-graduate degree in 180 countries.
However, more years of study alone do not automatically translate into better jobs and better lives, and while the average returns to a university degree are very high, those returns vary hugely across universities and study programmes. Graduates looking for jobs co-exist with employers who say they cannot find people with the skills they need. And not least, higher education is an expensive entrance ticket to the knowledge society, and its rising costs are increasingly borne by students themselves.
In sum, modern higher education needs to ensure that the right mix of knowledge and skills is delivered in effective, equitable and efficient ways. That’s easy to say, but what does it mean in practice, and how do we know?
Some consider data on the relevance and quality of university education the Holy Grail of education policy. Better data allow students to make informed decisions about their preferred place of study and to show prospective employers evidence of what they have learned. Better data allow institutions to identify and build on competitive strengths and to address weaknesses. Better data allow employers to assess the value of qualifications. And better data allow governments to determine policy and funding priorities. In some ways, data that feed peer pressure and public accountability have become more powerful than legislation and regulation. There was a time when society turned to universities to make judgements about the quality of education. Today, it is the public that wants to make judgements about the quality of universities, and it clamours for reliable data on which to base them. Then there is the global perspective: higher education is a global enterprise, with a rapidly growing number of students going global, educational content going global, and providers of higher education going global. There are now well over 3 million students travelling across the OECD area for study purposes. Without comparative data, universities will be flying blind, unable to position and reposition themselves in this global market.
Others compare the search for better data on universities to the alchemist’s stone, which medieval alchemists believed could transmute ordinary metal into gold. Their argument is that weighing the pig does not make it fatter, or that data will distort perspectives and institutional behaviour by focusing on what is easiest to measure rather than on what is important. There is the story of a driver who, on a dark night, discovers on returning to his car that he has lost his key. He keeps looking under a streetlight, and when someone asks him whether that is where he dropped the key, he says no, but it is the only place where he can see anything. Indeed, there is always a deep-rooted instinct to look at what is closest to hand and easiest to see. It may not be the best place to look, but it is where the questions and answers are familiar.
Like the knights who pursued the Holy Grail and the alchemists who sought their stone, we should ask ourselves some tough questions. Why, exactly, do we need better data? What data are we looking for? And how will we recognise the answer when we find it? That is the tough question about the validity of metrics.
The motivation of the medieval alchemists was clear: they wanted to produce untold riches from new gold, assuming the laws of supply and demand would no longer hold. The search for better data is motivated by very different agendas, ranging from improving research, teaching and learning to demonstrating excellence in order to raise a university’s standing in a tough, competitive market.
The knights searching for the Holy Grail had a clear idea of what they were looking for. The Holy Grail was a well-defined and carefully described object, and there was only one true grail. The alchemist’s stone was to be recognised by its transformation of ordinary metal into gold. When it comes to measuring higher education, there are probably more competing visions of what the desirable attributes of universities are than there are universities.
But there is more we can learn from the Middle Ages here. The search for the Holy Grail was overburdened by false clues and cryptic symbols. The medieval alchemists followed the dictates of a well-established science with great determination and system, but that science was built on false foundations. Many of the established university rankings suffer from severe methodological flaws too, perhaps most importantly from the assumption that the quality of a university can be condensed into a single number that can be defined a priori.
U-Multirank has tackled some of these design flaws successfully. For a start, it is multidimensional and user-driven, allowing users to develop their own rankings by selecting metrics that reflect their own preferences, thus shifting the judgement of what constitutes a good university or programme to the eye of the beholder. It also provides a level of granularity that allows users to drill down to specific study programmes, acknowledging that universities may excel in some programmes but not in others.
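The idea of a user-driven ranking can be made concrete with a small sketch. The code below is purely illustrative: the institutions, metric names and scores are fabricated, and U-Multirank’s actual indicators and scoring methodology differ. It simply shows how, once each institution is scored on several dimensions, different users’ weightings produce different rankings, so that no single league table is privileged.

```python
# Illustrative sketch of a user-driven, multidimensional ranking.
# All institutions, metrics and scores below are hypothetical; they do
# not reproduce U-Multirank's real indicators or methodology.

def rank_institutions(scores, weights):
    """Return institutions sorted by the user's weighted average score.

    scores  -- dict: institution -> dict of metric -> score (0-100)
    weights -- dict: metric -> relative importance (need not sum to 1)
    """
    total = sum(weights.values())
    def weighted(inst):
        return sum(scores[inst].get(m, 0) * w for m, w in weights.items()) / total
    return sorted(scores, key=weighted, reverse=True)

# Fabricated scores for three fictional institutions.
scores = {
    "University A": {"teaching": 90, "research": 60, "graduate_outcomes": 70},
    "University B": {"teaching": 60, "research": 95, "graduate_outcomes": 65},
    "University C": {"teaching": 75, "research": 75, "graduate_outcomes": 85},
}

# A user who cares mostly about teaching gets one order ...
print(rank_institutions(scores, {"teaching": 3, "research": 1, "graduate_outcomes": 1}))
# ... while a research-focused user gets a different one.
print(rank_institutions(scores, {"teaching": 1, "research": 3, "graduate_outcomes": 1}))
```

The point of the sketch is the design choice it encodes: the weights are an input supplied by the user, not a constant baked into the ranking, which is precisely what distinguishes a user-driven instrument from a fixed league table.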
But even U-Multirank is only as good as the data that feed it, and there is an urgent need to improve the relevance and quality of data on universities. We need to know much more about the practices in higher education that drive better student learning and graduate outcomes. Surveys of student experience are one way to get at this. Graduate surveys can provide information on labour-market and social outcomes. Student assessments can shed light on actual learning outcomes. And employer surveys can provide information on the value of graduate skills and on expected trends in the labour market. But while data on institutions’ research performance are fairly comprehensive, data on teaching still largely reflect institutional characteristics, staff employment patterns, or student retention, progression and completion. Judgements about the quality of teaching and learning are therefore often derived not from outcomes, nor even outputs, but from idiosyncratic inputs or perceptions of reputation.
The value of teaching as a key differentiator of universities is only bound to rise as digitalisation keeps pushing the unbundling of educational content, delivery and accreditation in higher education. Content? In the digital age, anything we call our proprietary knowledge today will be a commodity available to everyone tomorrow. Accreditation? Accreditation still gives universities enormous power, but what will micro-credentialling do to this? That leaves the quality of teaching as perhaps the most valuable asset of modern universities and it becomes harder to hide poor teaching behind great research.
But throwing data into the public space doesn’t in itself change the ways in which students learn, faculty teach and universities operate. Turning digital exhaust into digital fuel to change institutional behaviour and practice requires us to get out of the “read-only” mode of data systems, in which information is presented in a way that cannot be altered. This is about combining transparency with collaboration, another principle that U-Multirank introduced with its user-driven approach to measurement. But more can be done. I am always struck by the power of “collaborative consumption”, where online markets are created in which people share their cars and even their apartments with total strangers. Collaborative consumption has made people micro-entrepreneurs – and its driving engine is building trust between strangers. The reason this works is that behind these systems are powerful reputational metrics that help people put faces to strangers and build trust. Imagine the power of a university system that could meaningfully share all of the expertise and experience of its faculty. Technology could create a giant open-source community and unlock the creative skills and initiative of so many people, simply by tapping into the desire of people to contribute, collaborate and be recognised for it. This is where the accountability and improvement functions of data meet again.