U-Multirank does not produce league tables, so institutions as a whole do not emerge ahead of others. Our rankings refer to specific indicators of performance. If we look, for example, at the research indicators used by traditional rankings (e.g. citation impact), U-Multirank of course shows the ‘usual suspects’ as top performers. Other indicators produce different, sometimes at first glance unexpected, results: for example, a university of applied sciences that publishes almost all of its publications in collaboration with industry will perform well on co-publications with industry even if its total output is not extensive. A classical research university will have many more publications, but often a much smaller share of them are written together with authors from industry.
U-Multirank aims to present a fair picture of institutional performance, showing the specific strengths and profiles of universities, which may surprise those who do not think beyond traditional research reputations. U-Multirank intends to produce new insights that may challenge current beliefs about institutional reputation, which are often based on hearsay and halo effects.
U-Multirank includes 16 subject areas in total, covering psychology, medicine, biology, chemistry, mathematics, history, sociology and social work/welfare, and – new for 2017 – economics and chemical, civil and industrial engineering. Additionally, electrical and mechanical engineering, business studies, and computer science have been updated.
U-Multirank currently includes more than 1,500 universities from 99 countries around the world. Around 56% of these institutions are from Europe, 19% from North America, 20% from Asia and 5% from Oceania, Latin America and Africa.
While traditional global rankings focus mainly on 400-500 of the world’s research universities (only about 2-3% of the world’s higher education institutions), U-Multirank covers a far broader range including small specialised colleges, art and music academies, technical universities, agricultural universities, universities of applied sciences as well as comprehensive research universities and others.
U-Multirank publishes a quarterly e-newsletter. To register, visit our website at www.umultirank.org and click on “Stay in touch”. Interested parties can also follow U-Multirank on Facebook, Twitter, Instagram and YouTube.
Journalists and media organisations can send an email to email@example.com requesting to be added to the media contact list.
If your university or institution is not currently included, you can express your interest by completing a simple registration form online. The deadline for registration is mid-July 2017.
Universities that participated in the 2017 ranking do not have to re-register for 2018 as they will be contacted directly by the U-Multirank team.
Gaps might occur in the data for one of two reasons and U-Multirank uses two different symbols to indicate the reason for any gap.
If a university has a ‘dash mark’ (-) instead of a performance score, this means that (valid) data simply isn’t available. Given the range of universities included in U-Multirank and the different data collection requirements in different countries, it is inevitable that sometimes universities simply will not have the information available at the time of collection, that they use non-comparable definitions, or that the data did not pass our verification rules. Even if the relevant data for some indicators are not available, this does not prevent an institution from participating and being visible in U-Multirank, as it can still be compared on the basis of the data that is available.
The second type of gap is marked with an ‘X’, which means that the indicator in question cannot be applied to the institution. For example, a university that offers only bachelor’s and master’s degrees cannot provide data about a PhD programme, because it does not have one.
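As a minimal illustration of these two conventions (the helper function and values below are hypothetical, not part of U-Multirank’s actual software), a score cell could be rendered as follows:

```python
# Hypothetical sketch of how the two gap symbols described above could be shown.
def format_score(grade, applicable):
    """Return the display value for one indicator of one institution."""
    if not applicable:
        return "X"      # the indicator cannot be applied to this institution
    if grade is None:
        return "-"      # no (valid) data is available for this indicator
    return grade        # otherwise show the A-E performance group

print(format_score("A", applicable=True))    # "A"
print(format_score(None, applicable=True))   # "-"
print(format_score(None, applicable=False))  # "X"
```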
Wherever publicly available bibliometric and patent data are applicable, U-Multirank uses them. In addition, most universities have also provided self-reported data. Some universities, however, have not provided data, and so only their publicly available data are shown. This can obviously mean gaps across several indicators for those universities.
If you represent a university that has not provided data to fill these gaps – or perhaps one that does not yet feature at all in U-Multirank – then you are kindly invited to register to participate in the next data collection round.
The freely accessible U-Multirank web tool offers five main channels for users to explore and personalize the rankings, each opening up a world of possible ways of comparing universities.
The ‘For students’ channel has been specially designed to meet the needs of students who want to compare study programmes or universities of their choice. It has a special focus on teaching and learning and uses the responses of more than 100,000 current students at participating universities worldwide in a global student survey exclusive to U-Multirank.
The ‘compare’ channel allows users to compare higher education institutions of similar profiles or make comparisons with specific named institution(s).
The ‘at a glance’ feature lets users explore the entire performance profile of a selected university in detail at both the institutional level and as separate faculties.
The ‘favourites’ feature allows users to mark their favourite universities and create a short list. From here, users can easily access their shortlist, or even compare their favourite universities.
In addition, the U-Multirank consortium has created ‘readymade rankings’ at the institutional level and at the subject level. These are examples of how a user can compare institutions with similar profiles on a pre-selected group of measures.
U-Multirank exists solely for its users and that is reflected in the way the findings are made available. Once they have been gathered and verified, U-Multirank’s data are published on its unique interactive online tool, which offers various entry points for different users, depending on what they want to do.
Generally users can start by selecting the sorts of universities or programmes they are interested in and would like to compare. Then users can say which measures of university performance are important to them. In this way U-Multirank offers almost unlimited possibilities for different users to develop personalised rankings that suit their different interests.
Additionally, U-Multirank provides a mobile version and app, offering students and all other users university comparisons, their way, at the touch of a finger. This light version makes user-driven comparisons even easier for users on the go by focusing on key U-Multirank elements.
The new mobile version allows users to find their best match, save favourites, compare specific universities they have in mind – or based on similarities – and access comparative information.
Some performance measures within U-Multirank rely on information collected directly from the universities themselves. That is because U-Multirank’s collection process is the first time many of these sets of information have been gathered together into a single international, public source in comparative form.
The data that universities provide about themselves needs to be carefully verified and this is one of U-Multirank’s most significant annual undertakings. It includes several steps and procedures: data is subjected to multiple statistical tests for consistency and plausibility; ‘outlier’ and other surprising results are carefully checked. The process includes both ‘manual’ and automated checks and a continual process of direct communications with universities.
Bibliometric indicators seek to measure the quantity and impact of scientific publications. They are based on a count of the scientific publications produced by the academic staff of a university and the number of times these are cited in other publications. The bibliometric analyses in U-Multirank are based on the Thomson Reuters database, an extensive verified database of academic publications. U-Multirank partner CWTS (Centre for Science and Technology Studies) at Leiden University is responsible for generating the bibliometric data.
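As a simplified, hypothetical sketch of what such count-based indicators involve (the data structure, figures and the industry co-publication flag below are invented for illustration; the actual CWTS analysis is far more refined, for example with field normalisation), publication and citation counts can be summarised like this:

```python
# Simplified, hypothetical sketch of count-based bibliometric indicators:
# count an institution's publications, the citations they receive, and the
# share written together with industry (see the co-publication example above).

from dataclasses import dataclass

@dataclass
class Publication:
    citations: int
    with_industry_coauthor: bool  # assumed flag marking co-publications with industry

def bibliometric_summary(publications: list[Publication]) -> dict:
    total = len(publications)
    if total == 0:
        return {"publications": 0, "citations_per_publication": 0.0, "industry_copub_share": 0.0}
    cites = sum(p.citations for p in publications)
    industry = sum(1 for p in publications if p.with_industry_coauthor)
    return {
        "publications": total,
        "citations_per_publication": cites / total,   # crude citation-impact figure
        "industry_copub_share": industry / total,     # share of output co-authored with industry
    }

# Invented example: a small institution whose output is largely industry-linked
# can score highly on the co-publication share despite a modest total output.
sample = [Publication(3, True), Publication(0, True), Publication(5, False)]
print(bibliometric_summary(sample))
```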
The data included in U-Multirank are drawn from a number of sources: information supplied by the institutions themselves, data from international bibliometric and patent databases, and surveys completed by more than 105,000 students at participating universities – one of the largest international student samples in the world. By offering this wealth of data U-Multirank provides comprehensive information to its users.
Performance measures or indicators are the different areas of university performance that are used within U-Multirank to compare universities. A full list of these performance measures, as well as their definitions, can be found on the U-Multirank website.
It makes little sense to compare the performances of institutions with completely different missions and activity profiles: for example, comparing a specialised, regionally orientated, Bachelor-awarding College of Information Technology with one of the world’s leading comprehensive, research-intensive universities does not shed light on what makes either of them good or bad at what they are trying to achieve.
This is a key point of difference between U-Multirank and other approaches.
Depending on how they choose to use the U-Multirank comparison tool, one of the first things users are given the opportunity to do is to select the characteristics of the universities they would like to compare. In other words, they define the kind of universities they are interested in.
To do this U-Multirank uses a set of descriptive criteria to ‘map’ key features and the various activities that different universities are engaged in. These selection criteria are not about performance – for instance, there’s no inherent better or worse to being large or small. They include the level of degrees offered, the subject areas the university is active in, the proportion of graduate and international students and the size and age of the institution.
Having made this selection, the web tool displays performance information only for universities that meet these mapping criteria. This process has been further streamlined for the 2016 edition, reducing the number of required steps, to help the user get their comparison results more quickly and efficiently.
Users can create their own personalised university ranking, listing institutions that match their selection criteria according to the performance measures that they consider important. Depending on their choices, different universities will perform better than others; this is calculated according to the performance scores on each of the indicators they have selected.
The performance scores range from ‘A’ (very good) to ‘E’ (weak), based on an assessment of all universities. The more ‘A’s an institution gets in the indicators that matter to the user, the higher it will be on the user’s ranking when they sort them that way. Alternatively, they can choose to display the list alphabetically, or even sort by an individual indicator.
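As an illustrative sketch only (the institutions, indicator names and grades below are invented, and this is not U-Multirank’s actual implementation), sorting a shortlist by the number of ‘A’ grades on user-selected indicators might look like this:

```python
# Illustrative sketch: order institutions by how many 'A' grades they receive
# on the indicators the user has selected. All names and grades are invented.
# The web tool also supports alphabetical sorting and sorting by one indicator.

grades = {
    "University X": {"citation_rate": "A", "industry_copubs": "B", "graduation_rate": "A"},
    "University Y": {"citation_rate": "C", "industry_copubs": "A", "graduation_rate": "B"},
    "University Z": {"citation_rate": "B", "industry_copubs": "A", "graduation_rate": "A"},
}

selected_indicators = ["industry_copubs", "graduation_rate"]  # chosen by the user

def count_a_grades(name: str) -> int:
    return sum(1 for ind in selected_indicators if grades[name].get(ind) == "A")

personalised_ranking = sorted(grades, key=count_a_grades, reverse=True)
print(personalised_ranking)  # institutions with more 'A's on the selected indicators come first
```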
This method allows users to see both the comparative strengths and weaknesses of any university.
U-Multirank never creates composite aggregate scores, as doing so is not consistent with fair and transparent comparisons. So, unlike traditional rankings, U-Multirank does not attempt to put universities into numbered lists or to declare 100 universities to be the best in the world. Given that no ranking system – not even U-Multirank – has access to data about every higher education institution in the world, declaring some to be the ‘best’ is to wilfully overlook large portions of the sector rather than consider them on their own terms.
U-Multirank covers five ‘dimensions’ of performance: teaching and learning, research, knowledge transfer, international orientation and regional engagement. No one dimension is more or less important than any other. Each is relevant in different contexts and to different users. Often the user will want to define their own mixture of performance indicators across dimensions. This is at the heart of what makes U-Multirank a ‘multi-dimensional’ ranking and a unique comparison tool.
Performance in each dimension is assessed through a number of indicators, with universities ranked separately on each individual indicator. On each indicator, institutions are ranked into five groups ranging from ‘A’ (very good) to ‘E’ (weak).
Universities are measured for performance at an institutional level, but for many indicators – particularly those in teaching and learning – it would be misleading to measure them at anything other than the faculty level, i.e. categorised by subject area (or disciplinary field).
U-Multirank makes it possible for users to compare universities in their own way by creating personalised rankings.
Users can specify individual universities or the type of institutions they wish to compare (in terms of the activities they are engaged in), creating ‘like-with-like’ comparisons that allow for more meaningful results. Users can then decide which areas of performance to include in the comparison of the selected group of universities, using any of our 30+ performance indicators across five dimensions: teaching & learning, research, knowledge transfer, international orientation and regional engagement.
With U-Multirank’s multi-dimensional approach, universities can be assessed on a range of individual performance measures, with the performance groupings per measure ranging from “A” (very good) to “E” (weak), allowing for meaningful and responsible comparisons.
Unlike traditional rankings, U-Multirank never produces composite scores because there is no sound methodological justification for ‘adding up’ the scores of diverse individual measures, or for weighting them to produce a single composite score as used in league tables.
This is just one way that traditional league table approaches misrepresent the true picture of quality and diversity. Another is that they tend to exaggerate differences in performance between universities, creating a false impression of exactness (for example, suggesting that number 27 in a list must be ‘better’ than number 29, whereas in fact differences in scores may be both negligible and influenced more by methodology than performance).
U-Multirank aims to correct the ‘football-league’ mentality of over-simplified league tables and instead provides transparent, statistically sound and fair comparisons.
U-Multirank’s 2017 data release includes 16 subject areas (13 in 2016). These were reviewed and selected by the U-Multirank consortium through stakeholder consultations, including business and industry, higher education experts and student representatives.
The current total number of higher education institutions is 1,497 (1,302 in 2016). This covers more than 3,280 faculties and more than 10,500 study programmes in the subject areas of psychology, medicine, biology, chemistry, mathematics, history, sociology and social work/welfare and – new for 2017 – economics, chemical, civil and industrial engineering. Additionally, electrical and mechanical engineering, business studies, and computer science have been updated. This year 99 countries are featured.
For all 1,497 universities, U-Multirank includes bibliometric and patent data from publicly available databases. Some of these datasets are also used by other global university rankings. These performance measures (indicators) that use bibliometric data are based on a count of the scientific publications produced by the academic staff of a university and the number of times these are cited in other publications.
In addition to the publicly available sources, a large proportion of U-Multirank’s data cannot be found anywhere else.
Prospective students, parents, universities and governments all around the world need higher education institutions that do well in different areas, to meet the needs of different students and to meet different labour market and research needs. Diversity is a key strength of the global higher education sector and we need mechanisms to protect that diversity while still measuring the different ways of performing well.
Traditionally, the available information on the performance of higher education institutions focused mainly on research-intensive universities, and so covered only a very small proportion of higher education institutions. Universities that wanted to be recognised internationally for their performance needed to conform to a narrow idea of quality.
When it was launched in 2014, U-Multirank changed the landscape. It draws on a wider range of analysis and information, covering far more diverse aspects of performance, to help students make informed study choices, to enable institutions to identify and develop their strengths, and to support policy-makers in their strategic choices on the reform of higher education systems.
A multi-dimensional ranking and information tool is the fair way to compare universities globally, measuring differences in performance while reflecting contexts and protecting diversity. U-Multirank has proved that this principled, transparent and authentic approach is not only feasible, but also widely supported by education stakeholders.
U-Multirank is for students – whether looking for a place to study or to move elsewhere around the world – and for their parents, teachers and advisers. It is for researchers in higher education institutions, for decision-makers in institutions (presidents, vice-chancellors, rectors, deans), and for employers and businesses. It is also for governments, ministries and policy-makers, and for the media.
Every aspect of U-Multirank was designed in close consultation with stakeholders representing these groups to ensure that it meets the diverse information needs of them all.
The unique web tool was designed to provide a user-friendly and interactive interface that can be used flexibly by all these diverse groups. The new mobile version of the web tool is aimed principally at student users, but remains supportive of the wide range of different information needs.
The idea originated at a conference under the 2008 French Presidency of the European Union, which called for a new methodology to measure the different dimensions of excellence in higher education and research institutions in Europe and in an international context.
Following this, the European Commission commissioned a feasibility study into developing a multi-dimensional ranking system. This study, completed by a consortium of higher education and research organisations (known as CHERPA) in 2011, confirmed that both the concept and further implementation of a multi-dimensional ranking were feasible, based on pilot work with 150 higher education institutions from Europe and around the world.
U-Multirank built on this feasibility study, publishing its first set of ranking results in May 2014.
For the 2017 results, U-Multirank has enhanced the web tool further to improve its presentation and functionality, simplifying the user tracks and ranking selections to allow for faster results, and enhancing the presentation of the results on both the university profile and results pages.
On the initiative of the European Commission, U-Multirank is developed and implemented by an independent consortium led by the Centre for Higher Education (CHE) in Germany, the Center for Higher Education Policy Studies (CHEPS) at the University of Twente and the Centre for Science and Technology Studies (CWTS) from Leiden University – both in The Netherlands.
The consortium is headed by Professor Dr. Frans van Vught of CHEPS and Professor Dr. Frank Ziegele of the CHE. Other partner organisations include the Bertelsmann Foundation, student advice organisation Push and software firm Folge 3. The consortium also works closely with a range of national partners and stakeholder organisations. A full list of partners is provided on the U-Multirank website.
U-Multirank is free to users and free to all higher education institutions to participate. The institutions participating in U-Multirank bear their own operational costs of data collection which vary depending on the sophistication of their internal management information systems.
U-Multirank is supported by the European Commission and receives €4 million in funding from the European Union Erasmus+ programme for the years 2013-2017.
The goal is to establish an independent organisation to manage the ranking on a sustainable non-profit funding model.