U-Multirank takes a different approach from the existing global rankings of universities. Firstly, it is multi-dimensional and compares universities' performance across the different activities they are engaged in. It is not confined to research but takes into account different aspects and dimensions of the performance of universities: teaching and learning, research, knowledge transfer, international orientation and regional engagement.
Secondly, U-Multirank does not produce a combined, weighted score across these different areas of performance and then use these scores to produce a numbered league table of the world’s “top” 100 universities. The underlying principle is that there is no theoretical or empirical justification for such composite scores. Empirical studies have shown that the weighting schemes of existing global rankings are not robust: small changes in the weights assigned to the underlying measures (the indicator scores) will considerably change the composite scores and hence the league table positions of individual universities. Therefore, the U-Multirank methodology looks at the scores of universities on individual indicators and places these in five performance groups (“very good” through to “weak”).
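The sensitivity of composite scores to their weighting scheme can be illustrated with a minimal sketch. The universities, indicator scores and weights below are entirely hypothetical; the point is only that a small shift in the weights reorders the league table even though no underlying indicator score changes.

```python
# Hypothetical illustration: a weighted composite score is not robust to
# small changes in the weights assigned to the underlying indicators.

def composite(scores, weights):
    """Weighted sum of indicator scores."""
    return sum(s * w for s, w in zip(scores, weights))

# Three hypothetical universities scored on two indicators (0-100 scale).
universities = {
    "Uni X": [90, 60],
    "Uni Y": [70, 85],
    "Uni Z": [80, 72],
}

def league_table(weights):
    """Order universities by composite score, highest first."""
    return sorted(universities,
                  key=lambda u: composite(universities[u], weights),
                  reverse=True)

# A shift of only 0.04 in the weights reverses the entire ordering:
print(league_table([0.57, 0.43]))  # ['Uni X', 'Uni Z', 'Uni Y']
print(league_table([0.53, 0.47]))  # ['Uni Y', 'Uni Z', 'Uni X']
```

Because the top position flips with such a small change, the composite ordering reflects the weighting choice at least as much as it reflects performance, which is the rationale for reporting individual indicator scores in performance groups instead.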
Closely related to the multidimensional approach is the focus on users' needs. U-Multirank provides information relevant for decision-making by many different parties, that is, students, university administrators, policy-makers, academics, business leaders, et cetera. Composite indicators, as used in other rankings, define the relevance of each indicator uniformly and patronize users. U-Multirank, on the other hand, takes the view that different users of rankings can have quite different preferences and priorities with regard to what they find important areas of university activity ("Quality lies in the eye of the beholder"). Hence U-Multirank leaves the decision about the relevance of different performance indicators to its users. This is facilitated by U-Multirank's interactive web tool.
Many stakeholders have been involved in the development of U-Multirank – its indicators, the underlying data collection instruments, as well as the design of the interactive web tool through which the results are presented. They will continue to be involved in the further refinement of U-Multirank’s features. The information needs of the users of U-Multirank and their suggestions about new and existing features and functions will be monitored constantly and taken into account.
U-Multirank combines institutional ranking (of whole institutions) with a set of field-based rankings that focus on particular academic disciplines or groups of programmes. Both are equally important. Whereas field-based information is most important to many users of rankings (e.g. to students who are looking for a university in the field they want to study, to academic researchers interested in comparisons with colleagues in their field), other users (including e.g. university presidents and policy makers) are also interested in information about the performance of institutions as a whole.
In contrast to existing global rankings, which are rankings of only one university type, namely the internationally-oriented research university, U-Multirank is broader and provides information on institutions with different missions. U-Multirank covers institutions with a range of profiles: universities of applied sciences, regionally-oriented colleges, and specialised institutions such as music academies and teacher training colleges. U-Multirank highlights the diversity in the various institutional profiles. U-Multirank includes more than 300 institutions which have never been visible in global rankings before, a number of them showing very good performance on particular indicators.
If a ranking includes a wide range of institutional profiles, it has to ensure that it provides meaningful comparisons. U-Multirank, therefore, enables users to compare particular sorts of universities ("like-with-like"). It does not make much sense to compare a small regional undergraduate teaching institution with an internationally oriented research university, nor to compare an arts academy with a technical university. U-Multirank invites the user to first choose a number of empirical "profile indicators" and then compare institutions with similar profiles.
Institutions are ranked into five different performance groups (rank groups A through E, with A expressing “very good” and E “weak” performance) for each of the indicators in the rankings. While the league tables produced in other global rankings may satisfy media needs for headlines (“The number one is…”), they tend to exaggerate differences in performance between universities and provide a false impression of exactness (“Number 27 is better than number 29”). U-Multirank, therefore, leaves it to the user to produce her/his own list of universities (or university fields), showing the performance on a selection of indicators made by the user.
To assist U-Multirank users in producing their results, the interactive web tool can sort universities alphabetically or sort them based on the scores for a particular indicator. As an extra service, users can follow an approach similar to the Olympic medal table, where universities with the highest number of "very good" scores ("A" scores in the ranking tool) are shown first in the table. If more than one university has the same number of As, then (and only then) do the "B" ("good") scores come into play. Similarly, to rank those with the same number of As and Bs, the Cs are considered, and so on.
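The medal-table ordering described above is a lexicographic sort on grade counts. The sketch below shows one way to implement it; the university names and grade profiles are hypothetical, and the actual web tool's implementation is of course not specified here.

```python
# Sketch of the "Olympic medal table" ordering: sort by the number of A
# scores, break ties by the number of Bs, then Cs, and so on.
from collections import Counter

GRADES = "ABCDE"  # the five U-Multirank performance groups

def medal_key(grades):
    """Sort key: a tuple of negated grade counts, A first.

    Negating the counts makes a plain ascending sort put universities
    with more As (then more Bs, etc.) at the top.
    """
    counts = Counter(grades)
    return tuple(-counts[g] for g in GRADES)

# Hypothetical universities with grades on four indicators each.
universities = {
    "Uni P": ["A", "A", "B", "C"],
    "Uni Q": ["A", "A", "A", "E"],
    "Uni R": ["A", "A", "B", "B"],
}

ordered = sorted(universities, key=lambda u: medal_key(universities[u]))
print(ordered)  # ['Uni Q', 'Uni R', 'Uni P']
```

Uni Q leads with three As; Uni P and Uni R tie on two As, so their B counts decide, placing Uni R ahead. Note that, consistent with the text, the Bs only matter between universities whose A counts are equal.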