Professor Ferdinand von Prondzynski (formerly President of DCU, currently Vice-Chancellor of Robert Gordon University, Aberdeen) discusses new forms of higher education ‘profiling’ over at his A University Blog:
I don’t really doubt that as recipients of public money universities should present transparent data as to how this is being spent and what value is being generated by it. But comparisons between institutions based on such data always carry some risk. So for example, DCU’s student/staff ratio looks more favourable because the university has a much larger focus on science and engineering than other Irish universities, and laboratory work requires more staff input. NUI Maynooth is ‘cheap’ because the main bulk of its teaching is in the humanities, which are less resource-intensive. This information may not be immediately obvious to the casual observer, who may therefore be driven to questionable conclusions. Ironically some of these risks are not so prominent in the more mature league tables, such as the Times Higher Education global rankings, which will already have allowed for such factors in their weightings. The raw data are more easily misunderstood.
It seems to me that institutional profiling is not necessarily preferable to rankings. And it could be open to mis-use.
The ‘comparisons = rankings’ conflation seems a bit premature, specifically regarding U-Multirank. It has been specifically designed not to produce a single-number ranking, but instead to provide interested parties with the kind of information they are actually looking for, across a range of variables and indicators. There are always dangers with any numbers that attempt to quantify the qualitative; that is why they require interpretation and clarification. The handwringing here just seems performative, as though one should be seen to have an issue with all attempts at benchmarking/measurement. Rankings have been around for a decade at this stage, and our attitudes towards them should move beyond the reflex that they are simply ideologically suspect and therefore to be rejected.
I would ask: if this new attempt at improving the situation of rankings is “not necessarily preferable”, then what is? Or what could be added to the U-Multirank system? It includes teaching (which most other rankings don’t), as well as regional engagement and technology transfer. It will be available for use by any number of stakeholders (rather than just bureaucrats and administrators, as was previously the case), and this is the expectation today – transparency. Finally, it will include subject/discipline-specific indicators for those often-overlooked areas of study (arts, humanities, and social sciences).
Websites like Eurostat are also open to “mis-use” and misinterpretation, but is this a good reason to reject them? Open datasets, which can be interrogated and reconfigured in novel ways by the curious, are becoming the norm (data journalism, or groups such as Open Data Dublin, etc.). For a change there is an opportunity for real benchmarking, for real transparency, which will no doubt have room for improvement, and yet it seems that some still want to protect their data fiefdom from the uninitiated, the great unwashed.
[Update 17/1/14: Education Datapalooza to Promote Innovation in Improving College Access, Affordability, and Completion: “Today, in response to the President’s call, the White House, the U.S. Departments of Education and Treasury, and the General Services Administration are hosting an Education “Datapalooza”, highlighting innovators from the private, nonprofit, and academic sectors who have used freely available government data to build products, services, and apps that advance postsecondary education, empower students with information, and help colleges innovate in creative and powerful ways.”]