Some points on Ireland’s university ranking ‘slide’

I did this a short while ago for another ranking, for work in Dublin Institute of Technology’s Higher Education Policy Research Unit (HEPRU), but here it is for the Times Higher Education World University Ranking. First some facts, then a tiny bit of analysis. I’ll make it quick. We read that Trinity College Dublin has slipped in the ranking from 129th position to 138th. From the data that THE makes available, however, TCD’s overall score has improved, from 50.3 to 51.2:

[Screenshots: TCD’s THE scores for last year and this year]

This is significant, especially when we consider that the overall score is not simply the aggregate of the five variables listed above. Alex Usher of Higher Education Strategy Associates notes in a recent post: “But this data isn’t entirely transparent. THE […] hides the actual reputational survey results for teaching and research by combining each of them with some other indicators (THE has 13 indicators, but it only shows 5 composite scores).” This becomes all the more significant when we consider what has happened to UCD in the THE ranking. We go from last year’s result, when it was in the top 200:

[Screenshot: UCD’s entry in last year’s THE ranking]

To its latest rank, this year:

[Screenshot: UCD’s entry in this year’s THE ranking]

Notice anything? The overall score is withheld. Sure, there are clear differences in the individual indicators, but what do these mean? Did UCD’s researchers really publish 15.3 (%? Notches? Magic beans?) less this year (Citations)? The difference in Research is 0.9, so the “volume, income, and reputation” seems to be more or less intact. Teaching has actually improved by 4.6. At best, on a charitable interpretation of the ranking, TCD’s ‘improvement’ in overall score alongside its fall in rank could indicate that other universities are also improving, only more quickly. This echoes the old truth about life among predators in the wild: you don’t need to be the fastest to survive – you just need your neighbour to be slower than you.
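The score/rank distinction is easy to make concrete. Here is a minimal sketch in Python, with invented institutions and numbers rather than the actual THE data, showing how an overall score can rise while the rank it yields falls:

```python
# Toy data (invented numbers, not the actual THE scores): our institution's
# overall score improves year-on-year, but its competitors improve faster.
last_year = {"A": 52.0, "B": 49.0, "Ours": 50.3, "C": 48.0}
this_year = {"A": 55.0, "B": 53.0, "Ours": 51.2, "C": 52.0}

def rank_of(name, scores):
    """Rank = 1 + the number of institutions with a strictly higher score."""
    return 1 + sum(1 for s in scores.values() if s > scores[name])

print(rank_of("Ours", last_year))  # 2
print(rank_of("Ours", this_year))  # 4: score up, rank down
```

Rank is a purely relative quantity, so reading a fall in rank as a fall in performance is unwarranted without the underlying scores.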

An Irish Times article, discussing these results, says that the “main impact of the cuts has been on the student-staff ratio, which is one of the major factors used by all the ranking agencies.” Which is true. But the OECD, in its recent Education at a Glance report, notes that staff-student ratio is not an indicator of teaching quality, teaching outputs, or results. It is an indicator which has been jumped on because it is an intuition pump, in that it “elicits intuitive but incorrect answers.” There is as much evidence saying that large classes can lead to better learning outcomes as there is suggesting the opposite.

One may then be inclined to agree with Prof. Andrew Deeks of UCD when he says “Our own analyses show that in terms of objective measures of teaching and research performance, we are performing well and making good progress.” The call to reverse cuts, in the hope that this will magically lead to an improved performance in rankings, is a political argument. And that’s fine. But beware of rankings bearing ill tidings. Rankings measure what they measure, rather than measuring the objective reality of higher education – and what they claim to measure may be questionable in and of itself.

Daniel Bell, post-industrial society, and who should pay for basic research

A few things popped Daniel Bell’s The Coming of Post-Industrial Society (1973) onto my radar, and so I got an old copy for myself online. The edition I have is from 1976, with a new introduction in which the author attempts to lessen the strain of the excessively heavy lifting some of his ideas were being forced to do by subsequent interpreters. What struck me is that, for a 40-year-old book, much the same conversations are being had today, although it appears that in some respects we have leap-frogged the substantive elements in favour of nitty-gritty technical fixes. Bell’s book rewinds us to these bigger-picture problems.

World University Rankings – Information or Noise

Messing around with some of the results available from the Times Higher Education World University Rankings website, it’s interesting to note that near the top of the ranking things stay relatively stable over the years, while further down there’s quite a bit of variation. In an ideal world, all the datasets would be available for download and easily manipulable (transparency!), but this is not yet the case. Anyway, doing some work for work, here’s a selection of a few institutions with their ranks plotted from the last THE-QS ranking in 2009-2010 to the most recent THE(-TR) ranking for 2013-2014.

There’s quite a bit of change from 2009-2010 to 2010-2011, when THE split from QS (or vice versa). This split resulted in a change in methodology and weightings, but things have not yet settled down: the weightings have continued to change (though they seem to have stayed the same between 2011 and 2012), and, as Andrejs Rauhvargers notes (pdf), “the scores of all indicators, except those for the Academic Reputation Survey […] have been calculated differently.” As well as this, in a recent journal article (“Where Are the Global Rankings Leading Us? An Analysis of Recent Methodological Changes and New Developments”), Rauhvargers notes that THE doesn’t/won’t publish the scores of its 13 indicators. Transparency! Anyway, for what it’s worth, here are some pretty pictures that illustrate the noisiness of the rankings – just fooling around to see if I will return to this with the full top 200 over the past 5 years.

[Charts: rank trajectories of selected institutions, 2009-10 to 2013-14]
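For anyone who wants to reproduce this sort of chart, here is a minimal sketch of the approach; the institution names and rank values below are placeholders rather than the actual THE results:

```python
import matplotlib.pyplot as plt

years = ["2009-10", "2010-11", "2011-12", "2012-13", "2013-14"]
ranks = {  # hypothetical rank trajectories, for illustration only
    "University A": [80, 120, 110, 130, 125],
    "University B": [150, 95, 160, 140, 170],
    "University C": [60, 70, 65, 75, 72],
}

fig, ax = plt.subplots()
for name, trajectory in ranks.items():
    ax.plot(years, trajectory, marker="o", label=name)

ax.invert_yaxis()  # rank 1 belongs at the top of the chart
ax.set_xlabel("Ranking year")
ax.set_ylabel("THE world rank")
ax.legend()
plt.show()
```

Inverting the y-axis keeps ‘up’ meaning ‘better ranked’, which makes the noisiness further down the table easier to see.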

Quote: Academics as the Four Yorkshiremen

While there are some, perhaps-justified, fears about modern academia effectively losing the insights of the next Newton, it’s worth recalling the circumstances in which many of the well-known figures in the history of science conducted their work. While they may not have been writing grant reports or marking exams, they were likely seeking patronage, carrying on journalistic careers, undertaking the duties of a doctor or a vicar, teaching, running a family business or otherwise making a – usually non-scientific – living.

Those who really were excluded were not solitary geniuses who could not find sufficient time for thinking, but those who were, as a result of class, geography, race or gender, never likely to have the opportunity to begin an education, let alone contribute to the established scientific societies and journals. And this affected the science that was done: ample research shows how the norms, assumptions and interests of elites have shaped supposedly value-free science.

Rebekah Higgitt, “Who’s missing in modern academia, solitary geniuses, or something much more significant”

This quote brings us to the heart of the “well, in the bad old days things were simpler” ‘argument’ that has been trotted out since time immemorial. It’s the academic’s equivalent of the Four Yorkshiremen.

Right. I had to get up in the morning at ten o’clock at night, half an hour before I went to bed, eat a lump of cold poison, work twenty-nine hours a day down mill, and pay mill owner for permission to come to work, and when we got home, our Dad would kill us, and dance about on our graves singing “Hallelujah.”

But you try and tell the young people today that…

The demands made on academics are of course onerous, and we can’t deny that. But nor should we let this blind us to the situation faced by others in the wider academic and research world, or let the poor working conditions of those who are employed blind us to the undermining of the work itself. We see this in the nouveau-indentured labour of graduate students, adjuncts, non-tenure-track lecturers, and researchers on short or fixed-term contracts, all with little hope of security. Being aware of this casualization of academic labour, and of the erosion of tenure, is imperative.

Getting rid of the academic as the heart of the academy is not a matter of ‘including stakeholders’, or ‘increasing efficiency’, or bringing better organizational models to bear. It is a coup against knowledge, and all the processes required to create it. Lecturers and researchers are at the heart of academia, as they live their lives in it. They cannot do what they do without the university. Administrators could go and administer anything, elsewhere. Students hang around for 3-4 years (any longer, and they cross over the border from student to researcher, on to the academic side of things) and then are gone. Presidents and Rectors always have the option to helm other forms of organization. But when we allow the role of the academic as researcher and teacher to be shunted to one side, we lose something. This is not to suggest that, Smaug atop the hoard, the academic sits at the pinnacle of a hierarchy with all others subordinate to them. Rather, the academic has the centrality of the hub, the central node in the vast knowledge-creating network that is the university. Only by recognising and asserting this can we preserve the workers as well as the work.


University Rankings and Jacques Ellul’s concept of “technique”

Currently I find myself doing work on world university rankings (Times Higher, ARWU, QS, etc.), and with all the reading of policy and academic papers, fatigue is starting to set in. It seems that rankings are a Good Thing, or rankings are a Bad Thing, or rankings are a Thing which can potentially be Good or Bad. There is an awful lot of noise, but not a lot of information. Luckily, a restorative to this policennui (can that be a word?) arrived yesterday in the post, in the form of Jacques Ellul’s The Technological Society.

As per usual, the translation of a foreign book somewhat clunkily misleads a potential reader, because the original title of this work is La Technique: L’enjeu du siècle. I won’t get into the implications of the subtitle (is this ‘stake’ a wager after Pascal, where we are gambling on our humanity, etc. etc.), but rather the main idea of “technique”. Ellul writes first and foremost about technique as something that precedes technology and “the Machine”. Indeed, technology must necessarily follow the existence of technique, because even science comes after technique. Our inability to recognise this thus far, he writes, is a cause of much of our confusion. Making these distinctions, as Ellul does, allows us to see certain things which otherwise escape our notice:

The machine, so characteristic of the nineteenth century, made an abrupt entrance into a society which, from the political, institutional, and human points of view, was not made to receive it; and man has had to put up with it as best he can.

We are still creatures of the nineteenth century, in the form of industrial-scale production, global capital, international communications, urban society, and so on. I would add another to those Ellul suggests, however: the university and higher education. Education itself, public education, is a product of the nineteenth century. It was developed (piecemeal and unevenly) in order to meet the needs of industrialism. Industrialism here is another big noun which encompasses innumerable assumptions and expectations, as well as other big nouns. And it too can be subsumed within Ellul’s technique.

Higher education and the university, however, took some time to catch up with this sense of ‘public education’, and even longer to confront the implications of industrialism. Universities were elite institutions: elite in terms of access, and there to serve the sons (no, or few, daughters) of the elite. A transition slowly took place, however, and the elite form of education gave way to mass education. Today a further transformation is taking place, as mass education gives way to universal education. (This outline comes from Martin Trow’s 1973 Problems in the Transition from Elite to Mass Higher Education. The tables below are from Ellen Hazelkorn’s “Everybody wants to be like Harvard – Or Do They: Cherishing All Missions Equally”, available as pdf here.)

[Table: characteristics of elite, mass, and universal higher education]

This chronological sketch presents an overview of what higher education is, in its bare outline. It puts idealisations such as “the life of the mind”, or “pure research”, or “moral improvement”, or anything else, at arm’s length. These ideas have a place, but a place in context. This life of the mind was only possible in the specific situation of a world where 0-15% of individuals attended university. Cardinal Newman, for example, was not proposing that everybody would attend his ideal college. This form of higher education was predicated on a majority of the population not attending university. That the idealised view of education developed in such elite institutions is no accident.

As a greater proportion of the population began to attend universities, a greater number of such institutions needed to be built. We see this in the table below, which shows the results of this need for growth.

[Table: growth in the number of higher education institutions across the OECD]

A caveat needs to be mentioned, however, because this table covers OECD nations only. It represents what is often called the “developed world”. It doesn’t include the BRICS, or the Next Eleven, or any other locations where the greatest growth in higher education will be necessary. Indeed, if the supply of higher education internationally is to keep pace with demand, considerably greater growth than this will be necessary. Hazelkorn quotes John Daniel, writing in 1996, as saying that “one sizeable new university has to open every week” to keep pace with the projected growth. This means that we have yet another transition in higher education: from universal higher education to global(ized) higher education.

It is in this context of geopolitics, economic development and growth, demographics, and a complex knowledge ecology that university rankings exist. The older, national rankings (such as U.S. News & World Report) emerged alongside the transition from mass education to universal education. They met a need to monitor and benchmark higher education according to set principles and expectations. World university rankings are relatively recent, but they mark the transition of university systems into this internationalized context of higher education. The crucial difference, however, is the differing status of the participants. We have very wealthy, stable, “developed” countries alongside developing countries going through very real growing pains. Everybody is on the same page, it would appear. Rankings are this ‘same page’, and they are an example of Ellul’s technique par excellence.

Technique integrates everything. It avoids shock and sensational events. Man is not adapted to a world of steel; technique adapts him to it. It changes the arrangement of this blind world so that man can be a part of it without colliding with its rough edges, without the anguish of being delivered up to the inhuman. Technique thus provides a model; it specifies attitudes that are valid once and for all. The anxiety aroused in man by the turbulence of the machine is soothed by the consoling hum of a unified society.

So wrote Ellul in 1954, and almost 60 years later we see that university rankings are yet more evidence of this. Rankings function in just this way, concealing the wrenching changes being wrought in higher education.

World university rankings are symptomatic of and instrumental in this process. Rankings purport to provide the model of a university, just as Cardinal Newman once did. On the other hand is the alternative position: that rankings are a fait accompli. A show of macho realism is held up as the ideal attitude – jutting the chest out, saying “rankings are here to stay, so man up and get on with it.” Any problems with rankings can be solved by methodological finessing: an altered weighting here, a peer-review element there, with a cherry of altmetrics on top. More interesting work has been done on the coercive aspect of rankings, on these forms of evaluation as an example of neoliberalism. I have nothing to add to this, because while the observations made are often astute, all too often they result in exercises in taxonomy, slowly segueing into mudslinging. Tracing the outlines of neoliberalism(s) thus is tantamount to terminological trainspotting. “Oh, there’s an example of privatization, and there’s a deregulation…” (A Godwin’s Law for the word “neoliberalism” wouldn’t go astray.)
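It is worth making that “altered weighting here” concrete. In the following toy sketch (all scores and weights invented), the underlying indicator data stays identical, and a change in the weighting scheme alone is enough to reverse the ordering of two institutions:

```python
# Two institutions with fixed indicator scores (invented numbers).
institutions = {
    "A": {"teaching": 70, "research": 60, "citations": 90},
    "B": {"teaching": 85, "research": 75, "citations": 55},
}

def overall(scores, weights):
    """Weighted composite of the indicator scores."""
    return sum(scores[k] * w for k, w in weights.items())

old_weights = {"teaching": 0.3, "research": 0.3, "citations": 0.4}
new_weights = {"teaching": 0.4, "research": 0.4, "citations": 0.2}

for weights in (old_weights, new_weights):
    order = sorted(institutions, reverse=True,
                   key=lambda name: overall(institutions[name], weights))
    print(order)  # ['A', 'B'] under the old weights, ['B', 'A'] under the new
```

Methodological finessing of this sort does not refine the measurement so much as change what is being measured.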

We need to return to the essence of education – whatever that is, though presumably it doesn’t involve Newman, unless he has been sanitized of all elitism – and allow universities to remain ‘faithful’ to their ‘missions’. So does Ellul speak to these perspectives? Indeed he does, and in more or less just these terms. He outlines (prefiguring Morozov) how technique and its proponents have two solutions at the ready. The first is the creation of new techniques to mediate between the human and the technical/technological. There’s nothing that more technique, more technology, can’t fix. Your ranking is having unforeseen and problematic effects? The solution is more ranking! (I have elsewhere termed this the “Harder, Better, Faster, Stronger” ideology of technology.)

The second solution involves redefining the human (via, I suggest, a claim to an “original”, pre-technique situation). I myself have suggested this type of solution in the past, by proposing that we need an ethics of technology, or, as Ellul writes, “a Humanism to which the technical movement is subordinated”. But, he continues, “the panacea of merely theoretical humanism is as vain as any other.” Usually at this point, when one has read that there is a form of solution A and a form of solution B, and our learned author says both are wrong, there comes the boldly proposed alternative C. Ellul is not one for such Hollywood endings. European cinema-style, we are left with an ambiguous and inconclusive end to the book, rather than an ending.

One interpretation of this impasse may be that Ellul did not have the conceptual, theoretical framework to develop such an alternative solution. In this version of events, the requisite distance to see things as they actually are was not available to Ellul (aside: on the role of ‘actually’ here, see Daniel Dennett’s piece on the ‘surely’ operator, “How to spot a weak argument”). If we do not have an alternative solution ourselves (and I do not), then we too do not have the requisite distance to fix matters, to see clearly. This is indeed the way things are often discussed regarding rankings. We hear that they are new, that they have only been around for about a decade, and that it is too soon to tell whether they are a Good Thing or – if they are a Bad Thing – how they can be replaced beyond piecemeal technocratic improvement.

I am tempted to go in another direction from Ellul’s aporia here, though: maybe there is no such single solution. If there isn’t one answer, then perhaps there isn’t just one question, or just one problem called “world university rankings”. What if these rankings must nest within an entire system which conveys not only information about how our policies are being implemented (such as how countries are drawing upon rankings), but also about how these policies are being received (in terms of the ‘consumer choice’ aspect of rankings, on the individual, citizen level)?

This would mean rankings would have to be considered as but one element in a diverse system of indicators and evaluation, working not just towards the market ideal of efficiency, but also towards the constitutional and democratic goals of political legitimacy and freedom. In this outline, universities might be ranked, but so too would the activities taking place within them, at the discipline level, where peer review would lend greater rigour and legitimacy to the process. Similarly, the entire system of higher education would be evaluated to ensure that there isn’t a growing gap of educational inequality, whereby research is preferred at the expense of education and training (as is now the case).

In some ways, this is a combination of the two forms of solution which Ellul criticises, but this doesn’t seem to be a failing in and of itself. I may have to invoke Engels in my defense here, however, and note that a difference in quantity (understood here to include scale) implies a difference in quality. University rankings deal with the meso level, in terms of institutions, but we continually attempt to extend their implications beyond this, to milk them for more than they are worth. At the macro level we require the evaluation of entire systems, such as the Universitas 21 ranking of National Higher Education Systems. Similarly, at the micro level, we need evaluations closer to the activities of individual disciplines or sub-disciplines, in a way that does justice to the vagaries of their praxes. There is also a difference between an analysis based on a static thing to be measured and the dynamic view of ongoing activities. We need to look at what teachers and researchers do, beyond what they simply have done. (This is what is so problematic about the REF in the UK: it uses the concept of “impact” at the level of the individual researcher when perhaps it applies better to the department or discipline.) To do justice to the activities we wish to evaluate, we have to see that we do not measure, like the fisherman holding up a dead or dying fish we have caught, but rather estimate or guesstimate as things are in flux, and in situ.

Whether this is idealistic, or cynically technocratic, I cannot tell. It would doubtless require more administration and bureaucracy, and perhaps my hope in our ability to undertake the effort may be misplaced. Nevertheless, it is an alternative, and it has given me something to think about even beyond the two types of solutions which Ellul suggests that we are limited to. And that’s a start.

The ethics of research evaluation: the individual

One must endeavour at all times, when reading evaluation proposals, policy documents, legislation, white papers, and journal articles without number, to see research in its proper context. Most of these documents suggest that we must see research in the “fullest” context, or the “broader” context. And interestingly, this full broadness has a tendency to favour the economic perspective. This is not necessarily the product of malevolent neoliberal intent. It is often a by-product of the hope that a quantitative approach lends itself to impartiality and objectivity. To an extent, I understand this. But I would say that objectivity and neutrality are not the same thing. To stand to one side and not critique a process which we consider damaging is not impartial, but rather decidedly compromised. To regard individuals and processes and structures with equal impartiality is compromised. To suggest that ‘research’ is a disembodied Thing, to be poked and prodded without consequences, is compromised. Research is the product of an individual’s labour. Research is embodied. This is the proper context in which it should be viewed.

Research is the result of an individual’s effort, experience, time, and toil. Even group research isn’t done by a group as such, but rather by the coordinated activities of individuals in concert. It is not impersonal, and all discussion that disembodies research is to be resisted. The reason it is to be resisted is that all work has a politics. Just as my work implies an ontology, a view of the world which it is necessary to make explicit as per standard research proposal/project structure, it too implies a model of interpersonal and social interaction. It is necessary to draw out this politics, this standpoint, which can be objectively sketched, certainly, but which by no means implies a ham-strung neutrality. My point about neutrality here is that all forms of impartiality and objectivity should favour the individual, for individuals are the actors in research. We should not favour an evaluation protocol, or a quantitative methodology, or another disembodied process by putting it in the same category as the individual researcher.

My point, then, is that not to have a politics for one’s research and work leaves one at the mercy of those who do. In this, I am assuming that research is implicated in a nexus of various vested interests, both within the hierarchy of research and higher education, and within broader social and economic networks and institutions. These vested interests are biased in that they have their own responsibilities (to shareholders, voters, boards of directors, etc.), and they can coordinate their activities accordingly. Research, as a whole, does not speak with one voice, but researchers as individuals should speak for themselves, and be biased in terms of their own interests. This is one of the points glossed over in stakeholder theory: that different stakeholders can be in conflict. Consensus and agreement are not the greatest good here. Indeed, the assumption that some compromises must be found, that there should be some give and take, hides a deeper conflict that needs to be brought out into the open.

A step towards putting this on a productive footing is that researchers as individuals need to become more explicit about what it is they are doing, and why they are doing it. What makes them, as researchers in higher education, different from researchers in private industry, from business interests, from politicians, from bureaucrats and administrators – even administrators within their own higher education institutions? With this, a more open discussion and debate can take place regarding research evaluation and the direction which research should take. Researchers and scholars are citizens, and pay their taxes, and so their views and expertise should not be silenced in favour of the politician’s mythically intolerant and hard-nosed taxpayer, nor the minor deities of Innovation and Growth. Researchers do not necessarily need to speak with one voice (no “X of the world, unite!”), but they do need to speak with their own voices, in their own language, and from their own expertise.

Where are the higher education libertarians?

How come we don’t have a Tea Party of research evaluation? Where is the “Don’t Tread On Me” flag for the REF? How come all the market ideology which is imported into the administration of universities is of the unreconstructed sort? How come the focus is on ever more regulation (of individual researchers and their work), whereas elsewhere in this grand market regulation is anathema? Where are the spontaneous-order ideologues, the invisible-hand acolytes for the knowledge economy? Why aren’t academics and researchers recognised as the experts they are, and so left to self-regulation, as is the norm elsewhere in the ‘market knows best’ dreamland? If the Michael Goves and David Willetses of the world are bringing market mechanisms into education and research on the principle that these realms are markets already, why not expand this thought to its ultimate conclusion? If they are markets (of ideas, of knowledge, of technology, of understanding), then the last thing required is any government involvement. Or perhaps it is an incoherent analogy from the off…