Reading about “The benefits of the research blog” at the Kpop Kollective brings to mind an interesting corollary of having a research blog. William Gunn, Mendeley’s head of academic outreach, noted recently in a Research Trends virtual seminar (available here, along with the day’s other seminars) that when you make your work and publications available for download on repositories and the like, “Readership patterns correlate with eventual citation patterns” – understanding readership patterns here to mean actual downloads rather than mere clicks. We assume that when somebody downloads the pdf (or recording, or whatever else) of your work, they intend to read it; the page carrying the download link usually offers only the abstract, which is of limited use in deciding “I am/am not interested in this work”. This allows you, as a researcher, to gauge in advance of publication how your work will be received. Thinking about this leads to some interesting questions.
The older model of how we conceive of publication is shown in Fig. 1. We have a fairly direct vector running from research to output to the reception of that output. This reception – along with related factors such as citation counts and other forms of bibliometric analysis taking place at the meso- and macro-level – is what influences a researcher’s continued work. Do I continue to focus on the same things? Is there an interested audience for this type of work, and am I connecting with them? This is one of the points often glossed over in discussions of bibliometrics. There is something of a binary attitude towards such forms of quantitative analysis, whereby those in the science, technology, engineering, and mathematical fields (STEM) are held to be most readily catered for by bibliometrics, whereas those in the arts, humanities and social sciences (AHSS) are not. In my work in this area, with both STEM and AHSS researchers, I take a somewhat tempered view. My philosophical background in hermeneutics and the philosophies of information and knowledge (Gadamer, Ricoeur, et al.) leads me to see bibliometrics as a tool for trying to put a researcher’s work into context – and that is what hermeneutics is all about. The difficulty is that this tool is extremely blunt, and that the results are far too slow in coming. Calling these “metrics” is misleading, as it arrogates a notion of precision and agreed-upon rules of measurement where at best we have indicators – and beyond that not much aside from a lot of disagreement!
I don’t have any great interest in revisiting this debate, as it is something of a rite of passage for those who are interested in library sciences and the measurement of research. The time for walkabout has passed, however, and we need to fully engage with the underlying principles of this area. We want to measure research perhaps because this research is funded by the taxpayer, or other stakeholders, and so we have a (i) responsibility to do so. Then there is the fact that we want to ensure that our research is of high quality and is having an effect (or that other great shibboleth, “impact”), and so we have a (ii) desire to measure it. Either way, both of these impulses to measure are informed by something similar, to my mind, namely a view to contextualizing research. We want to see how our research fits into the broader research and/or knowledge landscape. If it does, how do we leverage this? If not, how do we improve?
This is research in the context of the information age. The structures of academia are atavistic. They are socially of the medieval age (the lectures, the tutorials), the industrial age (with a split between the natural sciences and everything else – i.e., between what pays and what does not), or the nuclear age (in terms of bibliometrics and Big Science, hence the repeated references to Vannevar Bush’s Science: The Endless Frontier in the scholarly literature). The dragging of higher education, kicking and screaming, into the network – the information age – via MOOCs, Open Access publishing, altmetrics, and these new forms of measuring research is, of course, encountering resistance.
What is interesting about Gunn’s observation is that we have inserted an intervening layer between research and publication. This allows us to sidestep the old vector, or perhaps better, to speed up the analysis of how research is received. It inserts a feedback loop where previously there was only a linear route. It brings the benefits of a network model into the research process. This, of course, cannot be an isolated fix. Networks must nest within other networks for us to fully benefit. As Manuel Castells notes in Communication Power [p. 23], there are three major features of networks: flexibility, scalability, and survivability. Transfer this to the research world, and we have some interesting yardsticks against which we might best compare our activity. Are we flexible enough to “reconfigure according to changing environments and retain [our] goals while changing”? Do we have the “ability to expand or shrink” the scope of our research with little disruption? And can our research “operate in a wide range of configurations” because it is not monocular? By thinking in terms of the network, we are always seeking to discover the context of research.
Consider, then, the new model. It is rather more complicated, but this complexity is a manifestation of the benefits we can reap. Here we have a kind of network within the process, with intervening layers or routes between the old ones. The arrows are a clue to the different iterations of the contextualization process. The dark arrows represent the old, traditional model, where the only feedback research received was binary: yes or no, your paper has been peer-reviewed and will be published, or not. The next layer of the process, with the baby blue arrows, is the one William Gunn discussed. Here pre-publication – making research outputs available on your institution’s repository, such as Dublin Institute of Technology’s excellent Arrow – speeds up the contextualization process (full disclosure: DIT is the institution where I am currently based). The feedback you receive is no longer binary. It is not simply yes or no, but graded: not just is it being read or not, but is it being read a lot, or a little? The next level is the one the Kpop Kollective discuss, where we have an intervening layer between research and pre-publication, namely blogging and other such activities. I would include Open Scientific Notebooks here too, such as those discussed by Jeremy Frey in his Research Trends virtual seminar. At this level the researcher can receive near-immediate feedback and engagement with the research they are undertaking, allowing a greater degree of flexibility, scalability, and – most importantly for the researcher – survivability in a period of uncertainty.
Fully engaging with these new processes, and ensuring that they are part and parcel of higher education research, is of benefit to all: stakeholders, administrators, and researchers themselves. Buzz-words such as impact take on some tangible qualities, as research is disseminated on an ongoing basis, and those who are interested are free to engage with research on their own terms and when it suits them, rather than being beholden to the laggy vicissitudes of the academic publishing behemoth. This is not to say that there are no further questions to be asked, or no exceptional circumstances in which the above picture is inappropriate – far from it. But it is an exciting time, when all those involved and interested in research and knowledge can have a say in the tone and content of this debate.