Bradford’s Law, as explained by the Weekly Peedja, estimates the “exponentially diminishing returns of extending a search for references in science journals”. It suggests that articles published in scientific journals follow, more or less, a Pareto distribution: as with all power laws, a small number of articles attract the vast majority of citations. This remains the assumption underlying modern bibliometric and citation software such as Thomson Reuters’s InCites, though alternative approaches, such as altmetrics, are being developed. The question I wish to ask is whether Bradford’s Law is still sufficient in a world of so-called “big data”. I have read that we apparently lack a “good explanation” for why Bradford’s Law “works”, but I would suggest we have a sufficient, if not exhaustive, explanation: our attention is finite. We cannot read everything, nor read forever. Our reading is intentional and goal-directed, so we want to read with maximum efficiency; if we are working in a particular area of research, it behoves us to engage with, or at least be familiar with, what the majority of people are reading and working on.
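To make the shape of that distribution concrete, here is a minimal sketch of Bradford’s classic 1 : n : n² zoning, using synthetic Zipf-like journal productivity rather than any real dataset (all numbers and function names are my own illustration):

```python
def zipf_articles(n_journals, exponent=1.0, scale=1000):
    """Articles per journal, journals ranked 1..n by productivity (synthetic)."""
    return [max(1, round(scale / rank ** exponent))
            for rank in range(1, n_journals + 1)]

def bradford_zones(counts, n_zones=3):
    """Split ranked journals into zones holding roughly equal numbers of
    articles; return how many journals fall in each zone."""
    total = sum(counts)
    target = total / n_zones
    zones, acc, start = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= target * (len(zones) + 1) and len(zones) < n_zones - 1:
            zones.append(i + 1 - start)
            start = i + 1
    zones.append(len(counts) - start)
    return zones

counts = zipf_articles(200)
print(bradford_zones(counts))  # journals per zone; ratios roughly 1 : n : n**2
```

A handful of core journals fills the first zone, while the last zone needs a long tail of scarcely productive journals to contribute the same number of articles.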
Bradford’s Law was first described in 1934, in the years leading up to the development of Big Science, the era of Vannevar Bush and Science: The Endless Frontier. This was to be the time of socio-economically informed, if not led or directed, research: the birth of an attitude which still permeates discussions of research through the concepts it spawned, such as “impact”. It is undeniable that this big model of research still exists (think of CERN), but the question to be asked is whether the long tail in science is becoming ever more important. Big Science is not a universally applicable model, and nobody assumes it is.
Massive projects will undoubtedly always exist, but they are subject to the law of diminishing returns if they do not allow for un- or only semi-directed research, the kind of activity that takes place in the long tail. Historians and philosophers of science point to numerous examples illustrating that the accidental element of research must not be underestimated; Kuhn’s concept of paradigm shifts places this insight at the centre of science (as do Feyerabend and, to a lesser extent, Polanyi). The long tail, as its market acolytes keep telling us, is where “disruption” takes place. Bradford’s law is a power-law, yet plotted against the logarithm of journal rank its central region is curiously linear.
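That linearity can be seen in a small sketch of Bradford’s “bibliograph”, again assuming a synthetic Zipf-like sample rather than real journal data: the cumulative article count R(n) grows almost exactly as a straight line in log(n).

```python
import math

# Synthetic 1/rank productivity: articles per ranked journal (illustrative only)
counts = [round(1000 / rank) for rank in range(1, 201)]

cumulative, acc = [], 0
for c in counts:
    acc += c
    cumulative.append(acc)

# Least-squares fit of R(n) against log(n) over the central region
xs = [math.log(n) for n in range(10, 201)]
ys = [float(cumulative[n - 1]) for n in range(10, 201)]
m = len(xs)
mx, my = sum(xs) / m, sum(ys) / m
sxx = sum((x - mx) ** 2 for x in xs)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
syy = sum((y - my) ** 2 for y in ys)
slope = sxy / sxx
r2 = sxy ** 2 / (sxx * syy)  # coefficient of determination of the semi-log fit

print(f"slope ~ {slope:.0f}, R^2 = {r2:.4f}")
```

An R² very close to 1 is what the straight central stretch of the bibliograph looks like numerically: a power-law in the raw counts, a line on a semi-log axis.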
Research budgets are being squeezed, and so collaboration is coming to be regarded as a way around this: a compromise between a lack of funds and the need for a degree of “bigness” to make research worthwhile. What is the position of research tools such as InCites or Mendeley in this picture? If we have a Bradford-style distribution of journals within subjects, will there also be a Bradford distribution of research areas and foci in and of themselves, quite separate from the journals? That is to say, doesn’t Bradford’s Law, in illustrating journal activity, also reflect the reality of research activity? Are not researchers similarly distributed? Undoubtedly, but what Bradford’s law doesn’t allow us to model is interdisciplinarity and collaboration. This is a dangerous oversight.
Not being able to account for interdisciplinarity as the inevitable result of collaborative research is myopic, given that so much knowledge is created under liminal or interstitial conditions, at the permeable barriers between disciplines (biotechnology, nanotechnology, materials science, etc.). Returning to Bradford’s Law: it held firm while our reading time was finite. It was a way to model the emergent features of the readership of journals and their articles. With advances in computing techniques, however, this barrier of finite reading time has been, if not quite rendered infinite, then at least made elastic.
Advances in data analytics, aggregation software, data mining, and altmetrics are all methods of co-opting the time and effort of others and turning their insights to our own benefit. It is crucial to remember, amid all the celebration of computing techniques and technologies, that a fundamental amount of human effort and time underlies all of the benefits reaped by algorithms. It is incumbent upon us, in this time of change within the world of research, not only to recognise this but to address this human element more explicitly. That might allow us to continue to do not only research with “impact”, but also the “pure” research which is the lifeblood of every discipline: the pure research that produces disruption, that lives in the long tail, and that gives us the most surprising and unexpected results and advances.