I do not intend here to get into the arcana of Business Intelligence 2.0 (BI 2.0), because I am not one of its acolytes. That is, I am not trying to sell you anything. What I want to point to is one central problem with the entire notion of Business Intelligence, a problem related to other, similar problems with information. In that self-serving white paper from 2002, “Business Intelligence 2.0: Are we there yet?” (pdf link here), Gregory S. Nelson offers a speculative projection of where BI 2.0 might take us. In between repeated and worrying references to Star Trek, he suggests that through BI 2.0, hey presto:
Decisions, facts and context will be developed through “crowdsourcing.” No longer will reports (or how data is structured) be left up to the designer, the environment will evolve as users make the data and derived insights work better through contributions of many.
This can be called the fallacy of absent agency, common to most popular applications of the concept of emergence to society (Nelson misattributes this to evolution rather than emergence): the assumption that everything that takes place in the realm of Web 2.0 happens magically, without the intentionality of human actors being present. It neglects the fundamental point that all of the algorithms Nelson cites in his golden litany of successful businesses (Reddit, YouTube, Facebook) are built on the time and effort of their users. Just to make it clear: that means human eyes.
The data which are to be used by BI 2.0 are not going to spontaneously arrange themselves into some coherent form. That requires – yes! – a designer. Whether that designer is a computer scientist, or information scientist, or data analyst, or archivist, or librarian, or whatever else, there is a human there who has to make decisions about what happens with the data/information, in what way, in what order, and – most importantly – why.
That why is the reason for this post. The purpose of BI 2.0 is not, presumably, to maintain the status quo. It is, presumably, to be used as a tool which will somehow lead to innovation. It should, presumably, alert those who deploy it to opportunities for new technologies, processes, and discoveries. Here I am not simply discussing BI 2.0 as it exists in the sphere of the market, but more significantly in the realm of research and academia, to where it will no doubt slither at some point. The problem is that the very fundamentals upon which BI 2.0 is built do not favour innovation. Indeed, BI 2.0 is by its very nature a creature of purest ideology, of nothing but the status quo.
The problem with any notion of crowd-sourcing “decisions, facts, and contexts” is that these are going to be collated on the basis of pure numbers. Big numbers. What most people think is so will be what is crowd-sourced. This puts a premium on what is known, familiar, given. It is information as ideology. We should instead be making space for the alternative, or what we think of as the speculative and visionary. That is not the realm of an algorithm.
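The mechanics of this are worth spelling out. A minimal sketch, in Python, of what "collation on the basis of pure numbers" amounts to (this is a hypothetical illustration of majority aggregation, not any vendor's actual algorithm):

```python
from collections import Counter

def crowdsource(contributions):
    """Aggregate contributions by pure numbers: the most
    frequent answer wins; everything else is discarded."""
    counts = Counter(contributions)
    consensus, _ = counts.most_common(1)[0]
    return consensus

# Nine contributors repeat the familiar view; one offers a
# speculative alternative. The aggregate keeps only what
# most people already think.
votes = ["familiar"] * 9 + ["speculative"]
print(crowdsource(votes))  # prints "familiar"
```

However the counting is dressed up, the minority signal never survives the aggregation step; the output is, by construction, the already-given.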
Humans are involved in all these Web 2.0 entities, as I have pointed out; but on top of this you need a social interface with those humans that allows for people who do not think alike. Where is the space of disruption in BI 2.0? Where is the gap through which originality enters? If BI 2.0 is allowed to become a model for how we arrange and organise information in our society, it precludes the very possibility of future research and innovation. Where we think we will be creating new opportunities, we will in fact be strangling them in their infancy.