A structural reason for technology’s ethical blind-spot

I want to offer here a possible structural reason why, when technologists are subject to ethical critique, their responses are often all too insufficient. I am thinking primarily of the inability of the targets of Evgeny Morozov’s broadsides to respond, either tonally or in terms of content, to what he has to say. One reason I have heard in discussions is that those under attack want to somehow “rise above” what is being directed at them, but to me this misses the point, and the dual failure here (of style and content) is connected to something broader.

Returning for a moment to those doing the attacking, such as Morozov or Dale Carrico: they tend to view the ethical blind spot I mention here as wilful, a sin of commission rather than one of omission. The sense one gets in reading their pieces is that technologists are dastardly and malevolent in their intentions. I shall add a caveat here: the positions of Morozov, and especially Carrico, are considerably more sophisticated than this outline can do justice to, and neither figure (with whom I agree across the board) is the focus of what I am saying. Their critique is a staging area for my observations here. I am less vituperatively inclined (part of my hermeneutic formation), and so am interested in finding reasons for the inadequacy of these responses to attack, beyond the possibility of a vast (coordinated? emergent?) conspiracy of techno-evil.

What if we take this inability to respond adequately to ethical attack and attribute it to one of the central tenets of technological thought, in terms of coding, the digital, and information theory? This fundamental principle is that of substrate neutrality: the material, the medium through which information is communicated, is irrelevant, so long as there is such a medium. The information is independent of its conveyance, according to this mindset. All that matters is that it is conveyed. That is the level at which technological reflection and theory take place: whether a code is transmitted via copper wire, optic fibre, radio wave, or anything else is beside the point. The medium is not the message. The message is the message.

A potential corollary of this concept is to allow it to filter out across one’s entire mental landscape, dangerously unthought. It is then that this idea becomes ideology (specifically, the Californian Ideology). It is a fine idea when restricted to the realm of one’s own experience and labour, but to allow it unfettered power leads to the often ridiculous notion that reality itself can be hacked, or “disrupted”, with no consequences. It denies social reality. It denies individual human agency. In this it repeats the wackier mistakes of the first-generation Fabians, except that the arrogance of an often uncomplicated socialism is replaced by a crude informationalism. All that matters is information. “Information wants to be free!” Grand plans are replaced by algorithms in this latest iteration.

“Disruption” is uncritically held up as an ideal, never mind that the socio-legal structures we have in place have had centuries and generations to develop and mature, that there have been hundreds of years of bug testing. But now we can disrupt all this collective work, bringing “innovation” (another article of faith) that answers to the designs of a few engineers, or at best a larger group of shareholders. This disregards the fact that in this networked society, the lives of those who exist outside these information ecologies are affected by technologies they will never use. That is why the socio-legal structures of the pre-information age still endure: to protect those who will never be touched by, or have access to, these advanced technologies, no matter how ubiquitous they are claimed to be.

Throughout all this runs the denial of individual agency, of the human being as an ethical being, responsible for their own personal behaviour, responsible to their fellow human beings and citizens, and responsible to future generations. Not just to the shareholder. The blind spot is a result of incommensurate timescales, and incommensurate pictures of reality. Technology does not have a politics, and it does not have an ethics. It cannot respond to ethical concerns because these take place in meatspace, and what happens IRL is – if not quite anathema – certainly something that complicates and sullies the abstracted elegance of the PowerPoint presentation of a technology’s unique selling point.

Technologies exist to clean up the clutter of reality, to sort the mess, to make life easier for us. But what is often forgotten is something that we humanists, philosophers, critical readers are obsessed by. It is, in the title of a Susan Sontag essay, where the stress falls. Information is not independent of its means of communication. The idea of substrate neutrality destroys agency. It neutralises the human voice. It effaces gesture, a warning glint in the eye, the roll of the shoulders implying but not signalling irony and distance from one’s own words. With this in mind, let us read again what technology does:

Technology exists to make life easy for us.

Technology exists (to make life easy [for us]).

Technology exists to make life easy (for us).

Technology exists to make life easy for us.

Where does the stress fall? All too often, the definition seems to be read as the second and third options, for here the style and intonation come from the whole surrounding technological discourse, where the focus is on the technology and the science and how wonderful they are. But they are wonderful as products of our imagination (Chris Anderson’s notion that big data will lead to boot-strapped scientific theories is way off, in various senses…). They are a result of our human agency. Technology is both by us and for us. Where we put the stress in even this constructed definition of what technology is can be illuminating. It shows what the technology hides from itself in its effort to be better at what it does.

What I am saying is that I diverge from Morozov and Carrico somewhat in the intention I see behind the ethical blind spot. I am more inclined to attribute it to an attempt to be good. As such, I think the best thing we can do is draw out some of the contradictions behind this, to show that a thoroughgoing substrate neutrality is not only (i) destructive, but also (ii) incoherent. As with other fields of human endeavour, technology requires an ethics and also a politics, a theory of human agency, that brings us beyond critique.
