Redefining superintelligence

Definition of “superintelligence”

By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.

Entities such as companies or the scientific community are not superintelligences according to this definition. Although they can perform a number of tasks of which no individual human is capable, they are not intellects, and there are many fields in which they perform much worse than a human brain – for example, you can’t have a real-time conversation with “the scientific community”.

Nick Bostrom, How Long Before Superintelligence?


Towards a definition of technology

One popular view holds that the human mind truly came into its own via a “rewiring” (Kevin Kelly in Out of Control). With this rewiring came an extension of the brain beyond its grossly physical, biological limitations. Technology serves body, body serves brain, with brain and body as a multiplicity serving the genes in the final reckoning. Technology is the overcoming of limits – of whatever kind. That which overcomes limits, in whatever mode, thus serves us as a technology. We should be able to extend such a provisional definition beyond hammers and wheels and machines. Accordingly, in the most serious sense, art is a technology. A poem, a painting, a dance is a recalibration of crude literalism (a kind of tyranny of an ontology of representation, of reality as a ‘given’ rather than as a ‘made’). Art can move us beyond these assumptions, which we have forgotten were once new.

With this, if we consider all things in terms of limits and our overcoming of them, does this open the door to new analyses? Are there examples where it can be said not to apply? In technological terms, what is the function of this definition, and can it tell us anything new? It is basically an engineer’s-eye view, whereby a technology is a solution to a problem. Its function is to solve a problem, the problem being a limit which has been encountered. From this definition, some fascinating implications follow. At this basic level of analysis, technology appears to us as self-perpetuating. Each technology brings with it a new set of problems, which in turn requires a new set of solutions, and on, and on… literally ad infinitum. We can say, then, that the first characteristic of technology is that it is autocatalytic. In chemical terms, a reaction is autocatalytic if its product is also the catalyst for further reactions. The same holds for technology, and as such it is manifestly both a source and a product of feedback. Edward Tenner examines this in detail in terms of “revenge effects”. I am not taking an ethical stance on the matter one way or the other, as Tenner’s thesis requires him to. Nor am I suggesting that this is simply a corollary of technology. I am saying that this feature is structurally integral to all technologies. (For an alternative overview with an ethical focus, see “The Unanticipated Consequences of Technology” by Tim Healy.)
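To make the autocatalytic claim concrete, here is a minimal sketch – a toy Python model of my own devising, with invented quantities, not anything drawn from Tenner or Healy – in which every solved problem spawns one or more new ones, so the backlog of problems grows rather than empties:

```python
from collections import deque

def simulate(initial_problems: int, spawn_per_solution: int, steps: int) -> list:
    """Toy model of autocatalytic technology: each 'solved' problem
    spawns spawn_per_solution new problems (its revenge effects)."""
    problems = deque(range(initial_problems))
    next_id = initial_problems
    backlog = []
    for _ in range(steps):
        if problems:
            problems.popleft()                    # a technology solves one problem...
            for _ in range(spawn_per_solution):   # ...and its use creates new ones
                problems.append(next_id)
                next_id += 1
        backlog.append(len(problems))
    return backlog

print(simulate(initial_problems=1, spawn_per_solution=2, steps=10))
# -> [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

The point is structural rather than numerical: so long as each solution produces at least one new problem, the process is self-perpetuating whatever particular values are chosen.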

Problem: solved.

Let us return to the human mind and consider the following. The rewiring that brought about consciousness brought with it the ability to think in terms of what is not present, in time and in space. It allowed for planning for a harsh winter. It allowed one to weigh the potential benefits and dangers of the next valley over. It escapes what I called above crude literalism. There are lines in Robert Browning’s “Andrea del Sarto” that capture this:
“Ah, but a man’s reach should exceed his grasp,
Or what’s a heaven for?…”
The human being is that creature par excellence whose reach exceeds its grasp, except that it is the cognitive reach that exceeds the physical grasp. Technology is the attempt to bridge the two. We will never, in our present form, have the steady-state evolutionary form of living fossils such as the coelacanth; in them, reach and grasp coincide perfectly. Homo sapiens, in contrast, is necessarily an imperfect solution to the problem of life (if this can be said without woe-is-us connotations), the problem of adaptation to an environment. The reason is that by being sapiens we continually alter our environment, such that no steady state becomes possible. We are implicated. Each new product of our mind that enters the world, as either a physical or a mental technology, is a solution that creates new problems. The cycle cannot stop.

Ideas and criticism

If an idea is presented, and elements of it are questioned in a manner that parries but does not thrust, in a manner purely of the surface, does this in fact have any merit as criticism? A criticism ideally has a purpose, which is basically to be a form of troubleshooting. It must be specific; it must engage with the topic. To do otherwise is equivalent to saying “well, I haven’t used the software you developed, but it’s probably rubbish” (we might call this The Troll’s Refrain). Stand back and ask: how does this help us to actually improve the idea? Even thinking ‘us’ is useful, because if there’s a conversation, then there is a joint effort. David McCandless (author of the excellent Information is Beautiful) has a useful breakdown of how we think about the world, in the following pyramid:

[Pyramid, base to apex: data → information → knowledge → wisdom]

In this, there is a progressive hierarchy of whittling out what is irrelevant, the old effort of separating the wheat from the chaff. This means that the bottom is a field somewhere, and so the top is presumably a Weetabix. Ideas are about turning information into knowledge; they move us from one level to the next. The non-criticism I am thinking of above seeks to move us back down, from knowledge to information, or to data. Instead of information, it focuses on noise.

There is a temptation in all debate to ask for greater and greater precision, but at the same time you have to step back and ask what level of perfection you want. If ideas are to be abstract, then of course they cannot account for everything. What we are searching for is a degree of functional ‘robustness’. This is a term to which Kevin Kelly makes continual reference in his writing (I am mainly thinking of 1994’s Out of Control, which has dated far less than I would have expected), denoting the capacity of enduring networks of communication to cope with unpredictability. It is a nice antidote to management horseshit about “best practice”, because it factors into its very structure that we start from least worst and go on from there. We trade off a small degree of efficiency for a considerably greater degree of structural toughness. There will always be noise, but not every blip is a threat to the entire system. Ideas are supposed to be solutions, and they are not to be rejected on aesthetic grounds. If it works, it works, and from there we can begin the work of making our solution elegant.
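That trade-off can be put in toy numbers (my own sketch, with invented failure probabilities, not an example from Kelly): if a single component fails with probability p, then k independently failing redundant copies all fail together with probability p^k, so each extra copy raises the cost linearly but shrinks system-level failure geometrically.

```python
# Toy model of the efficiency/robustness trade-off (illustrative only).
# A component fails with probability p; running k redundant copies costs
# k times the resources, but the system fails only if every copy fails.
def system_failure_probability(p: float, k: int) -> float:
    return p ** k  # assumes the copies fail independently

for k in (1, 2, 3):
    print(f"copies={k}  cost={k}x  failure={round(system_failure_probability(0.1, k), 6)}")
# copies=1  cost=1x  failure=0.1    (lean but fragile)
# copies=2  cost=2x  failure=0.01   (double the cost, a tenth the fragility)
# copies=3  cost=3x  failure=0.001
```

A small, steady tax in efficiency buys a disproportionate gain in toughness, which is the shape of the bargain robustness asks us to accept.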