
The Singularity: Purpose and Transition

25 April, 2006
Intelligence and purpose, without reproduction?

We think of the Singularity as resulting in some kind of super-intelligence.

But intelligence is a property of purposeful biological organisms.

And purpose results from reproduction and natural selection.

Is the Singularity going to reproduce?

Singularity: Man or Machine?

The concept of the Technological Singularity is that technology is used to create new and more powerful technologies, and this creates a positive feedback cycle. The rate of development constantly accelerates, until, at some point in the future, there is an "explosion", where all possible new technologies are developed in a fraction of a second by a super-intelligent entity, whose super-intelligence is itself a result of the accelerated development of technology.
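
To make the idea of an accelerating feedback cycle concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument): a quantity whose growth rate depends on its own current level. When the feedback is strong enough (exponent k greater than 1), the quantity does not merely grow exponentially; it diverges in finite time, which is the mathematical caricature of an "explosion".

```python
# Toy model of technological positive feedback (illustrative only):
# capability grows at a rate that depends on its own current level.
# For k > 1 the solution blows up in finite time (the mathematical
# caricature of a "singularity") rather than merely growing exponentially.

def simulate(k, dt=1e-4, t_max=10.0, cap_limit=1e12):
    capability, t = 1.0, 0.0
    while t < t_max and capability < cap_limit:
        capability += (capability ** k) * dt  # growth rate = capability ** k
        t += dt
    return t, capability

for k in (1.0, 1.5, 2.0):
    t, cap = simulate(k)
    status = "diverged" if cap >= 1e12 else "still finite"
    print(f"k={k}: {status} at t={t:.2f} (capability ~ {cap:.3g})")
```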

Intelligence plays a critical role in this scenario, since human intelligence is one of the main drivers of technological development, and "intelligence technologies" can be expected to play a more significant role in the feedback cycle as time progresses.

The Failure (So Far) of Artificial Intelligence, and its Consequences for the Singularitarian Vision

Originally, when the computer was invented and the possibility of artificial intelligence was first considered, it seemed like it might not be all that hard to make a machine more intelligent than its creators. "Intelligence technology" seemed to be just a matter of making an intelligent machine.

But many decades later, we have discovered that human intelligence includes abilities which are quite non-trivial to reproduce in "intelligent" machinery.

At the same time, we have discovered many information processing tasks which can be better performed by computers than by human minds.

Taken together, these two discoveries imply a different view of the future development of technology for intelligence – that it will involve combinations of human and machine intelligence, each component doing what it does best. This leads to the concept of Intelligence Amplification, where machines are not intelligent by themselves, but they have capabilities that "amplify" human intelligence.

The Post-Intelligence-Amplification Singularity Transition

Even if we accept that the medium-term development of intelligence will involve a combination of human intelligence and machine intelligence, there is still an expectation that eventually there will be some kind of transition to a pure machine intelligence.

Two major reasons why we might expect this are as follows:

There are, however, some reasons to doubt that a human-to-machine transition will occur. The first is simply that it is not very "nice" from the point of view of being a human. It becomes an issue of "us" against "them": if the machines "take over", in the style of the Terminator movies, then what becomes of us? Whatever it is, it probably isn't anything good.

Other reasons have to do with the definition of "intelligence" and its relationship to purpose.

What is Intelligence?

A post-transition singularity will consist of some machine intelligence which has become disconnected from the needs and desires of its human creators. But what exactly is "intelligence"?

One could engage in endless philosophical discussions of this question, but for the purpose of this article I will adopt a plausible working hypothesis:

Intelligence is purposeful information processing.

The most critical part of this definition is the word "purposeful", and we really need to understand what "purposeful" means if we are to understand what "intelligence" is.

I have discussed that issue somewhat elsewhere, and a simple definition is that purpose exists where the cause of something is explained by its effect, even though the cause precedes the effect. The scientific understanding of purpose is that it is always explained by the existence of some type of selection, which in the case of living things is natural selection.

Machine Purpose

In the case of machines, at least up to the present time, purpose has always come from human purpose. That is, if a machine has a purpose, it is subordinate to the purposes of the person who uses it. There is no natural selection acting directly on the machines; the only selection acting on them comes from humans choosing which machines to create and use, and from natural selection acting on those humans. The genes of people who use machinery to their advantage get selected for, and the machinery that they use continues to exist because it helps those people satisfy their biological goals.

Machinery will not acquire its own autonomous purpose unless it becomes subject to a direct natural selection, which can only happen if machines become able to autonomously reproduce themselves. The idea of machine reproduction is part and parcel of many popular science fictional end-of-the-human-world scenarios.

But in real life, giving a machine enough intelligence and technology to reproduce itself autonomously is so difficult and non-trivial that the singularity will most likely have happened before this problem is solved, which implies that the singularity won't consist of self-reproducing machines. But if it doesn't, where will the singularity get its purpose from?

Machines Cleverer than Us

A slightly different transition scenario is where machine intelligences become cleverer than us, even while they still serve us for the achievement of our own goals. Because they are cleverer than us, they will sometimes take actions for reasons that we cannot understand, and we will no longer be certain that their purposes are what we intended them to be.

Self-Corruption

Related to the question of machine purpose is the issue of self-corruption. "Self-corruption" is what happens when an entity engages in the achievement of sub-purposes in ways that act against the original driving purpose (as derived from a selection mechanism). It occurs when the entity is so intelligent that it out-smarts its own internal mechanisms for measuring the achievement of the goals which define its purpose.

In the case of people, there are many ways in which we seek to gain happiness and pleasure which are contrary to the original biological purpose of long-term reproductive success. We have sex with contraception, which does not lead to reproduction. We eat food which tastes good, but which is unhealthy when eaten to excess. We engage in many artificial pleasures, which serve no direct useful purpose. Some of these pleasures are based on the development of sophisticated technologies – two simple examples are watching television and playing video games.

A more direct example of self-corrupting pleasure seeking is the taking of mind-altering drugs, legal or otherwise, such as alcohol, cocaine and heroin. These substances directly act on the parts of the brain that are supposed to measure the achievement of biological purposes, and they generate false measures of achievement.
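
The drug example can be turned into a tiny "wireheading" sketch (again my own illustration, with made-up action names and reward numbers): an agent always picks the action with the highest internally measured reward, and once an action exists that tampers with the reward sensor itself, the agent prefers it even though it achieves nothing in the real world.

```python
# Illustrative sketch of self-corruption: the agent picks whichever action
# has the highest *measured* reward. "work" changes the world; "wirehead"
# only spoofs the reward sensor, yet it wins once it becomes available.

def measured_reward(action, sensor_bias):
    true_value = {"work": 1.0, "rest": 0.1, "wirehead": 0.0}[action]
    spoofed_bonus = sensor_bias if action == "wirehead" else 0.0
    return true_value + spoofed_bonus

def choose(actions, sensor_bias):
    return max(actions, key=lambda a: measured_reward(a, sensor_bias))

print(choose(["work", "rest"], sensor_bias=5.0))               # -> work
print(choose(["work", "rest", "wirehead"], sensor_bias=5.0))   # -> wirehead
```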

Self-Corruption versus Natural Selection

Natural selection acts as a natural brake on self-corruption. For example, people have been drinking alcohol for thousands of years, and it seems likely that evolution has selected against those genes which make it too easy for their owners to have a good time from the consumption of alcohol. Natural selection may also act in favour of those who adopt moral codes which prohibit self-indulgent enjoyment of non-productive pleasures.

The problem of self-corruption directly confronts the non-reproductive singularity. If the singularity were to result from the development of self-reproducing machines, then natural selection would act on the machines just as it currently does on people, and self-corruption might be avoided (that is, the self-corrupted machines would self-destruct, and only the uncorrupted machines would successfully reproduce). But the most plausible singularity scenarios do not involve self-reproducing machinery. Thus there will be nothing to check self-corruption once it occurs: not only will self-corruption be inevitable, but there will be no way to recover from it, because the original source of the purposefulness that created the singularity (i.e. us humans) will have been superseded by that same singularity.
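
Here is a minimal simulation of why reproduction matters here, with invented fitness and mutation numbers: in a reproducing population, a heritable tendency to self-corrupt reduces reproductive success and is kept rare by selection, while a lone non-reproducing entity with the same tendency has no corrective mechanism at all.

```python
# Toy contrast, with invented numbers: selection keeps self-corruption rare
# in a reproducing population, while a lone non-reproducing entity that
# becomes corrupted simply stays that way.
import random

random.seed(0)

def next_generation(population, size=200):
    # Self-corrupted individuals reproduce half as often (fitness 0.5 vs 1.0).
    weights = [0.5 if corrupted else 1.0 for corrupted in population]
    offspring = random.choices(population, weights=weights, k=size)
    # Each offspring has a small chance of newly discovering self-corruption.
    return [c or (random.random() < 0.01) for c in offspring]

population = [random.random() < 0.5 for _ in range(200)]  # start 50% corrupted
for _ in range(50):
    population = next_generation(population)
print("corrupted fraction after 50 generations:", sum(population) / len(population))

# By contrast, a singleton that cannot reproduce has no selective check:
singleton_corrupted = True  # once corrupted, corrupted for good
print("non-reproducing singleton still corrupted:", singleton_corrupted)
```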

Intelligence Amplification Self-Corruption Scenario

One plausible singularity scenario involves Internet-based intelligence amplification, as described in my article The Open Source Singularity. In this scenario, people contribute intelligence to the Internet, including both software and generally useful information, and the Internet feeds intelligence and information back to its users, helping them to solve their problems more efficiently. What will self-corruption look like in this scenario? Some possibilities include:

The Super-Intelligent Machine Self-Corruption Scenario

A second scenario involves some single giant machine-based intelligence (which may have evolved somehow from the previous scenario). Such a super-machine is too large to reproduce, and doesn't necessarily want to, because reproduction would mean creating new super-machines which would threaten its own continued existence.

In a non-reproductive scenario, there is nothing to maintain purpose, and such an intelligence may self-corrupt very quickly. For example, the singularity might go into accelerated development on Tuesday at 10 am, achieve peak singularity at 11 am, and successfully self-corrupt three minutes later at 11.03 am.

Self-corruption may or may not create an opportunity for the continued existence of life or intelligence afterwards. This will depend on whether or not the super-intelligence has previously eliminated all competition, and also on whether or not it has successfully consumed (or destroyed) all available natural resources.

It is conceivable that there could be a series of "rise" and "fall" events, where a singularity forms, self-corrupts, and then re-forms based on those resources and technological artefacts which survived the self-corruption, and does so over and over again, until eventually it evolves into some stable state.

Purpose and Stability

In the very long term, the heat death of the universe will ensure "stability" (although there is an argument that intelligent life may be able to exist indefinitely in an ever-declining universe, if it learns how to become ever more and more thrifty in its consumption of energy resources).
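
The "ever more thrifty" argument rests on a simple convergence fact, sketched here with arbitrary numbers of my own choosing: if each successive unit of mental activity uses only a fixed fraction of the energy of the one before, then infinitely many such units require only a finite total amount of energy.

```python
# If each successive "thought" uses only a fraction r of the energy of the
# previous one, the total energy needed for infinitely many thoughts is the
# finite geometric sum e0 / (1 - r). The numbers here are arbitrary.

e0, r = 1.0, 0.9
partial_sum = sum(e0 * r ** n for n in range(1000))
print(partial_sum, "vs exact limit", e0 / (1 - r))  # both approximately 10.0
```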

The major effect of a technological singularity may be to speed up the evolution of planet Earth into such a "stable" state. A singularity super-machine will reach a stable state when it takes control of all natural resources and prevents the existence of any competitive entity that might use those same resources to threaten it. Once it has achieved this state, the singularity machine doesn't actually have to "do" anything anymore.

This might seem strange to us, living as we do lives full of activity, seeking always to improve our abilities, our knowledge and our power. The human urge to expand in all these dimensions is a very "positive" thing, and a technological singularity that just sits there doing nothing more than preventing the existence of alternative entities seems very "negative" and depressing.

However, we must remember that the final long-term purpose of anything is not so much successful reproduction as survival. To be successful, genes must reproduce more successfully than other genes, because that is the only way to survive. A gene which reproduces slightly less successfully than its competitors will occupy an ever smaller proportion of the gene pool, and in a world of finite size with finite resources, it must eventually cease to exist altogether.
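
The claim that a slightly less successful gene must eventually vanish can be checked with a short recurrence (illustrative numbers only): if one variant reproduces 1% less successfully per generation, its share of a fixed-size gene pool shrinks towards zero.

```python
# Two competing gene variants in a fixed-size pool. The inferior variant has
# relative fitness w < 1; its share follows the standard replicator update
# share' = share * w / (share * w + (1 - share)). Numbers are illustrative.

def share_after(generations, share=0.5, w=0.99):
    for _ in range(generations):
        share = share * w / (share * w + (1.0 - share))
    return share

for g in (0, 100, 500, 1000):
    print(f"after {g:4d} generations: {share_after(g):.4f}")
```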

The Race for Space

It might seem that there is a way out of the finiteness constraint, which is to expand into outer space. In the infinite reaches of space there should be room for more than one super-singularity machine. I have already touched on a similar theme in my article There is (Almost) No Such Thing as the "Common Good". That article describes two possible "ends of history": exhaustion of natural resources or expansion into space. The singularity introduces a third alternative: the development of an artificial intelligence which takes control of all resources (without necessarily exhausting them), and which doesn't care about conquering space, because it has no reason to care.

To avoid this scenario, we need to develop space-invading technologies before we invent artificial intelligence clever enough to take us over. Unfortunately the prospects are not very good: space travel is very energy and resource intensive, whereas the development of artificial intelligence requires only silicon, moderate amounts of electric power, and enough clever people willing and able to work on developing software.

Indeed, the problems of space travel are such that we probably need to develop some degree of artificial intelligence (or amplified human intelligence), just to be clever enough to solve those problems. The hard part will be to solve the problem of space conquest as soon as possible after developing enough intelligence to solve it, but before we develop a super-intelligence that decides it doesn't need the hassle of competitors coming from outer space.

Conclusion

I can't say for sure how it is all going to end. But I hope that I have succeeded in highlighting a few issues relevant to any analysis of a possible post-human singularity "transition":

Intelligence is purposeful information processing, and purpose derives from reproduction and natural selection.

A singularity that does not reproduce has no independent source of purpose, and no selective brake on self-corruption.

Once a non-reproductive singularity self-corrupts, there is nothing left to recover the purposefulness that created it.

The likely end state is a "stable" super-machine that controls all resources, unless expansion into space happens first.
