A lot has been said about the technological singularity: what it is, how it might happen, how many decades we might have to wait for it to happen, what effect it will have on the human race.
The main idea of the singularity is that if technology X can be used to improve technology X, then there will eventually be an accelerating positive feedback loop, until technology X becomes as good as it possibly can be.
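This feedback loop can be sketched numerically. The following is a toy model of my own (not anything from this article, and the proportional-improvement assumption is mine): if each round of improvement in technology X is proportional to X's current capability, then the absolute gain grows every round, which is the accelerating loop described above.

```python
# Toy model (illustrative assumption, not a claim about real technology):
# each step, capability improves by an amount proportional to itself.
def improve(capability, rate=0.5, steps=10):
    history = [capability]
    for _ in range(steps):
        capability += rate * capability  # better tech builds better tech
        history.append(capability)
    return history

growth = improve(1.0)
# The absolute gain at each step is larger than the one before: acceleration.
gains = [b - a for a, b in zip(growth, growth[1:])]
```

In reality the loop would stop accelerating once technology X approaches "as good as it possibly can be", but the model captures the runaway phase.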
The most important technology for the technological singularity is the technology of intelligence, because our human ability to create technology depends on our intelligence. When the technology becomes as intelligent as we are, then it won't even need us to manage its own development. But even before this stage is reached, human intelligence can be augmented by technology.
Intelligence can be regarded as a measure of the quality of goal-oriented information processing. Information processing technologies fall into three main categories: storage, communication and calculation.
Each human being can be regarded as intelligent on their own, but much of our intelligence is really a product of our development within the society that we live in, and it depends on our ability to communicate with other people. It also depends on the ability of society to store information. In less technological societies, almost the only means of storage are the memories of the people in it, and the storage of information requires a continual and concerted effort by the older members of society to communicate what they know to its younger members.
The earliest information processing technologies were those that assisted with storage and communication. We can regard writing as an information storage technology. Any storage technology is also a communications technology, because information may be retrieved from storage by a different person from the person who stored it. The communications element of a storage technology is vastly increased when the ability to duplicate efficiently is included. Thus books were an important information processing invention: they enabled information to be stored and retrieved without the sender and recipient having to exist at the same time or in the same place. The printing press was a further significant advance, because it enabled books to be cheaply mass-produced, proportionally increasing the number of people who could benefit from retrieving the information stored inside them. There also exist communications technologies that do not necessarily involve any element of storage, including the telegraph, the telephone, radio and television.
The third element of information processing is calculation. Some of the earliest calculation technologies were machines and devices to assist with arithmetic, such as abacuses and mechanical calculators.
The culmination of all these technologies is the modern general purpose digital electronic computer. It performs calculations, and it can store and retrieve digital data. Computers communicate with people and with each other. The modern computer connected to a public computer network (i.e. the Internet) subsumes all previous information processing technologies, and can, for instance, provide the capabilities of books, printing presses, abacuses, calculating machines, telegraph, telephones, radios and televisions.
Modern computers are designed to be Universal Computers. This means that a computer can be set up to perform any calculation task that can be performed by a calculating machine, where the instructions on how to perform the specific calculation are themselves provided as information to be processed by the computer. This corresponds to the notion of software as data, and we are all familiar with the concept of increasing the calculational capabilities of our computers by copying data onto them (i.e. installation), or, in some cases, creating new data (i.e. writing new software).
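The idea of software as data can be sketched with a toy of my own devising (a made-up three-instruction stack machine, not anything from this article): the same "machine" gains new calculational capabilities simply by being supplied with new data, which is all that installing software really is.

```python
# Illustrative sketch: a tiny stack machine whose "software" is plain data.
# Supplying a different instruction list gives the same machine a different
# calculational capability -- no change to the machine itself is needed.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
    return stack[-1]

# "Installing" new software is just copying new data onto the machine:
adder = [("push", 2), ("push", 3), ("add",)]
result = run(adder)  # 5
```

A real computer is the same idea taken to its limit: a universal machine whose behaviour is entirely determined by the instructions it is given as data.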
What does all this have to do with the Technological Singularity? If intelligence is a major determinant of the rate at which new technology develops, then any technology which enhances intelligence is going to be a major cause of the acceleration of this rate. Improved information storage technology means that information about technology can be stored and later retrieved. Improved communication technology means that people in different places can share information with each other about how to develop new technology.
Improved calculation technology means that the technology itself can be part of the invention process. Most modern industrial production processes are designed and managed with the help of information processing technology. The design studios have designers sitting at computers using computer software to help them design things. Production facilities have computers controlling and monitoring the processes of production.
Computers and other information processing technology play a major role these days in the development of new technology. But there are still some major constraints that limit how much it is possible for combined human/machine intelligence to improve its own intelligence at an accelerating rate:
What these constraints mean is that, in the very short term, the most significant opportunities for technological self-improvement will come from the combination of people and computers acting to modify the software on computers, including the software which people use to develop new software.
In the modern world, there are two main types of software:
Proprietary software is defined by having an owner. It can only be used by those who pay a price to the owner for the right to use it. It can only be distributed by the owner, and, most importantly, it can only be modified by the owner.
Open source software is defined by any "owner" of the software giving up their rights to control use, distribution and modification (subject, in some cases, to certain conditions, as with the GPL). So anyone can use it, anyone can distribute it, and anyone can modify it.
From the point of view of which type of software is most likely to take part in an accelerating technological singularity, open source seems the most likely, since it has the least constraints on how it can be modified and who can modify it. Different open source software products can also be freely combined (sometimes subject to limitations of different licence types), whereas proprietary products can only be combined if they have the same owner, or if the different owners enter into legal negotiations with each other.
However, there is a good reason why a lot of software is proprietary: it has a more straightforward business model, because it is the owner's right to control use and distribution – their copyright – which enables them to receive compensation. There is nothing in modern copyright law which provides any direct compensation to the developers of open source software for the value of the software that they produce.
Because of this, more resources can be put into proprietary software development, and for many types of software the best available software is proprietary. Nevertheless, there are some open source applications which are the best available, or at least better in some respects than their proprietary equivalents.
Open source and proprietary are not always completely separate from each other. Some open source software permits inclusion in proprietary products, some explicitly exclude it. When we consider what software is used to help create or improve software, open source applications can be used to create proprietary applications, and proprietary applications can be used to create open source applications.
There are also applications which run on the Internet, such that they are effectively free for everyone to use, even though the applications are not distributed by their owners, and are only run on the owners' computers. A primary example of this is search engines, such as Google. Search engines are a significant aid to software development, and, for example, a developer might use Google or another search engine to find useful open source (or proprietary) libraries, to find code snippets, to look for causes of error messages, and even to find online documentation. At the same time, search engine companies may use open source software to help implement their search web applications.
If the logic of the technological singularity applies to software development, then where will it end? And, given the current dichotomy of open source versus proprietary, which of those two will it be?
When software by itself becomes more intelligent than human beings, then the process of modification will no longer be constrained by the limitations of human intelligence, and it is hard to imagine that a singularity will not shortly follow (unless the problem of software development is really much harder than we realise, and it can't even be solved by someone more intelligent than ourselves).
But a singularity could easily occur before this point, in which case ordinary (unmodified) people will be included in it, forming part of some super-human intelligence.
If there is a software singularity and it is a proprietary software singularity, then very likely it will have to occur within one particular software company. For those who worry about the negative effects of the technological singularity (and that probably should be all of us), this is an alarming prospect: it would result in an enormous concentration of wealth and power within that one company.
The open source alternative seems a little more favourable towards the rest of us. Open source is an intrinsically public process, so an open source singularity is likely to include the general public. And given the lack of a direct business model for open source, an open source singularity will not necessarily result in an enormous concentration of wealth and power among a lucky few. In other words, an open source singularity will create wealth and power for those who use open source software, which could be everyone (well, not quite everyone, but everyone with a computer and a broadband Internet connection).
Of course, just because an open source software singularity is "better" than a proprietary software singularity, there is no guarantee that it is going to happen. If the general public appreciated the possibility of a software singularity, then they might vote for governments that would favour open source software in various ways, for example by not allowing software patents, and by providing income to developers judged (by the public) to have made useful contributions to open source software development.
Unfortunately, even simple technical issues can be difficult to get across to the general public (like how much time passed between the beginning of the Universe and the first appearance of human beings). Most people probably don't even know what a software patent is, or what "open source" is, and if they have heard of the Technological Singularity, then it definitely has a science fiction sound to it.
So we probably have to base predictions about whether a software singularity will be open source or proprietary on the assumption that legal and economic conditions will remain much as they are now (at least for the period of time from the present to whenever the singularity is going to occur).
Proprietary software has the advantage that considerable resources can be devoted to its development. Its disadvantage is that freedom of modification can only exist within the company that owns it. And even within the company, there probably exist significant constraints on who can do what type of modification, given the necessity to maintain a coherent and reliable product range to sell to their customers.
Open source software has the advantage that there are no artificial or legal constraints on its modification. The disadvantage is that all development must be funded by parties who cannot expect to get any direct payment in return for their efforts.
I suspect, although I cannot be sure, that freedom of modification will turn out to be more important in the long run, and resources will turn out to be less important. Firstly, self-modification is a defining characteristic of the technological singularity, and open source software can be designed to be as modifiable as possible, without concern for any negative consequences. Proprietary software will always want to "hold back", and the released forms of much proprietary software are shackled by complex schemes which restrict how the software can be used, and even which data it can process.
Secondly, the resource constraint may matter less and less. As the technology for developing new software technology improves, it becomes easier and easier to make new improvements, which means that fewer resources are required to make any particular improvement. So proprietary software developers will lose their advantage in this respect.
We can already see some of these aspects of modifiability and easier development having benefits for open source developers:
Given the unknowable nature of the post-singularity future, it might seem crazy to do anything to deliberately make it happen sooner. But if you believe that there are different possible singularities, some "better" than others, then you might want to do something to make a "good" singularity happen sooner. There is, after all, only going to be one singularity, so whichever gets there first is going to be it.
What can we do to help develop an open source singularity? A simple answer is: help develop open source software. A slightly better answer is: help develop open source software which makes it easier to develop open source software, and, if possible, help develop open source software which makes it easier to develop open source software which makes it easier to develop open source software. And so on. And it would be even better to develop open source technologies which assist with the development of open source software more than they assist with the development of proprietary software.
Some people have been inspired by the idea of an open source technological singularity to start open source AI projects, like the AI Mind Project. This seems a logical approach, since an artificial intelligence is the likely form that the technological singularity will take. The only problem is that this mistakes the end for the means. The cause of the technological singularity is the ability of technology to assist with the improvement of technology. So it's going to make more difference to a future singularity if you make mundane improvements in software that can be used now to develop new software. Something like the recently famous Ruby on Rails will probably do more to bring the singularity closer than any ambitious artificial intelligence project.
For another simple example, in my article Disorganized Incremental Software Development, I develop the idea of using cryptographic hash codes to distinguish interfaces and implementations, thus avoiding difficulties of namespace collision that plague much of modern software development. These difficulties force the control of software projects to be more centralised than it really needs to be. Removing them could totally decentralise software development, encouraging increased participation in open source development and maximising open source software re-use. (My idea has not developed any further than the article, so it's something that someone with more spare time than myself might like to take up and develop further.)
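One way such a hash-based scheme might look is sketched below. This is my own minimal illustration of the general idea, assuming SHA-256 as the hash and a naive whitespace normalisation; the actual proposal in the article referred to above may differ. Each interface is named by the hash of its canonical definition, so independent developers need no central registry, and hence no centralised control, to avoid name collisions.

```python
import hashlib

# Sketch of hash-identified interfaces (illustrative, hypothetical scheme):
# the identifier of an interface is the SHA-256 hash of its canonical text,
# so two developers who write the same definition independently arrive at
# the same identifier, with no shared namespace authority required.
def interface_id(definition: str) -> str:
    canonical = " ".join(definition.split())  # naive whitespace normalisation
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Identical definitions (up to whitespace) hash to the same identifier...
a = interface_id("greet(name: str) -> str")
b = interface_id("greet(name:  str) ->  str")
# ...while any change to the definition yields a distinct identifier.
c = interface_id("greet(name: str) -> bytes")
```

The same trick applied to implementations would let multiple implementations of one interface coexist, each identified by its own content hash.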