Abstract
Until recently, humans and computers have used radically different strategies in their intelligent decision making. In chess, for example, a human chess master memorizes about 50,000 situations and then uses his or her neural-net-based pattern recognition to judge which of these situations best matches the current board position. In contrast, a computer chess master typically memorizes very few board positions and relies instead on its ability to analyze between 1 million and 1 billion move-countermove sequences for each move.
Humans have neither the precise memory nor the mental speed to excel at the recursive paradigm (not, at least, without a computer to help). But while humans will never master the recursive paradigm, machines are not restricted to it.
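To make the recursive paradigm concrete, here is a minimal minimax sketch. Chess itself would require a lengthy move generator, so a toy take-away game stands in for it; this is not how any particular chess program is written, but the recursive structure (explore every move-countermove sequence, score the terminal positions, back the values up) is the same one a chess computer uses.

```python
# Minimal sketch of the recursive paradigm: score a position by exploring
# every move-countermove sequence and backing values up the game tree.
# Illustrated with a toy take-away game (remove 1-3 stones; whoever takes
# the last stone wins), since a chess move generator would be very long.

def minimax(stones, maximizing=True):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

print(minimax(10))  # +1: the player to move wins (take 2, leaving 8)
```

With chess's branching factor of roughly 35 legal moves per position, a search only four to six plies deep already generates on the order of 10^6 to 10^9 sequences, which is the range cited above.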
Computer neural net simulations have been limited by two factors: the number of neural connections that can be simulated in real time and the capacity of computer memories. While human neurons are slow (a million times slower than electronic circuits), every inter-neuronal connection is operating simultaneously. With about 100 billion neurons, an average of 1,000 connections per neuron, and a computational rate of 200 computations per second per connection, the human brain is capable of about 20 million billion (2 × 10^16) connection computations per second (ccps). Neural computers today can process about 200 million connection computations per second, which is 100 million times slower.
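The capacity estimate is straightforward arithmetic on the figures just quoted; a quick sketch:

```python
# Back-of-the-envelope estimate of the brain's capacity in connection
# computations per second (ccps), using the figures quoted above.
neurons = 100e9                 # ~100 billion neurons
connections_per_neuron = 1_000  # average connections per neuron
rate_per_connection = 200       # computations per second per connection

brain_ccps = neurons * connections_per_neuron * rate_per_connection
print(f"brain: {brain_ccps:.1e} ccps")   # 2.0e+16

neural_computer_ccps = 200e6    # ~200 million ccps today
gap = brain_ccps / neural_computer_ccps
print(f"gap: {gap:.0e}x")       # 1e+08, i.e. 100 million times slower
```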
Moore's law states that both computing speeds and densities double every 18 months. The exponential progress over linear time implied by Moore's law accurately describes computational progress from the electromechanical computing at the beginning of this century to the present day. For reasons that will be explored, it is likely to continue to hold well into the next century. In the massively parallel architecture implied by a hardware-based neural net system, each doubling of speed and each doubling of density doubles the number of connection computations, so ccps doubles every nine months. Neural computers will, therefore, match the capacity of the human brain in terms of ccps in about 20 years, or the year 2012. Achieving the memory capacity of the human brain (10^14 analog values stored at the synapses) will take a little longer: about 27 years, or the year 2019.
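The 20-year and 27-year projections can be reproduced from the doubling rates. In the sketch below, the ccps figures come from the estimate above, with "today" taken to be roughly 1992 (2012 minus 20 years); the memory baseline of about 4 × 10^8 stored values is my assumption, back-solved so that the result matches the abstract's 27-year figure.

```python
import math

# Years for an exponentially improving quantity to close a given gap.
def years_to_close(gap, months_per_doubling):
    return math.log2(gap) * months_per_doubling / 12

# Speed and density each double every 18 months; in a fully parallel
# neural-net architecture each doubling doubles ccps, so ccps doubles
# every 9 months.  Gap from above: 2e16 / 2e8 = 1e8.
print(years_to_close(2e16 / 200e6, 9))   # ~19.9 years -> about 2012

# Memory capacity rides on density alone: one doubling every 18 months.
# The ~4e8-value baseline is an assumption, back-solved from the
# 27-year figure for reaching 1e14 synaptic values.
print(years_to_close(1e14 / 4e8, 18))    # ~26.9 years -> about 2019
```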
Several scenarios for architecting and programming such computers to achieve human-level intelligence will be explored. One scenario that will be examined builds on a trend in magnetic resonance imaging (MRI) scanning similar to Moore's law: the resolution and speed of MRI scanners have been increasing rapidly, and today's scanners can resolve individual somas (nerve cell bodies). In this scenario, a human brain is scanned to map the locations and interconnections of somas, axons, dendrites, synapses, presynaptic vesicles and other neural components, providing a basis for achieving human-level intelligence in a machine.
Once a machine can match human intelligence in the areas in which humans are now superior, it will combine that ability with computational capabilities that already exceed human intelligence in speed and accuracy. Moreover, Moore's law is likely to continue to hold past this threshold.
Even partial success in this undertaking will present society with many new dilemmas. The social, economic, scientific and philosophical implications of the emergence of true machine intelligence in the twenty-first century will be explored in this presentation.