Computers


The computer revolution is upon us. In the next 25 years, all aspects of computing -- from input to output -- will change radically. The greatest change will occur in the way computers are put together: there will be three fundamental shifts in how we think about computing. One change will involve neither new hardware nor new philosophy, but will come about simply through the realization of the potential of existing technology. Another will come through the development of new hardware, more powerful than anything currently constructed, but will remain within our philosophical paradigm of computing. The third will be a completely new approach to artificial intelligence, and will require the abandonment of many current "truths".

The least radical of the three new approaches will make computers ubiquitous. Microcomputers with the power of current minicomputers can be made easily and cheaply. The basis will be a distributed multiprocessor architecture. As soon as computer designers realize that for many tasks it is already cheaper to use a dedicated microprocessor than special-purpose hardware driven by the central processor, we will see a new breed of machine. Imagine a microcomputer built around a 16-bit central processor, with dedicated 8-bit processors for input-output control, printer control, mass storage control, and telecommunications. Such a machine would leave the full power of the CPU available for computing, would not be tremendously expensive, and could, with good software, rival many minicomputers. If this design philosophy is coupled with software written to maximize the computer's productivity, we will soon see many very powerful, cheap computers. By 2008, these machines will be everywhere.
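The division of labor described here can be caricatured in modern software terms. The sketch below is a toy illustration, not a hardware design: each dedicated peripheral processor is played by a thread with its own work queue, so the "CPU" loop never stalls on peripheral work. The device names and job strings are invented for the example.

```python
# Toy model of a CPU with dedicated peripheral processors:
# each "coprocessor" thread drains its own job queue while the
# main thread keeps its full attention on computation.
import queue
import threading

def coprocessor(name, jobs, done):
    # Stands in for an 8-bit controller dedicated to one peripheral.
    while True:
        job = jobs.get()
        if job is None:          # shutdown signal
            break
        done.append(f"{name} handled {job}")

jobs = {dev: queue.Queue() for dev in ("printer", "disk", "modem")}
done = []
workers = [threading.Thread(target=coprocessor, args=(dev, q, done))
           for dev, q in jobs.items()]
for w in workers:
    w.start()

# The "CPU" dispatches peripheral work, then goes straight back to computing.
jobs["printer"].put("report.txt")
jobs["disk"].put("save state")
total = sum(i * i for i in range(1000))   # the CPU's own number-crunching

for q in jobs.values():
    q.put(None)                 # tell each coprocessor to stop
for w in workers:
    w.join()

print(total, done)
```

The point of the design is visible even in the toy: the central loop issues work and never waits on a peripheral, which is exactly the productivity argument made above.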

We will also overcome the 11.8" barrier. Supercomputing has almost reached the point where the speed of light (11.8" per nanosecond) is a serious design limitation. The next 25 years will see new machines that circumvent the problem through parallel processing. The gradual development of vector and array architectures, together with the techniques and algorithms needed to program them, will provide the basis for scientific calculation in the future. These machines will supplant current mainframes, and will provide more power for the tasks computers already perform. Some problems which are now computationally infeasible (astrophysical and weather models, for example, or any other problem based on partial differential equations) will become tractable. But the basic idea of the computer as a number cruncher will remain.
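The figure in the paragraph above is easy to verify: a signal at the speed of light covers just under a foot in one nanosecond, so a machine with a one-gigahertz clock cannot be much more than a foot across without signal delay eating into the cycle. A quick arithmetic check:

```python
# Verify the "11.8 inch barrier": distance light travels in one nanosecond.
C_M_PER_S = 299_792_458        # speed of light in vacuum, metres per second
METRES_PER_INCH = 0.0254       # exact definition of the inch

distance_m = C_M_PER_S * 1e-9               # metres per nanosecond
distance_in = distance_m / METRES_PER_INCH  # convert to inches

print(f"{distance_in:.1f} inches per nanosecond")   # → 11.8 inches per nanosecond
```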

The advances in artificial intelligence will be based on one simple fact: that while digital computers can perform millions of calculations in a flash, the hallmark of intelligence is precisely the opposite. The difference between a human being and a computer is that the human seeks to avoid solving problems by repetitive methods. As an example, consider the game of chess. If chessmen are placed on a chessboard in a completely random fashion and the layout shown to a novice and a Grand Master, they will each be able to remember the positions of only a few pieces. But if the pieces are placed in a position which could arise in play, the Grand Master will be able to recreate the setup almost perfectly. A human being understands the game as a "gestalt" rather than simply as an arrangement of figurines. While chess-playing computers are gradually getting better, their approach is still the repetitive one. In the next 25 years, artificial intelligence researchers will realize that you don't mimic the brain by doing ever better what the brain does badly to begin with. The result will be the creation of new theories. The computers which will be built will be unlike anything ever imagined. They may be non-deterministic, they will probably not be based on Boolean logic, they may even be bioelectronic. But by 2008, they will think.