by Steven K. Roberts
Information Today 
November, 1985

Powerful portables. Hi-res graphics. Macs. Microdisks. Much has happened since the prehistoric personal computer era of a decade ago, and more wonders lie ahead. What will the next ten years bring?

Any exploration of future computer technology is subject to considerable uncertainty, especially now that we are on the verge of a philosophical shift in system design. For more than a decade, the major tool of crystal-ball gazers in this industry has been Moore’s Law: the doubling of integrated circuit densities roughly every year or two.

“256K RAMs,” replied a smug industry-watcher at a 1980 trade show, when I asked what we could expect in 1985.
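That smug forecast was simple extrapolation, and a reader today could reproduce it in a few lines of Python. The figures here are illustrative assumptions (the 64K-bit DRAM as the circa-1980 state of the art, densities doubling about every two years), not data from the article:

```python
# Back-of-envelope Moore's-Law forecast.
# Assumptions (for illustration only): 64K-bit DRAM as the 1980 baseline,
# density doubling roughly every two years.

def project(bits, start_year, end_year, doubling_years=2):
    """Project chip capacity forward by compounding doublings."""
    elapsed = end_year - start_year
    return bits * 2 ** (elapsed / doubling_years)

# Two doublings from 1980 lands on the 256K part by the mid-1980s.
print(project(64 * 1024, 1980, 1984))
```

Which is exactly the kind of arithmetic that made the industry-watcher's answer so easy.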

But we are approaching a dead end. No matter how fast we make micros, no matter how much memory we cram onto chips, no matter how well we optimize machine architectures for the operating systems of choice, there still remains one fundamental problem: all computation is funneled through a single processing site.

Many will argue that this is fine—that with efficient memory management and enough speed, you can compute anything. But this “von Neumann bottleneck” not only implies an intrinsic processing limitation, it also limits the elegance of software design. When the poor machine has to fetch one datum at a time from memory and trundle off to turn some primitive logical crank, our hope for machine intelligence—the ability to represent and manipulate abstract symbols—is remote.
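The bottleneck can be caricatured in a few lines of modern Python (an editorial sketch, not anything from a real machine): however large the memory, every datum must pass, one at a time, through a single processing site.

```python
# Toy illustration of the von Neumann bottleneck (hypothetical sketch):
# all computation funnels through one accumulator, one fetch per step.

def bottleneck_sum(memory):
    """Sum a list the way a single-CPU machine must: fetch, add, repeat."""
    accumulator = 0
    for address in range(len(memory)):   # one fetch per cycle
        datum = memory[address]          # traffic across the CPU/memory bus
        accumulator += datum             # the single "logical crank"
    return accumulator

data = list(range(1_000_000))
print(bottleneck_sum(data))  # a million values, each squeezed through
                             # the same narrow channel in turn
```

No amount of raw speed changes the shape of that loop; it only makes the single crank turn faster.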

Implicit in this complaint is the supposition that we want intelligent machines. Apparently, not everyone does. My occasional straw polls on the issue do not suggest a populace yearning to be intellectually outpaced. But consider the horsepower required for all those alluring computer applications ahead: context-sensitive spelling checkers, useful online help facilities, functions that can be invoked by analogy, natural-language database query, and more. All call for some measure of intelligence.

But how do we escape the intellectual straitjacket imposed by that old-fashioned step-by-step method of processing data? We can always distribute tasks over a number of standard processors, of course, but that’s philosophically no different from traditional methods. Effective machine intelligence—which I maintain is socially necessary—calls for an information-handling style not unlike our own.

Biological information systems differ dramatically from those based on silicon. Not only is that “bottleneck” completely absent, but programming—learning—is associative (a fact sadly overlooked by many an educational institution). All input/output takes place through a multilayered hierarchy that enables the conscious mind to initiate high-level commands and then deal only with exceptions. Further, most bio-computing is based on pattern recognition, and the collective state of the brain can range from passion to lethargy to full battle-readiness with only the tweaking of a few chemical concentrations. Clearly, we do our thinking with equipment quite unlike our computers—LISP, object-oriented programming, and knowledge engineering notwithstanding.
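Associative recall of this kind can at least be mimicked in software. The sketch below is a minimal content-addressable memory in the Hopfield style (plain Python; the stored patterns and sizes are arbitrary choices for illustration): give it a corrupted cue and it settles back onto the nearest stored pattern, rather than looking anything up by address.

```python
# A minimal associative (content-addressable) memory, Hopfield-style.
# Illustrative sketch only: patterns, sizes, and step count are arbitrary.

PATTERNS = [
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
]
N = len(PATTERNS[0])

# Hebbian "learning": each stored pattern strengthens the connections it uses.
W = [[0.0] * N for _ in range(N)]
for p in PATTERNS:
    for i in range(N):
        for j in range(N):
            if i != j:
                W[i][j] += p[i] * p[j]

def recall(cue, steps=5):
    """Settle from a noisy or partial cue back onto a stored pattern."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(N)) >= 0 else -1
             for i in range(N)]
    return s

noisy = list(PATTERNS[0])
noisy[0] = -noisy[0]          # corrupt one element of the first pattern
print(recall(noisy))          # the memory settles back onto the stored pattern
```

It is a toy, of course—but it retrieves by resemblance rather than by address, which is precisely the distinction drawn above.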

Worthy competitor

All this leads to an observation or two. Until we understand the brain intimately enough to begin manufacturing competing systems, we must content ourselves with writing programs that can model—within application and performance limits—our worthy biological paradigm. But we can expect a gradual decentralization of processing resources, making it more and more difficult to point to a single device on a circuit board and declare, “There, that is the CPU.”

And along with all this will finally come a shift in software philosophy from programming to teaching—from the present crude mapping of human concepts upon semantically rigid instructions to a “mixed-initiative” exchange of verbal descriptions within a fluid shared context. This is where the revolution in system design will truly hit home, for such use of language is indistinguishable from intelligence itself.

And then comes the mind-modem, the spoken-word processor, the information broker-on-a-disk, and the high-IQ pocket associate nestled in lint, making snide comments about every little faux pas.

Ain’t technology wonderful?


Steven K. Roberts is the author of the soon-to-be-published book, Computing Across America: The Bicycle Odyssey of a High-tech Nomad, as well as numerous books and articles on vaguely related subjects. He published a monthly microcomputer column in these pages while cycling last year, and is now gearing up for the next ten thousand miles.