During the development of the Winnebiko II and onward through my return to the road, I resumed my consulting relationship with the Anatec division of Atlantic-Richfield. The company produced industrial process-control systems, and I had written much of their marketing literature along with engineering manuals. This article was unusual in that it was ghost-written for the client with the understanding that I would share the byline. The trade journal is Instrumentation and Control Systems, published by Chilton’s from 1983 to 1992.

ghost-written by Steven K. Roberts
for Anaconda Advanced Technology
I&CS
September, 1986

Overviewing both the industrial and process segments of the control market, the authors describe what they call “nonlinear” technological changes: changes that they feel will result in a complete philosophical overhaul of American business.

If ever there was an intriguing time to speculate about the future of distributed control, it is now. We are on the verge of new phenomena ranging from artificial intelligence (AI) to computer-integrated manufacturing (CIM), from 3-dimensional display devices to a complete philosophical overhaul of American industry.

This article assesses the technological changes in distributed control that lie ahead, and looks at how they are likely to affect the way we do business. We are not talking about the relatively linear effects of ongoing fine-tuning — faster devices, lower power dissipation, increased packaging density, longer optical fiber, and so on. Simple extrapolation can tell us what to expect in these areas: 4-Mbit RAMs, minimal cooling requirements, smaller and lighter boxes, lower communications costs, and so on. Instead, we are concerned with nonlinear changes — those sudden discontinuities (such as microprocessors) that tend to catch people by surprise as they irreversibly reshape the world.

Many such events will happen while counting down the last 14 years of this century. Today’s control systems and control rooms will seem as archaic tomorrow as yesterday’s random-logic controllers do today. But far from being a subject for speculation over the coffee cups of visionary tinkerers, the future is a subject well worth serious study. The more accurately we can anticipate it, the better off we’ll be when it finally happens.

New ways of doing business

On the surface, it might not seem that there can be fundamental changes in the process control business. Substances flow, as always. We monitor them and tweak a process accordingly. Abnormal events occur. Alarms are triggered and we fix something. Where in this is there room for revolution? Other than clever software techniques and new breeds of hardware, what could radically change our industry?

World markets, that’s what. Process efficiency is no longer a way to make a few more bucks; it is the only way to survive. The U.S. doesn’t have the only game in town anymore, and the effects are far reaching.

Much of the solution to the problem of achieving process efficiency centers around resource management. If you have a cluster of machinery capable of processing 1,000 pounds of glop per hour, you want to keep it doing that around the clock — despite recipe, vendor, and manpower changes.

If you have a vertical surface grinder somewhere in a loop, you want to optimize the downfeed rate for the proper balance between machine throughput and tooling wear, perhaps even adjusting the latter to synchronize replacement downtime with lulls in the product flow. Your ability to compete may depend, ultimately, on how well you manage those expensive resources out on the plant floor. Of course, this is nothing new, but the comfortable margin for error traditionally enjoyed by American industry is fast disappearing.

That disappearing margin is bringing in sweeping changes. Suddenly, statistical process control (SPC) is an essential tool for minimizing rework. More and more representatives of the process industries are showing up at AI conferences (not so long ago the exclusive domain of wild-eyed academicians). And wide-bandwidth distributed control systems are becoming more important than ever.

Along with all this is a new body of manufacturing philosophies, many of which are being seized upon with something approaching zeal by those who see our trade advantage slipping fast. These fall into two broad categories: total enterprise automation or CIM, and a new approach to quality assurance that focuses on SPC.

Computer-integrated manufacturing

Industry is spending billions to redesign the very concept of factory, for it has become abundantly clear that CIM is the only way to regain lost markets and maintain the American standard of living.

CIM is perhaps the ultimate expression of manufacturing science. Its intent is to move industry through a three-step process designed to maximize efficiency — to help it evolve from a sluggish and cumbersome labor-sensitive enterprise to a responsive information system whose output just happens to be hard goods.

The first step, already well underway, involves the creation of islands of automation — specialized workstations that perform, with or without human assistance, some well-defined manufacturing task. The hardware involved may be anything from a numerically controlled milling machine to a highly articulated robot. The control systems are generally local and dedicated to their own islands.

The second step, seriously confounded by the variety of vendors and technologies involved, is networking these islands into a closed-loop system. This is essential if the manufacturing process is to escape its paper quagmire; a number of CIM system integrators, unallied with traditional control vendors, are arising to bridge the various gaps.

The strongest single movement in this direction is based on the Manufacturing Automation Protocol (MAP), initially defined by General Motors and now undergoing the lengthy process of industry-wide standardization. The intent of MAP, of course, is to make transparent the diverse protocols of hundreds of vendors — many of whom currently coexist in production environments with only a crude, low-bandwidth communications capability. System designers are beginning to realize that it makes more sense to break with tradition and standardize with competitors rather than risk permanent isolation by building a better system for which nobody wants to build a gateway.

The third step, still mostly fantasy, is the complete integration of factory automation systems with corporate data processing (DP) and management information computers—effectively uniting a company (which may be spread around the globe) into a single, highly responsive closed-loop production machine. An order received in Topeka may trigger workstations all over the world in ways that no single human can follow. All of the actions, however, culminate in the arrival of a custom bicycle on a consumer’s doorstep or tank loads of a specialty chemical at a distributor’s railroad siding a few days later.

Quality assurance

A new approach to quality assurance is appearing under the initials SPC — statistical process control. Actually, SPC techniques are anything but new. Pioneered in the 1930s, SPC uses statistical methods to track a manufacturing process and keep it in tune. Contrast this to the traditional approach — wait for the process to go awry and then discard or rework the results. Unfortunately, the idea took a while to catch on in the U.S. While we masked our inefficiencies with the prosperity of the postwar production boom, Japan was building a new industrial base around the radical idea that it makes more sense to build quality in than to inspect defects out.

Building quality in is not difficult from a technical standpoint. It involves the use of a few real-time statistical tools to identify and minimize the causes of product variance in a relatively abstract version of traditional closed-loop control. But the fervor needed to bring about the necessary rethinking of management priorities has become a major issue. W. Edwards Deming and other consultants help create that fervor by charging $3,000-a-day fees to tell CEOs how to restructure their entire corporate culture. Not everyone will get the message, and there will be casualties.
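
To make the technique concrete, here is a minimal sketch in Python of one basic SPC tool, the X-bar control chart; the numbers are invented, and real limits would be derived from a capable, in-control process rather than made up here. The idea is simply to establish control limits from a known-good run and flag any subgroup average that wanders outside them, so the assignable cause can be found before rework piles up.

    import statistics

    # Historical subgroup averages (say, fill weight in grams) from a run known to be in control.
    history = [100.2, 99.8, 100.1, 100.4, 99.7, 100.0, 99.9, 100.3]

    center = statistics.mean(history)
    sigma = statistics.stdev(history)
    lcl, ucl = center - 3 * sigma, center + 3 * sigma   # 3-sigma control limits

    # On-line check of new subgroup averages as they arrive from the process.
    for latest in (100.1, 100.6, 101.4):
        if lcl <= latest <= ucl:
            status = "in control"
        else:
            status = "OUT OF CONTROL - look for the assignable cause"
        print(f"subgroup mean {latest:.1f}: {status} (limits {lcl:.2f} to {ucl:.2f})")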

Yes, CIM and total quality control (TQC) have implications for the future far beyond the merely technical. Both call for major organizational changes, from boardrooms to factory floors. Just-in-time (JIT) inventory, to pick one specific issue, reflects the needs of the new breed of consumer — less willing to accept preservatives, more insistent upon quality, performance, and the sleek high-tech trappings of new age “yuppiedom.”

Companies must be able to change products quickly in response to this kind of customer demand, implying not only flexible automation but information systems to match. (It’s not hard to imagine networked manufacturing facilities responding to design changes in hours, while traditional management information systems take weeks to catch up.) Established methods die hard.

Achieving smooth information flow between corporate and production levels will be an interesting challenge. Delays must be held to a minimum, yet the information from the factory has to undergo extensive data reduction and abstraction so it makes sense to management and vice versa. A company-wide closed loop system that performs this translation will be a sophisticated one indeed.
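
As a rough illustration of that translation, the Python sketch below (with invented field names and figures) reduces raw batch records from the floor to the handful of numbers a manager actually acts on; the same abstraction must run in reverse when a schedule change flows back down.

    from statistics import mean

    # Raw batch records as they might arrive from the plant floor.
    batches = [
        {"line": "A", "good_lbs": 980, "scrap_lbs": 20, "cycle_min": 58},
        {"line": "A", "good_lbs": 940, "scrap_lbs": 60, "cycle_min": 63},
        {"line": "B", "good_lbs": 1010, "scrap_lbs": 15, "cycle_min": 55},
    ]

    def shift_summary(records):
        # Boil point-level detail down to throughput, yield, and pace.
        good = sum(r["good_lbs"] for r in records)
        scrap = sum(r["scrap_lbs"] for r in records)
        return {
            "throughput_lbs": good,
            "yield_pct": round(100 * good / (good + scrap), 1),
            "avg_cycle_min": round(mean(r["cycle_min"] for r in records), 1),
        }

    print(shift_summary(batches))
    # {'throughput_lbs': 2930, 'yield_pct': 96.9, 'avg_cycle_min': 58.7}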

Rampant wizardry: the impact of technology

However, the future of process control will be shaped by more than new concepts. Sweeping changes are driven by many factors. While CIM integrators are busily restructuring the basic manufacturing philosophies, legions of inspired designers are conjuring new devices at an almost alarming rate.

As mentioned earlier, we’re going to take multimegabit CMOS RAMs and mainframe-powered microprocessors for granted. But look at a sampler of some of the other new technologies that have barely begun to appear in real products:

Gate arrays that can be reprogrammed on the fly — The notion of defining hardware through custom masking of a gate array is more than a decade old, but a new device family from Xilinx allows random logic to be instantly redefined through a single downloading process. The familiar spawning of software tasks may now be supported by the equivalent spawning of hardware — further blurring the distinction between the two. As such techniques gain prominence, it will become very difficult to point to a chip and say, “That’s the CPU.”

Switching power supplies — Long understood to be far more efficient than their old linear counterparts, these are suddenly available in the form of chips. Linear Technologies and Maxim, among others, offer devices that make it a lot easier to add a cool switcher than a hot three-terminal regulator. This and related technologies all add up to more reliable and forgiving hardware, less dependent upon expensive environmental conditioning.

Digital signal processing — This is almost becoming an industry unto itself, with multiple discrete Fourier transforms (DFTs) tuned across a spectrum now only a little more difficult than a single broadband fast Fourier transform (FFT). Frequency-domain processing is getting easier by the week, and modal analysis is becoming a reasonable component in real-time, closed-loop systems.
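
As an illustration of what this buys a control engineer, here is a small frequency-domain sketch in Python using NumPy; the signal, frequencies, and threshold are invented. A simulated vibration record is transformed once with a broadband FFT, and a single spectral band is then watched as an alarm condition.

    import numpy as np

    fs = 1000.0                                  # sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)                # one second of samples
    # Simulated vibration: a 50 Hz line component, a weak 180 Hz bearing tone, and noise.
    x = (np.sin(2 * np.pi * 50 * t)
         + 0.1 * np.sin(2 * np.pi * 180 * t)
         + 0.05 * np.random.randn(t.size))

    spectrum = np.abs(np.fft.rfft(x)) / t.size   # amplitude spectrum from one FFT
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    # Closed-loop use: alarm when energy near 180 Hz grows past a threshold.
    band = (freqs > 170) & (freqs < 190)
    if spectrum[band].max() > 0.02:
        print("Rising 180 Hz component - schedule a bearing inspection")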

Three-dimensional displays — These will appear within a few years if early investigations prove fruitful. By impressing spatial Fourier transforms of images upon crystals, such as lead lanthanum zirconate titanate (PLZT), then passing a polarized laser through the medium, a real time hologram should appear. This work is in its infancy, but offers significant potential for adding more information to process graphics.

Packet data communication — This offers a low-cost, error-free alternative to wire. Though theoretical limits are in the megabaud range, terminal node controllers running at 1200 baud are already available for less than $200 — including multilayered X.25 protocol software. Complete ruggedized radio modems with an RS-232C connector on one end and an antenna on the other will probably cost less than $1,000. This technology will move quickly into factory data communications alongside multidrop fiber optics — another development that’s just around the corner.
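
The error-free claim rests on framing and check values, not on luck. The toy frames in the Python sketch below are not the X.25 or AX.25 format — only an illustration of the general idea: a receiver detects a corrupted packet and asks for a retransmission instead of silently accepting bad data.

    import zlib

    def make_frame(seq: int, payload: bytes) -> bytes:
        # Frame = sequence number + payload + 32-bit check value.
        body = bytes([seq]) + payload
        return body + zlib.crc32(body).to_bytes(4, "big")

    def receive(frame: bytes):
        body, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
        if zlib.crc32(body) != crc:
            return None                      # reject; the sender must retransmit
        return body[0], body[1:]             # sequence number and payload

    frame = make_frame(7, b"TT-101=200.0")
    print(receive(frame))                    # (7, b'TT-101=200.0')

    corrupted = bytearray(frame)
    corrupted[3] ^= 0xFF                     # flip bits in the payload
    print(receive(bytes(corrupted)))         # None - the error is caught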

Smart sensors — Sensors are essential components in the process engineer’s tool kit, of course, and we’re accustomed to seeing regular breakthroughs in everything from noninvasive flow metering to remote temperature measurement. But watch for continued development in smart sensors, which will further distribute process computation functions that were, not so very long ago, contained entirely within a single, centralized system.

Optical data storage and retrieval — This offers dramatic solutions to long-standing problems. In particular, the CD-ROM packs 550M bytes on a 4.75-in. disk, an amount of data equivalent to a secretary typing 90 words a minute eight hours a day for more than eight years. On line storage of complete process documentation and procedures is now reduced to a half-height 5 1/4-in. optical drive, and new efforts in interactive CD promise more efficient access to textual and graphic information than ever before. And video discs, capable of holding 54,000 video frames per side, will add yet another dimension to process graphics and in-context employee training.
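
The typing comparison is easy to check. Under some rough assumptions of our own (about six characters per word including the space, one byte per character, and 250 working days per year), the arithmetic lands in the neighborhood of eight and a half years:

    # Rough check of the "more than eight years of typing" figure.
    words_per_day = 90 * 60 * 8                 # 43,200 words in an eight-hour day at 90 wpm
    bytes_per_day = words_per_day * 6           # ~259,000 bytes of text per day (assumed 6 bytes/word)
    years = 550_000_000 / bytes_per_day / 250   # assumed 250 working days per year
    print(f"about {years:.1f} years")           # about 8.5 years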

Distribution of resources — This is one of those recurring issues that crops up every time a new generation of microprocessors appears. In general, there is a move away from the single processor bottleneck that has traditionally limited system bandwidth — because increasing the number of processing nodes is one way to improve performance. In the case of a typical control environment, where a body of supervisory logic supports thousands of real world points, there are a number of compelling arguments for the dissemination of processors far and wide.

With a micro dedicated to every few points in the field, for example, burdensome operations such as engineering unit conversions and alarm limit checks can be offloaded from the host system. As technology continues to develop, expect major chunks of high-level process logic itself to be resident out where the action is — executing in an array of processors that may number in the hundreds. Because these can be downloaded and reconfigured on the fly, this approach will improve system bandwidth and support the flexibility demanded by CIM and JIT.
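
A minimal sketch of that offloading, in Python with invented tag names, ranges, and limits: the field micro scales its own 4-20 mA inputs to engineering units and checks alarm limits locally, passing only exceptions up to the host.

    # Hypothetical point database for the few points this field micro owns.
    POINTS = {
        "TT-101": {"eu_lo": 0.0, "eu_hi": 400.0, "units": "degF", "alarm_lo": 50.0, "alarm_hi": 350.0},
        "PT-102": {"eu_lo": 0.0, "eu_hi": 150.0, "units": "psig", "alarm_lo": 10.0, "alarm_hi": 125.0},
    }

    def to_engineering_units(milliamps, eu_lo, eu_hi):
        # Linear scaling of a 4-20 mA signal into engineering units.
        return eu_lo + (milliamps - 4.0) / 16.0 * (eu_hi - eu_lo)

    def scan(tag, milliamps):
        cfg = POINTS[tag]
        value = to_engineering_units(milliamps, cfg["eu_lo"], cfg["eu_hi"])
        if not (cfg["alarm_lo"] <= value <= cfg["alarm_hi"]):
            return f"ALARM {tag}: {value:.1f} {cfg['units']}"   # only this travels to the host
        return None                                             # routine value stays local

    print(scan("TT-101", 12.0))   # mid-range temperature, nothing to report
    print(scan("PT-102", 19.5))   # high pressure reading, alarm goes upstream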

The impact of AI — machines that build machines

Many things suggest industry’s need for more than an occasional smattering of AI. These include wide bandwidth control, unprecedented information volume, closed loop processes spanning entire corporations, effective information displays, natural language, automated visual inspection, and assuring data validity in a paperless environment. All of them are inadequately served by today’s sequential processors.

A great deal of effort is being expended to improve these areas, with expert systems getting most of the press. Because seemingly intelligent behavior can be coaxed from current computers once the application domain is defined sharply enough to remove ambiguity, the first sales have involved specialized decision support and consulting systems. But as AI work grows more robust, the industrial applications become very provocative.

Consider the demands of complex real-time processes, for example. Only rarely are control systems programmed so meticulously that no abnormal combination of input conditions can fool them. Even with reasonable alarm limits for measured values, it is difficult to make the machines understand that there may exist unusual combinations of inputs, each one of which is in range, but that collectively represent an imminent failure mode. An experienced plant engineer can glance at all the gauges and say, “Hey, something’s wrong!” However, few control algorithms react this intelligently to subtle variations in the overall tone of their sensory inputs.
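
The Python sketch below (the pump curve and limits are invented for illustration) contrasts the two kinds of checking: every point passes its individual limit test, yet a simple model of how the points should relate flags the combination.

    def check(flow_gpm, pressure_psig):
        alarms = []
        # Conventional per-point limit checks.
        if not 50 <= flow_gpm <= 500:
            alarms.append("flow out of range")
        if not 20 <= pressure_psig <= 200:
            alarms.append("pressure out of range")
        # Contextual check: in this invented loop, discharge pressure should rise with flow.
        expected = 20 + 0.3 * flow_gpm
        if pressure_psig < 0.6 * expected:
            alarms.append("flow/pressure combination abnormal - possible leak or failing pump")
        return alarms or ["normal"]

    print(check(450, 60))   # both readings in range, but together they spell trouble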

This will change. Once a system has contextual understanding of its process and some ability to correlate observed variations historically with subsequent failures, the essence of intelligent control will exist. What then? What if the machine could, on the basis of an idealized conceptualization of its own performance, determine the appropriateness of its control algorithm, and continually adapt it to the task? Present systems are not likely to step outside the fabric of their software and modify the theoretical basis of their responses. But future systems may well do just that. They might even discuss the matter with their human counterparts whenever a shortfall of domain knowledge is suspected to exist.

Users should not be misled by the fashionable overuse of the word intelligence to describe everything from microcontrolled valves to clever programming. Except for a few highly specialized application systems, claims of AI are premature.

One area in which AI is needed as soon as possible, however, is machine vision. Real-world problems are not simple pattern matching tasks. Instead, they involve conceptual frameworks, multiple levels of interpretation, complex scenes consisting of equally visible information both irrelevant and critical, and motion. Something as simple as bin-picking, an essential skill for assembly robots, is trivial for humans but difficult for machines. (“Without the example of human vision,” said MIT’s Berthold Horn, “we would have concluded that vision is impossible!”)

But true vision is very much in process control’s future. It is now closer than ever with the research in image-wide data paths and other radical departures from the severely limiting “von Neumann bottleneck.” This is perhaps best described as plucking one datum at a time from memory and trundling off to turn a primitive logical crank. Adaptive work cells are going to need all the feedback they can get.

The human impact of future systems

What happens when factories of work cells, all linked by fiber optics to truly intelligent controllers and then to global corporate networks, are loosed upon the world? Who can understand them enough to operate them? Worse still, who can fix them — or run things when a computer fails?

These are interesting problems, which will likely be most severe in the near term while those “ultimate systems” are still flickering to life. Will the process engineer of the future walk on water, or will the flood of information represented by the CIM environment be too much for any single person to handle?

Such questions have implications today. Tomorrow’s process control environment is not a logical outgrowth of yesterday’s, and the traditional skill sets are inadequate. Whoever heads the implementation programs must be able to understand — and work with — everyone from MIS personnel to product specialists, melding the disparate interests of each into a global system. Whoever runs the system (which, like a wingless aircraft, may be inherently unstable — rendering the notion of “manual override” obsolete) must not only have awesome intellect and mental agility, but also be intimate with technologies ranging from pressure sensors to layered network protocols.

Organizations built around future factories will not be based on the structured pyramids of specialists and managers common today. A network of minds, so smoothly melded into the machines that the distinction will blur, will be necessary to handle the complexity of a massive real-time system.

How would today’s management structure handle an adaptive, paperless, intelligent, self-modifying system distributed over the earth’s surface and woven through every aspect of a company’s operation? The average 1980s manager would panic, intimidated by the scope of potential failure modes and nervous about all the things going on without his knowledge. Nervous about data validity, nervous about capacity, terrified of all that logical inertia hurtling through the dimension of time. What happens if it crashes?

The distinction between specialists and generalists will grow ever clearer, and companies building integrated control systems will have to deploy both with care — long before it seems necessary.

Like today. The future will happen with or without you.


Steven K. Roberts is the author of Creative Design with Microcomputers (Prentice-Hall, 1984) as well as more than 200 other articles and books about industrial control, artificial intelligence, and related technologies. He maintains a consulting relationship with Anatec via the GEnie network while traveling full time on his solar-equipped, computerized recumbent bicycle.