

CHEMICAL & ENGINEERING NEWS, MAY 18, 1964

Direct Digital Control Nears Readiness

A decade from now the transition of the process industries to a new control philosophy could seem, in retrospect, unusually orderly. If so, a good deal of credit can go to the guidelines for manufacturers set down by the Users' Workshop on Direct Digital Computer Control, which just ended its second meeting at Princeton, N.J.

The importance of the guidelines to manufacturers and an idea of the potential impact of direct digital control (see box on next page) can be seen in one user's statement, which is echoed by the rest: If DDC proves out technically and economically in its final testing, as expected, it will automatically be considered for all future plants.

The workshop first met a year ago at Princeton, where it mapped out the basic guidelines. Since then, a lot has happened behind the scenes. Manufacturers have generally adopted the guidelines as a basis for their developments. Several systems have been announced and several more are in the offing. Users have put a year of development and testing behind them, and two full-scale, final-test installations (by Monsanto and Esso) will be in operation before the year is out.

The users group—35 representatives from 25 companies—thus met this year with greater background knowledge and a firmer idea of what is needed. The result is that although the basic guidelines remain the same, they have been modified in some cases and have generally been spelled out in greater detail. The common threads running throughout are reliability and economics.

From the users' standpoint, computers that can be used for various types of process control fall into four broad categories:

• Type I, a fixed-program machine capable of handling the control equations for 20 to 150 control loops. It would be able to perform perhaps half again as many calculations to handle such operations as cascading, where one control-loop signal is used to influence a second loop. It would provide for alarms and graphic display, but no recording. It would be compatible with elements of Types II, III, and IV, for add-on capability. Cost of the machine would likely run about $600 to $700 per control loop and up to $1000 per loop for the smaller sizes.

• Type II, a general-purpose, stored-program machine. It could handle special control functions in addition to the basic control-loop equations. It would typically handle 14-bit words, and could have up to 16,000 words of core memory. Cost would be $50,000 to $100,000 ($700 to $1000 per loop).

• Type III, a general-purpose machine capable of simple optimization using algebraic equations for 15 to 20 variables. It would handle 18- to 24-bit words, and have core and drum memories of up to 32,000 words. Cost would be upward of $150,000.

• Type IV, a general-purpose machine that could handle all optimization, including linear programing. It would handle 18- to 24-bit words, have core and drum memories for up to 100,000 words, and cost upward of $250,000.

Optimization computers that have been in use for several years are typically Type III. Scientific computers are typically Type IV. The computers just becoming available for DDC fall under Type II. Type I is not yet available.

Draw the Line. Users are unified in drawing the line for DDC between Type II and Type III. The original guidelines call for DDC computers with on-line availability of 99.95% (about four hours per year downtime, once per year). Optimizing computers have been giving an on-line availability of 99.5% (about 40 hours per year downtime). Reliability, however, is hard to define. Although a Type III could theoretically be designed for 99.95% availability, the feeling is that the greater number of components involved makes the statistical possibility of failure that much greater. Thus users do not want Type III or Type IV designed for DDC.

AVAILABLE. DDC computers, such as this Westinghouse Prodac 50, are now becoming available
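The downtime figures quoted with those availability percentages follow from simple arithmetic over an 8,760-hour year. A quick sketch (the function name is ours, for illustration only):

```python
HOURS_PER_YEAR = 365 * 24  # 8760 hours in a non-leap year

def annual_downtime_hours(availability_pct):
    """Hours per year a system is down at a given on-line availability."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

# 99.95% availability: about 4.4 hours/year -- the "about four hours" in the guidelines
# 99.5% availability: about 43.8 hours/year -- the "about 40 hours" observed for optimizers
print(round(annual_downtime_hours(99.95), 1))
print(round(annual_downtime_hours(99.5), 1))
```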

Above the line, however, users are far from unified. About two thirds would go directly to a Type II computer for DDC in a new plant. The other third would take Type I or a combination of Type I with a small general-purpose add-on computer. The add-on could be a Type II main frame (central processing portion and memory only).

Behind this split lie the differences in individual DDC philosophies, coupled with opinions on how reliable is reliable. At one extreme, the aim with DDC is merely direct replacement of conventional analog loops at less cost. This means a relatively simple Type I computer. At the other extreme, DDC is justified on the basis of improved control, not on hardware economics alone. This means a Type II computer, which can handle control functions of greater complexity.

Reliability, or rather opinions on its relative importance, comes into play at this point. To a large extent, these opinions are colored by the types of processes used by a company.

In all cases, users want manual backup for the computing system, should it fail. But emergency operation of valves and other control elements doesn't necessarily mean that a process can be controlled while the computer is down; often it just means an orderly shutdown. Some processes can go for a number of hours under complete manual control; others cannot last much more than five minutes. The greater the number of processes of the latter type that a company has, the more likely it will be to emphasize reliability.

Thus, some users want 100% availability, at least in some minimum backup form, even if that backup provides less than the best operating condition. They feel, therefore, that the best way to achieve this is to hold the number of computer elements involved in critical control functions to a minimum, thus also holding to a minimum the statistical possibility that critical elements will fail. Add-on computing equipment would be used for the less critical functions. This weights the balance toward Type I, or toward Type I with a small general-purpose computer added.

Others, however, feel that the reliability of a computer with Type II capability (99.95%) is good enough, and that possibly they can save money by getting the entire control in one package. Since the capability of a Type II is needed to begin with, this will be the route many will take.

Users again unite in rejecting redundancy as an answer to reliability. The idea here, suggested by manufacturers, is to use two computers with an automatic switchover to the second computer should the first fail. By eliminating manual backup, it might be possible to break even in cost at about 120 loops, even though two computers are involved. But users want manual backup in any case and aren't willing to pay the added cost for what is, in effect, three systems.

Characterize Control. Besides specifying further the use of DDC computers, users also characterized various aspects of current control practice that will influence computer design. For instance, there is a trend toward greater use of cascade, with about 10 to 15% of control loops currently involved in this type of control. Ratios of total process inputs to total outputs (indicating, recording, control signals, and the like) are drifting higher, moving typically toward 4:1. The ratios of inputs to outputs used directly in control tend toward 2:1.

Optimization practice will also affect DDC computer design, since in many cases optimizing computers will be resetting the DDC computers. The typical process optimized today has about 100 loops with 350 inputs. Of the inputs, at least 50% are used directly in optimizing. Optimizing cycles, which would, using DDC, involve resetting the DDC computer, are either short or long—three to 20 minutes on the one hand, or about eight hours on the other. The heaviest frequency falls at about five minutes.
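The two-tier timing implied above—a DDC computer scanning its loops continuously while an optimizer resets the set points on a much slower cycle—can be sketched with illustrative numbers. The one-second DDC scan interval below is our assumption, not a figure from the article; only the five-minute optimizer cycle comes from the text:

```python
# Illustrative two-tier control timing: a fast DDC scan cycle nested
# inside a slow optimizer cycle that resets the DDC set points.
DDC_SCAN_SECONDS = 1              # assumed scan interval, for illustration
OPTIMIZER_CYCLE_SECONDS = 5 * 60  # "heaviest frequency falls at about five minutes"

def resets_per_hour(optimizer_cycle_s=OPTIMIZER_CYCLE_SECONDS):
    """How many set-point resets the optimizer issues per hour."""
    return 3600 // optimizer_cycle_s

def ddc_scans_per_reset(scan_s=DDC_SCAN_SECONDS,
                        optimizer_cycle_s=OPTIMIZER_CYCLE_SECONDS):
    """How many DDC scan cycles run between successive optimizer resets."""
    return optimizer_cycle_s // scan_s

print(resets_per_hour())      # 12 resets per hour at a five-minute cycle
print(ddc_scans_per_reset())  # 300 DDC scans between resets, given a 1-s scan
```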

The workshop group, a committee of the Instrument Society of America, considered a number of other aspects of DDC in detail, such as console design and the need for more accurate sensing devices. It also set up several subcommittees to formulate guidelines for specific parts of a DDC system: valves and valve actuators, input multiplexers, and electric transmitters.

Direct Digital Control: What It Is; Where It Stands

Conventional process control is based on control loops using analog signals of continuously varying pressure (for pneumatic systems) or voltage (for electronic systems) proportional to the values they represent. A loop starts with a sensing element (a thermocouple, for example) which senses a process variable and emits a signal proportional to the variable. This signal feeds to a controller, which compares it with a predetermined value of what it should be (set point) and sends an output signal to a final control element, such as a valve, which operates to keep the variable at set point.

In its operation, the controller may add proportional action and reset action, depending on the characteristics of process response. The former specifies the range of output signal that will give a fully open to fully closed valve. The latter refines this by adding a correction to keep the process variable from assuming a new value offset in one direction or another from the set point. The controller operates according to an equation in which predetermined constants (gains) specify the amounts of proportional and reset action.

With direct digital control (DDC), a digital computer takes over the functions of all the analog controllers on a process. Signals from the sensing elements feed to an input multiplexer so that the computer can scan them one at a time. Before entering the computer, these are converted to digital signals having discrete values. Output signals from the computer may be converted back to analog or remain digital. These then go to the final control elements.

A DDC computer differs from an optimizing computer in that it operates a process on the basis of predetermined set points, just as analog controllers do. An optimizing computer, on the other hand, operates with a mathematical model of a process to determine the best set points in order for the process to operate at some optimum level of production or economics.
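The scan-and-control cycle described above—multiplexed inputs, a proportional-plus-reset (what is today called PI) equation per loop, outputs to the final control elements—can be sketched in a modern language. Everything below (class and function names, gains, set points, readings) is illustrative, not from the article:

```python
# Minimal sketch of one DDC scan cycle: poll each loop's sensed value
# (as the input multiplexer would present it), apply a proportional-
# plus-reset control equation, and emit an output signal for the
# final control element. All parameters are illustrative.

class Loop:
    def __init__(self, set_point, gain_p, gain_r):
        self.set_point = set_point
        self.gain_p = gain_p   # proportional gain
        self.gain_r = gain_r   # reset (integral) gain
        self.integral = 0.0    # accumulated reset action

    def control(self, measured, dt):
        """One pass of the proportional-plus-reset equation for this loop."""
        error = self.set_point - measured
        self.integral += error * dt
        return self.gain_p * error + self.gain_r * self.integral

def scan(loops, readings, dt=1.0):
    """Scan every loop once, in multiplexer order, returning output signals."""
    return [loop.control(r, dt) for loop, r in zip(loops, readings)]

loops = [Loop(set_point=100.0, gain_p=2.0, gain_r=0.1),
         Loop(set_point=50.0, gain_p=1.5, gain_r=0.05)]
outputs = scan(loops, readings=[98.0, 52.0])
print(outputs)  # one output signal per loop
```

An optimizing computer layered on top would simply rewrite each `Loop.set_point` on its own, slower cycle; the DDC scan itself never changes.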

Though DDC is not yet a commercial reality, it is well along the way. So far, two upcoming installations have been announced (C&EN, May 11, page 58).


RETORTS. Acetylene black is being made on a commercial scale in the U.S. for the first time at Union Carbide's new plant in Ashtabula, Ohio. The battery of 24 retorts (left) has a rated capacity of 8 million pounds a year

Carbide Begins Acetylene Black Production

First large-scale acetylene black plant in the U.S. has a capacity of 8 million pounds a year

Acetylene black is being made on a commercial scale in the U.S. for the first time. The domestic source is Union Carbide's 8 million pound-a-year plant now in operation at Ashtabula, Ohio.

Carbide is using the acetylene black captively in the production of its Eveready line of dry-cell batteries. It also plans to sell some of the product in the U.S. and abroad, under the trademark Ucet.

In the past, more than 90% of the acetylene black used in the U.S. has been imported from Canada. Shawinigan Chemicals, Shawinigan Falls, Que., has been the only producer in North America. Carbide says that domestic production will provide a safeguard against interruption of supplies from outside sources, such as the one that occurred in the latter part of 1962 and early 1963. Acetylene black was then in short supply because Canadian production was halted by a labor strike.

Acetylene black hasn't been made commercially in the U.S. before because of the lack of detailed process knowledge. Also, the known markets are rather limited.

The acetylene black plant is operated by Carbide's olefins division. It is adjacent to the company's calcium carbide and acetylene facilities. Acetylene, the feedstock for the process, is piped directly to the new plant.

Process. To make high-purity acetylene black, acetylene is burned with a controlled quantity of air in a battery of specially designed retorts. When the temperature reaches 1500° C., the air supply is shut off. The oxidation reaction stops, and an autodecomposition reaction that is exothermic and self-sustaining takes over.

The high-purity acetylene black produced in this way is collected and processed through compression rolls to increase the density of the product. It is packed as 50% and 100% compressed materials, which have bulk densities of 6.25 and 12.5 pounds per cubic foot.

A major outlet for acetylene black is dry-cell batteries. Acetylene black's physical form makes it preferable to almost all other forms of carbon black for this use, Carbide says. The configuration of the carbon black provides the essential electric contact between the particles of manganese dioxide in the depolarizing material of the cell and the carbon electrode.

The structure of acetylene black also makes it useful where nonconductive materials have to be made electrically conductive. For instance, in aircraft tires, or in rubber and plastic tiles, the incorporation of acetylene black makes them sufficiently conductive to prevent build-up of static charge. Other uses for the product are lubricants and polishing powders.

Kalium to Start Potash Shipments This Year

Kalium Chemicals, Ltd., expects to start commercial shipments of potash from its solution mining operation near Regina, Sask., later this year. The solution mines that supply the plant are already in operation. Completion of the refinery will permit initial shipments of potassium chloride by about October. Full operation is expected early in 1965, and production should be 600,000 tons a year of potash (K2O), according to Boyd R. Willett, Kalium's vice president and general manager.

Kalium, jointly owned by Armour & Co. and Pittsburgh Plate Glass, plans to make bulk shipments of the potash throughout Canada and the U.S. Later, it intends to sell to Japanese and perhaps to European customers. Sales will be made chiefly to formulators rather than to the final user.

About 95% of all potash produced is used as fertilizer. Demand for potash is growing at about 8% per year and, in 1970, world production will be about 17.5 million metric tons (K2O) compared with 10.3 million metric tons in 1963.
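The 1970 projection is consistent with compounding the 1963 figure at the stated growth rate; a quick arithmetic check (the function is ours, for illustration):

```python
def project(base, rate_pct, years):
    """Compound a base quantity at a fixed annual percentage growth rate."""
    return base * (1 + rate_pct / 100) ** years

# 10.3 million metric tons (K2O) in 1963, growing about 8% per year,
# compounds to roughly 17.7 million by 1970 -- close to the 17.5 million cited.
print(round(project(10.3, 8, 7), 1))
```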

In Kalium's solution mining process, hot water is pumped down through bore holes into potash beds a mile below the surface. This dissolves the potash-bearing parts of the ores. On return to the surface, the solution is refined by a process of crystallization and drying. Kalium says it has succeeded in dissolving the maximum potash ore with as little salt as possible, making the process economically feasible.

After evaporation, the material goes to a thickener tank for settling, and then to a crystallizer to remove iron and other impurities. Each crystallizer makes a material of a specific particle size. Kalium expects to ship standard, coarse, and granular material.
