
Multivariable MPC system performance assessment, monitoring, and diagnosis

Jochen Schäfer a, Ali Cinar b,*

a Oberer Hasenkopfweg 54, 89075 Ulm, Germany

    bChemical and Environmental Engineering Department, Illinois Institute of Technology, Chicago, IL 60616, USA

    Received 8 July 2002; received in revised form 9 July 2003; accepted 9 July 2003

    Abstract

    This study focuses on performance assessment and monitoring of model predictive control systems. A methodology is proposed

    to determine a benchmark and monitor model predictive control performance on-line. A performance measure based on the ratio of

    historical and achieved performance is used for monitoring and a ratio of design and achieved performance is used for diagnosis.

    Performance monitoring and diagnosis of causes for poor performance are integrated. A real-time knowledge-based system isdeveloped to supervise monitoring and diagnosis activities. Case studies with linear and nonlinear models of an evaporator illus-

    trate the methodology and limitations of linearity assumptions.

© 2003 Elsevier Ltd. All rights reserved.

    Keywords: Controller performance assessment; Model predictive control; Fault diagnosis; Performance monitoring

    1. Introduction

Controller performance assessment (CPA) and monitoring (CPM) are necessary to assure effectiveness of process control and consequently safe and profitable plant operation. The initial design of control systems includes many uncertainties caused by approximations in process models, estimations of disturbance dynamics and magnitudes, and assumptions about operating conditions. The control algorithm and tuning parameters are chosen using this uncertain information, which may lead to plant performance that differs significantly from the design specifications. Even if controllers perform well initially, many factors can cause their abrupt or gradual performance deterioration over time. Sensor or actuator failure, equipment fouling, feedstock variations, product changes and seasonal influences may affect controller performance, and as many as 60% of all industrial controllers have some kind of performance problem [1]. It is often difficult to effectively monitor the performance and diagnose problems from raw data trends [2]. These data trends often show complicated response patterns resulting from the presence of disturbances, noise, time-variant response phenomena, and nonlinearities. In addition, the availability of a small number of control engineers to assess many control loops makes the analysis of raw data virtually unmanageable. These facts stress the necessity for the development of efficient on-line techniques in controller performance assessment, monitoring and diagnosis.

The goal of controller performance assessment, monitoring and diagnosis is to ensure that control systems perform according to their specifications. This means that controlled variables meet their operating targets such as specifications on output variability, effectiveness in constraint enforcement or proximity to optimal control. Requirements of a comprehensive approach for assessing the effectiveness of control systems include: (1) determination of the capability of the control system; (2) development of statistics for monitoring controller performance; (3) development of methods for diagnosing the underlying causes of changes in the performance of the control system [1].

0959-1524/$ - see front matter © 2003 Elsevier Ltd. All rights reserved.
doi:10.1016/j.jprocont.2003.07.003
Journal of Process Control 14 (2004) 113–129
www.elsevier.com/locate/jprocont
* Corresponding author. Fax: +1-312-567-8874.
E-mail address: [email protected] (A. Cinar).

Suitable performance criteria must be defined to determine the capability of a control system, and a benchmark is necessary for assessment. In this work, determining the capability of control systems is referred to as performance assessment (CPA). Once the performance criteria are defined and acceptable performance during some period of operation is benchmarked, controller performance has to be monitored over time to detect significant changes. Since control system inputs are in general of random character, the outputs of the performance measure will be stochastic as well. Therefore, statistical analysis tools should be used to detect statistically significant changes in controller performance. Monitoring the performance of the control system is called controller performance monitoring (CPM). When performance degradation is detected, the underlying root causes have to be identified. In this context, a technology for isolating problems associated with the controller from those arising from the process would be very useful. The activity of diagnosing the underlying causes of changes in control system performance is called diagnosis. This study focuses on CPA and CPM of model predictive control (MPC) systems. Diagnosis is limited to distinguishing between root cause problems associated with the controller and problems that are not caused by the controller. Case studies based on an evaporator model, an MPC, and a supervisory knowledge-based system (KBS) are used to illustrate the methodology proposed.

An elegant CPM method based on minimum variance control (MVC) and the variance of the controlled variable computed from routine process data, proposed by Harris [3], has initiated the recent interest in CPA. The variance of a controlled variable is an important performance measure, since many process and quality criteria are based on it. The theoretically achievable absolute lower bound on the variability of the output can be an appropriate benchmark to measure the performance of a regulatory control system. This benchmark is achieved by a system under MVC. Using MVC as performance benchmark, one can assess the performance of a control loop and make statements on the potential of improvements resulting from re-tuning of controller parameters or implementing more sophisticated linear feedback controllers [4]. A good performance relative to MVC indicates that further tuning or re-design of the control algorithm is neither necessary nor helpful. In this case, further reduction of process variability can only be obtained by implementation of feedforward (ff) control or re-engineering of the process. A poor performance might result from constraints such as unstable or poorly damped zeros or control action limits, and indicates the necessity of further analysis such as process identification and controller re-design [5].

Various performance indices have been suggested [2,4,6–8] and several approaches have been proposed for estimating the performance index for SISO systems, including the normalized performance index approach [4], the three estimator approach [9], and the filtering and correlation analysis approach [5]. A model-free approach for linear quadratic CPA from closed-loop experiments that uses spectrum analysis of the input and output data has been suggested [10]. Implementation of SISO loop-based CPA and CPM tools for refinery-wide control loop performance assessment has been reported [11].

CPM of multivariable control systems has attracted significant attention because of its industrial importance. The multivariate extension of the univariate framework using MVC requires the interactor matrix [12,13] that can be obtained theoretically from the transfer function via the Markov parameters or estimated from process data [14]. Alternatively, multivariate MVC performance might be estimated via multivariate time series analysis [1]. A pass/fail likelihood ratio test was proposed to determine if performance specifications like settling time, decay ratio, minimum variance, or frequency domain bounds are met [15]. Huang and Shah [5] proposed as benchmark user-specified closed-loop dynamics, like settling time or overshoot. Kendra and Cinar [16] proposed a system identification based method for CPA/CPM of multivariable systems.

CPA/CPM for MPC type control systems has been studied in recent years. While MVC benchmarking has traditionally been data-driven, relying only on routinely collected process data, the availability of a model for MPC offers new alternatives for CPA/CPM of MPCs. Huang and Shah [5] proposed assessment of controller performance by measuring the proximity of actual performance to optimal performance estimated by solving the LQG problem. Patwardhan and Shah [17] suggested assessment of the actual controlled performance of a system by comparing it to its historical performance, using the expected value of the MPC cost function for a certain time frame as the performance measure. The benchmark can be obtained by using data from this time frame and declaring some historic performance acceptable. Patwardhan and Shah [17] and Zhang and Henson [18] proposed a CPA technique for a subset of MPCs based on values of the objective function for the output of the underlying plant model and the real plant output.

Integration of CPM with diagnosis was reported for the single-loop case [19]. Diagnostic tools for performance degradation in multivariable model-based control systems have been proposed [20]. Very few uses of KBSs for CPM and diagnosis have been reported [21,22].

In this paper, an integrated methodology for performance assessment and monitoring of MPC systems and diagnosis of cause types for poor process performance is reported. Use of real-time KBSs for integrating CPM and diagnosis is also presented. Section 2 describes performance assessment and monitoring techniques proposed in the literature. These techniques are extended and integrated into a comprehensive MPC performance assessment and monitoring methodology in Section 3. Section 4 presents the procedure for diagnosis of root causes for poor controller performance. Integration of CPM and diagnosis is illustrated by using the evaporator control case study in Section 5. The use of a real-time KBS, G2, for integrating CPM and diagnosis is reported in Section 6. MPC calculations in this work are performed using a slightly modified version of the MATLAB MPC Toolbox to allow for nonlinear plant models and the stepwise calculations necessary for on-line monitoring.

    2. Performance assessment of MPC systems

Model predictive control is based on real-time optimization of a cost function. Consequently, CPA methods that focus on the values of this cost function can be developed. The MPC cost function F(t) is

F(t) = Σ_{j=N1}^{P} [ŷ(t+j) − r(t+j)]ᵀ Q [ŷ(t+j) − r(t+j)] + Σ_{j=1}^{M} Δu(t+j−1)ᵀ R Δu(t+j−1),   (1)

where ŷ(t), r(t), and Δu(t) are vectors of predicted outputs, reference trajectories, and change in manipulated variables at time t, respectively. Q and R are weighting matrices representing the relative importance of each controlled and manipulated variable. Control moves at each time interval are obtained by calculating a control sequence minimizing F(t). Therefore, it is reasonable to measure MPC performance by calculating values of F(t) using plant data. A measure based on F(t) can be

J_actual(t) = εᵀ(t) Q ε(t) + Δuᵀ(t) R Δu(t),   (2)

where ε(t) and Δu(t) are the vectors of controlled variable errors and control moves at time t, respectively. In general, the value of F(t) will be a random variable because of the influence of measurement noise and disturbances. Therefore, an average or expected value of the cost function is more suitable for measuring the controller performance achieved:

J_ach = E[J_actual(t)] = E[εᵀ(t) Q ε(t) + Δuᵀ(t) R Δu(t)],   (3)

where E is the expectation operator and ε(t) and Δu(t) are computed from the data set under examination. Three methods have been proposed in the literature for CPA of MPC: the LQG benchmark [5], the historical performance benchmark [17], and the model-based performance benchmark [17,18].
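Computing Eqs. (2) and (3) from logged data is a quadratic form per sample followed by a sample mean. The following Python/NumPy sketch illustrates this (the paper's own calculations use MATLAB; the dimensions, weights, and toy data here are illustrative assumptions, not the evaporator case):

```python
import numpy as np

def j_actual(eps_t, du_t, Q, R):
    """Eq. (2): instantaneous cost eps(t)' Q eps(t) + du(t)' R du(t)."""
    return float(eps_t @ Q @ eps_t + du_t @ R @ du_t)

def j_achieved(eps, du, Q, R):
    """Eq. (3): expectation approximated by the sample mean over a data set.
    eps and du have shapes (T, n_y) and (T, n_u)."""
    return float(np.mean([j_actual(e, d, Q, R) for e, d in zip(eps, du)]))

# toy closed-loop data: 3 controlled and 3 manipulated variables
rng = np.random.default_rng(0)
Q = np.diag([0.5, 1.0, 0.5])                 # output weights (illustrative)
R = np.diag([0.2, 2.0, 0.5])                 # move weights (illustrative)
eps = rng.normal(scale=0.1, size=(200, 3))   # control errors eps(t)
du = rng.normal(scale=0.05, size=(200, 3))   # control moves du(t)
J_ach = j_achieved(eps, du, Q, R)
```

A value of J_ach computed this way over a period judged acceptable can then serve as the historical benchmark J_hist of Section 2.2.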

    2.1. LQG benchmark

The achievable performance of a linear system characterized by quadratic costs and Gaussian noise can be estimated by solving the linear quadratic Gaussian (LQG) problem. The solution can be plotted in a tradeoff curve that displays the minimal achievable variance of the controlled variable versus the variance of the manipulated variable [5]. This bound of achievable performance can be used as a CPA benchmark. Operation close to this bound indicates performance close to optimal performance. For the multivariable case, H2 norms are plotted. The LQG objective function and the corresponding H2 norms are [5]

F_LQG(λ) = E[εᵀ(t) Q ε(t)] + λ E[Δuᵀ(t) R Δu(t)],   (4)

‖G_Y‖²_Q = E[εᵀ(t) Q ε(t)],  ‖G_u‖²_R = E[Δuᵀ(t) R Δu(t)].   (5)

The tradeoff curve is obtained by calculating the H2 norms for different values of λ and plotting ‖G_Y‖²_Q versus ‖G_u‖²_R. Once the tradeoff curve is calculated, the H2 norms under the existing control system are computed and compared to the optimal control represented by the tradeoff curve.

    2.2. Historical benchmark

This approach requires a priori knowledge that the performance was good during a certain time period according to some expert assessment [17]. For a certain block of input and output data, the historical benchmark J_hist is given by an equation of the same form as (3), where ε(t) and Δu(t) are taken from the historical data set. The objective function for the performance achieved (J_ach) is calculated by again using (3), where ε(t) and Δu(t) are taken from any data set. The performance measure is defined as the ratio

η_hist = J_hist / J_ach.   (6)

2.3. Model-based performance measure

Patwardhan and Shah [17] and Zhang and Henson [18] have proposed two alternative approaches based on similar concepts that differ in important details.

Design case approach. Patwardhan and Shah [17] propose the comparison of the achieved performance with the performance in the design case, which is characterized by inputs and outputs given by the model. The design cost function J_des has the same form as (3), where ε̂(t) and Δû(t) are substituted for ε(t) and Δu(t) to indicate the predicted deviations of model outputs from the setpoints (an estimate of the disturbance is included) and the optimal control moves, respectively. J_ach is the same as that in the historical benchmark (3) and is calculated using plant data. Performance variation between the real plant (J_ach) and model (J_des) is expressed by

η_des = J_des / J_ach.   (7)


Expected performance approach. Zhang and Henson [18] have proposed an on-line comparison between expected and actual process performance. The expected performance is obtained when controller actions are implemented on the process model. The expected performance incorporates estimates of state noise, but no output disturbances. The actual and expected performance are compared on-line over a moving horizon P_C of past data using the ratio [18]:

I_MPC(t) = J_exp(t) / J_act(t).   (8)

The actual and expected performances are defined as

J_act(t) = Σ_{j=1}^{P_C} εᵀ(t+j−P_C) Q ε(t+j−P_C),   (9)

J_exp(t) = Σ_{j=1}^{P_C} ε̂ᵀ(t+j−P_C) Q ε̂(t+j−P_C).   (10)

The ratios η_des and I_MPC are very similar. In general, they will be smaller than 1 because of imperfect models, sensor and actuator noise or other uncertainties.

I_MPC is a stochastic variable, and statistical analysis is advocated to detect statistically significant changes in the controller performance [18]. I_MPC is assumed to be generated by an autoregressive moving average (ARMA) model

A(q⁻¹) I_MPC(t) = C(q⁻¹) z(t),   (11)

where q⁻¹ is the backward shift operator, C(q⁻¹) and A(q⁻¹) are monic polynomials, and z(t) is a zero-mean, uncorrelated, Gaussian noise signal. Collecting a sequence of I_MPC values in a time interval in which the controller performs as expected, the polynomials A and C and the variance of z can be estimated. Zhang and Henson [18] report that I_MPC is highly serially correlated and the autoregressive part is first order. Consequently, they propose

(1 − a₁q⁻¹) I_MPC(t) = z(t)   (12)

and define

ΔI_MPC(t) = [Â(q⁻¹)/Ĉ(q⁻¹)] I_MPC(t),   (13)

where Ĉ(q⁻¹) and Â(q⁻¹) are estimated polynomials. The estimated noise variance is used to compute 95% confidence intervals on ΔI_MPC(t) [18]. Violation of these control limits indicates a statistically significant change in controller performance. According to (12) and (13), ΔI_MPC(t) is a prediction residual and should have a normal distribution. Prediction residuals are frequently used to monitor variations in autocorrelated random variables using well-established statistical process control charts.
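Under the first-order assumption of Eq. (12), estimating a₁ and flagging residuals outside 95% limits takes only a few lines. The Python/NumPy sketch below illustrates the idea; the AR coefficient, noise level, and mean-centering convention are illustrative assumptions, not values from [18]:

```python
import numpy as np

def fit_a1(x):
    """Least-squares estimate of a1 in (1 - a1 q^-1) x(t) = z(t), Eq. (12),
    applied to mean-centred data."""
    d = np.asarray(x, float) - np.mean(x)
    return float(d[1:] @ d[:-1] / (d[:-1] @ d[:-1]))

def prediction_residuals(x, a1, mean_x):
    """Delta I_MPC(t) as one-step prediction residuals of the AR(1) model."""
    d = np.asarray(x, float) - mean_x
    return d[1:] - a1 * d[:-1]

# simulate an in-control I_MPC sequence: AR(1) fluctuation around a mean of 0.9
rng = np.random.default_rng(1)
z = rng.normal(scale=0.05, size=1000)
impc = np.empty(1000)
impc[0] = 0.9
for t in range(1, 1000):
    impc[t] = 0.9 + 0.7 * (impc[t - 1] - 0.9) + z[t]

a1 = fit_a1(impc)                       # should land near the true value 0.7
resid = prediction_residuals(impc, a1, np.mean(impc))
limit = 1.96 * np.std(resid)            # 95% confidence limits on the residuals
alarms = np.abs(resid) > limit          # statistically significant changes
```

On in-control data roughly 5% of the residuals fall outside the 95% limits by chance; a sustained run of alarms, not an isolated one, signals a performance change.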

3. A comprehensive MPC performance assessment technique

The LQG benchmark is limited to a special group of MPCs characterized by the equality of control (M) and prediction (P) horizons and the lack of ff components and constraints. It may be considered as a limit of achievable performance in terms of input and output variance to evaluate various types of controllers. Since M and P are two independent and important tuning parameters, and incorporation of constraints and ff control are important advantages of MPC over conventional controllers, alternatives to the LQG benchmark are desirable for monitoring the performance of these more interesting groups of MPC implementations. To develop new alternatives, it is useful to review the steps of obtaining the LQG benchmark. The essential step is the calculation of various control laws for different values of λ and prediction and control horizons (P = M). This step is a case study for a special type of MPC (unconstrained, no ff) and a special parameter set (M = P). The outcome of this study is an optimal value of the cost function and an optimal controller parameter set. Using the same information (plant and disturbance model, covariance matrices of noise and disturbances), studies can be conducted for any type of MPC and the influence of any parameter can be examined. These studies can be automated and the corresponding value of the cost function can be reported as a function of the underlying parameter set.

A value of the cost function suitable to be the historical benchmark and a design case that performs acceptably can be identified based on CPA decisions. Two performance measures for on-line monitoring are defined after a benchmark is obtained. η_hist(t) is extended for computation at each sampling time to determine controller performance. η_des(t) is extended for computation at each sampling time to assist in diagnosis of cause types for poor performance. CPM is implemented by using the LQG benchmark or a benchmark obtained from case studies and η_hist(t). When the controller performance is declared poor, η_des(t) is used to make diagnostic decisions.

Tools for CPM and diagnosis are available for four types of MPCs by obtaining benchmarks for constrained cases and controllers including ff components, and applying statistical analysis to the historical and model-based performance measures η_hist(t) and η_des(t) (Table 1).

    3.1. A benchmark obtained from comparative studies

The tuning parameters of MPC include the prediction horizon P, the control horizon M, and the parameter γ that determines the desired speed of approach to the setpoint through the relationship between the setpoints and the reference trajectory, r(t+k) = γ sp(t+k−1) + (1−γ) sp(t+k). In addition, weight matrices and input constraints can be used to adjust the aggressiveness of the controller. The minimum achievable value of the cost function J can be found by varying M, P, and γ if the weight matrices and constraints are fixed to specific values. If the control objective is disturbance rejection, γ is irrelevant, since the setpoints equal zero. Fig. 1 shows the values of the cost function for different combinations of P and M. For P = M (LQG benchmark), the largest value of P = M minimizes the cost function. However, M = 2 and P = 20 seems to be the optimum combination for the parameter ranges under examination.

An optimal parameter set and the optimal J_ach are identified for given constraints, parameter ranges, and weight and covariance matrices. The minimal value of J can be used as a benchmark. A quantitative measure of the performance is given by η_hist. Systematic comparative studies may be computationally too intensive, especially if limits on control moves and weight matrices are considered. Therefore, one might want to select M and P first and then continue to seek the benchmark value by varying other parameters. The absolute optimum may be missed because of the interdependencies of the parameters, but the tradeoff is a significant reduction of the computational burden.
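The comparative study is essentially a grid search over the tuning parameters. The loop below sketches the search structure in Python; `simulated_cost` is a hypothetical stand-in for the closed-loop MPC simulation that would produce J_ach for each (M, P) pair — it is not the evaporator model:

```python
def simulated_cost(M, P):
    """Hypothetical surrogate for the cost returned by one closed-loop
    simulation at control horizon M and prediction horizon P."""
    return 1.0 / P + 0.05 * M + 0.2 / M

best = None
for P in range(1, 21):
    for M in range(1, P + 1):   # the control horizon never exceeds P
        J = simulated_cost(M, P)
        if best is None or J < best[0]:
            best = (J, M, P)

J_opt, M_opt, P_opt = best      # minimal cost and the optimal parameter set
```

Selecting M and P first and then varying the remaining parameters, as suggested above, simply shrinks the outer loops of this search at the risk of missing the joint optimum.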

    3.2. On-line statistical monitoring of historical benchmark

For on-line monitoring, η_hist is computed at each sampling time. In analogy to the calculation of J_act [18], the achieved cost function J_ach is calculated over a moving horizon P_C of past data:

J_ach(t) = (1/P_C) Σ_{j=1}^{P_C} [εᵀ(t+j−P_C) Q ε(t+j−P_C) + Δuᵀ(t+j−P_C) R Δu(t+j−P_C)],   (14)

where ε(t) is the vector of control errors at time t. The performance measure η_hist(t) at sampling time t is

η_hist(t) = J_hist / J_ach(t).   (15)
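On-line, Eq. (14) is evaluated over a sliding window and Eq. (15) divides the fixed historical benchmark by it. A minimal Python/NumPy sketch (window length, weights, and data are illustrative assumptions):

```python
import numpy as np

def j_ach_window(eps, du, Q, R, t, Pc):
    """Eq. (14): achieved cost over the moving horizon of Pc past samples
    ending at time t (indices t+j-Pc for j = 1..Pc)."""
    total = 0.0
    for i in range(t - Pc + 1, t + 1):
        total += eps[i] @ Q @ eps[i] + du[i] @ R @ du[i]
    return total / Pc

def eta_hist(J_hist, eps, du, Q, R, t, Pc):
    """Eq. (15): on-line historical performance measure."""
    return J_hist / j_ach_window(eps, du, Q, R, t, Pc)

# illustrative data; the benchmark is taken from an assumed "good" period
rng = np.random.default_rng(2)
Q, R = np.diag([0.5, 1.0, 0.5]), np.diag([0.2, 2.0, 0.5])
eps = rng.normal(scale=0.1, size=(300, 3))
du = rng.normal(scale=0.05, size=(300, 3))
J_hist = j_ach_window(eps, du, Q, R, t=99, Pc=100)    # benchmark window
eta = eta_hist(J_hist, eps, du, Q, R, t=299, Pc=100)  # current performance
```

A value of η_hist(t) near 1 indicates performance comparable to the benchmark period; values well below 1 indicate degradation.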

Since η_hist is a random variable, statistical process monitoring (SPM) tools can be used to detect statistically significant changes. η_hist(t) is highly autocorrelated. Use of traditional SPM charts for autocorrelated variables

Table 1
Categorization of techniques to be used (ff = feedforward)

Controller specification | Assessment        | Monitoring | Diagnosis
Unconstrained, no ff     | LQG               | η_hist(t)  | η_des(t)
Unconstrained, ff        | Comparative study | η_hist(t)  | η_des(t)
Constrained, no ff       | Comparative study | η_hist(t)  | η_des(t)
Constrained, ff          | Comparative study | η_hist(t)  | η_des(t)

    Fig. 1. Variation of the cost function (J) with control (M) and prediction (P) horizons.


may yield erroneous results. An alternative SPM method for autocorrelated data is based on the development of a time series model, generation of the residuals between the values predicted by the model and the measured values, and monitoring of the residuals [23]. The residuals should be approximately normally and independently distributed with zero mean and constant variance if the time series model provides an accurate description of process behavior. Therefore, popular univariate control charts (such as x-chart, cumulative sum, and exponentially weighted moving average charts) are applicable to the residuals. Residuals-based SPM is used to monitor η_hist(t). An autoregressive (AR) model is used for representing η_hist(t):

A(q⁻¹) η_hist(t) = ξ(t),   (16)

where A(q⁻¹) is a monic polynomial with coefficients a_i, i = 1, …, n_a, and ξ(t) is a zero-mean, uncorrelated, Gaussian noise signal. Expand (16) to estimate η_hist(t):

η_hist(t) = −(a₁q⁻¹ + a₂q⁻² + ⋯ + a_{n_a}q⁻ⁿᵃ) η_hist(t) + ξ(t).   (17)

Estimates of a_i are obtained from analysis of process data, and the estimates of η_hist(t), denoted η̂_hist(t), are computed using (17). The residuals are

e(t) = η_hist(t) − η̂_hist(t).   (18)

The AR model and the variance of e(t) can be estimated from an in-control data set using software such as the MATLAB System Identification Toolbox. A standard x-chart is designed using control limits at ±3 standard deviations (3σ limits) to monitor the residuals e(t) and consequently η_hist(t).
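The residuals-based chart of Eqs. (16)–(18) amounts to a least-squares AR fit followed by a ±3σ test. The paper fits the model with the MATLAB System Identification Toolbox; the NumPy version below is an illustrative substitute, and the model order and toy data are assumptions:

```python
import numpy as np

def fit_ar(x, na):
    """Least-squares fit of the AR model of Eq. (17) on mean-centred data:
    x(t) = c1 x(t-1) + ... + c_na x(t-na) + e(t), with c_k = -a_k."""
    d = np.asarray(x, float) - np.mean(x)
    N = len(d)
    X = np.column_stack([d[na - 1 - k: N - 1 - k] for k in range(na)])
    y = d[na:]
    c, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ c            # Eq. (18): e(t) = eta(t) - eta_hat(t)
    return c, resid

def x_chart(resid):
    """Standard x-chart on the residuals with +/- 3 sigma control limits."""
    sigma = np.std(resid)
    return np.abs(resid) > 3.0 * sigma

# in-control AR(2) data standing in for eta_hist(t)
rng = np.random.default_rng(3)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + rng.normal(scale=0.02)
c, resid = fit_ar(x, na=2)
alarms = x_chart(resid)          # very few alarms while in control
```

If the AR model is adequate, the residuals are approximately independent and normal, so the familiar in-control alarm rate of an x-chart (about 0.27% outside 3σ) applies.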

3.3. Statistical monitoring of model-based performance measure

η_des [17] is a model-based performance measure which is in closer agreement with the general MPC methodology. Therefore, η_des is used in the proposed method as the model-based performance measure after modifying the cost functions for on-line monitoring. J_des(t) and J_ach(t) are computed using Eq. (14) with ε̂ and ε, respectively. The performance measure η_des(t) is

η_des(t) = J_des(t) / J_ach(t).   (19)

    Statistical monitoring similar to that for histt isnecessary to detect significant changes over time.

    4. Diagnosis

Monitoring the model-based performance measure is useful in diagnosing the causes of performance degradation. Some root causes affect the design case controller while others do not. For instance, increases in unmeasured disturbances, actuator faults, or an increase in the model mismatch do not influence the design case performance. Accordingly, J_des remains constant while J_ach increases, reducing the model-based performance measure. Root cause problems such as input saturation or an increase in measured disturbances, on the other hand, affect the design case performance as well. This leads to an approximately constant value of the model-based performance measure, if the effect is quantitatively equal (which happens for a good process model). Furthermore, the proposed statistical monitoring allows change detection over time for random variables of unknown distribution function. The three techniques introduced can be classified according to the type of controller and the indexes used for assessment, monitoring, and diagnosis activities (Table 1).

If a degradation in performance is indicated, diagnosis can be performed by looking at η_des(t). If it has not changed significantly, the reason for the overall degradation affects both the design and achieved performance cost functions to the same extent. Thus, the cause belongs to group I (Table 2). If the model-based performance measure shows a degradation as well, the cause belongs to group II. This diagnostic sequence assumes that only one source cause occurs. If multiple causes can occur simultaneously, the diagnosis logic becomes more complex.
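Under the single-fault assumption, the diagnostic sequence reduces to a small decision function. The Python sketch below illustrates the logic; the boolean inputs and return strings are illustrative, and the alarms that set them are assumed to come from the SPM charts and from inspecting the manipulated variables:

```python
def diagnose(hist_alarm, des_alarm, input_saturated):
    """Single-fault diagnosis logic of Section 4: hist_alarm signals a
    degradation in eta_hist(t); des_alarm signals a significant change in
    eta_des(t); input_saturated comes from inspecting the manipulated
    variables (manual controller changes are assumed known, subgroup Ia)."""
    if not hist_alarm:
        return "performance acceptable"
    if des_alarm:
        # only J_ach degraded -> group II: process change, unmeasured
        # disturbance, or noise covariance change; model validation follows
        return "group II"
    # design and achieved costs affected equally -> group I, subgroup Ib
    return "group Ib: input saturation" if input_saturated \
        else "group Ib: measured disturbance change"
```

For example, an η_hist alarm with an unchanged η_des and no saturation points to a change in the measured disturbances.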

Diagnosis of group I and its subgroups. Subgroups are defined to further distinguish between the root cause problems in group I. All changes in the controller (e.g. tuning parameters, estimator, constraints) are assumed to be performed manually. These changes are known and their effects can be monitored. Since the action taken is known, the root cause of the effect does not need to be identified by diagnosis tools (subgroup Ia). The remaining two root cause problems (changes in measured disturbances and input saturation) make up subgroup Ib. Additional information is needed to distinguish between the two root cause problems in subgroup Ib. Looking at the manipulated variables, input saturation can be determined by visual inspection. A saturation effect in a manipulated variable indicates input saturation as the underlying root cause and rules out the increase in measured disturbances.

Table 2
Groups of root cause problems

Group I                                 | Group II
(a) Change in controller specifications | Change in process dynamics
(b) Change in measured disturbances     | Change in unmeasured disturbance
(b) Change in input saturation          | Change in noise covariance


Diagnosis of group II. Distinguishing between performance degradation due to increases in unmeasured disturbances and changes in process parameters is a question of model validation. Consider a highly idealized case where disturbances can be regarded as white noise. If the model is perfect, the innovation sequence is white noise as well [24]. Imperfect models change the color of the innovation sequence, which can be detected using various methods.
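One common way to detect a change in the color of the innovations is to compare their sample autocorrelations against the whiteness bounds ±1.96/√N. The paper does not prescribe a specific test; the Python/NumPy sketch below uses this standard check, with the lag count and toy sequences as assumptions:

```python
import numpy as np

def whiteness_violations(innov, max_lag=20):
    """Sample autocorrelations rho(1..max_lag) of the innovation sequence;
    for a white sequence, about 95% of them should lie inside the bounds
    +/- 1.96/sqrt(N)."""
    e = np.asarray(innov, float)
    e = e - e.mean()
    N = len(e)
    denom = float(e @ e)
    rho = np.array([float(e[k:] @ e[:-k]) / denom
                    for k in range(1, max_lag + 1)])
    return np.abs(rho) > 1.96 / np.sqrt(N)

rng = np.random.default_rng(4)
white = rng.normal(size=2000)                          # perfect-model innovations
colored = np.convolve(white, [1.0, 0.8], mode="same")  # imperfect-model innovations
```

For the white sequence only a few lags (about 5% by chance) violate the bounds, while the colored sequence shows a clear violation at lag 1, indicating an imperfect model.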

If it is assumed that changes in controller specifications are made manually and do not need to be identified by the diagnosis tools, the sequence of detection and diagnosis follows the path in Fig. 2. Performance is monitored over time using the performance measure based on η_hist. Once a degradation is detected, η_des is used to distinguish between root cause problems of group I and group II. Information about the trend of manipulated variables is used to distinguish between problems resulting from constraints and increases in measured disturbances.

Real-time diagnosis with G2. G2 is a commercial KBS development tool for building real-time KBSs [25]. It can be used for developing supervisory KBSs for building process models, monitoring and control systems, fault diagnosis algorithms, on-line operator interaction, and integrating these functions. Expert knowledge and reasoning can be represented by rules that can make inferences based on process data. In addition, procedures containing a certain sequence of actions can be programmed, which is useful, for instance, in automating checks on different variables. Communication between G2 and external systems is handled by the G2 standard interface (GSI). GSI provides the necessary network protocol information for communication between G2 and external functions written in C. In our work, software modules developed in MATLAB are converted to C with the MATLAB C compiler and linked to G2 [26].

5. Integrated CPM and diagnosis for MPC of an evaporator using the comprehensive studies benchmark

The techniques for CPA, CPM, and diagnosis are applied to MPC-based control of an evaporator. First, a historical benchmark is found. Then, performance monitoring and diagnosis are performed simultaneously for two different cases differing in the use of linear and nonlinear plant models. The fundamental assumption of a known plant and disturbance model while assessing the initial performance is perfectly valid for the first case study and questionable for the second. The impact of the linearity assumption and other effects resulting from nonlinearity are shown and discussed.

A forced circulation evaporator (Fig. 3) model is used in this study. It is a linear state space model in deviation variables obtained from linearization around normal operating conditions [27]. The system has three controlled variables (separator level (L2), product composition (X2), and operating pressure (P2)), three manipulated variables (product flowrate (F2), steam pressure (P100), and cooling water flowrate (F200)), and five disturbances (circulation flowrate (F3), feed flowrate (F1), feed composition (X1), feed temperature (T1), and cooling water inlet temperature (T200)).

    Fig. 2. Diagnosis logistics.


    5.1. Initial assessment of the control systems capability

    System specifications. The capability of the MPC system for controlling the evaporator is assessed by conducting simulations using the linear model of the evaporator. The controlled variables are separator level, product composition, and operating pressure. The manipulated variables are product flowrate, steam pressure, and cooling water flowrate. The weight matrices W and R, representing the relative importance of the controlled and manipulated variables, respectively, are

    W = ( 0.5/m   0.0     0.0
          0.0     1.0/%   0.0
          0.0     0.0     0.5/kPa ),                (20)

    R = ( 0.2 min/kg   0.0       0.0
          0.0          2.0/kPa   0.0
          0.0          0.0       0.5 kg/min ).      (21)

    The limits on the controlled and manipulated variables are listed in Table 3. The noise signal is assumed to be white and is generated such that the standard deviation of each measurement is approximately 1% of its original value under normal operating conditions. The uncontrolled inputs are a combination of white noise sequences whose standard deviations are about 1% of their original values and a pseudo-random binary signal that adds step changes to the disturbance. The magnitude of the step changes is about 1% of the original value of the variables. A Kalman filter is used for state estimation by default. A change in the estimator is part of the case studies.
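The excitation described above (1% white noise plus a pseudo-random binary step signal) might be generated as follows; the nominal value and the PRBS switching period are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
nominal = 50.0                  # illustrative nominal value of a variable

# White measurement noise with standard deviation ~1% of the nominal value.
noise = rng.normal(0.0, 0.01 * nominal, size=n)

# Pseudo-random binary signal: random +/- 1%-of-nominal steps held for a
# fixed switching period (the period is an assumption).
hold = 50
levels = 0.01 * nominal * rng.choice([-1.0, 1.0], size=n // hold)
prbs = np.repeat(levels, hold)

# Combined uncontrolled-input sequence: noise plus step-type disturbance.
disturbance = noise + prbs
```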

    Case studies for initial performance assessment. Case studies are performed to find an optimal achievable performance and the corresponding set of tuning parameters based on known plant and disturbance models, and estimates of the noise and disturbances. In this case, P and M are the only tuning parameters, since the setpoint-trajectory parameter is irrelevant (no setpoint change) and the weight matrices and constraints are given. Simulations are performed for values of P from 1 to 15 and for M from 1 to P. The results (Fig. 4) indicate that the optimal Jach is obtained for M = P = 1, which is fairly surprising, as stability problems usually exist for this combination. As the optimal value is by far the smallest, the M = P = 1 case is taken as the reference case.
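The horizon sweep can be expressed as a nested search for the smallest achieved cost. Here `closed_loop_cost` is a hypothetical stand-in for running the full MPC simulation and evaluating the cost function, so the optimum it returns is a property of the toy surrogate, not the evaporator result reported above:

```python
# Exhaustive sweep over prediction horizon P (1..15) and control
# horizon M (1..P), keeping the pair with the smallest achieved cost.
def closed_loop_cost(P, M):
    # Toy surrogate: longer prediction helps; an aggressive control
    # horizon relative to P hurts. Purely illustrative.
    return 1.0 / P + 0.05 * M / P

best = None
for P in range(1, 16):
    for M in range(1, P + 1):
        J = closed_loop_cost(P, M)
        if best is None or J < best[0]:
            best = (J, P, M)

J_opt, P_opt, M_opt = best
```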

    Flowrate F3 is selected as the measured disturbance and the corresponding reduced value of the cost function as the historical benchmark. After identifying the benchmark and the design case tuning parameters, the ARMA models needed for CPM are built using the MATLAB System Identification Toolbox. Assuming that detection of large changes in the controller performance is of interest, an x-chart with 2σ limits as proposed in [5] is applied to the prediction residuals. If detection of small changes in the mean of the performance measures is desired, a CUSUM chart would be more suitable.
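The two chart types can be sketched as follows; the CUSUM slack and decision-interval values are conventional defaults, not parameters from this study:

```python
import numpy as np

def x_chart(residuals, sigma, k=2.0):
    """Shewhart x-chart: flag points outside the +/- k*sigma control limits."""
    return np.abs(residuals) > k * sigma

def cusum(residuals, sigma, slack=0.5, h=5.0):
    """Two-sided tabular CUSUM on standardized residuals; an alarm is
    raised when either cumulative sum exceeds the decision interval h."""
    z = np.asarray(residuals) / sigma
    s_pos = s_neg = 0.0
    alarms = []
    for zi in z:
        s_pos = max(0.0, s_pos + zi - slack)
        s_neg = max(0.0, s_neg - zi - slack)
        alarms.append(s_pos > h or s_neg > h)
    return np.array(alarms)

rng = np.random.default_rng(2)
res = rng.normal(0.0, 1.0, 300)
res[150:] += 1.0                 # sustained mean shift halfway through
xc = x_chart(res, sigma=1.0)     # catches large individual deviations
cs = cusum(res, sigma=1.0)       # accumulates evidence of the small shift
```

The x-chart reacts to single large residuals, while the CUSUM integrates a persistent small shift until it crosses the decision interval, which is the distinction drawn in the text.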

    Fig. 3. Flowsheet of the evaporator system [27].

    Table 3
    Constraints on controlled and manipulated variables

    Variable   Lower limit    Upper limit    Δu_max
    L2         −0.5 m         1 m            –
    X2         −5%            5%             –
    P2         −10 kPa        10 kPa         –
    F2         −2 kg/min      3 kg/min       100 kg/min
    P100       −50 kPa        50 kPa         100 kPa
    F200       −20 kg/min     20 kg/min      100 kg/min

    A number of cases have been considered to test the CPM and diagnosis techniques, that is, to detect performance degradation and identify the category of the underlying root-cause problem:

    (1) Increase in unmeasured disturbances F1 and X1 at t = 300 min. The disturbance data sequences of these variables are multiplied by a factor of 4; hence, the variance and the size of the step disturbance increase.

    (2) Increase in measured disturbance F3 at t = 300 min. The disturbance data sequence of this variable is increased by a factor of 4.

    (3) Increase in measurement noise at t = 300 min. The magnitude of the noise sequence is increased by a factor of 4.

    (4) Change to a less sophisticated state estimator as an example of an on-line tuning attempt. The MATLAB MPC Toolbox has a default state estimator named the DMC state estimator. It assumes the matrix relating the unmeasured disturbances and the states to be an identity matrix [28]. This implies the questionable assumption that each disturbance affects one and only one state variable.

    (5) Increase in model mismatch at t = 300 min. Matrix B of the state space model is changed to

    B = ( 0.1   0.37266    0.0
          0.1   0.0        0.0
          0.0   0.036914   0.0075272 ),             (22)

    where B is the matrix relating the manipulated variables and the states in the continuous-time state space model. The control system has turned out to be fairly robust to changes in B. To get an effect that causes a large decrease in performance, matrix B is multiplied by 0.5.

    (6) Decrease of the saturation limit of P100 at t = 300 min. The upper limit of P100 is decreased from 295 to 195 kPa.
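Events of this kind reduce to simple modifications of the simulation data at the change time; a minimal sketch with illustrative sequences:

```python
import numpy as np

def scale_from(signal, t_change=300, factor=4.0):
    """Events 1-3: multiply a disturbance/noise sequence by `factor`
    from t_change onward, growing both variance and step size."""
    out = np.asarray(signal, dtype=float).copy()
    out[t_change:] *= factor
    return out

def saturate(u, upper):
    """Event 6: impose a tightened upper saturation limit on an input."""
    return np.minimum(u, upper)

d = np.full(600, 0.5)             # illustrative disturbance sequence
d_faulty = scale_from(d)          # fourfold larger after t = 300
u = np.full(600, 250.0)           # illustrative P100 trajectory, kPa
u_sat = saturate(u, upper=195.0)  # limit dropped from 295 to 195 kPa
```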

    5.2. CPM with linear plant simulation model and linear MPC model

    The linear model of the evaporator is used in the MPC algorithm. In the following, a subset of the cases described above is presented. It is assumed for each case that only the selected change occurs and no other variations exist. The effects on the two relevant measures γhist and γdes and on the trends of the manipulated variables are discussed and plotted as appropriate.

    In-control situation. Figs. 5 and 6 show the performance measures γhist and γdes and the corresponding prediction residuals ehist and edes. The horizon of past time intervals PC used to calculate the design and achieved cost functions is chosen as 75. Hence, γhist and γdes can be calculated for t > 75 min. For t < 76 min, the performance measures are set to zero. The step change in the performance measures at t = 75 min is statistically relevant, leading to a violation of the control limits. Apart from this initialization effect, the figures show an in-control situation, and both residuals are in statistical control. Because of the idealized setting of this case study, values close to 1 can be observed for γhist and γdes, indicating performance according to the historical benchmark and the design specifications, respectively.
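The moving-horizon computation of γhist can be sketched as follows, with scalar weights standing in for the matrices W and R and illustrative error and move sequences:

```python
import numpy as np

PC = 75  # horizon of past time intervals, as in the case study

def window_cost(e, du, W=1.0, R=1.0):
    """Quadratic cost over a window: weighted control errors plus input
    moves. Scalar weights stand in for the matrices W and R."""
    return float(W * np.sum(e ** 2) + R * np.sum(du ** 2))

def gamma_hist(J_hist, e, du, t, PC=PC):
    """gamma_hist(t) = J_hist / J_ach(t), with J_ach computed over the
    last PC intervals; zero until a full window is available (t <= PC)."""
    if t <= PC:
        return 0.0
    return J_hist / window_cost(e[t - PC:t], du[t - PC:t])

rng = np.random.default_rng(3)
e = rng.normal(0.0, 0.1, 400)           # illustrative control errors
du = rng.normal(0.0, 0.05, 400)         # illustrative input moves
J_hist = window_cost(e[:PC], du[:PC])   # benchmark from a reference window
g = [gamma_hist(J_hist, e, du, t) for t in range(1, 401)]
```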

    Increase in unmeasured disturbances. The increase in unmeasured disturbances at t = 300 min causes controller performance degradation as indicated by γhist (Fig. 7). The decrease in γhist can be observed, and the

    Fig. 4. Variation in cost function (J) with control (M) and prediction (P) horizons.



    Fig. 5. γhist in an in-control situation.

    Fig. 6. γdes in an in-control situation.



    out-of-control situation for the residual is indicated by repeated violations of the control limit and a change in the variance. γdes and edes show similar changes, indicating that the problem belongs to group II, as expected.

    Increase in measured disturbances. The increase in the measured disturbance causes performance degradation as indicated by γhist. Since γdes does not decrease, the actual cause of degradation belongs to group I. Trends in the manipulated variables are observed to distinguish between the possible subgroups. Performance degradation due to constraints is ruled out since the manipulated variables are not saturated.

    The trend of the performance measures after the disturbance is introduced becomes smoother (see the plots in the G2 screens, Figs. 13 and 14). This happens because γhist is the ratio of a constant to a random variable: if the mean of the random variable increases more than its standard deviation, the random character of the ratio decreases. Jach increases due to the increase in the measured disturbance.

    Increase in measurement noise. An increase in measurement noise has a negative effect on performance, reducing both γhist and γdes. Because γdes is affected as well, the root-cause problem is identified as belonging to group II.

    Change in the state estimator. On-line (re-)tuning might be one way to enhance the overall plant performance. This kind of on-line improvement is done manually, and the operators are aware of the changes made. The change in the state estimator has a negative effect on the performance. Since γdes is not affected, the change in the estimator affects the estimation accuracy of the design and achieved performance cases in the same manner.

    Increase in model mismatch. A change in the matrix relating the manipulated and controlled variables degrades the performance as indicated by a reduction in γhist. Since γdes is affected in a similar manner as well, the underlying problem belongs to group II.

    Decrease of the saturation limit. The saturation limit of P100 is set to zero at t = 300 min. γhist indicates a performance degradation (Fig. 8). Because γdes does not decrease, the source of the degradation belongs to group I. To distinguish between an increase in measured disturbances, an increase in measurement noise, and input saturation as the source cause, the trend of the manipulated variables is observed (Fig. 9). The effect of input saturation can be seen clearly between t = 300 and 350 min. After t = 350 min, the MPC, being aware of this limit, tries to stay at the operating point by rearranging the use of the manipulated variables. The input saturation is nevertheless correctly identified as the root-cause problem.
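The diagnosis steps used in these cases reduce to a small rule base. A sketch, assuming the thresholding of γhist and γdes into booleans happens upstream:

```python
def diagnose(gamma_hist_low, gamma_des_low, mv_saturated):
    """Rule-based classification of a detected performance degradation.

    Group II: causes shared by the design and achieved cases, e.g.
              unmeasured disturbances, measurement noise, model mismatch.
    Group I : causes visible only in the achieved performance, e.g.
              measured disturbances or input saturation; the trends of
              the manipulated variables separate these subgroups.
    """
    if not gamma_hist_low:
        return "no degradation"
    if gamma_des_low:
        return "group II"
    if mv_saturated:
        return "group I: input saturation"
    return "group I: disturbance/noise increase"

# Event 6 in the linear case: gamma_hist drops, gamma_des does not,
# and P100 rides its tightened bound.
result = diagnose(gamma_hist_low=True, gamma_des_low=False, mv_saturated=True)
```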

    5.3. CPM with nonlinear plant simulator and linear MPC model

    For this case, the linear model is used in the MPC algorithm, whereas the process is simulated by the nonlinear model. A historical benchmark and an optimal tuning parameter set

    Fig. 7. Effect of increase in unmeasured disturbances on γhist.



    Fig. 8. Effect of input saturation on γhist.

    Fig. 9. Effect of input saturation on the manipulated variables.



    obtained earlier and used with the linear model are now tested by using the nonlinear plant model. This is the more realistic setup and consequently a more valuable test for evaluating CPM of MPC.

    Problems resulting from the optimal tuning parameters. The set of tuning parameters found earlier (M = P = 1) was considered surprising and questionable, since this combination frequently causes stability problems. These stability problems can be observed when applying this parameter set to the nonlinear plant model. The controlled variables exhibit wild oscillations, and γhist indicates poor performance. This poor performance indicates the need to change P and M to more realistic values (this is done on-line in the G2 environment, as discussed in Section 6). The results of such changes can then be measured by γhist (Fig. 10). P = 15 and M = 1 turns out to be a good choice, and the corresponding (new) historical benchmark is Jhist = 0.162. However, this correction is more cosmetic than rigorous, since the linearized model turned out to be inappropriate for representing the closed-loop dynamics.

    In-control situation. In a typical in-control situation with P = 15 and M = 1, nothing causing statistically significant performance degradation happens, and γhist and γdes are in statistical control except for a few false alarms caused by the use of the more accurate nonlinear model. If 3σ limits (99.7% confidence), which are more popular in statistical process control, were used instead of the 2σ control limits (95% confidence), all false alarms would be eliminated. The actual out-of-control signals in subsequent figures are still significant with 3σ limits. The two in-control situations (based on the linear and nonlinear models) differ significantly. The performance measure γhist indicates comparatively poor performance, since its mean is around 0.5 compared to 1 in the first case study. The same holds for γdes, which now averages about 0.6, indicating a significant model mismatch between the internal (linear) model and the (nonlinear) plant model.
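The trade-off between the two limit choices follows from normal coverage probabilities; a quick check:

```python
import math

def two_sided_coverage(k):
    """P(|Z| <= k) for a standard normal Z: the fraction of in-control
    points expected inside +/- k sigma control limits."""
    return math.erf(k / math.sqrt(2.0))

c2 = two_sided_coverage(2.0)   # about 0.954 -> roughly 4.6% false alarms
c3 = two_sided_coverage(3.0)   # about 0.997 -> roughly 0.27% false alarms
```

Widening the limits from 2σ to 3σ cuts the in-control false alarm rate by more than an order of magnitude, at the cost of slower detection of small shifts.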

    Increase in unmeasured disturbance. An increase in unmeasured disturbances at t = 300 min degrades controller performance as indicated by γhist. Since γdes is also affected, the problem belongs to group II.

    Increase in measured disturbance. When the measured disturbance is increased, a reduction in γhist indicates performance deterioration. In contrast to the linear model case, γdes shows a statistically significant change as well (Fig. 11). The increase in γdes indicates that the difference between the design and the achieved cost functions becomes smaller. The increase in the measured disturbance results in a larger increase in the design cost function than in the achieved performance cost function, which is surprising. However, γdes does not decrease, and degradation due to constraints can be ruled out by looking at the manipulated variable plots. Consequently, an increase in measured disturbances is correctly diagnosed as the actual root-cause problem.

    Increase in the measurement noise. The increase in measurement noise affects the performance negatively, as indicated by reductions in γhist and γdes, pointing to a source cause in group II.

    Change in the state estimator. γhist decreases as a result of the change in the state estimator. In contrast to the linear case, γdes is affected negatively as well. This means the difference between the achieved performance and design cost functions increased due to a larger increase in the achieved performance cost function. For the design case, the estimator compensates for noise and unmeasured disturbances, whereas for the achieved performance case it also compensates for a significant model mismatch. Hence, it is qualitatively reasonable that the estimator plays a more important role for the achieved performance cost function, resulting in a larger degradation of performance.

    Increase in model mismatch. A change in model parameters degrades the performance as indicated by reductions in γhist. Since γdes is affected as well, the underlying problem belongs to group II.

    Reduction of the saturation limit. A reduction in the limit on P100 decreases the performance as indicated by γhist. In contrast to the linear case, an increase in γdes is observed. This increase can be explained by the portion

    Fig. 10. γhist for different prediction and control horizons.



    of shared problems between the design and achieved cases. Input saturation is identified as the root-cause problem by looking at the manipulated variables, where P100 reaches its upper bound.

    5.4. Summary of case study discussion

    For the case of the linear simulator model (an MPC model of high accuracy, since the simulator and MPC models are identical), the assumption of known plant and disturbance models is valid, and the proposed CPA, CPM, and diagnosis techniques work well. The initial performance assessment leads to a reasonable performance benchmark and a set of optimal tuning parameters, resulting in values close to 1 for γhist. Moreover, performance degradation due to the occurrence of different root-cause problems can be detected and their categories identified. Table 4 summarizes the responses of the three measures to the different events. In Table 4, 'd' indicates a decrease, 'i' an increase, 'n' not affected, 's' saturated, and '–' not considered. For the case of known closed-loop dynamics, the proposed approach gives satisfactory results.

    When the nonlinear plant model is used in simulations with the linear model in MPC (a model of low accuracy), the invalidity of the assumption of known closed-loop dynamics affects the performance assessment step dramatically. The benchmark and the set of optimal tuning parameters obtained are unreasonable. The fact that the optimal set of tuning parameters for the linear case results in wild oscillations for the nonlinear case shows that the dynamic representation of the system through the linearized model is inappropriate. Thus, the use of the linearized model for obtaining a benchmark is not suitable, and a closed-loop model has to be identified. However, once an initial assessment is made (either obtained by using a suitable model or based on expert knowledge and experience), a historical (user-specified) benchmark can be set and the relative changes in performance can be monitored. The accuracy of the CPM tool is acceptable in spite of the poor model quality. The differences in the responses of the relevant measures when the nonlinear plant model is used are indicated in parentheses in Table 4. The deviations in response to the change in state estimator would lead to a wrong diagnosis. This erroneous diagnosis can be associated with the model mismatch, which leads to quantitatively different effects on the internal and plant models.

    Fig. 11. Effect of increase in measured disturbances on γdes.

    Table 4
    CPM and diagnosis results using linear and nonlinear models (nonlinear-case deviations in parentheses)

    Event   γhist   γdes    MVs exceed limit
    1       d       d       –
    2       d       n (i)   n
    3       d       d (n)   –
    4       d       n (d)   –
    5       d       d       –
    6       d       n (i)   s



    6. Integration of CPM and diagnosis by G2

    MATLAB m-files and G2 can be connected via an interface based on C code. The m-files are compiled to C, and the C code is linked to the GSI. So far, this has been done for the simulation and control part of this work and for the on-line calculation of the historical benchmark.

    On-line tuning. Consider the situation of the optimal tuning parameters obtained from the case studies based on the linearized model. When P = M = 1, heavy oscillations and extremely small values of γhist have been observed. As G2 allows on-line monitoring and tuning, corrective action can be taken instantaneously. The prediction horizon is increased to 10 (see the message box in Fig. 12), resulting in a less aggressive controller.



    With this change, the performance of the controller improves, as can be seen in the plot of the performance measure or directly from the raw data trend.

    Increase in measured disturbance. The same increase used in the measured-disturbance case study with the linear model is repeated in the G2 environment. The diagnosis logistics mentioned in Section 4 are implemented as a rule base in G2 to support the operator. Figs. 13 and 14 show the G2 screens with the control panel, the message box, the manipulated output, and the performance measures. The results of inferencing by G2 and the diagnostic results are displayed in the message box.

    7. Conclusions

    Performance assessment and monitoring of model predictive control systems can be integrated with the diagnosis of source causes for poor controlled system performance. A methodology is proposed to determine a benchmark and monitor MPC performance on-line. A performance measure based on the ratio of historical and achieved performance is used for monitoring, and a ratio of design and achieved performance is used for diagnosis. Case studies with linear and nonlinear models of an evaporator illustrate the methodology and the limitations of linearity assumptions. For the linear model, the assumption of a known plant and disturbance model is perfectly valid, and the integrated CPA, CPM, and diagnosis techniques perform well in monitoring and diagnosis of MPC performance. Studies with the nonlinear plant model illustrate that the use of the linearized model for obtaining a benchmark is not suitable.

    Acknowledgements

    Financial assistance provided by Equilon is gratefullyacknowledged.

    Fig. 14. Printout of G2 screen: increase in the measured disturbance.

    References

    [1] T.J. Harris, C. Seppala, L.D. Desborough, A review of performance monitoring and assessment techniques for univariate and multivariate control systems, J. Process Control 9 (1999) 1–17.
    [2] D.J. Kozub, Controller performance monitoring and challenges, in: CPC V Proceedings, 1997, pp. 83–96.
    [3] T. Harris, Assessment of control loop performance, Can. J. Chem. Eng. 67 (1989) 856–861.
    [4] L. Desborough, T. Harris, Performance assessment measures for univariate feedback control, Can. J. Chem. Eng. 70 (1992) 1186–1197.
    [5] B. Huang, S.L. Shah, Performance Assessment of Control Loops, Springer-Verlag, London, 1999.
    [6] W. DeVries, S. Wu, Evaluation of process control effectiveness and diagnosis of variation in paper basis weight via multivariate time series analysis, IEEE Trans. Auto. Cont. 23 (1978) 702–708.
    [7] D.J. Kozub, C. Garcia, Monitoring and diagnosis of automated controllers in the chemical process industry, in: AIChE Annual Meeting, St. Louis, MO, 1993.
    [8] S. Bezergianni, C. Georgakis, Controller performance assessment based on minimum and open-loop output variance, Control Eng. Practice 8 (2000) 791–797.
    [9] C.B. Lynch, G.A. Dumont, Control loop performance monitoring, IEEE Trans. Control Syst. Technol. 4 (1996) 185–192.
    [10] L. Kammer, R. Bitmead, P.L. Bartlett, Optimal controller properties from closed-loop experiments, Automatica 34 (1998) 83–91.
    [11] N.F. Thornhill, M. Oettinger, P. Fedenczuk, Refinery-wide control loop performance assessment, J. Process Control 9 (1999) 109–124.
    [12] B. Huang, S.L. Shah, E.K. Kwok, Good, bad, or optimal: performance assessment of multivariable processes, Automatica 33 (1997) 1175–1183.
    [13] T. Harris, F. Boudreau, J. MacGregor, Performance assessment for multivariate feedback controllers, Automatica 32 (1996) 1505–1518.
    [14] B. Huang, S. Shah, H. Fujii, The unitary interactor matrix and its estimation from closed-loop data, J. Process Control 7 (1997) 195.
    [15] M.L. Tyler, M. Morari, Performance monitoring of control systems using likelihood methods, Automatica 32 (1996) 1145–1162.
    [16] S. Kendra, A. Cinar, Controller performance assessment by frequency domain techniques, J. Process Control 7 (1997) 181–194.
    [17] P. Patwardhan, S.L. Shah, Performance diagnostics of model-based controllers, J. Process Control 12 (2002) 413–427.
    [18] Y. Zhang, M.A. Henson, A performance measure for constrained model predictive controllers, in: European Control Conference, Karlsruhe, Germany, 1999.
    [19] N. Stanfelj, T.E. Marlin, J.F. MacGregor, Monitoring and diagnosing process control performance: the single-loop case, Ind. Eng. Chem. Res. 32 (1993) 301–314.
    [20] P. Kesavan, J. Lee, Diagnostic tools for multivariate model-based control systems, Ind. Eng. Chem. Res. 36 (1997) 2725–2738.
    [21] P. Jofriet, C. Seppala, M. Harvey, B. Surgenor, T. Harris, An expert system for control loop performance, Pulp Paper Canada 97 (1996) 207–211.
    [22] S.J. Kendra, M.R. Basila, A. Cinar, Intelligent process control with supervisory knowledge-based systems, IEEE Control Syst. 14 (1994) 37–47.
    [23] L.C. Alwan, H.V. Roberts, Time series modeling for statistical process control, J. Bus. Econ. Stat. 6 (1988) 87–95.
    [24] B.D.O. Anderson, J.B. Moore, Optimal Filtering, Prentice-Hall, New Jersey, 1979.
    [25] Gensym, G2 Reference Manual, Gensym Corporation, Cambridge, MA, 1997.
    [26] E. Tatara, An integrated knowledge-based system for automated system identification, monitoring, and sensor audit for multivariate processes, Master's thesis, Illinois Institute of Technology, Chicago, USA, 1999.
    [27] R. Newell, P. Lee, Applied Process Control: A Case Study, Prentice-Hall, New Jersey, 1988.
    [28] M. Morari, N.L. Ricker, Model Predictive Control Toolbox for Use with MATLAB, The MathWorks, Natick, 1998.