FAULT DETECTION FOR THE BENFIELD PROCESS USING A CLOSED-LOOP SUBSPACE RE-IDENTIFICATION APPROACH

by

Johannes Philippus Maree

Submitted in partial fulfilment of the requirements for the degree Master of Engineering (Electronic Engineering) in the Faculty of Engineering, the Built Environment and Information Technology

UNIVERSITY OF PRETORIA

November 2008

© University of Pretoria

Summary

Closed-loop system identification and fault detection and isolation are the two fundamental building blocks of process monitoring. Efficient and accurate process monitoring increases plant availability and utilisation. This dissertation investigates a subspace system identification and fault detection methodology for the Benfield process, used by Sasol Synfuels in Secunda, South Africa, to remove CO2 from CO2-rich tail gas. Subspace identification methods originated at the intersection of system theory, geometry and numerical linear algebra, which makes them computationally efficient tools for estimating system parameters. Subspace identification methods are classified as black-box identification techniques: they do not rely on a priori process information and estimate the process model structure and order automatically. Typical subspace identification algorithms use a non-parsimonious model formulation, with extra terms in the model that appear to be non-causal (stochastic noise components). These extra terms are included to perform the subspace projections conveniently, but they inflate the variance of the estimates and are partially responsible for the loss of closed-loop identifiability. The subspace identification methodology proposed in this dissertation incorporates two successive LQ decompositions: the first removes the stochastic components, and the second yields a state-space model of the plant.
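The role of the first LQ decomposition can be illustrated with a minimal numpy sketch. This is not the dissertation's 2-ORT algorithm, only the core projection idea; the data matrices U and Y and the static gain used here are invented for the example:

```python
import numpy as np

def lq(X):
    """LQ decomposition: X = L @ Q with L lower-triangular, Q orthonormal rows."""
    Qt, Rt = np.linalg.qr(X.T)
    return Rt.T, Qt.T

rng = np.random.default_rng(1)
N = 500
U = rng.standard_normal((2, N))                      # exciting input block
G = np.array([[1.0, -0.5],
              [0.3,  2.0]])                          # invented static gain
Y = G @ U + 0.1 * rng.standard_normal((2, N))        # outputs plus noise

# LQ of the stacked data: the block-triangular L splits Y into a part lying
# in the row space of U and an orthogonal (stochastic) residual.
L, Q = lq(np.vstack([U, Y]))
L21, L22 = L[2:, :2], L[2:, 2:]
Y_det = L21 @ Q[:2, :]       # deterministic part of Y (explained by U)
Y_sto = L22 @ Q[2:, :]       # stochastic residual, orthogonal to U

# By construction Y_det + Y_sto reconstructs Y, and Y_sto @ U.T is
# numerically zero, i.e. the residual carries no input-correlated content.
```

Discarding Y_sto before the second decomposition is what removes the stochastic components from the subsequent state-space estimation step.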
The stability of the identified plant is further guaranteed by exploiting the shift-invariance property of the extended observability matrix: the shifted extended observability matrix is appended with a block of zeros. It is shown that the spectral radii of the identified system matrices all lie within the unit circle when the system matrices are derived from the newly appended extended observability matrix. The proposed subspace identification methodology is validated and verified by re-identifying the Benfield process operating in closed loop, with an RMPCT controller, using measured closed-loop process data. Models identified from data measured on the Benfield process, operating in closed loop with an RMPCT controller, produced validation data fits of 65% and higher. From the residual analysis results it was concluded that the proposed subspace identification method produces models that are accurate in predicting future outputs and that represent a wide variety of process inputs. A parametric fault detection methodology is proposed that monitors the system parameters estimated by the subspace identification methodology. The fault detection methodology is based on the monitoring of parameter discrepancies, where sporadic parameter deviations are detected as faults. Extended Kalman filter theory is implemented to estimate system parameters, instead of system states, as new process data becomes available. The extended Kalman filter needs accurate initial parameter estimates and is therefore periodically updated by the subspace identification methodology whenever a new, more accurate set of parameters has been identified. The proposed fault detection methodology is validated and verified by monitoring the process behaviour of the Benfield process. Faults that were monitored for, and detected, include foaming, flooding and sensor faults.
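The stability guarantee can be sketched numerically. This is a minimal illustration under assumed dimensions; the matrices A and C and the block sizes are invented for the example, not taken from the Benfield model:

```python
import numpy as np

# Invented stable test system: n = 2 states, l = 2 outputs, i = 6 block rows.
A = np.array([[0.9, 0.2],
              [0.0, 0.6]])
C = np.eye(2)
l, n, i = 2, 2, 6

# Extended observability matrix Gamma = [C; CA; ...; CA^(i-1)].
Gamma = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(i)])

# Shift invariance gives Gamma[:-l] @ A = Gamma[l:]. Regressing the shifted
# matrix, appended with a block of zeros, on the full Gamma biases the
# least-squares solution toward contraction, so the estimated A is stable.
target = np.vstack([Gamma[l:, :], np.zeros((l, n))])
A_hat, *_ = np.linalg.lstsq(Gamma, target, rcond=None)

spectral_radius = max(abs(np.linalg.eigvals(A_hat)))
# spectral_radius lies strictly inside the unit circle
```

Solving the shift-invariance equation without the appended zero block would recover A exactly but offers no stability guarantee; the zero block trades a small bias for guaranteed stability.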
The initial process parameters identified by the subspace method can be tracked efficiently using an extended Kalman filter. This enables the fault detection methodology to identify process parameter deviations with a sensitivity of 2%: any parameter deviation of 2% or more will be detected, which greatly enhances fault detection efficiency and sensitivity.

Keywords: Subspace identification, Extended Kalman filter, RMPCT, Benfield, Closed-loop identification, Fault detection, LQ decomposition, Black box, Extended observability matrix, Shift invariant, Guaranteed stability

Department of Electrical, Electronic and Computer Engineering

Samevatting

Closed-loop system identification and fault detection and isolation are the two fundamental building blocks of process monitoring. The efficiency and accuracy with which a plant is monitored in turn determine the availability, as well as the economic sustainability, of that plant. Subspace identification methods are a synthesis of system theory, geometry and numerical linear algebra, which leads to mathematical manipulations and computations that are very efficient for deriving system parameters. Subspace identification methods are regarded as black-box identification methods because they require no prior system information; the process model structure and order are derived automatically. Typical subspace identification algorithms use a non-parsimonious model formulation, in which extra terms in the model appear non-causal (including stochastic noise). These extra terms assist with the subspace projections, but cause the inflated variances of the estimated system parameters. Consequently, they lead to a closed-loop system that cannot be identified.
The subspace identification method proposed in this dissertation uses two successive LQ decompositions to remove the stochastic noise and to determine the system matrices. The stability of the identified process is further guaranteed by using the shift-invariance property of the extended observability matrix. A block of zeros is appended to the extended observability matrix, which keeps the spectral radius of the system matrix within the unit circle of stability. The proposed subspace method was tested and verified by identifying the Benfield process in closed loop, with an RMPCT controller. Process models were identified from closed-loop data as measured on the Benfield plant. Validation fits of 65% and higher show that the subspace identification method identifies systems effectively in closed loop. Further results of the residual analysis show that the models are valid for a wide variety of inputs and that future outputs can be predicted very well. A parameter-based fault detection method is proposed that monitors the parameters, as estimated by the subspace system identification method, for change. Any sporadic changes are classified as faults. Extended Kalman filter theory is used to estimate the process parameters as new process data arrives. The extended Kalman filter obtains its accurate initial parameter estimates from the subspace identification method. The proposed method was tested and verified by monitoring the Benfield process. Faults that were monitored for, and detected, include foaming, flooding and sensor faults. The extended Kalman filter can be used effectively to track the initial process parameters as identified by the subspace method. Deviations as small as 2% can be detected.
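The parameter-tracking idea can be sketched with a minimal Kalman filter that treats the model parameters as a random-walk state. The plant below is a hypothetical first-order single-output model, not the Benfield model; all numbers, including the fault size and the 2% threshold, are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical first-order plant y[k] = a*y[k-1] + b*u[k-1] + noise.
# The parameter 'a' jumps by 5% mid-run to mimic an abrupt process fault.
a_true, b_true = 0.8, 0.5
a_nominal = 0.8                       # value supplied by the subspace step

theta = np.array([a_nominal, 0.5])    # parameter estimate [a, b]
P = np.eye(2) * 0.1                   # parameter covariance
Q = np.eye(2) * 1e-5                  # random-walk (parameter drift) covariance
R = 1e-3                              # measurement noise variance

y_prev, u_prev, flagged = 0.0, 0.0, []
for k in range(400):
    if k == 200:
        a_true = 0.84                 # 5% abrupt parameter fault
    y = a_true * y_prev + b_true * u_prev + np.sqrt(R) * rng.standard_normal()

    H = np.array([y_prev, u_prev])    # output Jacobian w.r.t. theta
    P = P + Q                         # time update for random-walk parameters
    S = H @ P @ H + R                 # innovation variance
    K = P @ H / S                     # Kalman gain
    theta = theta + K * (y - H @ theta)
    P = P - np.outer(K, H) @ P

    # Flag a fault when the tracked 'a' deviates more than 2% from nominal.
    if k > 250 and abs(theta[0] - a_nominal) / a_nominal > 0.02:
        flagged.append(k)

    y_prev, u_prev = y, rng.standard_normal()
```

Because the output is linear in the parameters here, the filter reduces to a Kalman-gain recursive estimator; for a full state-space parameterisation the same structure applies with a linearised Jacobian in place of H.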
This sensitivity makes the implementation of the fault detection method an effective solution for monitoring the Benfield process.

Keywords: Subspace identification, Kalman filter, RMPCT, Benfield process, Closed-loop identification, Fault detection, LQ decomposition, Black-box identification, Shift invariance, Guaranteed stability

Abbreviations

AEM Abnormal Event Management
APC Advanced Process Control
ARX AutoRegressive with eXogenous input
ASYM Asymptotic Model
BLFRLS Bi-Loop Forgetting factor Recursive Least-Squares
CVA Canonical Variate Analysis
DEA Diethanolamine
EIV Errors-In-Variables
EKF Extended Kalman Filter
FD Fault Detection
FDA Fisher Discriminant Analysis
FDI Fault Detection and Isolation
FI Fault Isolation
HVAC&R Heating, Ventilation, Air Conditioning and Refrigeration
IV Instrumental Variable
LS Least Squares
LTI Linear Time-Invariant
MIMO Multiple-Input, Multiple-Output
MMSE Minimum Mean Square Error
MOESP Multivariable Output-Error State-sPace
MPC Model Predictive Control
N4SID Numerical algorithms for Subspace State-Space System IDentification
ORT Orthogonal Decomposition
PCA Principal Component Analysis
PEM Prediction Error Method
PLS Partial Least Squares
QDMC Quadratic Dynamic Matrix Control
RLS Recursive Least Squares
RMPCT Robust Multivariable Predictive Control Technology
SID System Identification
SISO Single-Input, Single-Output
SVD Singular Value Decomposition
TSODS Two-Stage Orthogonal Decomposition Subspace
UKF Unscented Kalman Filter

Contents

Summary
Samevatting
Abbreviations

1 Introduction
  1.1 Problem statement
  1.2 Background
    1.2.1 Fault Detection
    1.2.2 System Identification
    1.2.3 Recent research contributions
  1.3 Motivation
  1.4 Objectives
  1.5 Approach
  1.6 Contribution
  1.7 Organisation of Dissertation

2 FD and SID Theory
  2.1 Introduction
  2.2 System Identification
    2.2.1 A Closed-loop System Identification Approach
    2.2.2 Variance Expressions of Plant Estimates
    2.2.3 Bias Distribution with Estimated System Parameters
  2.3 Fault Detection
    2.3.1 The Relevance of Fault Detection in a Chemical Process Industry
    2.3.2 Classification of Faults for Detection
    2.3.3 A Model-Based Approach
    2.3.4 Identification via Prediction Error Framework
    2.3.5 Identification via Subspace Framework
    2.3.6 Statistical Approaches to Change Detection in System Parameters
  2.4 Conclusion

3 The Benfield Process
  3.1 Introduction
  3.2 Benfield Process
    3.2.1 Industrial Background
    3.2.2 Operational Process Flow
    3.2.3 Process Economic Feasibility and Common Operational Problems
  3.3 Benfield Process Identification
    3.3.1 Benfield Process Model
    3.3.2 Benfield Process Model Isolation
  3.4 Controller
    3.4.1 Model Predictive Control Strategy
    3.4.2 Robust Multivariate Predictive Control Technology
  3.5 Conclusion

4 Nominal Process Modeling and Analysis
  4.1 Introduction
  4.2 Data Analysis and Preprocessing
    4.2.1 Data detrending and drift removal
    4.2.2 Data outliers and discontinuous data
    4.2.3 Preprocessing of data
  4.3 Process Modeling
  4.4 Process Validation
    4.4.1 Identified Model Fit
    4.4.2 Residual analysis
  4.5 Process Analysis
    4.5.1 Singular Value Decomposition
    4.5.2 Poles and Zeros of Identified Process
    4.5.3 Step and Impulse Response
  4.6 Conclusion

5 A Subspace SID and FD Methodology for the Benfield Process
  5.1 Introduction
  5.2 A Subspace Approach for System Identification
  5.3 Generalization of the 2-Stage ORT Subspace Method for MIMO Systems
    5.3.1 Subspace identification method
    5.3.2 Guaranteed estimated plant stability
  5.4 A Parametric Fault Detection Methodology
    5.4.1 Assumptions and Requirements
    5.4.2 A Parametric Fault Detection Method using Kalman filtering
  5.5 Conclusion

6 Subspace SID and FD Methodology Evaluation: Simulations
  6.1 Introduction
  6.2 Experimental Setup: Subspace Identification
    6.2.1 Controller Dynamics and Simulation Configuration
    6.2.2 Nominal Model of Benfield Process
    6.2.3 Measured and Controlled Variable Disturbances
    6.2.4 Persistent Excitation Signals
  6.3 Validation of the Subspace Methodology: Simulations
    6.3.1 Experiment 1: System Identification with White Noise Interference
    6.3.2 Experiment 2: System Identification with Coloured Noise Interference
    6.3.3 Experiment 3: Signal-to-Noise Ratio Robustness
    6.3.4 Experiment 4: Identified System Stability Investigation
  6.4 Experimental Setup: Fault Detection
    6.4.1 Classification of Faults for Simulation
  6.5 Validation of the Fault Detection Methodology: Simulations
    6.5.1 Experiment 5: Fault detection of abrupt faults
    6.5.2 Experiment 6: Fault detection of incipient faults
    6.5.3 Experiment 7: False alarm robustness test
  6.6 Conclusion

7 Subspace SID and FD Methodology Validation: Real Process Data
  7.1 Introduction
  7.2 Subspace Identification Methodology Validation
    7.2.1 Subspace Identification using the 2-ORT subspace methodology
    7.2.2 Validation of the identified model
    7.2.3 Analysis of the identified model
  7.3 Fault Detection Validation
    7.3.1 Detection of Process Foaming
    7.3.2 Detection of Process Flooding
  7.4 Conclusion

8 Subspace SID and FD Methodology Verification
  8.1 Introduction
  8.2 Subspace Identification Methodology Verification
    8.2.1 Critical Evaluation of Data Sets used for Identification
    8.2.2 Validation Fits of Identified Process Models
    8.2.3 Residual Analysis
    8.2.4 Identified System Analysis
  8.3 Fault Detection Methodology Verification
    8.3.1 Detection of Process Flooding and Foaming
    8.3.2 Detection of Sensor Faults
  8.4 Conclusion

9 Conclusions and Further Research
  9.1 Conclusive Summary
    9.1.1 The Benfield Process
    9.1.2 Operating Philosophy and Operational Problems
    9.1.3 Benfield Process Control Solution
    9.1.4 Process Monitoring
    9.1.5 Process Model Identification
    9.1.6 A Parametric Fault Detection Approach
    9.1.7 Validation and Verification of Methodologies
  9.2 Critical Review of Own Work
  9.3 Directions of Future Work

Bibliography

A Residual Analysis of Nominal Benfield Process Model
B Complementary Subspace Theory
C Residual Analysis: Simulation Case Study
D Identified System Validation Results: Real Process Data
E Identified System Verification Results: Real Process Data

List of Tables

2.1 Different existing subspace identification algorithms in a unifying framework [1]
3.1 CVs, MVs and DV definitions [2]
6.1 MPC configuration parameter settings
6.2 Experiment 1: Parameter configuration setup
6.3 Validation fit results for 2-ORT, N4SID and ARX with white noise interference
6.4 Experiment 2: Parameter configuration setup
6.5 Validation fit results for 2-ORT, N4SID and ARX with coloured noise interference
6.6 Experiment 3: Parameter configuration setup
6.7 Experiment 4: Parameter configuration setup
6.8 Experiment 5: Parameter configuration setup
6.9 Experiment 6: Parameter configuration setup
6.10 Experiment 7: Parameter configuration setup
7.1 MV-CV Identification results

List of Figures

2.1 Closed-loop system configuration [3]
2.2 Three types of faults: input faults γi and output faults γo act additively, whereas system faults act as a change in system parameters θ [4]
2.3 Model-based fault detection structure [5]
2.4 Basic steps to subspace identification methods [1]
2.5 Virtual Closed Loop construction [6]
3.1 Benfield Process
3.2 MPC control law for a SISO process [7]
4.1 Validation fit of nominal process model with open-loop step-test data
4.2 Validation fit of nominal process model with closed-loop raw plant data
4.3 The auto-correlation and cross-correlation of residuals and MV1 with output CV1
4.4 The singular value decomposition of the identified Benfield process
4.5 The poles and zeros for the individual MV and CV pairs
4.6 The step response of the identified Benfield process model
4.7 The impulse response of the identified Benfield process model
5.1 Closed-loop system [8]
6.1 Closed-loop environment used for subspace SID evaluation
6.2 Bode plot of the coloured noise model
6.3 FFT of the coloured noise model
6.4 Validation fit for the 2-ORT subspace identification method
6.5 The auto-correlation and cross-correlation of residuals and MV1 with output CV2
6.6 Validation fit for the 2-ORT subspace identification method
6.7 The auto-correlation and cross-correlation of residuals and MV1 with output CV2
6.8 Robustness and model identification consistency of the 2-ORT subspace identification method
6.9 Spectral radius inspection for nominal model updates
6.10 Average validation fit percentage for nominal model updates
6.11 Fault detection and subspace SID system overview
6.12 Infinity matrix norm error detection with abrupt process faults
6.13 Measured system output behaviour with abrupt faults
6.14 Process state behaviour with abrupt process faults
6.15 Infinity matrix norm error detection with incipient process faults
6.16 Measured system output behaviour with incipient faults
6.17 Process states behaviour with incipient process faults
6.18 Infinity matrix norm error detection
6.19 Measured system outputs
6.20 Process states
7.1 Validation fit using open-loop step-test data with partial process feedback
7.2 Validation fit using closed-loop raw process data
7.3 The auto-correlation and cross-correlation of residuals and MV1 with output CV1
7.4 Validation fit using closed-loop raw process data
7.5 The auto-correlation and cross-correlation of residuals and MV1 with output CV1
7.6 The poles and zeros for the identified process model
7.7 Step response of the identified process model
7.8 Bode plot of the identified process model
7.9 Foaming under normal closed-loop process operation
7.10 Foaming under normal closed-loop process operation. Graph a illustrates the infinity norm parameter deviation results, where graph b illustrates the maximum-minimum parameter fluctuation measurement as determined from graph a
7.11 Flooding under normal closed-loop process operation
7.12 Flooding under normal closed-loop process operation. Graph a illustrates the infinity norm parameter deviation results, where graph b illustrates the maximum-minimum parameter fluctuation measurement as determined from graph a
8.1 Validation fit using raw closed-loop process data, measured in August 2008
8.2 Validation fit using raw closed-loop process data, measured in September 2008
8.3 Validation fit using raw closed-loop process data, measured in October 2008
8.4 The auto-correlation and cross-correlation of residuals and MV1 with output CV1
8.5 Measured MVs and CVs. Abnormal differential bed pressures, observed from the CV fluctuations, resulted in foaming [9]
8.6 Detection of foaming due to abnormal differential bed pressure fluctuations. Graph a illustrates the infinity matrix norm measure for estimated parameter deviations, where graph b depicts the severity of the parameter deviation by illustrating the difference between the maximum and minimum parameter error over a predefined period of monitoring
8.7 Measured MVs and CVs. Abnormal differential bed pressures, observed from the CV fluctuations, resulted in foaming
8.8 Detection of foaming and the possibility of flooding. Graph a illustrates the infinity matrix norm measure for estimated parameter deviations, where graph b depicts the severity of the parameter deviation by illustrating the difference between the maximum and minimum parameter error over a predefined period of monitoring
8.9 Measured MVs and CVs. An abnormal spike in MV3 illustrates the degradation of a sensor where possible sensor failure is inevitable
8.10 Detection of sensor failure
A.1 The auto-correlation and cross-correlation of residuals and MV1 with output CV2
A.2 The auto-correlation and cross-correlation of residuals and MV1 with output CV3
A.3 The auto-correlation and cross-correlation of residuals and MV1 with output CV4
A.4 The auto-correlation and cross-correlation of residuals and MV2 with output CV1
A.5 The auto-correlation and cross-correlation of residuals and MV2 with output CV2
A.6 The auto-correlation and cross-correlation of residuals and MV2 with output CV3
A.7 The auto-correlation and cross-correlation of residuals and MV2 with output CV4
A.8 The auto-correlation and cross-correlation of residuals and MV3 with output CV1
A.9 The auto-correlation and cross-correlation of residuals and MV3 with output CV2
A.10 The auto-correlation and cross-correlation of residuals and MV3 with output CV3
A.11 The auto-correlation and cross-correlation of residuals and MV3 with output CV4
A.12 The auto-correlation and cross-correlation of residuals and MV4 with output CV1
A.13 The auto-correlation and cross-correlation of residuals and MV4 with output CV2
A.14 The auto-correlation and cross-correlation of residuals and MV4 with output CV3
A.15 The auto-correlation and cross-correlation of residuals and MV4 with output CV4
C.1 The auto-correlation and cross-correlation of residuals and MV1 with output CV1
C.2 The auto-correlation and cross-correlation of residuals and MV1 with output CV3
C.3 The auto-correlation and cross-correlation of residuals and MV1 with output CV4
C.4 The auto-correlation and cross-correlation of residuals and MV2 with output CV1
C.5 The auto-correlation and cross-correlation of residuals and MV2 with output CV2
C.6 The auto-correlation and cross-correlation of residuals and MV2 with output CV3
C.7 The auto-correlation and cross-correlation of residuals and MV2 with output CV4
C.8 The auto-correlation and cross-correlation of residuals and MV3 with output CV1
C.9 The auto-correlation and cross-correlation of residuals and MV3 with output CV2
C.10 The auto-correlation and cross-correlation of residuals and MV3 with output CV3
C.11 The auto-correlation and cross-correlation of residuals and MV3 with output CV4
C.12 The auto-correlation and cross-correlation of residuals and MV4 with output CV1
C.13 The auto-correlation and cross-correlation of residuals and MV4 with output CV2
C.14 The auto-correlation and cross-correlation of residuals and MV4 with output CV3
C.15 The auto-correlation and cross-correlation of residuals and MV4 with output CV4
C.16 The auto-correlation and cross-correlation of residuals and MV1 with output CV1
C.17 The auto-correlation and cross-correlation of residuals and MV1 with output CV3
C.18 The auto-correlation and cross-correlation of residuals and MV1 with output CV4
C.19 The auto-correlation and cross-correlation of residuals and MV2 with output CV1
C.20 The auto-correlation and cross-correlation of residuals and MV2 with output CV2
C.21 The auto-correlation and cross-correlation of residuals and MV2 with output CV3
C.22 The auto-correlation and cross-correlation of residuals and MV2 with output CV4
C.23 The auto-correlation and cross-correlation of residuals and MV3 with output CV1
C.24 The auto-correlation and cross-correlation of residuals and MV3 with output CV2
C.25 The auto-correlation and cross-correlation of residuals and MV3 with output CV3
C.26 The auto-correlation and cross-correlation of residuals and MV3 with output CV4
C.27 The auto-correlation and cross-correlation of residuals and MV4 with output CV1
C.28 The auto-correlation and cross-correlation of residuals and MV4 with output CV2
C.29 The auto-correlation and cross-correlation of residuals and MV4 with output CV3
C.30 The auto-correlation and cross-correlation of residuals and MV4 with output CV4
D.1 The auto-correlation and cross-correlation of residuals and MV1 with output CV2
D.2 The auto-correlation and cross-correlation of residuals and MV1 with output CV3
D.3 The auto-correlation and cross-correlation of residuals and MV1 with output CV4
D.4 The auto-correlation and cross-correlation of residuals and MV2 with output CV1
D.5 The auto-correlation and cross-correlation of residuals and MV2 with output CV2
D.6 The auto-correlation and cross-correlation of residuals and MV2 with output CV3
D.7 The auto-correlation and cross-correlation of residuals and MV2 with output CV4
D.8 The auto-correlation and cross-correlation of residuals and MV3 with output CV1
D.9 The auto-correlation and cross-correlation of residuals and MV3 with output CV2
D.10 The auto-correlation and cross-correlation of residuals and MV3 with output CV3
D.11 The auto-correlation and cross-correlation of residuals and MV3 with output CV4
D.12 The auto-correlation and cross-correlation of residuals and MV4 with output CV1
D.13 The auto-correlation and cross-correlation of residuals and MV4 with output CV2
D.14 The auto-correlation and cross-correlation of residuals and MV4 with output CV3
D.15 The auto-correlation and cross-correlation of residuals and MV4 with output CV4
D.16 The auto-correlation and cross-correlation of residuals and MV1 with output CV2
D.17 The auto-correlation and cross-correlation of residuals and MV1 with output CV3
D.18 The auto-correlation and cross-correlation of residuals and MV1 with output CV4
D.19 The auto-correlation and cross-correlation of residuals and MV2 with output CV1
D.20 The auto-correlation and cross-correlation of residuals and MV2 with output CV2
D.21 The auto-correlation and cross-correlation of residuals and MV2 with output CV3
D.22 The auto-correlation and cross-correlation of residuals and MV2 with output CV4
D.23 The auto-correlation and cross-correlation of residuals and MV3 with output CV1
D.24 The auto-correlation and cross-correlation of residuals and MV3 with output CV2
D.25 The auto-correlation and cross-correlation of residuals and MV3 with output CV3
D.26 The auto-correlation and cross-correlation of residuals and MV3 with output CV4
D.27 The auto-correlation and cross-correlation of residuals and MV4 with output CV1
D.28 The auto-correlation and cross-correlation of residuals and MV4 with output CV2
D.29 The auto-correlation and cross-correlation of residuals and MV4 with output CV3
D.30 The auto-correlation and cross-correlation of residuals and MV4 with output CV4
. . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . and . . . and . . . and . . . and . . . and . . . and . . . and . . . and . . . and . . . and . . . and . . . M V2 . . . M V2 . . . M V2 . . . M V3 . . . M V3 . . . M V3 . . . M V3 . . . M V4 . . . M V4 . . . M V4 . . . M V4 . . . E.1 The auto-correlation with output CV2 . . . E.2 The auto-correlation with output CV3 . . . E.3 The auto-correlation with output CV4 . . . E.4 The auto-correlation with output CV1 . . . E.5 The auto-correlation with output CV2 . . . E.6 The auto-correlation with output CV3 . . . E.7 The auto-correlation with output CV4 . . . E.8 The auto-correlation with output CV1 . . . and cross-correlation . . . . . . . . . . . . and cross-correlation . . . . . . . . . . . . and cross-correlation . . . . . . . . . . . . and cross-correlation . . . . . . . . . . . . and cross-correlation . . . . . . . . . . . . and cross-correlation . . . . . . . . . . . . and cross-correlation . . . . . . . . . . . . and cross-correlation . . . . . . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . of residuals . . . . . . . and . . . and . . . and . . . and . . . and . . . and . . . and . . . and . . . M V1 . . . M V1 . . . M V1 . . . M V2 . . . M V2 . . . M V2 . . . M V2 . . . M V3 . . . Department of Electrical, Electronic and Computer Engineering . 225 . 226 . 226 . 227 . 227 . 228 . 228 . 229 . 229 . 230 . 230 . 232 . 232 . 233 . 233 . 234 . 234 . 235 . 235 xviii List of Figures E.9 The auto-correlation and cross-correlation of residuals with output CV2 . . . . . . . . . . . . . . . . . . . . . . 
[List of Figures, continued: Figures E.10–E.15 show the auto-correlation and cross-correlation of residuals for the remaining output (CV) and input (MV) pairs, pages 236–239.]

E.16 The poles and zeros for the identified process model . . . 240

Chapter 1

Introduction

1.1 Problem statement

Fault detection and isolation (FDI) is a scientific discipline that involves the early detection and isolation of process disturbances due to various process anomalies. The ever-increasing demand for productivity in process control requires the implementation of more complex and sophisticated control solutions. This increasing control complexity means that the probability of fault occurrences can be significant, and an automatic supervisory control system should be used to detect and isolate anomalous working conditions as early as possible [10]. On-line process monitoring with FDI can provide stability and efficiency for a wide range of industrial processes. With continuous on-line FDI, it is possible to detect and isolate abnormal and undesired process states, which ultimately increases plant performance [11].
The benefits associated with FDI have given industry enough motivation to focus a large amount of attention on FDI in dynamic systems. Many model-based approaches have been proposed during the last two decades [12, 13]. Model-based fault detection methods monitor the dependencies between different measurable signals, where the dependencies are expressed by mathematical process models. The conceptual realisation of the mathematical process models of the plant can vary according to the following approaches: the parity space approach [14], state estimators [15], unknown input observers, Kalman filters [16], parameter identification [15] and artificial intelligence methods [17]. Regardless of the development of various FDI approaches for linear and nonlinear plants, there is still a need for robust FDI, which remains a topic open for further research [18].

The use of linear FDI techniques imposes limitations on the detection and isolation of faults in industrial process plants. The use of linear models not only restricts applicability to a narrow operating range, but also limits the diagnostic abilities of FDI components to linear additive fault types. As a consequence, nonlinear faults that afflict the system dynamics, such as abrupt changes in system parameters or unknown disturbances, will be approximated as linear additive faults [19].

This dissertation addresses the problem of fault detection (FD) of the Benfield chemical process at Sasol by investigating parameter identification as the fault detection method. Emphasis will be placed on the effective fault detection of nonlinear plants using linear process models for FD. Fault isolation (FI) is a logical next step in the FDI process, which will not be addressed in this dissertation. The Benfield process is a thermally regenerated cyclical solvent process, which is specifically used by Sasol Synfuels to extract and remove CO2 and other gas components [20].
The Advanced Process Control (APC) solution used for the Benfield process must deal with significant nonlinearities due to the sensitivity of the process. Robust Multivariable Predictive Control Technology (RMPCT), a subclass of APC solutions in the Model-based Predictive Control (MPC) family, is used to control, among others, the Benfield process [21]. Due to the complexity of the process dynamics of the Benfield process and its associated APC solution, it is necessary to investigate the various methodologies and techniques of process monitoring, and their implementation, to ensure continuous process-control stability and performance.

A background literature survey of FDI techniques is provided in section 1.2, where the emphasis is placed on parameter identification as the fault detection method. Section 1.2 also discusses system identification principles and recent contributions. The motivation for this dissertation is given in section 1.3. Section 1.4 elaborates on the objectives, and section 1.5 provides a discussion on the proposed approach to be followed. Section 1.6 discusses the scientific contribution of this study, and lastly, in section 1.7, the organisation of the dissertation is given.

1.2 Background

To monitor a process accurately and effectively, it is essential to have an accurate model to represent the process, as well as a statistical methodology for evaluating process behaviour. There exists an abundance of literature on system identification (SID) and FDI techniques, which are the two fundamental cornerstones used for process monitoring in this dissertation. It is thus insightful to study the different contributions in the fields of system identification and fault detection.
1.2.1 Fault Detection

Abnormal Event Management (AEM) involves the timely detection of an abnormal event, isolating its causal origins, and then taking appropriate supervisory control decisions and actions to bring the process back to normal operation (the term diagnosis is frequently used when referring to detection and isolation). The automation of AEM systems would be advantageous, as it would reduce the complete reliance on human operators to cope with such abnormal events. The automatic process of FDI forms the first step of AEM [22].

Due to the broad scope of process fault diagnosis problems, and the difficulties of real-time solutions, various computer-aided approaches have been developed over the past years. They cover a wide variety of techniques, which include early attempts using fault trees and digraphs, analytical approaches, knowledge-based systems and neural networks [22]. From a modelling perspective, there are methods that require accurate process models, semi-quantitative models, or qualitative models [23]. On the other side of the spectrum, there are methods that do not assume any form of model information and rely solely on process history information. Fault diagnosis methods can broadly be classified into three groups: quantitative model-based methods, qualitative model-based methods, and process history based methods [22]. A comparative literature survey, in which the methods are compared and evaluated based on a common set of desirable characteristics for fault diagnostic systems, is given in [22, 24].

Basic a-priori knowledge that is needed for FDI is the set of failures (faults), and the relationship between the observations (detection) and the set of failures. This a-priori knowledge can be explicitly available (through, e.g., lookup tables), or it may be inferred from some source of domain knowledge based on historical process behaviour.
The a-priori domain knowledge can be obtained from a fundamental understanding of the process using first principles. Such knowledge is known as model-based knowledge, which can further be divided into quantitative and qualitative model-based knowledge. In contrast to model-based methods, process history based methods rely on the availability of large amounts of historical process data, which can be transformed and presented as a-priori knowledge [22].

This dissertation will focus on parameter identification as a means for FD. Parameter estimation methods are classified as a quantitative model-based approach [22]. Many process faults appear as changes in physical process coefficients. These coefficients are reflected in the parameters θ of a process model. Process model parameters are constant or time-dependent coefficients that appear in the mathematical description of the relationship between the input and output signals [25]. Isermann [26] proposed an approach where parameter estimation is used for fault detection. The approach can be described as follows: obtain a process model with only the measured inputs u(t) and outputs y(t) in the form

y(t) = f(u(t), θ). (1.1)

The model parameters θ are estimated as measurements of y(t) and u(t) become available. The parameters θ are in turn related to the physical parameters ϕ of the process by

θ = g(ϕ), (1.2)

where g is a linear transformation function between the model parameters θ and the physical parameters ϕ. Changes in the parameters ϕ, denoted by ∆ϕ, are computed from this relationship, where the changes ∆ϕ relate to process faults. Isermann [26] investigated different parameter estimation methods, which include Least Squares (LS) methods, Instrumental Variable (IV) methods and estimation via discrete-time models. The most important issue in using parameter estimation for fault diagnosis is the complexity of implementation.
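To make the parameter-monitoring idea concrete, the sketch below estimates θ by least squares for a simple linear-in-parameters model of the form (1.1) and flags a fault when the estimate drifts from a healthy reference. The first-order ARX structure, coefficient values and threshold are hypothetical illustrations, not the Benfield model or the exact procedure of [26].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-in-parameters model y(t) = f(u(t), theta):
# a first-order ARX form y(t) = a*y(t-1) + b*u(t-1) + noise,
# so theta = [a, b] plays the role of the process parameters.
def simulate(theta, N, noise=0.01):
    a, b = theta
    u = rng.standard_normal(N)
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = a * y[t - 1] + b * u[t - 1] + noise * rng.standard_normal()
    return u, y

def estimate_theta(u, y):
    # Least-squares estimate: regressors [y(t-1), u(t-1)] explain y(t)
    Phi = np.column_stack([y[:-1], u[:-1]])
    theta_hat, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    return theta_hat

theta_nominal = np.array([0.8, 0.5])
u, y = simulate(theta_nominal, 2000)
theta_ref = estimate_theta(u, y)            # healthy reference parameters

# A fault appears as a shift in a physical coefficient, hence in theta
u_f, y_f = simulate(np.array([0.8, 0.3]), 2000)
theta_fault = estimate_theta(u_f, y_f)

threshold = 0.05
fault = np.any(np.abs(theta_fault - theta_ref) > threshold)
```

The healthy estimate stays within the threshold of the reference, while the shifted coefficient exceeds it and is flagged; in practice the threshold must be tuned against the noise level of the estimates.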
With nonlinear first-principles models, the parameter estimation method turns into a nonlinear optimisation problem. Real-time solution of nonlinear optimisation problems proves to be a serious bottleneck in the derivation of the model. Reduced-order input-output process models can be used, in which case the robustness of the approach needs to be addressed [22].

A more recent literature survey on FDI techniques was conducted by Katipamula and Brambley [27]. The survey focused primarily on the implementation of FDI techniques in applications of heating, ventilation, air conditioning and refrigeration (HVAC&R). It was concluded that quantitative model-based approaches have the advantage of modelling the transient behaviour of systems more precisely than any other modelling technique. The models are based on sound physical or engineering principles, and can thus represent normal and faulty operations more precisely, so that faulty operations can be distinguished from normal operations with greater ease. The weakness of quantitative models is their complexity and computational intensity: the required accurate mathematical model, as derived from first principles, can become extremely complicated for nonlinear processes and is in some cases not even feasible.

An alternative to the first-principles modelling technique is to use artificial intelligence methods (e.g., neural networks and adaptive genetic algorithms) [28]. Artificial neural networks provide an excellent mathematical tool for dealing with nonlinear problems. Another attractive characteristic of artificial intelligence in system modelling is the self-learning capability. As a result, nonlinear systems can be modelled with great flexibility and accuracy. These features allow one to employ artificial neural networks to model complex, unknown and nonlinear processes [29].
A disadvantage of artificial intelligence methods lies in determining both the optimal network structure and the optimal neural-network parameters, which are non-trivial problems and extremely important from the point of view of identification quality [29].

1.2.2 System Identification

The open-loop system identification formulation of the early 80's and 90's was centered on the prediction error method (PEM) paradigm of Ljung [30]. The late 90's saw an extension to closed-loop system identification, where Zhu [31] proposed a multivariable identification methodology for the model predictive control (MPC) application, called the Asymptotic Method (ASYM). The shift of interest from open-loop identification to closed-loop identification was due to the following advantages of closed-loop identification over open-loop identification [28]:

• the process used for identification may be unstable, where a feedback controller is necessary to stabilise the process,
• due to safety and product quality reasons, it is not permitted to run the process in open loop,
• there may exist underlying feedback loops that cannot be manipulated or removed.

The following three broad groups of closed-loop identification were identified [30]:

• The Direct Approach: apply the basic PEM in a straightforward manner by using the process input u(t) and process output y(t), ignoring any possible feedback. This results in an open-loop topology.
• The Indirect Approach: the closed-loop system is identified from the reference input r(t) to the output y(t). The open-loop system is retrieved from the latter by making use of the known controller.
• The Joint Input-Output Approach: the control inputs u(t) and system outputs y(t) are both considered as outputs driven by a reference signal r(t).
Knowledge of the system can then be recovered from this joint model.

De Klerk et al. [7] proposed a direct-approach methodology to identify a multiple-input-multiple-output (MIMO) closed-loop system under MPC. The direct identification method proposed by De Klerk et al. [7] showed that only through structured identification tests, and with the use of persistently exciting (PE) test signals, is it possible to identify a system operating in closed loop under MPC.

Van den Hof et al. [32] considered the indirect identification approach. They argued that the indirect identification approach has attractive properties in the sense that it does not suffer from bias effects caused by noise correlation on the inputs, since the input signal used for identification is taken as an external reference signal. The critical part of indirect identification is the construction of the (open-loop) plant model in the second step, based on the estimated closed-loop transfer. If the resulting plant model order is not limited, this construction can be done exactly, provided the controller is known and the closed-loop transfer function has been identified. Van den Hof et al. [32] extended the argument on indirect identification and proposed the dual-Youla parameterisation method as an extension to the indirect method. The dual-Youla method, in addition to the properties of the indirect method, provides models which are guaranteed to be stabilised by the controller.

State-space based approaches are typically better suited to model MIMO systems. The state-space form is very convenient for optimal estimation, filtering, prediction and control.
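The exact plant-recovery step of the indirect approach described above can be verified numerically for a SISO loop. With the loop y = GC(r − y), the identified closed-loop transfer is T = GC/(1 + GC), and with the controller C known the plant follows as G = T/(C(1 − T)). A frequency-domain sketch, in which the first-order plant and proportional controller are hypothetical:

```python
import numpy as np

# Hypothetical SISO setup: first-order plant G(s) = 2/(s+1) under a
# known proportional controller C(s) = 0.5, evaluated on a frequency grid.
w = np.logspace(-2, 1, 50)       # frequencies [rad/s]
s = 1j * w

G_true = 2.0 / (s + 1.0)         # "unknown" plant (kept only for checking)
C = 0.5 * np.ones_like(s)        # known controller

# Step 1 of the indirect approach: identify the closed-loop transfer
# from reference r to output y; here computed from the true loop.
T = G_true * C / (1.0 + G_true * C)

# Step 2: recover the open-loop plant using the known controller.
G_rec = T / (C * (1.0 - T))
```

Since T and C determine G exactly, the recovery is exact up to numerical precision; in practice T is only an estimate, and any error in T propagates into the recovered plant model.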
The main difficulty with the application of PEM methods to state-space models is finding a numerically robust canonical realisation, since the alternative, a full parameterisation of the state-space model, would involve a huge number of parameters. The most prominent representatives of these approaches in the early 90's were the so-called 4SID algorithms [28]. They extracted the extended observability matrix from the input and output data, or from estimates of the impulse responses [33].

The early 90's witnessed the birth of a new type of linear system identification algorithm called subspace methods. Subspace methods originated at the intersection of system theory, geometry and numerical linear algebra [1]. Linear subspace identification methods are concerned with systems and models that can be represented as state-space models. The state-space model of the system can be recovered from the obtained extended observability matrix. The main advantages of subspace identification techniques are [28]:

• they require low computational demand, since they use linear algebra toolsets,
• they can deal with MIMO systems efficiently,
• and they show good numerical robustness.

A joint input-output subspace method was recently proposed by Katayama [8]. The method identifies a closed-loop system based on the orthogonal decomposition method. The proposed method can easily be applied to the MIMO case by making use of the direct approach, and a model reduction step is not needed, as is often the case with joint input-output methods.

There are two kinds of model parameter identification techniques: conventional identification, which uses artificial excitation signals, and blind identification, which uses natural excitation signals. Traditional subspace methods have trouble identifying weak modes. These weak modes can be due to insufficient excitation signals or improper measurement points.
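The way system matrices are recovered from the extended observability matrix can be illustrated with a minimal Ho-Kalman-style realization from Markov parameters, i.e., the impulse-response route mentioned above. The two-state system below is a hypothetical toy example, not the algorithm developed later in this dissertation:

```python
import numpy as np

# Toy 2-state SISO system (hypothetical example, not the Benfield model)
A = np.array([[0.8, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])

# Markov parameters h_k = C A^(k-1) B (noise-free impulse response)
m = 5
h = [(C @ np.linalg.matrix_power(A, k - 1) @ B).item() for k in range(1, 2 * m + 1)]

# Block Hankel matrix factors as observability x controllability
H = np.array([[h[i + j] for j in range(m)] for i in range(m)])

# The SVD reveals the system order (rank of H) and factors H
U, s, Vt = np.linalg.svd(H)
n = int(np.sum(s > 1e-8 * s[0]))        # estimated order
sqrt_s = np.sqrt(s[:n])
Gamma = U[:, :n] * sqrt_s               # extended observability matrix
Delta = sqrt_s[:, None] * Vt[:n, :]     # extended controllability matrix

# Shift invariance: Gamma[:-1] @ A_id = Gamma[1:], solved by least squares
A_id = np.linalg.lstsq(Gamma[:-1, :], Gamma[1:, :], rcond=None)[0]
C_id = Gamma[:1, :]                     # first block row of Gamma
B_id = Delta[:, :1]                     # first block column of Delta

# The realization reproduces the Markov parameters (up to similarity)
h_id = [(C_id @ np.linalg.matrix_power(A_id, k - 1) @ B_id).item()
        for k in range(1, 2 * m + 1)]
```

The recovered matrices differ from (A, B, C) by a similarity transform, but the eigenvalues and input-output behaviour are preserved; practical subspace algorithms obtain the observability matrix from projected input-output data rather than from a clean impulse response.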
A solution to this weak-mode problem is the blind subspace identification method proposed by Zhang [34]. Based on the covariance-driven subspace method, the block Hankel matrix is reshaped and the signal subspace is changed accordingly, which leads to an increased participation of weak components and improved identifiability of the weaker characteristics.

1.2.3 Recent research contributions

A quantitative model-based approach, in conjunction with parameter estimation, was proposed by Simani et al. [35] for FDI. Simani et al. [35] defined a methodology for sensor fault detection using a state estimation approach, in conjunction with a residual processing scheme that includes simple threshold detection in the deterministic case, as well as statistical analysis when the data are affected by noise. The suggested method does not require a physical model of the process under observation, since the input-output links are obtained by means of an identification scheme which uses AutoRegressive with eXogenous input (ARX) models. The identification method is based on the Frisch scheme (traditionally used to analyse economic systems), which gives a reliable model of the plant under investigation. The Frisch scheme estimates the input and output noise variances accurately, which in turn makes it possible to determine the parameter vector θ precisely. For the Frisch scheme to be applicable, it is required that the input and output noise are uncorrelated in time [36].

A diagnostic tool should be able to monitor and detect the deterioration of a process and, as soon as it is significant, isolate the cause of deterioration and correct it [37]. A parametric statistical approach was proposed by Wu and Campion [37]. The idea behind the methodology is to detect deterioration of system parameters in a process based on hypothesis tests.
Using adequate information, all the faults considered can be reduced to the detection of changes in the mean of Gaussian variables. The key tool used in this approach is the Asymptotic Local Approach, where the Gaussian variables are obtained asymptotically as the sample size tends to infinity and the magnitude of the fault changes tends to zero. An advantage of this approach with respect to parameter estimation and system identification is that the problem of change detection may be less intensive and demanding than continuous re-identification of the system model.

A fault diagnosis methodology based on nonlinear first-principles models, which includes parameter uncertainty, was introduced by Rajaraman et al. [38]. Including fundamental models derived from first principles in the procedure allows for accurate diagnosis even if operating conditions have changed, while the on-line estimation of model parameters takes care of model-plant mismatch. An augmented nonlinear observer is used for parameter observation. Robustness of the proposed methodology is assured by the implementation of Kharitonov's theory [38], which deals with the stability of the designed observer under parametric uncertainty. Fault detection is accomplished by the computation of residuals (i.e., the mismatch between measured outputs and estimated outputs of the model). The proposed method is a robust, feasible solution for fault detection of nonlinear processes. As discussed by Katipamula and Brambley [27], the possible weakness of the proposed method is the complexity of deriving the nonlinear process model from first principles, which in this case is the cornerstone of success for the proposed methodology.

In the work conducted by Simani et al. [18], it is stressed that system complexity does not necessarily indicate a requirement for a complex physical or thermodynamic model. It was shown that a dynamic linear model identification method for FDI can be used successfully, thus eliminating the requirement for physical models. The concept was illustrated by implementing errors-in-variables (EIV) models and related identification algorithms for FDI of an industrial gas turbine prototype. The proposed method also designed linear output estimators, avoiding the complexity that would otherwise be inevitable when nonlinear models are used. Fault diagnosis is accomplished by using the linear models of the system under investigation, together with residual methods. The paper suggested that, under model-based approaches, linear identification models should be exploited even though the system considered is nonlinear, in order to avoid the complexities which are otherwise inevitable when nonlinear models are used.

A data-driven approach to FDI is based directly on process data. The strength of data-driven techniques is the ability to transform higher-dimensional data into lower-dimensional data, which is especially beneficial in large-scale systems that produce large amounts of multivariate data. Well-known data-driven techniques include principal component analysis (PCA), Fisher discriminant analysis (FDA), partial least squares (PLS) and canonical variate analysis (CVA) [39]. A solution to FDI based on PCA was proposed by Palma et al. [39]. The proposed methodology was implemented and tested under the assumptions that the plant is a single-input, single-output (SISO), linear, time-invariant (LTI) system. Modified recursive least squares (RLS) algorithms were used to perform on-line identification of the parameters of the ARX model. The advantage of using RLS is its computational efficiency: the RLS algorithm calculates a new update of the parameter vector θ each time new data comes in.
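The recursive update just described can be sketched compactly in its forgetting-factor form; the two-parameter ARX model and the numerical values below are hypothetical:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least squares update with forgetting factor lam."""
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # correct theta by the prediction error
    P = (P - np.outer(k, phi @ P)) / lam   # update the covariance-like matrix
    return theta, P

# Track theta = [a, b] of the ARX model y(t) = a*y(t-1) + b*u(t-1) + noise
rng = np.random.default_rng(1)
theta, P = np.zeros(2), 1e3 * np.eye(2)
y_prev = u_prev = 0.0
for t in range(500):
    u = rng.standard_normal()
    y = 0.8 * y_prev + 0.5 * u_prev + 0.001 * rng.standard_normal()
    theta, P = rls_step(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
```

Each update costs a fixed number of operations in the parameter dimension, independent of how many samples have been processed, and the forgetting factor lets the estimate track slowly drifting parameters.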
The RLS requires a constant computation time for each parameter update, and is therefore perfectly suited for on-line use in real-time applications. PCA was applied directly to the identified ARX parameters to reduce their dimensionality. This allows better visualisation and understanding of the system behaviour, and can thus result in the detection of process faults.

1.3 Motivation

An increased interest in model-based maintenance and process monitoring in a large number of industrial applications [4] serves as motivation for further investigation and research into FD methodologies. A possible FD solution is the early detection of slight parametric deviations with respect to a parametric characterisation of the system in its normal working condition, with no or limited artificial excitation or excessive controller switching used for parameter identification [4]. Indeed, if such early detection can be performed while preserving robustness with respect to changes in normal operating conditions, one can prevent larger system deviations resulting from malfunction, fatigue, faults or hardware failure before they happen, and consequently increase the availability of the system.

1.4 Objectives

The objectives of this research project are to:

• conduct a literature survey on all relevant FD techniques where parameter identification is used as the means to achieve FDI,
• conduct a literature survey on the different approaches to closed-loop system identification where system models are realised by parameter identification methods,
• develop or implement an existing closed-loop FD methodology based on parameter identification methods,
• evaluate the proposed methodology using real plant data, and
• improve the developed methodology to work unassisted on a scaled-down portion of the Benfield process.
1.5 Approach

The approach to the research is to:

• review all relevant FD techniques where parameter identification is used as a means to FDI,
• review closed-loop re-identification methods where parameter identification is used to realise the process model,
• develop or implement an existing FD methodology which uses parameter identification as the model identification method,
• validate the developed methodology by assessing a simulation of the modelled Benfield process, and
• validate the developed methodology by assessing real-time plant data of the Benfield process.

1.6 Contribution

A recent study conducted by Simani et al. [18] concluded that robust FDI is regarded as a challenging problem open to further research. This research project will contribute to the field of FD by developing a robust FD technique by means of parameter identification. It would be advantageous if this robust FD technique could be automated. The automation of an on-line FD solution would provide the necessary insight into process and plant behaviour, reducing the reliance on human operators to identify, in a timely manner, possible process failures and faults due to abnormal process plant situations. The advantages of using nonlinear first-principles methods to derive process models for FD are discussed by Rajaraman et al. [38], but the complexities in deriving these models, as noted by Simani et al. [18], motivate an SID methodology in which the dynamics of nonlinear processes can be modelled accurately without the complexities involved in mathematical model derivation.
A contribution of this dissertation is the investigation and implementation of linear subspace SID methods to obtain an accurate black-box model of systems with nonlinear process dynamics operating in a closed-loop environment, to be used for parameter identification and estimation in on-line FD. A further contribution is the implementation, validation and verification of the developed method using real process data of the Benfield Process.

1.7 Organisation of Dissertation

Chapter 2 covers the basic theory of closed-loop plant identification, parameter identification and parameter estimation. Parameter identification used in FD, and different FD techniques based on parameter identification, are discussed. Chapter 3 discusses the Benfield Process, followed by a discussion of the RMPCT being used to control the Benfield Process. Chapter 4 covers the modelling of the Benfield Process, with a discussion of the plant model. From the discussion of the different methods in FD with parameter identification, and the closed-loop identification theory in Chapter 2, the development of a robust FD method by means of parameter identification follows, with a discussion of the developed methodology, in Chapter 5. Chapter 6 discusses preliminary simulation results of the developed methodology. Chapter 7 provides an overview of the results obtained when the developed methodology was used to assess real plant data. In Chapter 8, the proposed identification methodology and fault detection methodology are verified, and the results are discussed. Chapter 9 concludes the dissertation and suggests directions for possible future research.
Chapter 2
FD and SID Theory

2.1 Introduction

Sasol Synfuels in Secunda, South Africa has implemented a large number of advanced controllers (MPC algorithms) for their chemical processes. Philosophically, MPC reflects human behaviour: we select control actions which we think will produce the best possible predicted outcome over a limited time horizon [40]. Model predictive control can be considered the most prominent development in the area of process control in the last two decades. Though research in this area has focussed mainly on the design and stability of constrained MPC controllers, there is a growing realisation that efficient and effective monitoring of such advanced model-based controllers is critical for their long-term success [41]. Future research directions in a review article on MPC [42] stressed the need to incorporate a mechanism to detect and diagnose abnormality in process dynamics under closed-loop MPC, in order to sustain the benefits of MPC over a prolonged period of time.

In this chapter the basic theory behind closed-loop system identification and fault detection is discussed. The chapter starts with a discussion of the relevance of fault detection in the chemical process industry. The types of faults that can occur are covered in section 2.3.2, where the emphasis is placed on multiplicative faults associated with parameter discrepancies. In section 2.3, a model-based approach to fault detection is discussed. An overview of system identification theory is given in section 2.2, where emphasis is placed on the advantages associated with a closed-loop identification approach. Closed-loop system identification methodologies in the prediction error method framework and the subspace framework are discussed, with a brief introduction to statistical methods used for parameter change detection.
2.2 System Identification

System identification is a well-established field consisting of a number of methodologies which can broadly be classified into parametric methods and non-parametric methods [28, 3]. Parametric methods determine a relatively small number of system parameters, where these parameters are optimised according to some objective. Non-parametric methods are more flexible than parametric methods, and can be used where less structure is imposed on the model [28]. Parametric methods include approaches from the prediction error family, as introduced by [30], and subspace approaches, introduced by [43]. Non-parametric approaches are statistical methods which include correlation and spectral analysis methods [44]. Statistical methods such as maximum likelihood estimation and Bayesian estimation have not found much use and implementation in industry. This is due to the lack of probability information, and the fact that these complex methods reduce to the same least-squares calculation as used in prediction error methods under commonly made probability assumptions [45].

2.2.1 A Closed-loop System Identification Approach

In the last decade, industry has shown a renewed interest in closed-loop system identification and control-related identification [46]. Most practitioners of closed-loop identification have assumed that the existing controller is linear and the processes are SISO. These assumptions are not applicable to MPC applications, which are nonlinear and often multivariate. The multivariate and nonlinear nature of the problem was investigated by several authors [47, 28], where the use of closed-loop system identification was motivated by distinct advantages over open-loop system identification. Closed-loop system identification can be classified into three distinct approaches [30]: the direct approach, the indirect approach and the joint input-output approach.
All closed-loop system identification methods, whether parametric or non-parametric, can be associated with one of these three approaches. In figure 2.1, the general setup of a typical feedback system is depicted.

Figure 2.1: Closed-loop system configuration [3].

The true plant in figure 2.1 can be defined as follows:

y(t) = G(q^{-1}, θ)u(t) + H_0(q^{-1}, θ)e(t),    (2.1)

where y(t) is defined as the plant output, u(t) the plant input, and e(t) the white noise signal passed through some linear filter H_0. Without loss of generality, we will assume that H_0 is stably invertible and monic, i.e., H_0(q) = Σ_{k=0}^{∞} h(k)q^{-k}, h(0) = 1 [45], and that the white noise signal e(t) has a mean of zero and a covariance of P_e. The signal r_a(t) is a designed external excitation signal, imposed on top of the process input or the setpoint. The signal r_b(t) can be an additional designed external excitation signal, imposed on the controller. The symbol q^{-1} denotes the discrete time-shift operator, where q^{-1}u(t) = u(t-1). The parameter coefficients which realise the dynamic characteristics of the system model are denoted by θ.

It is insightful to investigate the consistency and efficiency of the closed-loop identification approaches as defined by Ljung [30]. Consistency is concerned with the bias of the parameter estimates, while efficiency is concerned with the asymptotic variance of the plant estimate [48].

2.2.2 Variance Expressions of Plant Estimates

Variance expressions for plant estimates have been derived for open-loop systems and closed-loop systems respectively in [30, 49].
For systems operating in open-loop, Ljung [30] defined, for a plant estimate Ĝ(jω), the covariance to be:

cov[Ĝ(jω)] ≈ (n/N) · Φ_D(ω)/Φ_u(ω),    (2.2)

where Φ_D(ω) is the spectrum of the disturbance, Φ_u(ω) is the spectrum of the input, n is the model order and N is the number of data samples. Equation 2.2 shows that the asymptotic variance of the plant estimate Ĝ(jω) is proportional to the noise-to-signal ratio at any frequency. Equation 2.2 is asymptotic in both N and n. A variance expression for the closed-loop plant estimate Ĝ(jω) was derived in [49], and is defined as follows:

var[Ĝ(jω)] ≈ (n/N) · Φ_D(ω)/Φ_u^r(ω) = (n/N) · Φ_D(ω)/(|S(ω)|² Φ_r(ω)),    (2.3)

where Φ_u^r(ω) = |S(ω)|² Φ_r(ω) is the spectrum of the part of the input signal(s) arising from external excitation, i.e., u_r = S_0(q^{-1})r, and where

S_0(q^{-1}) = (1 + G(q^{-1})C(q^{-1}))^{-1},    (2.4)

is the sensitivity function. Gevers et al. [49] have shown that equation 2.3 is valid for both the direct and joint input-output approaches. The above asymptotic results for open-loop and closed-loop systems have led to the conclusion made by Esmaili et al. [50] that, when the output power is limited, closed-loop identification will generally give better identification (lower var[Ĝ(jω)] for the same output variance) than open-loop identification. This benefit of closed-loop identification is realised if we choose the spectrum of the designed excitation signal r for closed-loop identification such that its contribution to the input u is equal to the input signal applied in the open-loop situation, i.e.,

Φ_u^r = |S|² Φ_r = Φ_u|_{open-loop}.    (2.5)

2.2.3 Bias Distribution with Estimated System Parameters

The consistency of the direct, indirect and joint input-output closed-loop identification approaches can be investigated by analysing the biases of the estimated system parameters.
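The open-loop and closed-loop variance expressions of equations 2.2 and 2.3 can be illustrated numerically. The sketch below assumes a hypothetical first-order plant under proportional feedback (none of these numerical values come from the dissertation) and evaluates the sensitivity function of equation 2.4 on a frequency grid:

```python
import numpy as np

# A sketch of the closed-loop variance expression (2.3), assuming a
# hypothetical first-order plant G(q) = 0.1 / (1 - 0.9 q^-1) under
# proportional feedback C(q) = 2; these values are illustrative only.
a, b, k = 0.9, 0.1, 2.0
omega = np.linspace(0.01, np.pi, 512)
z = np.exp(1j * omega)                  # evaluate on the unit circle
G = b / (1.0 - a / z)                   # plant frequency response
S = 1.0 / (1.0 + G * k)                 # sensitivity function (2.4)

n, N = 4, 1000                          # model order, number of samples
Phi_D = np.ones_like(omega)             # flat (white) disturbance spectrum
Phi_r = np.ones_like(omega)             # flat excitation spectrum

# Equation (2.3): var[G_hat] ~ n*Phi_D / (N * |S|^2 * Phi_r)
var_cl = n * Phi_D / (N * np.abs(S) ** 2 * Phi_r)

# |S| < 1 at low frequencies, so the variance is inflated there
print(var_cl[0] > n / N)
```

Because |S(ω)| < 1 inside the control bandwidth, the same excitation power yields a larger estimate variance there, which is why the choice of Φ_r in equation 2.5 matters.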
The parameter estimates are obtained by applying the well-known prediction error method proposed by Ljung [30] for bias analysis.

Direct Approach: It was shown by Forssell and Ljung [3] that the parameter estimates for a system identified in closed-loop with the direct approach can be defined as follows:

θ̂_{N→∞} = arg min_θ (1/2π) ∫_{-π}^{π} [ |G(jω) + B(jω, θ) − G_θ(jω, θ)|² Φ_u(ω) + |H_0(jω) − H(jω)|² Φ_e^r(ω) ] (1/|H(jω)|²) dω,    (2.6)

where

B(jω, θ) = |H_0(jω) − H(jω)|² · (Φ_u^e(ω)/Φ_u(ω)) · (λ_0/Φ_u(ω)).    (2.7)

It was concluded by Forssell and Ljung [3], by inspection of equations 2.6 and 2.7, that the bias B(jω, θ) experienced by implementing the direct approach in closed-loop systems will be small in the frequency ranges where any of the following conditions hold:

• the estimated noise model H(jω) is accurate, i.e., σ̄(H_0 − H) is small,
• the feedback noise contribution to the input spectrum, σ̄(Φ_u^e)/σ̄(Φ_u), is small, and,
• the signal-to-noise ratio is high, i.e., λ_0/Φ_u(ω) is small.

Indirect Approach: If the controller and some extra input or reference signals are known, then the indirect approach can be used for closed-loop system identification [3]. This approach can be used with nonlinear feedback, but the standard assumption in the literature so far has been that linear feedback is used. The closed-loop system for the indirect approach can be defined as follows:

y(t) = G_c(q, θ)r(t) + H(q)e(t),    (2.8)

where

G_c(q, θ) = G(q, θ)(1 + G(q, θ)C(q))^{-1}.    (2.9)

If the controller C(q) is known, and the reference signal/excitation signal/setpoint r(t) is measurable, we can identify the closed-loop system and compute the estimate Ĝ_N(q, θ) of the open-loop system by solving the following equation [3]:

Ĝ_N^c(q, θ) = (1 + Ĝ_N(q, θ)C(q))^{-1} Ĝ_N(q, θ).    (2.10)
The parameter estimate expression for the indirect approach was derived by Forssell and Ljung [3], and is:

θ̂_{N→∞} = arg min_θ (1/2π) ∫_{-π}^{π} |G(jω) − G_θ(jω, θ)|² · (|S(jω)|² Φ_r(ω)) / (|1 + G_θ(jω, θ)C(jω)|² |H(jω)|²) dω.    (2.11)

It was concluded by Forssell and Ljung [3], by inspecting equation 2.11, that for the indirect approach in closed-loop identification the following hold:

• the indirect approach can give consistent plant estimates if the parameterisation is flexible enough, and,
• in the case of undermodelling, where not all the plant dynamics are identified, the resulting plant estimate will try to minimise the mismatch between the nominal plant and the plant estimate, and at the same time try to minimise the model sensitivity function S.

Joint Input-Output Approach: For the joint input-output approach, a model structure was defined by Forssell and Ljung [3] as follows:

[ y(t) ; u(t) ] = [ G_c(q, θ) ; S^i(q, θ) ] r(t) + H(q) [ e(t) ; d(t) ],    (2.12)

where S^i is the input sensitivity function, and d(t) and e(t) are independent noise sources, where d(t) does not necessarily have to be white [3]. The basic assumption behind the joint input-output approach is that the input u(t) is generated by an unknown controller C_u(q) of the following form:

u(t) = r(t) − C_u(q)y(t).    (2.13)

Forssell and Ljung [3] derived a parameter estimate expression for the joint input-output approach with the model structure defined in equation 2.12, as follows:

θ̂_{N→∞} = arg min_θ (1/2π) ∫_{-π}^{π} |G(jω) + B(jω) − G_θ(jω, θ)|² (Φ_u(ω)/|H(jω)|²) dω,    (2.14)

where

B(jω) = G(jω)Φ_u^e(ω)Φ_u^{-1}(ω).    (2.15)
For the parameter estimate equation 2.14 derived for the joint input-output approach, Forssell and Ljung [3] stipulated the following conditions:

• if the parameterisations of G_c and S^i are flexible enough, then the joint input-output approach will give consistent estimates of G(q),
• the controller need not be known, and,
• the joint input-output approach gives consistent estimates of G(q) regardless of the disturbance e(t).

2.3 Fault Detection

2.3.1 The Relevance of Fault Detection in the Chemical Process Industry

Fault detection is a scientific discipline used for the early detection of system faults and of disturbances due to process anomalies. Undetected faults can lead to system failures and model-plant mismatches. Undetected faults also prevent the controller from operating near constraint boundaries, due to process disturbances, which in effect reduces the product quality of the process. Fault detection is a process monitoring discipline which has been successfully applied to a variety of processes with diverse process dynamics. The nature of the process to which the fault detection methodology is applied determines the typical characteristics and nature of the fault detection methodology. This dissertation focuses on fault detection for processes in the chemical process industry. These processes exhibit the following characteristics, which need to be considered when developing a fault detection methodology [31]:

• System complexity and scale: A chemical process under MPC will typically have 10 to 20 manipulated variables (MVs), and 20 to 40 controlled variables (CVs).
Some CVs, such as product qualities, exhibit very slow process dynamics (with dominant time constants ranging from 30 minutes to several hours), while other CVs, such as valves, exhibit very fast process dynamics (with time constants of a few minutes). The aforementioned results in oscillations and process time delays.

• Dominant slow dynamics: The time to steady state of a typical product ranges from 1 hour to several hours. This dictates long identification tests, and jeopardises the efficiency of fault detection methodologies which incorporate identified process parameters.

• Variety of unidentified disturbances: A variety of unidentified disturbances can result in the use of unacceptable test signals with the parameter identification method in fault detection. This may lead to deviation from product quality specifications, as well as the possible excitation of process nonlinearities. Typical unidentified disturbances are feed composition variations, weather changes and disturbances from other parts of the unit.

• Local unidentified process nonlinearity: Chemical process models are usually impossible to derive from first principles. General black-box models are used for model representation. Although a general linear black-box model is relevant for MPC for this class of process over a given range of operation, some nonlinear behaviour may still be excited. Examples are CVs that are very pure product qualities, and valve positions close to their operational limits.

SISO control loop performance monitoring is well established in the process industries [51]. The SISO loop performance monitoring approach has shortcomings, however, because in the chemical process industries the control loops are not isolated from each other. Specifically, the poor performance of one control loop might be because it is being upset and affected by a disturbance originating elsewhere.
The basic idea behind process control in the chemical process industry is to divert process variability away from key process variables into places which can accommodate it, such as buffer tanks and plant utilities [52]. Unfortunately, process variability is often not sufficiently accounted for in the chemical process industries, and may simply appear elsewhere. The reason for this is that modern industrial chemical processes have reduced inventory, and make use of recycle streams and heat integration between the various processes in a plant, thereby increasing the economic feasibility and efficiency of a process plant [53]. A plant-wide approach to fault detection in the chemical process industry means that the distribution of a disturbance is mapped out, and the detection, isolation and nature of a disturbance can be determined with a high probability of being right the first time. Key requirements for efficient and effective fault detection [54, 55] are:

• detection of the presence of one or more periodic disturbance oscillations,
• detection of non-periodic disturbances and process upsets,
• automated, non-invasive stick-slip detection in control valves,
• facility-wide approaches including behaviour clustering,
• automated, model-free causal analysis for fault detection, and,
• incorporation of process knowledge such as the role of each controller.

Three types of fault detection problems may occur in practice, according to the relative time constants/parameters of the process to be monitored, of the sampling of the data, and of the events to be detected [4]:

• Process validation: Given, on the one hand, a reference parameter value θ_0 of the nominal model, and on the other hand a new data sample, the problem is to decide whether the new data sample is still accurately described by the nominal model parameter θ_0.
The problem might be stated either off-line, where a fixed sample size N is used, or on-line, where a variable sample size n is used;

Figure 2.2: Three types of faults: input faults γ_i and output faults γ_o act additively, whereas system faults act as a change in system parameters θ [4].

• On-line change detection: At every time instant t_i, the detection problem is to determine whether a significant parameter change has occurred, from the nominal model value θ_0 to a new value θ_1, at an unknown time instant t_{i-1};

• Off-line change detection: Given the fixed sample size N, the detection problem is to determine whether, in this sample space, a significant parameter change has occurred, from the nominal model parameter θ_0 to a new parameter value θ_1, at an unknown time instant t.

2.3.2 Classification of Faults for Detection

A fault is defined as the unwanted deviation of at least one characteristic property of a variable or system parameter from acceptable behaviour [56]. With respect to time dependency, faults can be distinguished as abrupt faults (stepwise), incipient faults (drift) and intermittent faults [56]. Faults at an early stage are referred to as incipient faults due to the inherent difficulty of their detection and isolation. The presence of incipient faults is often unnoticeable in system measurements. This means that traditional fault detection methods are less likely to successfully detect and isolate incipient faults [10].

Basseville [4] studied several multidimensional parameterised stochastic processes, varying from static linear processes to dynamic nonlinear processes, with a state-space or input-output representation given. In all cases, as depicted in figure 2.2, the observed data Y are viewed as the output of a system g parameterised by the parameter vector θ and having different kinds of inputs.
Inputs can vary between structured step tests, sinusoidal inputs and stochastic white noise, where an input is specifically designed to excite the dynamics of a process in a specific frequency band. The system output Y can be written as [4]:

Y = g(θ, U + γ_i, W_s) + γ_o + W_o.    (2.16)

The input U is assumed to be known and measurable. The unknown quantity W_s represents non-measured inputs, unknown non-stationary excitation or perturbation of the system, and input noise. The unknown quantity W_o is the additive output noise. Basseville [4] distinguished between three types of faults. The first type of fault, γ_o, occurs at the output of the system in an additive manner. It is generally agreed that these faults appropriately represent sensor faults [4]. When considering the distribution of the observed output data Y, a fault of the first type affects only the mean value, which shifts when such a fault occurs. The second type of fault, γ_i, comprises faults that occur at the input of the system in an additive manner. The effect of these faults is also a shift of the mean value of the output distribution. It is generally agreed that faults of type two appropriately model actuator faults [4]. The third type of fault is modelled by any change in the parameter vector θ of the system. Faults of type three affect the generating mechanisms themselves, and thus the variance, correlations, spectrum or higher-order dynamics of the output distribution. These faults are often referred to as system or component faults [4]. In linear dynamic systems, this type of fault can be classified as a multiplicative fault, because it affects the input-output transfer function in a multiplicative manner [4].
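The effect of the three fault types on the output distribution can be sketched with a simulation. The first-order system, fault magnitudes and sample size below are illustrative assumptions, not values from the text:

```python
import numpy as np

# A sketch of the three fault types of figure 2.2, on an assumed first-order
# system y(t) = 0.8 y(t-1) + u(t) + e(t); all numerical values are
# illustrative assumptions.
rng = np.random.default_rng(0)

def simulate(a=0.8, gamma_i=0.0, gamma_o=0.0, N=20000):
    u = rng.standard_normal(N)          # measured input U
    e = 0.1 * rng.standard_normal(N)    # output noise W_o
    y = np.zeros(N)
    for t in range(1, N):
        y[t] = a * y[t-1] + (u[t] + gamma_i) + e[t]
    return y + gamma_o                  # additive output fault gamma_o

nominal = simulate()
sensor = simulate(gamma_o=1.0)          # type 1: additive output (sensor) fault
actuator = simulate(gamma_i=1.0)        # type 2: additive input (actuator) fault
system = simulate(a=0.95)               # type 3: parameter (system) fault

# Types 1 and 2 shift only the mean of y; type 3 changes the variance.
print(sensor.mean() - nominal.mean())   # approx. 1.0
print(actuator.mean() - nominal.mean()) # approx. 1/(1-0.8) = 5.0
print(system.var() > 2 * nominal.var())
```

Mean shifts are visible to simple monitoring of Y, whereas the type-three fault changes the dynamics (variance and correlation), which is why parametric methods are needed to detect it.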
Type one and type two faults, which appropriately model sensor and actuator faults respectively, can further be classified as either hard faults or soft faults [41]. Hard faults occur due to sensor and/or actuator failures or process leaks, whereas soft faults are due to process drift and biased measurements. Sensor bias, actuator bias and biased states can lead to the degradation of closed-loop performance, where the conventional MPC formulation can be expected to encounter control difficulties [41].

2.3.3 A Model-Based Approach

Fault detection methods can be classified into model-based methods, knowledge-based methods and history-based methods [56]. The development of model-based fault detection methods began at various places in the early 1970s [5]. A model-based approach to fault detection uses mathematical models of the plant under monitoring. A mathematical model of the monitored system can be obtained along two routes, or a combination of them. The first route is to divide the system into appropriate subsystems whose properties are well understood from previous experience and natural and physical laws; these subsystems can then be joined mathematically to obtain a complete model of the system. The other route is based directly on experimentation: input-output signals from the system are recorded and subjected to data analysis in order to derive a model [57]. Figure 2.3 illustrates the basic structure of model-based fault detection. Model-based methods can further be classified as either quantitative or qualitative [22, 58]. Quantitative models (differential equations, state-space models, transfer functions, etc.) generally utilise results from the field of control theory [59].
Quantitative models use static and dynamic relations among system variables and parameters in order to describe a system's behaviour in quantitative mathematical terms [5]. In qualitative models, the relations between variables used to obtain the expected system behaviour are expressed in terms of qualitative functions centred on different units in the process, such as causal models and abstraction hierarchies [58]. Qualitative model types are usually used with large systems, with highly nonlinear dynamics present in the process under monitoring [58]. The use of explicit models in fault detection has great potential due to the following advantages [58, 27]:

• higher fault detection performance can be obtained, where more types of faults can be detected, with shorter detection times,
• fault detection can be performed over a large operating range, and
• fault detection can be performed passively, without disturbing the process.

A disadvantage of model-based fault detection methods is the prerequisite of an accurate model of the process being monitored, and a possibly more complex design procedure [58].

Figure 2.3: Model-based fault detection structure [5].

Model accuracy is usually the major performance-limiting factor for model-based fault detection [58]. The availability of a good model of the monitored system can significantly improve the performance of diagnostic tools, minimising the probability of false alarms [60]. Compared to model-based control, the quality of the model is much more important in fault detection [58]. Apart from deriving mathematical models of the process from first principles (which can be notoriously difficult and complex for typical nonlinear chemical processes, and in some cases even impossible), many different system identification methods can be used to obtain accurate models of the process being monitored.
Such methods of system identification and re-identification are discussed in section 2.2. Model-based approaches to FD are normally performed in two steps: residual generation and residual evaluation [58]. The inconsistency between the measured data of the process and the corresponding signals generated from the derived model is known as the residual [60]; see figure 2.3. Based on the measured input signal U and measured output signal Y, the fault detection method generates either residuals r, parameter estimates θ̂, or state estimates x̂, which are called features [56]. By comparing nominal features (nominal values as obtained from the initial nominal process model) with monitored features, the method is able to detect change, which leads to analytical symptoms, s, used for fault detection. Model-based residual generation methods can generally be classified into three distinct categories: observer-based approaches, parity-based approaches, and parameter estimation approaches. Parameter changes in a process are regarded as multiplicative process faults. Changes of parameters can be detected by implementing parameter estimation methods [56].

2.3.4 Identification via the Prediction Error Framework

Background Theory

As stated earlier, identification formulations in the '80s and '90s were traditionally centred on the PEM paradigm, proposed by Ljung [30]. The advantage of the PEM is that convergence and asymptotic variance are well established [61]; the disadvantage is a rather complex parameterisation and a non-convex optimisation. The extension of the PEM to closed-loop systems was first introduced in the late '90s by the ASYM method developed by Zhu [31]. The basic idea behind the PEM is to estimate system parameters which minimise a prediction error objective function defined as follows:

ε(t, θ) = y(t) − ŷ(t|θ).    (2.17)
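The two-step residual-generation and residual-evaluation scheme, with residuals defined as in equation 2.17, can be sketched as follows; the nominal ARX model, the fault size and the threshold are illustrative assumptions:

```python
import numpy as np

# A sketch of residual generation (eps = y - y_hat, equation 2.17) and
# residual evaluation (thresholding to raise a symptom). Model, data and
# threshold are illustrative assumptions.
rng = np.random.default_rng(1)

theta = np.array([0.8, 0.5])            # nominal model: y = 0.8 y_prev + 0.5 u
N = 400
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t-1] + 0.5 * u[t] + 0.05 * rng.standard_normal()
y[200:] += 3.0                          # additive output (sensor) fault at t = 200

# Residual generation: one-step-ahead prediction error
phi = np.column_stack([np.r_[0.0, y[:-1]], u])   # regressors [y(t-1), u(t)]
eps = y - phi @ theta

# Residual evaluation: flag samples whose residual exceeds a threshold
symptom = np.abs(eps) > 0.3
print(symptom[:200].mean())             # healthy segment: few alarms
print(symptom[200:].mean())             # faulty segment: persistent alarms
```

The symptom signal is what the evaluation step turns into a fault decision; in practice the threshold would be set from the residual statistics of the healthy process.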
When a data set Z^N, where

Z^N = {y(1), u(1), . . . , y(N), u(N)},    (2.18)

is known, these prediction errors can be computed for t = 1, 2, . . . , N. An accurate system model is one that is accurate at predicting future outputs, thereby minimising the prediction error [30]. Two general approaches are used to minimise the prediction errors. The first is to form a scalar-valued norm or criterion function that measures the size of ε(t, θ), and then minimise this norm analytically. A well-known criterion used to minimise the prediction error norm is the least-squares criterion. The other approach is to demand that ε(t, θ̂_N) be uncorrelated within a given data sequence. This requires that certain projections of ε(t, θ̂_N) are zero [30]. In this literature survey on the PEM, the associated least-squares estimation methodology will be elaborated on further, and other criteria that involve prediction error norm minimisation will be discussed.

Linear regression model structures are very useful in describing basic linear and nonlinear systems [30]. A predictor of a linear regression function, which is linear in the system parameter θ, can be defined as follows [30]:

ŷ(t|θ) = φ^T(t)θ + µ(t),    (2.19)

where φ^T(t) is the vector of regressors and µ(t) is a known data-dependent vector. From the prediction error equation 2.17, it follows that

ε(t, θ) = y(t) − φ^T(t)θ,    (2.20)

where, for notational simplicity, we take the known data-dependent vector µ(t) = 0. Assume a quadratic norm, ½ε²(t, θ), to measure the prediction error. Then the least-squares criterion for the linear regression equation 2.19 can be defined as follows [30]:

V_N(θ, Z^N) = (1/N) Σ_{t=1}^{N} ½ (y(t) − φ^T(t)θ)².    (2.21)

The unique feature of this criterion, developed from the linear parameterisation and quadratic criterion, is that it is a quadratic function of θ. The least-squares estimate can thus be obtained by minimising equation 2.21 analytically, giving:

θ̂_N^{LS} = arg min_θ V_N(θ, Z^N) = [ (1/N) Σ_{t=1}^{N} φ(t)φ^T(t) ]^{-1} · (1/N) Σ_{t=1}^{N} φ(t)y(t).    (2.22)

Variants of the least-squares estimation algorithm for parameter estimation in the PEM framework exist. Recursive least squares (RLS) with forgetting factors [62], robust RLS (RRLS) [63] and weighted LS [28] are typically used for parameter estimation in the PEM framework.

Prediction Error Methodologies

The prediction error method (PEM) is concerned with the estimation of linear system models by making direct use of the prediction error as a model performance and quality norm. A general linear model used for system identification in the PEM framework can be defined as follows [28]:

A(q)y(k) = (B(q)/F(q))u(k) + (C(q)/D(q))v(k).    (2.23)

Various model structures have been derived from the general linear model structure in equation 2.23. The most commonly used model structures concerned with output feedback are the autoregressive with exogenous input (ARX) and autoregressive moving average with exogenous input (ARMAX) models [30]. The ARX model is by far the most widely applied linear dynamic model. The ARX model can match the structure of many real-world processes and is thus very valuable. The popularity of the ARX model stems from the ease of computing its parameters: they can be estimated by a linear least-squares technique, since the prediction error is linear in the parameters [28]. The parameter estimation of ARMAX models is more complicated; an extended least-squares algorithm is needed to solve the nonlinear optimisation of the ARMAX model parameters [28].
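The analytic least-squares estimate of equation 2.22, applied to an ARX model, can be sketched as follows; the second-order test system and its coefficients are illustrative assumptions:

```python
import numpy as np

# A sketch of the least-squares estimate (2.22) for an ARX model:
# theta_hat = [sum phi phi^T]^-1 sum phi y. The ARX(2,1) test system and
# its coefficients are illustrative assumptions.
rng = np.random.default_rng(2)

N = 5000
u = rng.standard_normal(N)
e = 0.1 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(2, N):
    # y(t) = 1.5 y(t-1) - 0.7 y(t-2) + 1.0 u(t-1) + e(t), a stable system
    y[t] = 1.5 * y[t-1] - 0.7 * y[t-2] + 1.0 * u[t-1] + e[t]

# Regressor vector phi(t) = [y(t-1), y(t-2), u(t-1)]^T
phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
Y = y[2:]

# Closed-form minimiser of the least-squares criterion (2.21)
theta_hat = np.linalg.solve(phi.T @ phi, phi.T @ Y)
print(np.round(theta_hat, 2))           # close to [1.5, -0.7, 1.0]
```

With white equation noise, this direct least-squares estimate of the ARX parameters is consistent, which is the property the ARX structure's popularity rests on.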
The drawbacks of the nonlinear optimisation approach used for solving model parameters are the high computational demand and the existence of local optima. A suggested solution to this nonlinear optimisation problem is to implement Recursive Least-Squares (RLS) algorithms [28]. This section focuses on variations of LS algorithms used for estimating the parameters of different linear models in the PEM framework, as well as other parameter estimation methods used in the PEM framework.

RLS method with a static forgetting factor: A Recursive Least-Squares (RLS) method using the direct approach to closed-loop system identification was proposed by Eker [64], who used an ARX model structure to model a three-mass electromechanical system. The main benefits of RLS algorithms over Least-Squares (LS) algorithms are easy numerical solution and fast parameter convergence. The RLS method gives consistent modelling accuracy over a wide range of operating conditions and is recommended as the best linear unbiased estimate [64]. When the LS method is required to run online in real time, its computational effort grows with the number of samples collected. The RLS algorithm requires a constant computation time for each parameter update, and is therefore well suited for online parameter estimation [28]. The estimated ARX model output can be defined as follows:
$$\hat{y}(k) = \varphi^T(k)\hat{\theta}(k-1). \qquad (2.24)$$
The RLS algorithm uses the prediction error to update the model parameters. From equation 2.24, the prediction error can be defined as follows:
$$\varepsilon(k) = y(k) - \varphi^T(k)\hat{\theta}(k-1).$$
(2.25)

The prediction error in equation 2.25 is used to update the system parameters as
$$\hat{\theta}_{RLS}(k) = \hat{\theta}(k-1) + P(k)\varphi(k)\varepsilon(k), \qquad (2.26)$$
where the estimator covariance matrix, $P(k)$, is updated using
$$P(k) = \frac{1}{\lambda}\, P(k-1)\left[I_p - \frac{\varphi(k)\varphi^T(k)P(k-1)}{\lambda + \varphi^T(k)P(k-1)\varphi(k)}\right], \qquad (2.27)$$
where the subscript $p$ is the dimension of the identity matrix and $\lambda$ is the static forgetting factor used in the RLS algorithm. The forgetting factor $\lambda$ determines the convergence speed, where decreasing values of $\lambda$ increase the speed of parameter convergence. Making $\lambda$ too small can lead to noise susceptibility. It is recommended that $\lambda$ be chosen in the range $0.98 \leq \lambda \leq 0.995$ [65]. Eker [64] recommended choosing the initial values of $P(0)$ and $\hat{\theta}(0)$ as follows: $P(0) = \alpha I$, where $0 \leq \alpha \leq 10^7$, and $\hat{\theta}(0) = 0$. Eker [64] identified a fourth-order ARX model where the parameters were estimated via the RLS algorithm with a static forgetting factor. The trade-off between the parameter convergence speed and the noise susceptibility of the RLS algorithm limited the performance of the proposed methodology.

BLFRLS method: A Bi-Loop Forgetting factor Recursive Least-Squares (BLFRLS) algorithm was proposed by Yu and Shih [62]. The motivation behind the proposed BLFRLS algorithm was to overcome the shortcomings of the RLS algorithm with a static forgetting factor, namely slow tracking capability and high prediction errors [62]. The slow convergence of the RLS algorithm is due to the large tracking errors at each sampling instant in time-varying systems. The proposed BLFRLS algorithm improves the tracking of time-varying parameters at the same sampling rate as used in the RLS implementation, without the need for additional measured data samples.
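The basic RLS recursion with a static forgetting factor (equations 2.25-2.27), which also forms the inner loop of the bi-loop variant, can be sketched as follows. The first-order ARX system and its parameter values are illustrative assumptions.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One RLS step with a static forgetting factor (equations 2.25-2.27)."""
    eps = y - phi @ theta                                 # prediction error, eq. 2.25
    denom = lam + phi @ P @ phi
    P = (P - np.outer(P @ phi, phi @ P) / denom) / lam    # covariance update, eq. 2.27
    theta = theta + P @ phi * eps                         # parameter update, eq. 2.26
    return theta, P

# Illustrative first-order ARX system: y(k) = 0.8 y(k-1) + 0.4 u(k-1) + noise
rng = np.random.default_rng(1)
theta_true = np.array([0.8, 0.4])
theta = np.zeros(2)
P = 1e6 * np.eye(2)      # Eker's recommendation: P(0) = alpha*I with large alpha
y_prev, u_prev = 0.0, 0.0
for k in range(1000):
    u = rng.standard_normal()
    phi = np.array([y_prev, u_prev])
    y = theta_true @ phi + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
    y_prev, u_prev = y, u
```

Note that, with the covariance updated first, the gain $P(k)\varphi(k)$ equals the conventional RLS gain $P(k-1)\varphi(k)/(\lambda + \varphi^T(k)P(k-1)\varphi(k))$, so the two standard formulations coincide.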
The BLFRLS is in principle two Forgetting factor Recursive Least-Squares (FRLS) algorithms, where the outer-loop FRLS algorithm computes parameter estimates at every sampling instant, and the inner-loop FRLS (IFRLS) recursively recalculates and refines the parameter estimates $N$ times, thereby reducing the large tracking errors associated with the RLS algorithm. The IFRLS is in principle identical to the RLS method proposed by Eker [64]. The initial values of the IFRLS algorithm at sampling interval $t = k$ are set as follows:
$$y_0 = y(k), \quad x_0 = x(k), \quad \theta_0(0) = \theta(k), \quad P_{in}(0) = P(k-1). \qquad (2.28)$$
With the initial values set in equation 2.28, the estimator covariance matrix, $P_{in}$, and the parameter estimates, $\hat{\theta}_{in}$, are calculated iteratively in exactly the same manner as for the RLS algorithm in equations 2.26-2.27. The proposed algorithm of Yu and Shih [62] was used for closed-loop parameter estimation under the PEM framework. An ARX model structure was used, and the efficiency of the proposed algorithm was simulated for three different scenarios: estimation of a fast-varying dynamic parameter, estimation of a slow-varying dynamic parameter, and tracking of a sinusoidal parameter. Yu and Shih [62] showed that with the simulation settings $N = 10$ and $\lambda = 0.98$, the BLFRLS algorithm handles abrupt parameter changes more efficiently and effectively than the RLS algorithm.

RRLS method: A Robust Recursive Least-Squares (RRLS) method was proposed by Chao et al. [63] for the online estimation of the time-varying parameters of an AR model, using a weighted LS method with forgetting factors. With conventional LS estimation, the sum of the squared prediction errors is minimised, where the distribution of the prediction errors is assumed to be Gaussian in the LS procedure.
However, with remote sensing, automatically obtained data inevitably carry some false data and gross errors, which result in a different prediction error distribution. This deteriorates the efficiency of the LS procedure, since the LS procedure weights all prediction errors equally. The robust solution proposed by Chao et al. [63] is to weight the prediction errors through a loss function that assigns more weight to the bulk of small prediction errors, and less weight to gross errors termed outliers. The proposed RRLS algorithm differs from the conventional RLS algorithm in that it inserts a nonlinear transformation function for the prediction errors. By transforming large outliers, and assigning a small weight to these outliers, the bias of the RRLS estimation can be reduced dramatically [63].

With conventional LS estimation, the objective is to minimise a cost function defined as follows:
$$J(\hat{\theta}) = \frac{1}{2}E\left[\varepsilon^2(t, \theta(t))\right]. \qquad (2.29)$$
The nonlinear transformation of the outliers to ensure robust estimation leads to the reformulation of the cost function as follows [63]:
$$J(\hat{\theta}) = \frac{1}{2}E\left[\rho\left(\varepsilon^2(t, \theta(t))\right)\right], \qquad (2.30)$$
where $\rho(\cdot)$ is a nonlinear loss function which suppresses outliers. Chao et al. [63] stated that the loss function $\rho(\cdot)$ should resemble a quadratic function for small values of its argument. A further requirement stated by Chao et al. [63] is that the derivative of $\rho(\cdot)$, $\psi = \rho'(\cdot)$, should be bounded and continuous. If $\psi = \rho'(\cdot)$ is bounded, then no single observation can have an arbitrarily large influence on the parameter estimation, while continuity ensures that rounding and quantisation errors do not have a major effect. Chao et al. [63] defined the loss function, $\rho(\cdot)$, its derivative, $\psi$, and the weighting factor, $w$, as follows:
$$\rho(\varepsilon_i) = \begin{cases} \varepsilon_i^2/2 & |\varepsilon_i| \leq k_1\sigma \\ k_1|\varepsilon_i| & k_1\sigma < |\varepsilon_i| \leq k_2\sigma \\ k_2 & |\varepsilon_i| > k_2\sigma \end{cases} \qquad (2.31)$$
$$\psi(\varepsilon_i) = \begin{cases} \varepsilon_i & |\varepsilon_i| \leq k_1\sigma \\ k_1\,\mathrm{sgn}(\varepsilon_i) & k_1\sigma < |\varepsilon_i| \leq k_2\sigma \\ 0 & |\varepsilon_i| > k_2\sigma \end{cases} \qquad (2.32)$$
$$w(\varepsilon_i) = \begin{cases} 1 & |\varepsilon_i| \leq k_1\sigma \\ k_1/|\varepsilon_i| & k_1\sigma < |\varepsilon_i| \leq k_2\sigma \\ 0 & |\varepsilon_i| > k_2\sigma \end{cases} \qquad (2.33)$$
where $k_1$ and $k_2$ are nonlinear tuning constants. Reasonably good values, $k_1 = 1.5$ and $k_2 = 2.5$, were proposed by Zhou [66]. The effect of equation 2.33 is to assign less weight to outliers, which would otherwise degrade the accuracy of the parameter estimates.

The weighting function $w$ depends on the parameter variance $\sigma^2$, where $\sigma$ depends on $\theta$, which in turn is determined by $w$. The weighting function $w$ is thus determined by iteration, where $w^{(0)} = 1$ is equivalent to the RLS method [63]. Chao et al. [63] further proposed the inclusion of a dynamic forgetting factor, used to weight more recent data more heavily in the computation of the parameter estimates:
$$\lambda = \beta^{\,t-1}. \qquad (2.34)$$
The estimator covariance matrix, $P$, and the parameter estimate, $\hat{\theta}_{RRLS}$, were derived by Chao et al. [63] and can be defined as follows:
$$\hat{\theta}_{RRLS}(t+1) = \hat{\theta}_{RRLS}(t) + w(t+1)P_t X_{t+1}\left[\lambda + w(t+1)X_{t+1}^T P_t X_{t+1}\right]^{-1}\left[y(t+1) - X_{t+1}^T\hat{\theta}_{RRLS}(t)\right], \qquad (2.35)$$
$$P_{t+1} = \frac{1}{\lambda}\left\{I - w(t+1)P_t X_{t+1}\left[\lambda + w(t+1)X_{t+1}^T P_t X_{t+1}\right]^{-1}X_{t+1}^T\right\}P_t. \qquad (2.36)$$
Chao et al. [63] implemented the RRLS algorithm on real data in flood forecasting, where the parameters of an AR model were estimated. It was concluded that the RRLS algorithm produces less biased estimates than the conventional RLS algorithm, and the RRLS model proved much more robust to outliers in real-time data.

Kalman filtering method: Ghaffari et al. [57] proposed a closed-loop system identification method using the direct approach in the PEM framework, where a Kalman filter was used to estimate the model parameters instead of performing conventional state estimation.
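The three-part RRLS weighting of equation 2.33 discussed above can be sketched as follows. Note one labelled assumption: the text gives $k_1/|\varepsilon_i|$ in the middle range, while the sketch uses $k_1\sigma/|\varepsilon_i|$, which makes the weight continuous at the $k_1\sigma$ boundary and reduces to the text's form when $\sigma = 1$.

```python
def robust_weight(eps, sigma, k1=1.5, k2=2.5):
    """Three-part weighting in the spirit of equation 2.33: full weight for
    small errors, down-weighting in the mid range, zero weight for gross
    outliers. k1, k2 follow the values proposed by Zhou [66].
    NOTE: k1*sigma/|eps| (not k1/|eps|) is an assumption made here so that
    the weight is continuous at |eps| = k1*sigma."""
    a = abs(eps)
    if a <= k1 * sigma:
        return 1.0
    if a <= k2 * sigma:
        return k1 * sigma / a
    return 0.0
```

Applied inside the recursion of equations 2.35-2.36, a gross outlier ($|\varepsilon_i| > k_2\sigma$) receives weight zero and therefore leaves the parameter estimate untouched.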
The Kalman filtering approach is closely related to the RLS algorithm. Kalman filtering is usually applied as an observer for the estimation of states, not parameters [28]. However, the parameter estimation problem can be formulated in the following state-space form:
$$\theta(k+1) = \theta(k) + v(k), \qquad y(k) = x^T(k)\theta(k) + e(k), \qquad (2.37)$$
where $v(k)$ is an $n$-dimensional white-noise vector with $n \times n$ covariance matrix $V$, and $n$ is the number of parameters. The time variance of the parameters is modelled as a random walk or drift [67]. The covariance matrix is typically chosen to be diagonal. The diagonal entries can be interpreted as the strength of the time variance of the $n$ individual parameters. The corresponding entry of $V$ should be chosen large if the parameter is known to have a large variance and rapid movement [28]. The advantage of the Kalman filter over the RLS algorithm is that each parameter effectively has its own forgetting factor [57]. It is thus possible to control the convergence of individual parameters, where the convergence of each parameter is bounded by its statistical characteristics, by setting the corresponding entry of $V$ accordingly. If no statistical knowledge is available about the individual parameters, then $V$ can be set to $\zeta I$ [28]. The forgetting factor $\lambda = 1$ is equivalent to the case where $V = 0$. The Kalman filter algorithm for parameter estimation can be formulated as follows [57]:
$$\begin{aligned}
\hat{\theta}(k) &= \hat{\theta}(k-1) + \gamma(k)e(k), \\
e(k) &= y(k) - x^T(k)\hat{\theta}(k-1), \\
\gamma(k) &= \frac{1}{x^T(k)P(k-1)x(k) + 1/q(k)}\,P(k-1)x(k), \\
P(k) &= \left[I - \gamma(k)x^T(k)\right]P(k-1) + V,
\end{aligned} \qquad (2.38)$$
where the adaptation vector, $\gamma(k)$, is known as the Kalman gain. In the Kalman filter algorithm the matrix $P$ does not increase exponentially, as it does in the RLS algorithm, but only linearly, $P(k) = P(k-1) + V$, in the case of non-persistent excitation.
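The recursion of equation 2.38 can be sketched as follows; the regressor construction, the process it tracks, and the values of $V$ and $q$ are illustrative assumptions.

```python
import numpy as np

def kf_param_update(theta, P, x, y, V, q=1.0):
    """One step of Kalman-filter parameter estimation (equation 2.38).
    V models the random walk of the parameters; q is the measurement weight
    (both values used below are illustrative assumptions)."""
    e = y - x @ theta                           # innovation e(k)
    gamma = P @ x / (x @ P @ x + 1.0 / q)       # Kalman gain gamma(k)
    theta = theta + gamma * e
    P = (np.eye(len(theta)) - np.outer(gamma, x)) @ P + V
    return theta, P

# Track the two parameters of an illustrative first-order ARX process
rng = np.random.default_rng(3)
theta_true = np.array([0.8, 0.4])
theta, P, V = np.zeros(2), np.eye(2), 1e-6 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for k in range(2000):
    u = rng.standard_normal()
    phi = np.array([y_prev, u_prev])
    y = theta_true @ phi + 0.01 * rng.standard_normal()
    theta, P = kf_param_update(theta, P, phi, y, V)
    y_prev, u_prev = y, u
```

Setting a larger diagonal entry of $V$ for one parameter lets that parameter adapt faster, which is the "individual forgetting factor" property noted above.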
Persistent excitation is accomplished when the signal used for excitation excites the process dynamics of a plant over its entire frequency band of operation. Ghaffari et al. [57] tested their proposed method by identifying an AR model of a nonlinear aerospace launch vehicle system. They concluded that estimating the system parameters with a Kalman filter increased the precision and convergence of the model parameters in comparison with the conventional RLS algorithm.

A well-known technique for identifying the system parameters of nonlinear dynamic systems (processes with discontinuities in the model) is the extended Kalman filter (EKF). In the EKF approach, parameters are treated as states, similar to the case study discussed by Ghaffari et al. [57], where a nonlinear state equation is used to compute the estimates. However, the EKF approach has the restriction that the nonlinear state equation must be differentiable with respect to each state variable [68]. The EKF approach is also difficult to implement, difficult to tune, and only reliable for systems that are almost linear on the time scale of the update intervals [69]. The unscented Kalman filter (UKF) was proposed by Julier et al. [69] to solve the problem of a nonlinear, non-differentiable process model. The UKF is a linear estimator with performance equivalent to the Kalman filter for linear systems, yet the UKF method generalises elegantly to nonlinear systems without the linearisation steps necessary in the EKF approach [69]. Unlike the EKF, the UKF does not approximate nonlinear functions, but instead uses a set of deterministically chosen, weighted sample points to calculate the estimates of the state variables [68]. The UKF method is thus applicable when a nonlinear system with discontinuities is used for estimation.
In the context of an industrial process, maintenance or unplanned plant shutdowns can introduce parametric discontinuities, which can cause the conventional Kalman filter and EKF estimation approaches to lose accurate parameter tracking. Araki et al. [68] implemented the UKF method for estimating the unknown parameters of a 2-link underactuated acrobat robot, where the unknown parameters were regarded as unknown states for estimation.

Julier et al. [69] developed the UKF based on the intuition that it is easier to approximate a probability distribution than it is to approximate an arbitrary nonlinear function or transformation. Based on this intuition, a nonlinear transformation function is applied to a set of points whose sample mean and sample covariance are $\hat{x}(k|k-1)$ and $P(k|k-1)$ respectively. The transformation of this set results in a new set of predicted mean and covariance values. Although this resembles a Monte Carlo method, the samples are not chosen randomly, but in such a way that specific information is captured about the distribution of the states. The UKF is the minimum mean square error (MMSE) state estimator for a nonlinear system [70],
$$\begin{aligned}
x(k+1) &= f\left[x(k), u(k), k\right], \\
z(k+1) &= h\left[x(k+1), u(k+1), k+1\right] + w(k+1),
\end{aligned} \qquad (2.39)$$
where $x(k)$ is the state of the system at time step $k$, $u(k)$ is the input vector, $z(k)$ is the observation vector and $w(k)$ is the additive measurement noise. The nonlinear system and measurement functions are denoted $f(\cdot)$ and $h(\cdot)$ respectively. It is assumed that the state vector, $x(k)$, is augmented with the noise vector $v(k)$ [69]. It is further assumed that the process noise, $v(k)$, and measurement noise, $w(k)$, are zero mean [70], with covariances defined as follows:
$$E\left[v(k)v^T(j)\right] = \delta_{kj}Q(k), \quad E\left[w(k)w^T(j)\right] = \delta_{kj}R(k), \quad E\left[v(k)w^T(j)\right] = 0 \;\;\forall\, k, j. \qquad (2.40)$$
The UKF chooses $2n+1$ regression points, $\chi_i$, in state-space with weights $W_i$ $(i = 1, \ldots, n)$ as
$$\begin{aligned}
\chi_0(k|j) &= \hat{x}(k|j), & W_0 &= \frac{\kappa}{n+\kappa}, \\
\chi_i(k|j) &= \hat{x}(k|j) + \left(\sqrt{(n+\kappa)P(k|j)}\right)_i, & W_i &= \frac{1}{2(n+\kappa)}, \\
\chi_{i+n}(k|j) &= \hat{x}(k|j) - \left(\sqrt{(n+\kappa)P(k|j)}\right)_i, & W_{i+n} &= \frac{1}{2(n+\kappa)},
\end{aligned} \qquad (2.41)$$
where $(\sqrt{(n+\kappa)P(k|j)})_i$ is the $i$-th row or column of the matrix square root of $(n+\kappa)P(k|j)$. The parameter $\kappa$ is an extra degree of freedom used to refine the higher-order moments of the approximation in the choice of the regression points $\chi_i$, with $n + \kappa \neq 0$ [69]. Given the set of samples generated by (2.41), the prediction procedure is as follows [69]:

1. Each $\chi$ point is instantiated through the process model to yield a set of transformed samples:
$$\chi_i(k+1|k) = f\left[\chi_i(k|k-1), u(k), k\right]. \qquad (2.42)$$
2. The predicted mean is computed as follows:
$$\hat{x}(k+1|k) = \sum_{i=0}^{2n} W_i\,\chi_i(k+1|k). \qquad (2.43)$$
3. The predicted covariance is computed as follows:
$$P(k+1|k) = \sum_{i=0}^{2n} W_i\left\{\chi_i(k+1|k) - \hat{x}(k+1|k)\right\}\left\{\chi_i(k+1|k) - \hat{x}(k+1|k)\right\}^T. \qquad (2.44)$$
The mean and covariance are calculated using standard vector and matrix operations, which means that the algorithm is suitable for any process model [69].

Discussion

System identification methodologies under the PEM framework were investigated in this section, with particular attention to parameter estimation. The conventional LS method used to solve the parameter estimation problem under the PEM framework is elegant, but not efficient for online parameter estimation. The RLS algorithm was proposed for online parameter estimation, since the computational effort of the LS method grows with the number of samples collected [64].
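The sigma-point construction and prediction steps of equations 2.41-2.44 can be sketched as follows. The Cholesky factor is used as the matrix square root, and the linear test model at the bottom is purely illustrative (for a linear $f$ the unscented transform is exact, which gives a convenient check).

```python
import numpy as np

def ut_predict(f, x_hat, P, kappa=1.0):
    """Unscented-transform prediction (equations 2.41-2.44): build 2n+1
    sigma points, propagate them through f, and recombine mean and covariance."""
    n = len(x_hat)
    S = np.linalg.cholesky((n + kappa) * P)     # matrix square root of (n+kappa)P
    chis = ([x_hat]
            + [x_hat + S[:, i] for i in range(n)]
            + [x_hat - S[:, i] for i in range(n)])        # eq. 2.41
    W = np.array([kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * (2 * n))
    chis_f = np.array([f(c) for c in chis])     # eq. 2.42
    x_pred = W @ chis_f                         # predicted mean, eq. 2.43
    diffs = chis_f - x_pred
    P_pred = (W[:, None] * diffs).T @ diffs     # predicted covariance, eq. 2.44
    return x_pred, P_pred

# Illustrative linear model f(x) = A x, for which the UT reproduces
# the exact Kalman prediction A x_hat and A P A^T.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
x_pred, P_pred = ut_predict(lambda x: A @ x, np.array([1.0, 2.0]), 0.1 * np.eye(2))
```

Because $f$ itself is never differentiated, the same routine applies unchanged to a discontinuous $f$, which is the property the survey highlights for parameter tracking through plant shutdowns.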
The RLS algorithm requires a constant computation time for each parameter update, and is therefore well suited to online use in real-time applications. A bi-loop RLS algorithm was proposed to address the trade-off between the parameter convergence speed and the noise susceptibility of the RLS method [62]. The forgetting factor, $\lambda$, used in parameter estimation to reduce the weight of past sampled data, can increase the speed of convergence of the parameters to their true values. To prevent the parameter estimation method from being too sensitive and susceptible to noise, a second, inner RLS loop was proposed to recursively refine the parameter estimates. An RRLS method was proposed which is robust in the sense that data outliers do not affect the parameter estimation process [63]: the prediction errors used in the parameter estimation process are weighted (transformed nonlinearly) to suppress the influence of outliers. The Kalman filter approach used for state estimation is similar to the RLS method. Ghaffari et al. [57] proposed a parameter estimation approach under the PEM framework in which the Kalman filter estimates parameters instead of states. An extension of the Kalman filter approach is the EKF, used for estimating the states as well as the process parameters of a nonlinear system. The UKF method was developed [69] to address the drawbacks of the EKF approach. Unlike the EKF, the UKF does not approximate nonlinear functions, but uses a set of deterministically chosen, weighted sample points to calculate the estimates of the state variables [68]. The UKF method generalises elegantly to nonlinear systems without the linearisation steps necessary in the EKF approach [69].

2.3.5 Identification via Subspace Framework

Background Theory

The early '90s witnessed the birth of a new type of linear system identification algorithms known as subspace methods.
Subspace methods originated at the intersection of the fields of system theory, geometry and numerical linear algebra [1]. Linear subspace identification methods are concerned with systems and models that can be represented as state-space models. State-space based approaches are typically better suited to modelling MIMO systems. The main difficulty in applying prediction error methods to state-space models is finding a numerically robust canonical realisation, since the alternative, a full parameterisation of the state-space model, would involve a huge number of parameters [28]. The state-space models used in linear subspace identification can be defined as follows:
$$\begin{bmatrix} x_{k+1} \\ y_k \end{bmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{bmatrix} x_k \\ u_k \end{bmatrix} + \begin{bmatrix} Kw_k \\ v_k \end{bmatrix}, \qquad (2.45)$$
with
$$E\left[\begin{pmatrix} w_k(t) \\ v_k(t) \end{pmatrix}\begin{pmatrix} w_k^T(\tau) & v_k^T(\tau) \end{pmatrix}\right] = \begin{bmatrix} Q & S \\ S^T & R \end{bmatrix}\delta_{t-\tau} \geq 0. \qquad (2.46)$$
The vectors $u_k \in \mathbb{R}^{m\times 1}$ and $y_k \in \mathbb{R}^{l\times 1}$ are measurements taken at time instant $k$ of, respectively, the $m$ inputs and $l$ outputs of a process. The vector $x_k$ is the state vector of the system, and $v_k \in \mathbb{R}^{l\times 1}$ and $w_k \in \mathbb{R}^{n\times 1}$ are unobserved vector signals: $v_k$ is the measurement noise, while $w_k$ is the process noise. The matrix $K$ is the Kalman gain. The system matrix is $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$ is the input matrix, $C \in \mathbb{R}^{l\times n}$ is the output matrix, and $D \in \mathbb{R}^{l\times m}$ is the direct feed-through matrix. The matrices $Q \in \mathbb{R}^{n\times n}$, $S \in \mathbb{R}^{n\times l}$ and $R \in \mathbb{R}^{l\times l}$ are the covariance matrices of the noise sequences $w_k$ and $v_k$ [1]. $E$ denotes the expectation operator, and $\delta$ the Kronecker delta. The problem addressed by subspace identification is to determine the order $n$ of the unknown system, the system matrices $A$, $B$, $C$, $D$ up to a similarity transformation, and the covariance matrices, $Q$, $S$, $R$, of the measurement and process noise. A large number of measurements of the input, $u_k$, and output, $y_k$, generated by the unknown system (2.45-2.46) are needed.
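A minimal sketch of simulating the model of equation 2.45, of the kind used to generate identification data; SISO shapes and all matrix values are illustrative assumptions.

```python
import numpy as np

def simulate(A, B, C, D, K, u, w, v, x0):
    """Simulate the innovation-form state-space model of equation 2.45:
    x_{k+1} = A x_k + B u_k + K w_k,  y_k = C x_k + D u_k + v_k.
    (SISO shapes assumed: B, C, K are vectors; D, u_k, y_k scalars.)"""
    x = np.array(x0, dtype=float)
    ys = []
    for uk, wk, vk in zip(u, w, v):
        ys.append(C @ x + D * uk + vk)   # output equation
        x = A @ x + B * uk + K * wk      # state update
    return np.array(ys)

# Noise-free check with illustrative matrices: from x0 = [1, 0] the output
# decays with the first diagonal mode of A.
A = np.array([[0.5, 0.0], [0.0, 0.3]])
B = np.array([0.0, 0.0])
C = np.array([1.0, 1.0])
K = np.array([0.0, 0.0])
y = simulate(A, B, C, 0.0, K, np.zeros(3), np.zeros(3), np.zeros(3), [1.0, 0.0])
```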
It is generally assumed that the number of data points goes to infinity and that the data is ergodic [1]. All subspace methods consist of two basic steps in identifying a system. The first step involves the projection of certain subspaces generated from the measured data sets, $\{u_k, y_k\}$, to find an estimate of the extended observability matrix, $\hat{\Gamma}$, and/or an estimate of the states, $\hat{X}_k$, of the unknown system. The second step involves retrieving the system matrices, $A$, $B$, $C$, $D$, from either the obtained extended observability matrix or the estimated system states [1]. Figure 2.4 illustrates the two general steps associated with subspace algorithms.

Figure 2.4: Basic steps to subspace identification methods [1].

The following input-output matrix equation (the extended state-space model of equation 2.45) played a very important role in the development of subspace identification [1]:
$$Y_f = \Gamma_i X_i + H_i^d U_f + H_i^s M_f + N_f. \qquad (2.47)$$
The term $\Gamma_i$ is the extended observability matrix, where $H_i^d$ and $H_i^s$ are, respectively, the deterministic and the stochastic lower block-triangular Toeplitz matrices. The future block Hankel matrices formed from the process noise, $w_k$, and measurement noise, $v_k$, are denoted $M_f$ and $N_f$ respectively. All subspace methods start from the input-output equation 2.47 [1]. It states that the block Hankel matrix containing the future outputs, $Y_f$, is related in a linear way to the future input block Hankel matrix, $U_f$, and the future state sequence, $X_i$. The basic idea of subspace identification is to recover the $\Gamma_i X_i$ term of equation 2.47. Knowledge of $\Gamma_i$ or $X_i$ leads to the system parameters, and the singular value decomposition (SVD) of the term $\Gamma_i X_i$ gives the order of the system. This is possible because $\Gamma_i X_i$ is rank deficient [1].
The extended observability matrix or the system states are estimated by orthogonal projection, which is only possible under the assumption that the noise is uncorrelated with the inputs. The first step in obtaining an estimate of the term $\Gamma_i X_i$ is to project the row space of $Y_f$ onto the orthogonal complement of the row space of $U_f$ as follows [1]:
$$Y_f \Pi_{U_f^\perp} = \Gamma_i X_i \Pi_{U_f^\perp} + H_i^d U_f \Pi_{U_f^\perp} + H_i^s M_f \Pi_{U_f^\perp} + N_f \Pi_{U_f^\perp}, \qquad (2.48)$$
from which, since $U_f \Pi_{U_f^\perp} = 0$ and the noise terms are unaffected by the projection, it follows that
$$Y_f \Pi_{U_f^\perp} = \Gamma_i X_i \Pi_{U_f^\perp} + H_i^s M_f + N_f. \qquad (2.49)$$
The next step is to weight equation 2.49 on the left and the right with matrices $W_1$ and $W_2$ as follows:
$$W_1\left[Y_f \Pi_{U_f^\perp}\right]W_2 = W_1\left[\Gamma_i X_i \Pi_{U_f^\perp}\right]W_2 + W_1\left[H_i^s M_f + N_f\right]W_2. \qquad (2.50)$$
The weight matrices $W_1$ and $W_2$, together with the input $U_f$, must be chosen such that the following conditions are satisfied [1]:
$$\begin{aligned}
&1.\;\; \mathrm{rank}\left(W_1\Gamma_i\right) = \mathrm{rank}\left(\Gamma_i\right), \\
&2.\;\; \mathrm{rank}\left(X_i\,\Pi_{U_f^\perp}W_2\right) = \mathrm{rank}\left(X_i\right), \\
&3.\;\; W_1\left[H_i^s M_f + N_f\right]W_2 = 0.
\end{aligned} \qquad (2.51)$$
The first two conditions guarantee that the system order, $n$ (a property of $\Gamma_i X_i$), is preserved after projection, while the third condition expresses the necessity for the weight $W_2$ to be uncorrelated with the noise sequences $w_k$ and $v_k$ [1]. From these conditions, the weighted input-output matrix is defined as
$$\mathcal{O}_i \stackrel{\mathrm{def}}{=} W_1\left[Y_f \Pi_{U_f^\perp}\right]W_2, \qquad (2.52)$$
and the SVD of equation 2.52 is
$$\mathrm{svd}(\mathcal{O}_i) = \begin{pmatrix} U_1 & U_2 \end{pmatrix}\begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix}\begin{pmatrix} V_1^T \\ V_2^T \end{pmatrix}, \qquad (2.53)$$
where $U_1$ and $U_2$ are the unitary matrices of left singular vectors, $V_1$ and $V_2$ are the unitary matrices of right singular vectors, and $S_1$ and $S_2$ contain the singular values. From expression 2.53 the following results are obtained [1]:
$$n = \mathrm{rank}\left(\mathcal{O}_i\right), \qquad (2.54)$$
$$W_1\Gamma_i = U_1 S_1^{1/2}, \qquad (2.55)$$
$$X_i\,\Pi_{U_f^\perp}W_2 = S_1^{1/2}V_1^T. \qquad (2.56)$$
From equations 2.54-2.56 it is possible to obtain the system parameter estimates by using either the extended observability matrix or the estimated system states. The system parameters, $\begin{bmatrix} A & B \\ C & D \end{bmatrix}$, can be obtained from the estimated system states by solving equation 2.57 with the least-squares method:
$$\begin{pmatrix} \hat{X}_{i+1} \\ Y_i \end{pmatrix} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{pmatrix} \hat{X}_i \\ U_i \end{pmatrix} + \begin{pmatrix} \rho_w \\ \rho_v \end{pmatrix}, \qquad (2.57)$$
where $\rho_w$ and $\rho_v$ are the residual matrices of the estimation process. The least-squares problem to be solved can be defined as follows [1]:
$$\min_{A,B,C,D}\;\mathrm{tr}\left(E_{LS}E_{LS}^H\right), \qquad (2.58)$$
where
$$E_{LS} = \begin{pmatrix} \hat{X}_{i+1} \\ Y_i \end{pmatrix} - \begin{bmatrix} A & B \\ C & D \end{bmatrix}\begin{pmatrix} \hat{X}_i \\ U_i \end{pmatrix}, \qquad (2.59)$$
the trace $\mathrm{tr}$ is the sum of the diagonal elements, and $E_{LS}^H$ is the complex conjugate transpose of $E_{LS}$. The covariance matrices, $Q$, $S$, $R$, are estimated from the residuals $\rho_w$ and $\rho_v$ as follows [1]:
$$\begin{bmatrix} Q & S \\ S^T & R \end{bmatrix}_i = \frac{1}{i}\begin{pmatrix} \rho_w \\ \rho_v \end{pmatrix}\begin{pmatrix} \rho_w^T & \rho_v^T \end{pmatrix} \geq 0, \qquad (2.60)$$
where the subscript $i$ denotes a finite, introduced bias which disappears as $i \to \infty$. Consult Favoreel et al. [1] for the solution of the system parameters using the estimated extended observability matrix.

Subspace Methodologies

Subspace identification methods are regarded as an alternative to PEM identification methods. Subspace identification yields a multivariable system model without the need for special parameterisations, which would require significant prior knowledge and non-convex optimisation [45]. Most subspace methods can fail (when the unbiased-estimate property of the subspace method is lost) when used with closed-loop system data, even for large data sets [7, 45]. With closed-loop data, consistency of the first least-squares estimation (see equation 2.58) breaks down.
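The two core computations of equations 2.53-2.57, order estimation via the SVD and least-squares recovery of the system matrices from state estimates, can be sketched as follows. All sizes and matrix values are illustrative assumptions, and the state estimates are taken as exact and noise-free for the check.

```python
import numpy as np

rng = np.random.default_rng(4)

# --- Order estimation (eqs. 2.53-2.55): the numerical rank of O_i is the
# number of singular values above a (user-chosen) threshold.
O_i = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 40))  # rank-3 by construction
U, s, Vt = np.linalg.svd(O_i)
n = int(np.sum(s > 1e-8 * s[0]))                  # system order, eq. 2.54
W1_Gamma = U[:, :n] @ np.diag(np.sqrt(s[:n]))     # W1*Gamma_i = U1*S1^(1/2), eq. 2.55

# --- Least-squares recovery of [A B; C D] from state estimates, eq. 2.57
A = np.array([[0.9, 0.1], [0.0, 0.8]]); B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]]);             D = np.array([[0.2]])
X = rng.standard_normal((2, 50)); Uin = rng.standard_normal((1, 50))
X_next = A @ X + B @ Uin                          # state recursion (noise-free)
Y = C @ X + D @ Uin                               # output equation (noise-free)
lhs = np.vstack([X_next, Y]).T                    # stacked left-hand side of eq. 2.57
rhs = np.vstack([X, Uin]).T                       # stacked regressors [X_i; U_i]
Theta = np.linalg.lstsq(rhs, lhs, rcond=None)[0].T   # recovered [[A, B], [C, D]]
```

With noisy state estimates the same solve yields the residuals $\rho_w$, $\rho_v$ of equation 2.57, from which the noise covariances of equation 2.60 follow.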
The most commonly used subspace identification methods developed over the last 20 years are the numerical algorithms for subspace state-space system identification (N4SID) [71], multivariable output error state-space (MOESP) [72], canonical variate analysis (CVA) [73], and the basic-4SID and IV-4SID algorithms [74]. By choosing the weighting matrices $W_1$ and $W_2$ appropriately, all subspace algorithms for LTI systems can be interpreted as one of the mentioned algorithms. Table 2.1 defines the weighting matrices for the corresponding subspace identification methods.

Table 2.1: Different existing subspace identification algorithms in a unifying framework [1]

Method       $W_1$                                            $W_2$
N4SID        $I_{li}$                                         $(W_p \Pi_{U_f^\perp})^\dagger W_p$
CVA          $[Y_f \Pi_{U_f^\perp} Y_f^T]^{-1/2}$             $(W_p \Pi_{U_f^\perp})^\dagger (W_p \Pi_{U_f^\perp})$
MOESP        $I_{li}$                                         $(W_p \Pi_{U_f^\perp})^\dagger (W_p \Pi_{U_f^\perp})$
Basic-4SID   $I_{li}$                                         $I_j$
IV-4SID      $I_{li}$                                         $\Phi$

From Table 2.1, the first two algorithms use the state estimates $\hat{X}_i$ (the right singular vectors) to find the system matrices, while the last three algorithms are based on the extended observability matrix $\hat{\Gamma}_i$ (the left singular vectors). The matrix $\Phi$ of the IV-4SID method contains the instrumental variables, and $\dagger$ denotes the Moore-Penrose pseudo-inverse. The subspace identification methods listed in Table 2.1 all work well for open-loop identification; the N4SID method proposed by Van Overschee and De Moor [71] was subsequently modified for closed-loop system identification [75]. The aim of this section is to survey new subspace identification methodologies and results for closed-loop systems. The interested reader can consult [71, 74] for a more in-depth discussion of open-loop subspace identification algorithms and the applicable theory. The technical report of Bauer [76] is especially insightful: a comparative survey focusing on the weaknesses and strengths of the subspace identification techniques listed in Table 2.1.
A Free Model Reduction Algorithm: A model-reduction-free algorithm for closed-loop subspace identification was proposed by Mathieu and Mohammed [77]. The subspace identification method splits into three steps:

1. The first step consists of determining the Markov parameters of the closed-loop sensitivity functions, $G_{cl}$, which can be defined as follows:
$$G_{cl} = \begin{bmatrix} S_{r_1 u} & S_{r_2 u} \\ S_{r_1 y} & S_{r_2 y} \end{bmatrix} = \begin{bmatrix} (I_m + CG)^{-1} & (I_m + CG)^{-1}C \\ G(I_m + CG)^{-1} & G(I_m + CG)^{-1}C \end{bmatrix}, \qquad (2.61)$$
where the sensitivity functions are defined for the closed-loop system between the reference signals $r_1$, $r_2$ and the inputs and outputs $u$, $y$ of the controller $C$ and plant $G$. One determines the first $2i-1$ Markov parameters of $G_{cl}$, i.e. $H_{cl/2i}$ [77]. The Markov parameter identification involved can be performed appropriately using the PEM, initialised by the N4SID method [77].

2. In the second step, the impulse response of the system is identified from the results obtained in the first step. The focus of step two is to estimate a finite number of Markov parameters of the system $G$, i.e. the product $\Gamma\Delta_{G/i}$, where $\Gamma$ and $\Delta$ are the extended observability matrix and the reversed extended controllability matrix. Mathieu and Mohammed [77] derived useful expressions for $\Gamma_{cl/i}$, $\Delta_{cl/i}$ and $H_{cl/i}$ respectively, which can be defined as follows:
$$\Gamma_{cl/i} = L_i^T\begin{bmatrix} -W_i^{-1}H_{C/i}\Gamma_{G/i} & W_i^{-1}\Gamma_{C/i} \\ \left(I_{ip} - H_{G/i}W_i^{-1}H_{C/i}\right)\Gamma_{G/i} & H_{G/i}W_i^{-1}\Gamma_{C/i} \end{bmatrix}T, \qquad (2.62)$$
$$\Delta_{cl/i} = T^{-1}\begin{bmatrix} \Delta_{G/i}W_i^{-1} & \Delta_{G/i}W_i^{-1}H_{C/i} \\ -\Delta_{C/i}H_{G/i}W_i^{-1} & \Delta_{C/i}\left(I_{ip} - H_{G/i}W_i^{-1}H_{C/i}\right) \end{bmatrix}L_i, \qquad (2.63)$$
$$H_{cl/i} = L_i^T\begin{bmatrix} W_i^{-1} & W_i^{-1}H_{C/i} \\ H_{G/i}W_i^{-1} & H_{G/i}W_i^{-1}H_{C/i} \end{bmatrix}L_i. \qquad (2.64)$$
The matrix $W_i = I_{im} + H_{C/i}H_{G/i}$, $T$ is a non-singular $n_{cl} \times n_{cl}$ transformation matrix, and $L_i$ is a non-singular $i(m+p) \times i(m+p)$ matrix [77].
From equations 2.62-2.63 it follows that [77]
$$\begin{bmatrix} \Gamma\Delta_{G/i} & 0 \\ 0 & \Gamma\Delta_{C/i} \end{bmatrix} = \begin{bmatrix} -H_{G/i} & I_{ip} \\ I_{im} & H_{C/i} \end{bmatrix} L_i\,\Gamma\Delta_{cl/i}\,L_i^T \begin{bmatrix} I_{ip} & -H_{C/i} \\ H_{G/i} & I_{im} \end{bmatrix}. \qquad (2.65)$$
Mathieu and Mohammed [77] introduced the following form for $L_i\,\Gamma\Delta_{cl/i}\,L_i^T$:
$$L_i\,\Gamma\Delta_{cl/i}\,L_i^T = \begin{bmatrix} H_{11} & 0 & H_{13} & 0 \\ H_{21} & H_{11} & H_{23} & H_{13} \\ H_{31} & 0 & H_{33} & 0 \\ H_{41} & H_{31} & H_{43} & H_{33} \end{bmatrix} = \begin{bmatrix} H_{1:2/1:2} & H_{1:2/3:4} \\ H_{3:4/1:2} & H_{3:4/3:4} \end{bmatrix}. \qquad (2.66)$$
Depending on the composition of the external signal $r(t)$, the Markov parameters, $H_{G/2i}$, can be determined as follows [77]:
$$H_{G/2i} = \begin{cases} H_{3:4/1:2}\,H_{1:2/1:2}^{-1} & \forall\; r_1(t) \\ H_{3:4/3:4}\,H_{1:2/3:4}^{-1} & \forall\; r_2(t) \\ H_{3:4/1:4}\,H_{1:2/1:4}^{-1} & \forall\; r_1(t),\, r_2(t) \end{cases} \qquad (2.67)$$
where $r_1$ and $r_2$ are the reference signals acting on the outputs $y$ and control inputs $u$ of the closed-loop system respectively. The term $\Gamma\Delta_{G/i}$ can be extracted from $H_{G/2i}$ [77], or estimated from equation 2.65 as follows:
$$\Gamma\Delta_{G/i} = \begin{pmatrix} -H_{G/i} & I_{ip} \end{pmatrix}\begin{bmatrix} H_{21} & H_{23} \\ H_{41} & H_{43} \end{bmatrix}\begin{pmatrix} I_{ip} \\ H_{G/i} \end{pmatrix}. \qquad (2.68)$$

3. The third step consists of determining the order of the system, together with a state-space realisation of the system. The order is determined using the singular value decomposition method; see equations 2.53 and 2.54 for the determination of the system rank. The state-space realisation is carried out using the available algorithms proposed by Zeiger and McEwen [43] and Qin and Ljung [78].

Mathieu and Mohammed [77] used the proposed closed-loop subspace identification method to identify a plant setup of two circular plates rotated by an electrical servo motor with flexible shafts. The proposed method was compared with the direct identification approach under the PEM algorithm, initialised with the N4SID estimate. The direct identification approach led to biased models, while the proposed subspace method performed very well, identifying the modes without bias.
PARSIM-E algorithm: Qin and Ljung [79] revealed that typical subspace identification algorithms use a non-parsimonious model formulation, with extra terms in the model that appear to be non-causal. These extra terms are included to conveniently perform subspace projection, but they inflate the variance of the estimates and are partially responsible for the loss of closed-loop identifiability. Qin and Ljung [79] proposed a subspace method in which these non-causal terms are removed, making the model parsimonious. The proposed method also removes the condition that the future input u_k be uncorrelated with the past innovation e_k, a condition that does not hold for closed-loop data.

The non-causal terms are excluded from the model by partitioning the extended state-space model (see 2.47) row-wise [79]. The partitioned extended state-space model can be defined as follows:

Y_{fi} = Γ_{fi} X_k + H_{fi} U_i + G_{fi} E_i,   i = 1, 2, …, f,   (2.69)

where the subscript f denotes the future horizon. The above equation is guaranteed to be causal, resulting in a parsimonious model representation [79]. By eliminating e(t) = [w(t)^T v(t)^T]^T in the innovation model 2.45, through iteration, the partitioned extended state-space model can be reformulated as follows [79]:

Y_{fi} = Γ_{fi} L_z Z_p + Γ_{fi} A_K^p X_{k−p} + H_{fi} U_i + G_{fi} E_i,   i = 1, 2, …, f,   (2.70)

where the subscript p denotes the past horizon and

L_z := [ ∆_p(A_K, K)   ∆_p(A_K, B_K) ],
∆_p(A, B) := [ A^{p−1}B   ⋯   AB   B ],
A_K := A − KC,   B_K := B − KD,
Z_p := [ Y_p^T   U_p^T ]^T.   (2.71)

The second term on the right-hand side of equation 2.70 tends to zero as p tends to infinity. The least squares estimate for the parsimonious model can be defined as follows [79]:

[ Γ̂_{fi} L_z   Ĥ_{fi} ] = Y_{fi} \begin{bmatrix} Z_p \\ U_i \end{bmatrix}^{\dagger},   i = 1, 2, …, f.   (2.72)

The estimates obtained using equation 2.72 are biased for closed-loop identification. Qin and Ljung [79] treat the estimated innovations as known data, so that the subsequent projections do not require future inputs to be uncorrelated with past innovations. The resulting least squares estimate can be defined as follows:

[ Γ̂_{fi} L_z   Ĥ_{fi}   Ĝ_{fi} ] = Y_{fi} \begin{bmatrix} Z_p \\ U_i \\ Ê_{i−1} \end{bmatrix}^{\dagger},   (2.73)

where Ĝ_{fi} = [ C A_K^{i−2} K   C A_K^{i−3} K   ⋯   C K ]. The innovation data can be built up recursively as follows [79]:

Ê_i = \begin{bmatrix} Ê_{i−1} \\ Ê_{fi} \end{bmatrix}.   (2.74)

Qin and Ljung [79] conducted a comparative simulation study between the proposed PARSIM-E algorithm and the well-known MOESP algorithm, for both open-loop and closed-loop data. For open-loop data the two methods performed equally well, with no observable difference, while the closed-loop results differed markedly: the PARSIM-E algorithm gave the best estimates, without bias, whereas the MOESP algorithm failed with closed-loop identification.

SSARX algorithm: Jansson [80] proposed a stacked-outputs ARX (SSARX) algorithm for subspace identification. The proposed method combines CCA subspace identification theory with ARX modelling, and is able to handle both open-loop and closed-loop data. A set of estimated Markov parameters is used to enforce more structure in the data model used by the subspace identification approach. The state-space equations 2.45 can be reformulated as follows [80]:

x(t + 1) = Ã x(t) + B̃ u(t) + K y(t)
y(t) = C x(t) + D u(t) + e(t).   (2.75)

The matrices Ã and B̃ are defined as follows:

Ã = A − KC,
B̃ = B − KD.   (2.76)
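The predictor (innovation) form of equation 2.75 can be checked numerically: driving the predictor with the measured inputs and outputs reproduces the state of the original innovation model, and the one-step prediction errors equal the innovations. A minimal scalar sketch with illustrative values (not a model of any plant in this chapter):

```python
import numpy as np

# Innovation-form model (2.45): x+ = A x + B u + K e,  y = C x + D u + e.
# All numerical values below are illustrative assumptions.
A, B, C, D, K = 0.8, 1.0, 1.0, 0.0, 0.3
At, Bt = A - K * C, B - K * D          # Ã = A − KC, B̃ = B − KD   (2.76)

rng = np.random.default_rng(0)
u = rng.standard_normal(200)
e = 0.1 * rng.standard_normal(200)

# Simulate the innovation form to generate output data y.
x, y = 0.0, np.zeros(200)
for t in range(200):
    y[t] = C * x + D * u[t] + e[t]
    x = A * x + B * u[t] + K * e[t]

# The predictor form (2.75) reconstructs the same state sequence from the
# measured (u, y) alone; the innovation e is no longer needed as an input.
xh, yh = 0.0, np.zeros(200)
for t in range(200):
    yh[t] = C * xh + D * u[t]          # one-step-ahead prediction
    xh = At * xh + Bt * u[t] + K * y[t]

# The prediction errors recover the innovations.
assert np.allclose(y - yh, e)
```

This substitution of the measured output for the noise term is precisely what makes the predictor form attractive for closed-loop data.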
From equation 2.75, the extended state-space equation 2.47 can be redefined as follows:

y_f(t) = Γ̂ x(t) + Φ̂ u_f(t) + Ψ̂ y_f(t) + e_f(t),   (2.77)

where the subscript f denotes the future horizon. The matrices Γ̂, Φ̂ and Ψ̂ are defined as follows [80]:

Γ̂ = \begin{bmatrix} C \\ C Â \\ ⋮ \\ C Â^{f−1} \end{bmatrix},   (2.78)

Φ̂ = \begin{bmatrix} D & 0 & ⋯ & 0 \\ C B̂ & D & ⋱ & ⋮ \\ ⋮ & ⋱ & ⋱ & 0 \\ C Â^{f−2} B̂ & ⋯ & C B̂ & D \end{bmatrix},   (2.79)

Ψ̂ = \begin{bmatrix} 0 & 0 & ⋯ & 0 \\ C K & 0 & ⋱ & ⋮ \\ ⋮ & ⋱ & ⋱ & 0 \\ C Â^{f−2} K & ⋯ & C K & 0 \end{bmatrix}.   (2.80)

Equation 2.77 can be regarded as the stacked outputs of an ARX model, which in general is of infinite order. However, under the assumption that the matrix Â is stable, it can be approximated by truncating the ARX model, similar to what is done in the CCA method [80]. The main idea behind the method of Jansson [80] is to first estimate a high-order ARX model to obtain estimates of the impulse response coefficients D, C Â^k B̂ and C Â^k K for k = 0, 1, …, f − 2. Using the estimated impulse response coefficients in Φ̂ and Ψ̂, we can write [80]

z(t) := y_f(t) − Φ̂ u_f(t) − Ψ̂ y_f(t) = Γ̂ x̂(t) + e_f(t),   (2.81)

where x̂(t) is the state estimate, which can be defined as

x̂(t) = ∆ p(t),   (2.82)

where ∆ is a matrix of unknown coefficients and p(t) a vector containing inputs and outputs delayed up to p steps back [80]:

p(t) = [ y^T(t−1)   y^T(t−2)   ⋯   y^T(t−p)   u^T(t−1)   u^T(t−2)   ⋯   u^T(t−p) ]^T.   (2.83)

Similar to CCA, equation 2.81 can be regarded as a low-rank linear regression in Γ̂∆ [80]. The main interest lies in the estimation of ∆, so that the state in equation 2.82 can subsequently be estimated. This can be done by performing a correlation analysis on equation 2.81 between z(t) and p(t) as follows [80]:

M = R_{zz}^{−1/2} R_{zp} R_{pp}^{−1/2}.   (2.84)

The sample correlation matrix between the two signals z(t) and p(t) is defined as follows:

R_{zp} = \frac{1}{N} \sum_{t=1}^{N} z(t) p^T(t).   (2.85)

The next step of CCA is to compute the singular value decomposition of M, U S V^T = M, where the CCA estimate of ∆ is then given by

∆̂ = V_n^T R_{pp}^{−1/2},   (2.86)

and the estimated state sequence is [80]

x̂(t) = V_n^T R_{pp}^{−1/2} p(t).   (2.87)

The system matrices can then be obtained by linear regression in the state-space model equations 2.75, replacing the true state with the estimated state of equation 2.87 [80]. The SSARX subspace identification method proposed by Jansson [80] outperformed many existing subspace identification methods (MOESP, CCA, N4SID), with results very close to the performance of the PEM. A comparative simulation study between the system identification methods was done for both open-loop and closed-loop data.

Virtual Closed Loop algorithm: Agüero and Goodwin [6] proposed a method based on the indirect approach for closed-loop system identification. Typically, indirect procedures require knowledge of the true controller. Problems that may arise with the use of the true controller are that the controller is often non-linear, e.g. MPC, and may have a high gain in critical regions, masking the plant response [6]. Agüero and Goodwin [6] developed a subspace identification method based on virtual feedback, which uses a known linear virtual controller in the analysis, irrespective of the (typically non-linear) true controller. The virtual controller need have no dependence on the true controller. The key assumption is that a linear controller is known that will stabilise the process [6]. Such a controller can be defined as follows:

C̄ = \frac{P}{L}.   (2.88)

An observer polynomial, E, is introduced, and the polynomial N = E − L is used in the construction of the virtual closed loop. The construction of the virtual closed loop is illustrated in figure 2.5, where the virtual controller is added and subtracted so as not to modify the true closed loop.
A virtual controller is used when the system G to be estimated is open-loop unstable or marginally stable and a subspace method is used [6]. A virtual reference for the system can be defined as follows [6]:

ū_t = \frac{L}{E} u_t + \frac{P}{E} y_t = u_t − \frac{N}{E} u_t + \frac{P}{E} y_t.   (2.89)

From equation 2.89, and the system illustrated in figure 2.5, the following relationship is satisfied [6]:

\begin{bmatrix} y_t \\ u_t \end{bmatrix} = \begin{bmatrix} \dfrac{B_0 E}{A_0 L + B_0 P} \\[2mm] \dfrac{A_0 E}{A_0 L + B_0 P} \end{bmatrix} ū_t + \begin{bmatrix} \dfrac{A_0 L}{A_0 L + B_0 P} \\[2mm] −\dfrac{A_0 P}{A_0 L + B_0 P} \end{bmatrix} v_t = T̄_0 ū_t + H̄_0^{vcl} v_t,   (2.90)

where the process is given by G = B_0/A_0, T̄_0 = [T̄_0^y  T̄_0^u]^T is the virtual closed-loop transfer function, and H̄_0^{vcl} = [H̄_0^y  H̄_0^u]^T the noise transfer function [6]. P and L are polynomials and E is an observer polynomial.

Figure 2.5: Virtual Closed Loop construction [6].

Agüero and Goodwin [6] state that a consistent estimate of T̄_0 can be obtained by executing three consecutive steps. The first step involves the calculation of a matrix that contains the d-step-ahead predictions. Using the singular value decomposition, the states are then calculated accordingly. Finally, in step three, the state-space matrices are estimated by linear regression.

For the direct approach, the process estimate can be calculated from either the T̂^y or the T̂^u estimate as follows:

Ĝ = \frac{T̂^y L̄}{1 − T̂^y P̄}   or   Ĝ = \frac{1 − T̂^u L̄}{T̂^u P̄},   (2.91)

where L̄ = L/E and P̄ = P/E. An alternative way to recover the process estimate is the joint input-output approach [6]:

Ĝ = \frac{T̂^y}{T̂^u}.   (2.92)

The method proposed by Agüero and Goodwin [6] allows the identification of accurate models with long prediction horizons operating in closed loop. The restrictions usually associated with the indirect approach are avoided by the implementation of a virtual closed loop.
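The recovery of the plant from the virtual closed-loop transfer functions (equations 2.91 and 2.92) can be verified numerically on the unit circle. A minimal sketch with illustrative first-order polynomials (all coefficients are assumptions, not taken from [6]):

```python
import numpy as np

def evalp(coeffs, z):
    """Evaluate a polynomial c0 + c1 z^-1 + c2 z^-2 + ... at complex z."""
    return sum(c * z ** (-k) for k, c in enumerate(coeffs))

# Illustrative polynomials in the delay operator (stable virtual closed loop).
A0 = [1.0, -0.7]          # true plant denominator
B0 = [0.0, 0.5]           # true plant numerator (G = B0/A0)
P  = [0.2]                # virtual controller numerator
L  = [1.0, -0.4]          # virtual controller denominator
E  = [1.0, -0.1]          # observer polynomial

for z in np.exp(1j * np.linspace(0.1, np.pi, 50)):
    a0, b0 = evalp(A0, z), evalp(B0, z)
    p, l, e = evalp(P, z), evalp(L, z), evalp(E, z)
    den = a0 * l + b0 * p
    Ty = b0 * e / den                      # virtual closed loop, output channel
    Tu = a0 * e / den                      # virtual closed loop, input channel
    G_joint = Ty / Tu                      # joint input-output recovery (2.92)
    G_direct = (Ty * l / e) / (1 - Ty * p / e)   # direct recovery (2.91)
    assert np.isclose(G_joint, b0 / a0)
    assert np.isclose(G_direct, b0 / a0)
```

Both recovery formulas return the same frequency response as the underlying plant G = B_0/A_0, regardless of the virtual controller chosen, which is the point of the construction.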
Advantages of the proposed method are that the true controller need not be known or used in the identification process. The method consequently allows the use of subspace methods, avoiding numerical difficulties [6].

Two Stage ORT-based subspace method: Katayama and Tanaka [8] proposed a subspace method for identifying open-loop systems operating in closed loop. The proposed method is based on two successive orthogonal decompositions (ORT). The first LQ decomposition, used for data preprocessing, calculates the deterministic components of the joint input-output process. The second decomposition uses the ORT method to compute the system matrices. The proposed method is a subspace version of the two-step projection method, and is consequently known as the Two Stage Orthogonal Decomposition Subspace (TSODS) method.

Suppose we are given a finite amount of input-output data and exogenous inputs. Let ℛ denote the subspace spanned by the finite history of second-order variables of the exogenous inputs [8]. Since the measurement noise v_k and the process noise w_k are orthogonal to the space ℛ, the orthogonal projection of equation 2.45 onto the subspace ℛ yields the following projected state-space result [8]:

x̂_{pd}(t + 1) = A_{pd} x̂_{pd}(t) + B_{pd} û(t)
ŷ(t) = C_{pd} x̂_{pd}(t) + D_{pd} û(t)   (2.93)
x̂_{pd}(0) = Ê{ x_{pd}(0) | ℛ }.

The orthogonal projections of the state vector, input and output are respectively [8]

x̂_{pd}(t) = Ê{ x_{pd}(t) | ℛ }
û(t) = Ê{ u_d(t) | ℛ }   (2.94)
ŷ(t) = Ê{ y_d(t) | ℛ },

where the subscript d represents the deterministic portions after the projection of the state-space onto ℛ, and p denotes the past samples. From the projection one notes that ŷ(t) is described by the same state-space matrices \begin{bmatrix} A_{pd} & B_{pd} \\ C_{pd} & D_{pd} \end{bmatrix}. It is thus desirable to use the projected data, which consist of deterministic values, instead of the original data with stochastic components.
The projected data can easily be obtained by an LQ decomposition [8]. It should be noted, however, that the projection cannot remove all the stochastic components from the joint input-output process; the projected data will contain some residuals. It is thus essential to apply the ORT method in the estimation of the state-space matrices to cope with the residuals of the projected data [8]. The extended state input-output matrix equation (after LQ decomposition) can be defined from equation 2.93 as

Ŷ_f = Γ_k X̂_k + Ψ_k Û_f,   (2.95)

where f denotes the future. The matrices Γ_k and Ψ_k are the extended observability matrix and a Toeplitz matrix respectively [8]. The ORT-based method to identify the state-space matrices A_{pd}, B_{pd}, C_{pd}, D_{pd} can be described as follows, given the block Hankel matrices R, U and Y [8]:

1. compute the LQ decomposition,
2. compute the LQ decomposition of the deterministic components,
3. estimate the extended observability matrix by singular value decomposition,
4. using the shift-invariance property of the extended observability matrix, compute the system matrices C_{pd} and A_{pd} respectively,
5. the term Ψ_k is linear with respect to the parameters B_{pd} and D_{pd}, so that these parameters can easily be obtained.

The proposed method of Katayama and Tanaka [8] removes stochastic components in the data by preprocessing. This results in an accurate estimate of the model of the plant. An advantage of the Two Stage ORT method is that it can easily be applied to multivariable systems in the manner of the direct approach, where the algorithm is quite simple with little bias error [8].

Discussion

This section highlighted some of the most prominent subspace algorithms used in system identification. Subspace algorithms for both open-loop and closed-loop data were considered, with the emphasis placed on closed-loop subspace system identification algorithms.
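The preprocessing idea behind the TSODS method discussed above, projecting noisy joint input-output data onto the row space of the exogenous data to strip the stochastic components, can be sketched numerically. This is a minimal illustration with synthetic data matrices, not the full algorithm of [8]; the LQ decomposition of a matrix X is computed here as the transpose of the QR decomposition of X^T:

```python
import numpy as np

def project_onto_rowspace(W, R):
    """Orthogonal projection of the rows of W onto the row space of R.

    Equivalent to the L-factor reasoning of an LQ decomposition of the
    stacked data matrix [R; W]: Q spans R's row space, so W @ Q @ Q.T is
    the deterministic part of W (the part explained by the exogenous data).
    """
    Q, _ = np.linalg.qr(R.T)      # orthonormal basis of R's row space
    return W @ Q @ Q.T

rng = np.random.default_rng(1)
N = 500
R = rng.standard_normal((2, N))           # exogenous (reference) data
W_det = rng.standard_normal((3, 2)) @ R   # deterministic component of the data
noise = 0.5 * rng.standard_normal((3, N)) # stochastic component
W = W_det + noise                         # measured joint input-output data

W_hat = project_onto_rowspace(W, R)
# W_hat is close to W_det: the noise is (nearly) orthogonal to R's row
# space, so the projection suppresses it, as in the first TSODS stage.
```

As the text notes, the projection does not remove the stochastic components exactly for finite data, which is why the second (ORT) stage is still needed.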
Conventional open-loop subspace algorithms (N4SID, MOESP, CCA, IV-4SID) are not suitable for closed-loop system identification. Subspace identification methods are difficult to apply to closed-loop process data because the use of an extended future horizon introduces correlation between past input data and noise [48]. A drawback of subspace algorithms was revealed by Qin and Ljung [79]: non-causal terms added to the system model for the sake of subspace projection lead to inflated variances. Another drawback experienced with closed-loop subspace algorithms is the required condition that the inputs u_k(t) be uncorrelated with the noise e_k(t). The SSARX method of Jansson [80] and the PARSIM-E method of Qin and Ljung [79] address this condition by using pre-estimation to separate the two terms. The Two Stage ORT-based subspace method proposed by Katayama and Tanaka [8] uses ORT projection to filter out stochastic noise components. The indirect approach does not require accurate noise models; a disadvantage, however, is that a-priori controller information is required, which complicates its implementation [45]. The virtual feedback solution proposed by Agüero and Goodwin [6] eliminates the requirement of a known controller model, making the indirect approach feasible.

2.3.6 Statistical Approaches to Change Detection in System Parameters

This section considers the problem of detecting changes in the dynamics of a system under surveillance in real-time, without being disturbed by changes in the dynamics of the input signal. Two statistical methods used for fault detection are the Asymptotic Statistical Local Approach [4, 37] and Principal Component Analysis (PCA) [81, 82, 39].
Statistics-based methods using PCA for fault detection are regarded as knowledge-based fault detection methods [81]. The strength of knowledge-based methods is that high-dimensional data can be transformed into lower dimensions, in which the important statistical information is captured and used for process monitoring [39]. The PCA method will not be considered further, since it is a model-free fault detection approach and does not explicitly focus on system parameters for fault monitoring. However, Palma et al. [39] have proposed a method where PCA is used to reduce the dimensionality of system parameters, instead of the conventional input-output data, to detect faulty parameter deviations.

The Asymptotic Local Approach is a methodology for parameter change detection based on hypothesis tests. Given adequate information, all fault detection problems can be reduced to that of detecting changes in the mean of a Gaussian variable [4]. The Gaussian property is obtained asymptotically as the sample size tends to infinity and the magnitude of the fault tends to zero [37]. The main advantage of the Asymptotic Local Approach is its ability to assess the level of significance of parameter discrepancies with respect to uncertainties. The statistical tests based on the local approach also tell us whether the relative size of the parameter discrepancy of the monitored system is significant or not [4].

The Asymptotic Local Approach distinguishes between two residuals, namely primary residuals and improved residuals. A primary residual is a vector-valued function or process K(θ_0, U_i, Y_i) whose mean value switches from zero, when the process is operating normally, towards non-zero when a fault occurs [4].
The primary residual function can be defined as the gradient of the least squares prediction error,

K(θ_0, U_i, Y_i) = \frac{∂}{∂θ} f^T(θ, U_i, Y_i) \Big|_{θ=θ_0} \, [Y_i − f(θ_0, U_i, Y_i)],   (2.96)

where θ_0 denotes the nominal values of the parameters of the process f, and i indicates the sample corresponding to the time instant i [37]. The improved residual is a vector-valued function ζ_N(θ_0), which builds on K(θ_0, U_i, Y_i) and has a known distribution [4]. The improved residual can be defined as [37]

ζ_N(θ_0) = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} K(θ_0, U_i, Y_i),   (2.97)

where N denotes the sample size. The quantity ζ_N(θ_0) is evaluated to perform fault detection [37]. For N ≫ 0, ζ_N(θ_0) has the following distribution [37]:

ζ_N(θ_0) → \begin{cases} \mathcal{N}(0, Σ) & ⇒ θ = θ_0 \\ \mathcal{N}(M∆θ, Σ) & ⇒ θ = θ_0 + ∆θ \end{cases}   (2.98)

where

M = E\left[ \frac{∂}{∂θ} f^T(θ, U_i, Y_i) \, \frac{∂}{∂θ} f(θ, U_i, Y_i) \right] \Big|_{θ=θ_0}   (2.99)

denotes the incidence matrix, and Σ denotes the covariance matrix of the improved residuals. The incidence matrix can be approximated according to the local approach as follows [37]:

M ≅ \frac{1}{N} \sum_{i=1}^{N} \frac{∂}{∂θ} f^T(θ, U_i, Y_i) \, \frac{∂}{∂θ} f(θ, U_i, Y_i) \Big|_{θ=θ_0}.   (2.100)

Within the framework of hypothesis tests, the problem of fault detection can be stated as testing the null hypothesis, defined as follows:

H_0: θ = θ_0 ⇒ no fault
H_1: θ = θ_0 + ∆θ ⇒ fault.   (2.101)

The decision can be taken on the basis of the Generalised Likelihood Ratio which, for these hypotheses, can be defined as [37]

t = 2 \ln \frac{\max_{∆θ} P_{H_1}(ζ_N)}{P_{H_0}(ζ_N)},   (2.102)

which is the ratio between the probability that a fault has occurred (maximised over ∆θ ≠ 0) and the probability that no fault has occurred (∆θ = 0) for a given improved residual vector ζ_N(θ_0). When the incidence matrix M is non-singular, the test statistic reduces to

t = ζ_N^T Σ^{−1} ζ_N.   (2.103)

If t > λ_ε, then the alternative hypothesis H_1 is accepted.
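The test of equation 2.103 can be sketched numerically. Under H_0 the statistic t = ζ^T Σ^{-1} ζ is asymptotically χ²-distributed with dim(ζ) degrees of freedom, so a threshold for a chosen false-alarm probability ε can be read off the χ² quantile. A minimal sketch with synthetic residuals (the covariance and fault magnitude are illustrative assumptions):

```python
import numpy as np
from scipy.stats import chi2

def glr_test(zeta, Sigma, eps=0.01):
    """Evaluate t = zeta' Sigma^{-1} zeta (2.103) and flag a fault when t
    exceeds the chi-square threshold for false-alarm probability eps."""
    t = float(zeta @ np.linalg.solve(Sigma, zeta))
    lam = chi2.ppf(1.0 - eps, df=len(zeta))
    return t, t > lam

rng = np.random.default_rng(2)
Sigma = np.diag([1.0, 2.0, 0.5])        # illustrative residual covariance
Lc = np.linalg.cholesky(Sigma)

# Healthy case: zeta ~ N(0, Sigma), so t should usually stay below lambda.
zeta_ok = Lc @ rng.standard_normal(3)
# Faulty case: mean shifted by M * dtheta (a large shift, for illustration).
zeta_bad = Lc @ rng.standard_normal(3) + np.array([8.0, 0.0, 0.0])

t0, fault0 = glr_test(zeta_ok, Sigma)
t1, fault1 = glr_test(zeta_bad, Sigma)
```

Raising ε lowers the threshold λ_ε, trading a higher false-alarm rate for earlier detection of small parameter deviations.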
The threshold λ_ε can be set according to a false alarm probability P_{H_0}(t > λ_ε) = ε [37].

2.4 Conclusion

Fault detection and system re-identification are the fundamental building blocks of process monitoring. System identification is a computationally intensive exercise and needs to be limited, while fault detection is necessary to drive system performance to an optimal threshold near constraint boundaries.

The prevalent system identification approach used by industry is based on the Prediction Error Method (PEM) framework. The method was developed for closed-loop system identification, and can be implemented using the direct, indirect or joint input-output approach. Work done by Ljung [30] concluded that the PEM approach can successfully be implemented for system identification using closed-loop data, with consistent, accurate parameter estimates. However, the data used for identification must be informative, and the model set must contain the true system [7].

The prediction error approach with a quadratic norm for estimating system parameters gives the least squares method. The least squares method is an efficient and effective tool for estimating system parameters. However, the estimation problem becomes a non-linear optimisation problem for the more complex model structures used in system identification. A solution to this non-linear optimisation problem is the use of Recursive Least Squares (RLS) algorithms [28]. Weighting the RLS algorithm [63] and adding variable forgetting factors [63, 64] makes the RLS algorithm more robust and applicable to real-time online parameter estimation.

The Kalman filter is closely related to the RLS algorithm, with similar properties. However, the Kalman filter is mainly used for estimating process states. The Kalman filter is usually implemented as a state observer [28], but Ghaffari et al.
[57] proposed a methodology where the Kalman filter is used to estimate system parameters instead of states. The extension of the Kalman filter to the Unscented Kalman Filter has enabled state estimation for non-linear processes, and Araki et al. [68] have implemented the UKF method to estimate the system parameters of a non-linear process.

Subspace identification is regarded as an alternative to PEM identification methods. Subspace identification yields a multivariable system model without the need for special parameterisations, which require significant prior knowledge and non-convex optimisation [45]. The conventional open-loop subspace system identification algorithms (N4SID, MOESP, CCA and IV-4SID) fail when implemented with closed-loop data, but the advantages of subspace system identification over PEM in closed-loop systems have led to the development of numerous advanced closed-loop subspace algorithms that can be used with closed-loop system data. Qin and Ljung [79] revealed that typical subspace algorithms use models that contain non-causal terms, used for subspace projection. The inclusion of these non-causal terms makes the model non-parsimonious, which leads to inflated variances. A subspace method proposed by Qin and Ljung [79] removes these extra non-causal terms, making the model parsimonious and leading to more consistent and accurate system identification results. The condition that past input data must be uncorrelated with noise is one of the prevalent conditions for successful subspace system identification, but this condition is rarely met. Qin and Ljung [79] and Katayama and Tanaka [8] proposed solutions that address and remove this condition.

The Asymptotic Local Approach is a mathematical statistics theory used to monitor and detect changes in system parameters.
This tool can be used to monitor system parameters without the computational burden of re-identifying them. The inclusion of a fault detection threshold set according to the false alarm probability leads to a robust statistical monitoring tool.

Subspace methods are characterised by several advantages with respect to PEMs [1]. The parameterisation problem, which is non-trivial for MIMO systems, is addressed by the subspace approach, where the model is parameterised by a full state-space model and the model order is determined during the identification procedure. Subspace methods also experience no complications when SISO subspace methods are extended to MIMO systems. A non-zero initial state poses no additional parameterisation problems, which is not the case with PEMs. The price to be paid with subspace methods is that they are suboptimal compared to PEMs. A study conducted by Favoreel et al. [1], covering ten practical implementations of PEM and subspace methods respectively, concluded that subspace methods consistently have 15% higher estimation errors with respect to PEMs. This is because subspace algorithms search for global optima, where the PEM algorithm searches for local optima with increased precision. The subspace approach can thus be regarded as a good initialisation candidate for the PEM, but extensive use of such MIMO identification methods in industry is not at all evident [45].

Chapter 3
The Benfield Process

3.1 Introduction

In this chapter, the Benfield process, which requires closed-loop SID and fault detection, is described. A brief overview of the industrial background of the Benfield process and similar processes is given in section 3.2.1. The Benfield process is a regenerated cyclical solvent process which uses Hot Potassium Carbonate with an activator and inhibitor to remove CO2 gas.
The Benfield operational process flow is discussed in section 3.2.2, and the methodology used by Sasol Sastech to identify the original process model is given in section 3.3.1. The Advanced Process Control (APC) solution used for the Benfield process is Robust Multivariable Predictive Control Technology (RMPCT). RMPCT is a subclass control solution of the MPC family, and is elaborated on further in sections 3.4.1-3.4.2.

3.2 Benfield Process

3.2.1 Industrial Background

The Benfield process is a thermally regenerated cyclical solvent process that uses an activated, inhibited Hot Potassium Carbonate solution to remove carbon dioxide (CO2), H2S and other acid gas components [83].

The recovery of carbon dioxide from flue gases is being propelled by multiple factors: the market shows renewed interest in Enhanced Oil Recovery (EOR), and industry is continuously seeking new ways to reduce greenhouse gas emissions [20]. Additional uses of carbon dioxide are found in the food industry in carbonated beverages, brewing, and flash drying. Industrial uses include EOR, welding, chemical feedstock, inert gas, firefighting and solvent extraction as a supercritical fluid. The largest potential market for CO2 is in EOR. The most economical sources of CO2 are CO2 wells and the byproduct gases of natural gas sweetening or synthesis gas purification. The price of crude oil in 2008 ($120 per barrel) can justify flue-gas-derived CO2 for EOR [20].

The Sasol group of companies specialises in diverse fuel, chemical and related manufacturing and marketing operations. Sasol has interests in oil and gas exploration and production, in crude oil refining and in liquid fuels marketing [84]. The efficient and economical recovery of CO2, used in various processes at Sasol, is accomplished by utilising the Benfield process for CO2 extraction. The Benfield process is classified as an Activated Hot Potassium Carbonate (AHPC) system.
Processes similar to the Benfield process are the Catacarb process, the Exxon Flexsorb HP process and the Giammarco-Vetrocoke process [20]. The Benfield process was introduced over 30 years ago, and as of January 2000 over 700 Benfield units had been put into commercial service worldwide. The use of low-cost chemicals available on the world market makes the Benfield process an economically feasible solution for CO2 extraction [20].

3.2.2 Operational Process Flow

A Benfield process plant has been implemented by Sasol Synfuels for the sole purpose of extracting CO2 from tail gas. Figure 3.1 illustrates a simplified version of the Benfield process. The Benfield process consists of two distinct process stages involved in the extraction of CO2. The first stage is the potassium carbonate wash. The purpose of the potassium carbonate wash stage is to remove the bulk of the CO2 gas from the CO2-rich tail gas as received from Sasol Synthol. The second stage of the Benfield process is the DEA solution wash. The purpose of the DEA solution wash stage is to trim the CO2 levels further down (a CO2 removal polishing unit) after the bulk of the CO2 gas has been removed in the potassium carbonate wash stage [2].

Figure 3.1: Benfield Process

Both the potassium carbonate wash stage and the DEA solution wash stage consist of a wash column and a regeneration column. The purpose of the wash column is to absorb CO2 in a particular wash solution (the potassium carbonate absorption medium). The wash solution in the Benfield process absorbs large amounts of CO2 when Hot Potassium Carbonate (HPC) is added with a proprietary activator and inhibitor [20]. The regeneration column is responsible for stripping the CO2 gas which has been absorbed by the wash solution in the wash column.
The stripping of CO2 in the regeneration column is accomplished by adding medium-pressure (MP) steam to the wash solution [2]. The CO2 absorption and extraction process flow through the potassium carbonate wash stage and the DEA solution wash stage is described next.

CO2-rich tail gas is received directly from the Sasol Synthol HP gas header, where the gas passes through a knock-out drum and a filter to remove any liquid droplets from the feed gas. This CO2-rich tail gas supply varies in demand, and can only be controlled above a certain supply trip limit. The cold CO2-rich tail gas from Synthol is heated to very high temperatures by a heat exchanger before being fed into the bottom bed of the wash column of the potassium carbonate wash stage. Lean potassium carbonate is recirculated to the wash column from the regeneration column in two separate streams. A portion of the lean potassium carbonate liquid is cooled down before being injected into the top bed of the wash column, while the second portion is fed directly into the middle bed of the wash column. The removal of CO2 in the wash column is controlled by manipulating the wash flow rates (controlling the corresponding feeds to the bottom, middle and top beds of the wash column). CO2 is absorbed into the wash solution by adding HPC with proprietary activators and inhibitors.

Cleaned gas passes from the potassium carbonate wash stage through a knock-out drum into the DEA system. The DEA solution wash stage is similar to the potassium carbonate wash stage, with additional focus placed on trimming the CO2 tail gas down to acceptable limits before the CO2-free tail gas is fed to a cryogenic separation unit known as the Cold Separation unit. The CO2 content of the CO2-free tail gas is continuously monitored.
If the CO2 content of the treated gas from the Benfield process exceeds 80 ppm (in the case of severe CO2 loads or process upsets), the Cold Separation unit will be shut down. The Benfield gas feed flow may be cut back to assist in keeping the CO2 slip below the trip limit. In such a case, the HP tail gas control system will transfer some of the feed gas load from the affected phase to the other phase (either the potassium carbonate wash stage or the DEA solution wash stage, depending on which phase is affected), provided that it has additional capacity, or else allow the excess gas to be flared.

The rich (loaded with CO2) potassium carbonate solution in the wash columns of both the potassium carbonate wash stage and the DEA solution wash stage is fed into the corresponding regeneration column of each stage. The rich solution is regenerated by flashing and boiling the solution in the regeneration column through the addition of MP steam and heat. Lean potassium carbonate is recirculated to the wash column. In the regeneration column of the Hot Potassium Carbonate wash stage, the CO2-saturated wash solution that could not be stripped of CO2 by the MP steam is cooled down before being recirculated into the regeneration column, where the CO2 stripping with MP steam is executed again.

The DEA system works on the same principle as the potassium carbonate wash stage. In the regeneration column, CO2 gas that could be stripped from the rich potassium carbonate solution (received from the wash column) by adding steam is fed through to the regeneration column of the potassium carbonate wash stage. Lean potassium carbonate is recirculated to the wash column, where further CO2 gas absorption is accomplished before the cleaned gas is passed through to the Cold Separation unit.
The difference between the DEA solution wash stage and the Potassium Carbonate wash stage is that the Potassium Carbonate wash stage removes the bulk of the CO2, whereas the DEA stage trims the CO2 level further down to 40 ppm by increasing the recirculation between the wash and regeneration towers.

3.2.3 Process Economic Feasibility and Common Operational Problems

The gas circuit product quality of Sasol determines to a great extent the profit feasibility of downstream units. The current operating philosophy for the Benfield process is to keep it simultaneously hydraulically loaded and loaded with CO2, as far as possible, to meet optimal profit margins from the gas circuit operations [85]. The hydraulic load is defined as the maximum volume of CO2 that can be processed by the Benfield process. Analysis of the theoretical relationship between Benfield feed throughput and composition (done by Sastech [85]) concluded that more would be gained (in terms of future growth and plant economic feasibility) by hydraulic debottlenecking (either by reducing Synthol tail gas or by increasing the hydraulic capacity at Benfield) than by improving the chemical capacity of the Benfield unit. The operating philosophy ideally requires 100% availability and utilisation, which implies that the Benfield process needs to remove the maximum possible amount of CO2 with as few process upsets and operational delays as possible. Factors that can cause suboptimal process availability and utilisation include the following [85]: foaming, corrosion, insufficient regeneration, poor mass transfer, abnormally high or low bed differential pressures, pump cavitation and CO2 slip concentration. Foaming due to abnormally high or low bed differential pressures is a common recurring problem [21], where anti-foam dosing is used to alleviate the problem.
Regeneration efficiency has been identified as one of the major efficiency measures by UOP (the Benfield technology licensor). Regeneration efficiency is a measure of how much steam is required per unit volume of CO2 removed, and gives an indication of the unit cost, the overall pressure drop in the regeneration column and the solution health [85].

3.3 Benfield Process Identification

3.3.1 Benfield Process Model

The Benfield process was modeled by Sastech [2], where the process model consists of 20 dependent controlled variables (CVs), six independent manipulated variables (MVs) and two feed-forward disturbance variables (DVs), classifying it as a tall system. The identification of the Benfield process model has proven to be a very complex and daunting task. The major challenge lies in identifying the non-linear dynamics of the process. This task becomes very difficult with so few MVs available and so many CVs. The process dynamics also change on a regular basis due to contaminants carried over from the Synthol unit tail gas into the Benfield process. The contaminants lead to an increase in salt content, which eventually leads to crystallization in the fin fans. Some oxygenated components cause foaming in the carbonate solution, which is detrimental to the mass transfer and CO2 removal [2]. The MVs and CVs of the Benfield process are highly interactive. Sastech concluded [2] that the complete process identification can only proceed by isolating process portions (selecting MV and CV pairs that are known to be interactive), locking in good model fits, and re-identifying other isolated process portions using different data sets [2].

3.3.2 Benfield Process Model Isolation

The measured disturbances acting on the Benfield process are the % CO2 concentration deviation present in the tail gas from Synthol, as well as the gas feed flow rate.
The maximum chemical capacity of the Benfield unit is 55 km³/h of CO2 removal. As the flow rate increases, the % CO2 should decrease to keep the maximum CO2 load at 55 km³/h. The stochastic nature of these measured disturbances makes the control of the Benfield process difficult, but a benefit associated with these disturbances is natural excitation of the process, which helps the system identification process. Hydrocarbons heavier than C8 and some oxygenated components cause foaming in the wash columns. Foaming is also caused by abnormally high differential pressures on either the top, middle or bottom bed. Bed differential pressures increase with increasing liquid and gas loads. High differential pressures that change erratically indicate flooding or foaming, whereas high and stable differential pressures indicate partial blockages in packed beds or liquid/gas distributors. Misaligned distributors, collapsed packed beds, channelling and other issues related to internal column dynamics manifest as either abnormally high or abnormally low differential pressures [85]. CO2 absorption into the Potassium Carbonate wash solution is controlled by manipulating the wash flows, which are in turn determined by valve manipulations either on the wash columns (used for controlling wash flow) or on the regeneration column (used for manipulating the steam supply for extracting CO2 from the wash solution). Failure of any valve will obviously have severe implications: valve failure will lead to irregular wash flow rates, which can consequently lead to unacceptable fluctuations in the differential bed pressures, causing either foaming or flooding. Regeneration efficiency is identified as one of the major efficiency measures for the Benfield process [83].
This regeneration is directly dependent on the CO2 absorption in the wash columns, which in turn is affected by the wash flow rates. Regular foaming and flooding diminishes the efficient CO2 absorption into the Potassium Carbonate wash solution, resulting in inefficient regeneration. The isolation and identification of the process dynamics associated with the wash column of the Potassium Carbonate wash stage will be considered for validating the system identification methodology. Possible plant faults that can occur include the failure of the valves which control the wash flows, and abnormal differential bed pressures causing flooding and foaming. The isolated process to be identified has four independent MVs, four dependent CVs and one feed-through disturbance DV. Table 3.1 defines the MVs, CVs and DV accordingly.

Table 3.1: CVs, MVs and DV definitions [2].

Variable                           Type   Tag Name
CO2 slip concentration             CV     C2A105A0.PV
Bottom bed differential pressure   CV     C2P1004B.PV
Middle bed differential pressure   CV     C2P1004C.PV
Top bed differential pressure      CV     C2P1004D.PV
CO2 feed flow                      MV     C2F1001B.SP
Top carbonate wash flow            MV     C2F1002.SP
Mid carbonate wash flow            MV     C2F1003.SP
CO2 system steam pressure          MV     C2P1037.SP
CO2 gas feed concentration         DV     C2A101A0

3.4 Controller

3.4.1 Model Predictive Control Strategy

A Model Predictive Control (MPC) strategy involves a linear step or impulse response model (convolution model) of a process as part of an optimisation scheme to predict future outputs based on future control actions [86]. The identification of these process models must be done carefully, where persistent excitation is necessary at all frequencies that are of interest for control purposes.
Since all real chemical processes exhibit some degree of nonlinear behaviour and are subjected to unmeasured disturbances, there will be plant/model mismatch associated with this linear modelling strategy [86]. Consider a single-input, single-output (SISO) process defined by

x_{k+1} = A x_k + B u_k
y_k = Cᵀ x_k,    (3.1)

where y is the output, x ∈ Rⁿ is the vector of states, and u is the system input at each kth instant. Figure 3.2 illustrates the MPC control law for a SISO model defined by equation 3.1.

Figure 3.2: MPC control law for a SISO process [7].

All MPC formulations allow the forecasting of the process performance into the future, which allows the controller to predict the future outputs and the corresponding optimal future control actions necessary to achieve these predicted outputs. With MPC, the predicted future outputs ŷ(k + 1|k), ŷ(k + 2|k), ..., ŷ(k + p_k|k) for the prediction horizon p_k are calculated at each time instant using the process model. These depend upon the known past inputs and outputs up to instant t = k, including the current output (initial condition) y(k), and on the future control signals u(k|k), u(k + 1|k), ..., u(k + m_k − 1|k) to be calculated (m_k < p_k). The sequence of future control signals is computed to optimise a performance criterion (see equation 3.2), often to minimise the error between a reference trajectory and the predicted process output. Usually the control effort is included in the performance criterion. Only the current control signal u(k) is transmitted to the process. At the next sampling instant y(k + 1) is measured, where the calculation of the control efforts and predicted outcomes is repeated and all sequences brought up to date [7].
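The receding-horizon calculation described above can be sketched numerically. The following is a minimal illustration only, not the Benfield/RMPCT controller: a first-order SISO model with made-up dynamics, horizons and weights, where the unconstrained quadratic cost is minimised by linear least squares and only the first computed move is applied at each step.

```python
import numpy as np

# Minimal receding-horizon sketch (illustrative only; model, horizons and
# weights are made-up assumptions, not Benfield/RMPCT tuning).
# SISO model: x[k+1] = A x[k] + B u[k], y[k] = C x[k].
A, B, C = np.array([[0.9]]), np.array([[0.1]]), np.array([[1.0]])
p, m, lam = 12, 4, 1e-3   # prediction horizon p_k, control horizon m_k, input weight

def prediction_matrices():
    """Build Phi and Theta so that y_hat = Phi @ x0 + Theta @ U,
    with the input held constant after the m-th move."""
    Phi = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(1, p + 1)])
    Theta = np.zeros((p, m))
    for i in range(1, p + 1):          # predicted output y(k+i|k)
        for k in range(1, i + 1):      # input move applied at step k
            j = min(k, m)              # moves after m reuse the last input
            Theta[i - 1, j - 1] += (C @ np.linalg.matrix_power(A, i - k) @ B).item()
    return Phi, Theta

Phi, Theta = prediction_matrices()

def mpc_move(x0, r):
    """Minimise ||r - Phi x0 - Theta U||^2 + lam ||U||^2; return the first move."""
    H = Theta.T @ Theta + lam * np.eye(m)
    f = Theta.T @ (r - (Phi @ x0).ravel())
    return np.linalg.solve(H, f)[0]

# Closed-loop simulation towards a unit set-point, applying only u(k) each step
x, r = np.array([[0.0]]), np.ones(p)
for _ in range(60):
    u = mpc_move(x, r)
    x = A @ x + B * u
y_final = (C @ x).item()
```

Constrained variants such as QDMC (and, with constraint de-activation, RMPCT) replace the least-squares step with a constrained quadratic programme, but the receding-horizon principle is the same.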
min_{u(k|k), ..., u(k+m_k−1|k)}  Σ_{i=1}^{p_k} ‖Γᵢʸ [ŷ(k + i|k) − r(k + i)]‖² + Σ_{i=1}^{m_k} ‖Γᵢᵘ [u(k + i − 1|k)]‖²,    (3.2)

where

‖x‖ = ( Σ_{i=1}^{n} |xᵢ|² )^{1/2}.    (3.3)

In equation 3.2, Γᵢʸ and Γᵢᵘ are weighting matrices to penalise particular components of y(k) or u(k) at certain time intervals, and r(k + i) is the vector of future reference values or set-points. Quadratic Dynamic Matrix Control (QDMC) is one particular implementation of MPC. In QDMC, expression 3.2 is solved, subject to constraints on the input, output and rate of change of the input, by minimising a quadratic objective function composed of the error between the predicted output and the actual output.

3.4.2 Robust Multivariate Predictive Control Technology

One particular MPC technology, called Robust Multivariate Predictive Control Technology (RMPCT) and developed by Honeywell, is a control strategy used for systems that are known to be complex and difficult to control. Sasol Synfuels has implemented an RMPCT control philosophy to control the Benfield process. The advantage of RMPCT over QDMC is that the former permits the constraints on some or all controlled variables to be de-activated during optimisation. This allows the optimisation to find an unconstrained solution in a reasonable time. The RMPCT control philosophy is based on the MPC principle, where it uses a finite impulse response (FIR) model, one for each controlled-manipulated variable pair. The collected test data are used to determine the FIR coefficients. RMPCT uses two different kinds of optimisation functions, namely error and profit optimisation. The objective function of the error optimisation can be defined as follows [86]:

J_error = Σᵢ λᵢ (yᵢ − r₀ᵢ)²,    (3.4)

where yᵢ and r₀ᵢ are the current process output and the reference or set-point respectively.
The objective function of the profit optimisation can be defined as follows [86]:

J_profit = Σᵢ bᵢ yᵢ + Σᵢ aᵢ (yᵢ − y₀ᵢ)² + Σⱼ bⱼ uⱼ + Σⱼ aⱼ (uⱼ − u₀ⱼ)².    (3.5)

Parameters λᵢ, aᵢ and aⱼ are weights, and the linear objective coefficients bᵢ and bⱼ are the tuning parameters for maximizing the operating profit. RMPCT addresses the plant/model mismatch problem associated with ill-defined process models using a singular-value threshold (SVT) and min-max design [86]. With the min-max design, the controller determines the control effort over all possible plant realisations [86]. SVT is based on the condition number of the process models. A matrix with a large condition number indicates that the matrix is ill-conditioned [87]. The threshold value is specified in such a way that all models whose singular values are smaller than the threshold are removed from the optimisation. This method does not alter the usual trade-off between controller robustness and system performance [86]. A generic Advanced Process Control (APC) structure is used by Sasol Synfuels, where a Profit Controller based on the RMPCT architecture is used for controlling the Benfield process in a robust manner. The Benfield process has proven itself to be ill-conditioned, with nonlinear process dynamics, some of which cannot readily be identified. The control objectives of the APC Profit Controller are [2]:

• Maintain the differential pressures over the top, middle and bottom beds in the wash column of the Potassium Carbonate wash stage below acceptable limits. Unacceptable differential pressures over the beds can lead to foaming and flooding.

• Maintain effective regeneration of the Potassium Carbonate and DEA wash solutions by controlling the boiling temperatures in the Potassium Carbonate and DEA regeneration columns.
• Maintain the CO2 slip of the Potassium Carbonate and DEA stages at safe and acceptable limits.

The optimisation objectives of the Profit Controller are [2]:

• Maximize the Benfield unit throughput up to acceptable constraint limits.

• Minimize the top bed and middle bed wash flows in co-ordination with carbonate regeneration steam up to acceptable slip limits.

• Minimize the DEA wash flow in co-ordination with DEA regeneration steam up to acceptable CO2 slip limits.

3.5 Conclusion

The Benfield process has been implemented by Sasol Synfuels solely for the purpose of extracting CO2 from the CO2-rich tail gas received from Synthol. The operating philosophy of the Benfield process is to keep the process simultaneously hydraulically and CO2 loaded, as far as possible, to meet optimal profit margins from the gas circuit operations. The operating philosophy ideally requires 100% availability and utilisation. The control solution used with the Benfield process is Robust Multivariate Predictive Control Technology (RMPCT). RMPCT is a subclass solution to MPC. The advantage of RMPCT over MPC is that control optimisation is done with deactivated constraints. This allows RMPCT to calculate a control solution much faster, thereby being more responsive to process changes and demands. The RMPCT control philosophy allows Sasol to control the Benfield process, which exhibits highly nonlinear process dynamics, by basing future control efforts and process performance predictions on a linear process model. The Benfield process model was identified by Sasol Sastech [2], where the process consists of six independent MVs, 20 dependent CVs and two feed-through DVs. The Benfield process CVs and MVs are highly interactive [2]. The latter process characteristic has made the identification a challenging process.
Process isolation and partial process identification were used to obtain a complete Benfield process model. The wash flow rates responsible for CO2 absorption and the steam flow rates responsible for CO2 regeneration will be isolated in the Potassium Carbonate solution wash stage, and used to validate the system identification methodology. The system to be modeled and identified consists of four independent MVs and four dependent CVs. Valve failure, which can cause irregular fluctuations in liquid/gas flow, can cause abnormal differential bed pressures, leading to flooding, foaming and regeneration inefficiency.

Chapter 4
Nominal Process Modeling and Analysis

4.1 Introduction

In this chapter the procedure followed to identify and validate the model of the Benfield process, using open-loop step-test data and closed-loop raw plant data, is discussed. The different techniques that were used for data preprocessing are discussed in section 4.2. Different parametric model structures with parameter estimation methods that were considered for the estimation and identification process are discussed in section 4.3, where the most applicable structure is used to model the process. A portion of the preprocessed data is used to validate the identified system model. Section 4.4 discusses the process validation results. Finally, the identified system model is analysed in terms of stability and controllability in section 4.5.

4.2 Data Analysis and Preprocessing

Sampled and measured process data are often not suitable for direct use in system identification. Such raw data may exhibit certain characteristics which jeopardise accurate estimation of the system parameters.
There are several possible deficiencies in the raw data which need to be addressed [30]:

• Portions of the raw data may exhibit occasional bursts and outliers, missing data, and discontinuities;

• raw data measurements may experience drift and bias, and low-frequency disturbances, possibly of a periodic character.

4.2.1 Data detrending and drift removal

Trends, low-frequency disturbances and periodic variations are not uncommon in measured data sets. These phenomena typically stem from external sources, such as process and measurement noise and environmental changes, and are unavoidable. There are basically two methods used to alleviate these problems [30]:

• removal of the disturbances by explicit data pretreatment, and/or

• estimation of a noise model that takes care of the disturbances.

When noisy data are used in identification, a compact model with a known model structure and a properly selected model order will provide more accurate parameters in the estimation process. Moreover, a model with a disturbance model will give a consistent estimate for closed-loop data, meaning that the effect of the disturbance will decrease as the test time increases, whereas a model without a disturbance model will deliver a biased estimate when using closed-loop data [46]. The inclusion of a disturbance model increases the complexity of the identification process. An alternative to the inclusion of a disturbance model is to pretreat the measured data and partially remove all disturbances. LQ decomposition is a viable method to remove stochastic components from noisy measured data [8].

4.2.2 Data outliers and discontinuous data

Data acquisition equipment in practice is not perfect. Faulty data acquisition instrumentation typically leads to discontinuous data segments, which cannot be used for system identification.
It may also be that certain measured data values are in obvious error due to measurement failures. These measured data values are classified as outliers, and may have a substantial negative effect on the estimation of the system parameters [30]. With discontinuous data segments and outliers, one can remove these discontinuities from the collected data record and only use data segments that are continuous and do not contain obvious outliers, as observed by data inspection. This method can limit the number of data samples available for system identification, and the possibility of isolating clean data segments in multivariate systems becomes difficult. Isolated data segments which are continuous and do not contain obvious outliers can be merged so that more data samples are available for system identification. However, it is important to note that one cannot simply concatenate the data segments, because the connection points can cause transients that may destroy the estimate [30]. Missing input data can be regarded as unknown parameters, and can be estimated for a linear system using a linear regression (least squares) procedure. Missing output data can be obtained by using the time-varying Kalman filter to predict the system outputs. The above-mentioned methods are computationally intensive and should be avoided if possible.

4.2.3 Preprocessing of data

Sampled data segments of the Benfield process were obtained and provided by Sasol Synfuels [2], where the individual manipulated and controlled variables were sampled every 15 seconds. These data segments were obtained in open-loop (or under limited feedback), where individual manipulated variable step-testing was done on the Benfield process.
Through visual inspection it was concluded that there exist large portions of data that cannot be used for system identification, due to the obvious presence of data outliers and data discontinuities. The first task was thus to isolate data segments that do not contain any data outliers or discontinuities. Data segments were preprocessed by removing means and trends in the measured variables. Trends can be seen as time-varying equilibria. The removal of means in the measured data variables is accomplished by letting the measured outputs y(t) and measured inputs u(t) be deviations from a physical equilibrium [30]. The deviations from the equilibrium point are defined as [30]:

y(t) = y_m(t) − ȳ
u(t) = u_m(t) − ū,    (4.1)

where [30]:

ȳ = (1/N) Σ_{t=1}^{N} y_m(t)
ū = (1/N) Σ_{t=1}^{N} u_m(t).    (4.2)

Trends in the measured data are removed by fitting a linear periodic function to the measured variables and subtracting it from the measured variable segments. With step-testing it was only possible to step the manipulated variables associated with the wash flows and steam pressure. The CO2 gas feed flow exhibits stochastic behaviour which cannot be controlled below a certain limit, and acts as an external disturbance on the system. Although this characteristic is helpful in system identification, as explained in Chapter 2, the effect of this disturbance on the output needs to be considered when identifying manipulated and controlled variable pairs. A band-reject filter was used to separate the slow time-varying data signals (wash flow and steam pressure manipulations) from the fast time-varying CO2 gas feed. By inspecting the Fourier transform of the CO2 gas feed, it was concluded that data in the frequency band 0.47 Hz to 3.65 Hz needs to be filtered out from the output to remove the disturbance of the CO2 gas feed.
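The mean and trend removal of equations 4.1-4.2 amounts to a few lines of linear algebra. A sketch on a synthetic example signal (the drift, period and noise level below are made up; the Benfield records are not reproduced here):

```python
import numpy as np

# Mean removal (equations 4.1-4.2) and linear detrending on a synthetic
# example signal; the drift, period and noise level are assumptions.
rng = np.random.default_rng(0)
N = 1500
t = np.arange(N)
y_m = 5.0 + 0.002 * t + np.sin(0.05 * t) + 0.1 * rng.standard_normal(N)

y_bar = y_m.mean()                 # equation 4.2: sample mean
y = y_m - y_bar                    # equation 4.1: deviation from equilibrium

# Trend removal: least-squares fit of a line c0 + c1*t, then subtract it
c1, c0 = np.polyfit(t, y, 1)
y_detrended = y - (c0 + c1 * t)
```

The same treatment is applied to the input record u_m(t); band-reject filtering of the CO2 gas feed disturbance would follow as a separate preprocessing step.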
Note that the input-output relation for a linear system will not be changed by pre-filtering the input and the output data through the same filter [30]. This is seen by inspecting equation 4.3:

y(t) = G₀(q)u(t) + H₀(q)e(t) ⇒ L(q)y(t) = G₀(q)L(q)u(t) + H₀(q)L(q)e(t).    (4.3)

4.3 Process Modeling

An initial model of the Benfield process, which will serve as a basis to work from, will be used to validate the identified system model obtained by implementing the Two Stage ORT-based subspace method proposed by Katayama and Tanaka [8] in a closed-loop system environment. The initial process model was identified by using the step-test data obtained from open-loop step-tests conducted by Sasol Synfuels. Since the plant was operating partially in an open-loop configuration, it was possible to use standard system identification methods in the modeling process. Only parametric system modeling methods were considered, since fault detection will be accomplished by careful monitoring of any deviations in the identified system parameters. The parametric methods that were considered for system identification include the:

• Autoregressive with Exogenous Input (ARX) structure, where the least squares (LS) or instrumental variables (IV) method is used to estimate the system parameters;

• Autoregressive Moving Average with Exogenous Input (ARMAX), Output Error (OE) and Box-Jenkins (BJ) structures, where the Prediction Error Method (PEM) is used to estimate the system parameters;

• state-space structures, where either the PEM or N4SID method is used to estimate the system parameters.

The methodology that was followed in identifying the process dynamics of the Benfield process was to first isolate the individual MVs of the process and then to model the corresponding MV-CV pairs.
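As a toy illustration of the ARX structure with least-squares estimation listed above, a single simulated first-order loop can be fitted as follows. The data, orders and parameter values are made-up examples, not one of the Benfield MV-CV pairs:

```python
import numpy as np

# Toy ARX(1,1) least-squares fit: y(t) + a*y(t-1) = b*u(t-1) + e(t).
# Synthetic single-loop data; parameter values, noise level and data length
# are assumptions (the actual Benfield models are multivariable, 154 states).
rng = np.random.default_rng(1)
N = 2000
a_true, b_true = -0.7, 0.4        # i.e. y(t) = 0.7 y(t-1) + 0.4 u(t-1) + e(t)
u = rng.standard_normal(N)        # persistently exciting test input
e = 0.05 * rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = -a_true * y[t - 1] + b_true * u[t - 1] + e[t]

# Linear regression y(t) = theta^T [-y(t-1), u(t-1)], solved by least squares
Phi = np.column_stack([-y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```

With a white-noise disturbance the least-squares ARX estimate is consistent, which is one reason the structure behaves well on a wide range of processes.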
The MVs were isolated from each other (only one MV was stepped while the other MVs were kept at steady-state), and the MV-CV data segment pairs were pre-treated by removing trends and means, and by filtering out high-frequency disturbances as discussed in section 4.2.3. The final models of the Benfield process are all based on the ARX structure. The ARX structure uses the PEM framework to estimate the system parameters, which is the traditional and most efficient method used to identify process models [7]. This result is not surprising, since the ARX structure is well suited to a wide variety of processes with distinct process dynamics [28]. For simulation, validation and analysis purposes, the identified ARX structures were transformed to state-space form, which is better suited for analysis. The identified system dimensions are as follows: a state-space model with four outputs, four inputs, 154 states and 640 free parameters.

4.4 Process Validation

The process modeling and estimation procedure discussed in section 4.3 identifies a model that represents the actual process dynamics most accurately. In process validation, a different data set is used to validate the identified process model. It is necessary that this data set is sufficiently informative, representing the process dynamics thoroughly. The question in model validation is whether the identified model agrees sufficiently well with the observed data. The degree to which the identified model matches the actual process dynamics can be determined by considering the model residuals.

4.4.1 Identified Model Fit

The fit between the identified model and the actual process can be calculated as

%FIT = 100 × ( 1 − ‖y_vo − y_so‖ / ‖y_vo − ȳ‖ ),    (4.4)

where y_vo, y_so and ȳ are the validation data output, the simulated model output and the mean of the validation data output respectively.
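Equation 4.4 can be computed directly. A sketch with made-up validation and simulation vectors:

```python
import numpy as np

# Direct evaluation of the fit measure in equation 4.4; the validation
# vector below is a made-up example.
def percent_fit(y_vo, y_so):
    """%FIT = 100 * (1 - ||y_vo - y_so|| / ||y_vo - mean(y_vo)||)."""
    y_bar = np.mean(y_vo)
    return 100.0 * (1.0 - np.linalg.norm(y_vo - y_so)
                    / np.linalg.norm(y_vo - y_bar))

y_vo = np.array([1.0, 2.0, 3.0, 2.0, 1.0])                     # validation outputs
fit_perfect = percent_fit(y_vo, y_vo)                          # exact model: 100%
fit_mean = percent_fit(y_vo, np.full_like(y_vo, y_vo.mean()))  # mean predictor: 0%
```

The two extreme cases show the scale of the measure: a perfect simulation scores 100%, while a model no better than predicting the output mean scores 0%.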
%FIT represents the ratio between the normalised value of the difference between the validation data and the simulation data, and the normalised value of the difference between the validation data and the expected value of the system outputs. Accurate estimated system parameters and process structure will result in the simulated process outputs converging to the measured outputs used for validation as the number of data samples N tends to infinity, resulting in a 100% validation fit. Two data sets were used for validating the identified process. The first data set was obtained from the open-loop step-tests done in the identification process. The data segment was chosen in such a way that all the MVs were manipulated simultaneously. The second data set used for validation consisted of raw plant data, where the plant was operating in closed-loop. The process models identified by Sasol Sastech were also validated with the same open-loop and closed-loop data, and the validation fit results for these models were compared with the validation results obtained for the nominal identified process models. The data set consisting of open-loop step-test data was chosen to have N = 1500 samples, with one sample every Ts = 15 seconds. The data set consisting of closed-loop raw plant data was chosen to have N = 700 samples. The data sets were chosen to be as large as possible, without any outliers or abnormal process disturbances visible in the measured data. Figure 4.1 illustrates the fit between the simulated model output and the step-test validation data. A 65% average fit was obtained. For the same sample horizon, the models identified by Sasol [21] only gave a 58% average fit. Figure 4.2 illustrates the fit between the simulated model output and raw plant data where the plant was operating in closed-loop. A 63% average fit was obtained.
The identified models of Sasol [21] obtained an average fit of 64%. In figures 4.1 and 4.2, G_N represents the nominal model as identified, and CV1, CV2, CV3 and CV4 represent the CO2 slip and the differential pressures of Bed1, Bed2 and Bed3 respectively.

Figure 4.1: Validation fit of nominal process model with open-loop step-test data.

Figure 4.2: Validation fit of nominal process model with closed-loop raw plant data.

4.4.2 Residual analysis

Residual analysis bears information about the quality of the identified model. The quality of the identified model is determined by how accurately the model can predict or estimate actual system outputs. The part of the data that the identified model could not reproduce is called the residuals, and is simply the difference between the model outputs and the true outputs. For residual analysis it is advantageous to consider the maximum residual error that has occurred over a specific time horizon for a validation data set or true data set Z^N, as well as the average residual error. These basic statistics are defined as follows [30]:

S₁ = max_t |ε(t)|,    (4.5)

S₂² = (1/N) Σ_{t=1}^{N} ε²(t),    (4.6)

where ε denotes the residuals. By considering residual statistics for historical data, it can then be assumed that these statistics will hold for future data sets. However, the use of these statistics relies on an implicit invariance assumption: the residuals do not depend on measured process data that is likely to change [30]. The latter implies that the residuals must not be directly dependent on, and related to, the system inputs used in the data set Z^N.
If the residuals were related to the system inputs, equations 4.5-4.6 would not be accurate model validation statistics for future data sets. The value of equations 4.5-4.6 would be limited, since the model would only be valid for a limited range of possible inputs. To determine whether there is any direct relationship between past inputs and the model residuals, it is pragmatic to consider the covariance between the model residuals and the past inputs [30]:

R̂ᴺ_εu(τ) = (1/N) Σ_{t=1}^{N} ε(t) u(t − |τ|).    (4.7)

Deficiencies in the identified model can also be determined by considering the correlation among the residuals themselves, as defined by equation 4.8. If the correlation among the residuals is not small for τ ≠ 0, then part of ε(t) could have been predicted from past data [30]. This implies that the model outputs could have been predicted better.

R̂ᴺ_ε(τ) = (1/N) Σ_{t=1}^{N} ε(t) ε(t − τ)    (4.8)

Figure 4.3 illustrates the auto-correlation of the residuals of the CV1 output (CO2 slip) and the cross-correlation between the residuals of CV1 and past inputs.

Figure 4.3: The auto-correlation and cross-correlation of residuals and MV1 with output CV1.

The remainder of the residual analysis results for the identified Benfield process model is illustrated in Appendix A. The auto-correlation of the residuals of all the system outputs is illustrated, from which it can be concluded that the auto-correlation values are small for τ ≠ 0, which implies that the model outputs could not have been predicted better from past data. The cross-correlation between the residuals and the past inputs is also illustrated, from which it is concluded that the statistics defined by equations 4.5-4.6 are valid, and can be used to predict future model residual characteristics.
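The statistics of equations 4.5-4.8 can be sketched as follows, on synthetic residual and input sequences (example data only, not the Benfield records):

```python
import numpy as np

# Residual statistics of equations 4.5-4.8 on synthetic white residuals and
# inputs; the sequences are assumptions, used only to exercise the formulas.
rng = np.random.default_rng(2)
N = 1000
eps = rng.standard_normal(N)       # residuals epsilon(t)
u = rng.standard_normal(N)         # past inputs u(t)

S1 = np.max(np.abs(eps))           # equation 4.5: worst-case residual
S2 = np.sqrt(np.mean(eps ** 2))    # equation 4.6: RMS (average) residual

def xcov(eps, u, tau):
    """Equation 4.7: (1/N) * sum_t eps(t) u(t - |tau|)."""
    k = abs(tau)
    return np.sum(eps[k:] * u[:N - k]) / N

def acov(eps, tau):
    """Equation 4.8: (1/N) * sum_t eps(t) eps(t - tau)."""
    k = abs(tau)
    return np.sum(eps[k:] * eps[:N - k]) / N
```

For a well-identified model, both acov(eps, tau) for tau ≠ 0 and xcov(eps, u, tau) should stay close to zero, which is exactly the pattern reported for the Benfield residuals in Appendix A.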
4.5 Process Analysis

4.5.1 Singular Value Decomposition

Through singular value decomposition (SVD), it is possible to obtain better information about the gains of the plant. The SVD plot for the identified Benfield process model is illustrated in figure 4.4. By considering the condition number of the process (see expression 4.9), it is concluded that the process model is ill-conditioned: that is, some combinations of the inputs have a strong effect on the output, while others have little effect. The latter is illustrated in figure 4.4, where the maximum and minimum singular values have substantially different directional gains in the bandwidth region ω_B < 0.01 rad/sec.

Figure 4.4: The singular value decomposition of the identified Benfield process.

  γ(G) ≜ σ̄(G) / σ̲(G)  (4.9)

4.5.2 Poles and Zeros of the Identified Process

Right half plane (RHP) zeros are common in many multivariate systems [87]. The limitations these RHP zeros impose on MIMO systems are similar to those on SISO systems; however, due to input directionality, the limitations are not as severe. In figure 4.5, the poles and zeros of the identified Benfield process are illustrated. The RHP zeros close to the origin will have a detrimental effect on the control performance at low frequencies. It can thus be concluded that it will be inherently difficult to obtain tight control at low frequencies, which is typically the case for the Benfield process. The spectral radius (largest pole at 0.95) for the CV1-MV1 pair lies within the unit circle boundary, which implies that the model identified for the CV1-MV1 pair is stable. Similar conclusions are drawn for the other CV-MV pairs.

4.5.3 Step and Impulse Response

The step and impulse responses of the identified Benfield process are illustrated in figures 4.6-4.7.
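The condition number of expression 4.9 is the ratio of the largest to the smallest singular value of the gain matrix at a given frequency; a minimal sketch (function name assumed):

```python
import numpy as np

def condition_number(G):
    """Condition number of eq. 4.9: gamma(G) = sigma_max(G) / sigma_min(G),
    computed from the singular values of the (frequency-response) gain
    matrix G. A large value indicates an ill-conditioned plant."""
    s = np.linalg.svd(np.asarray(G, float), compute_uv=False)
    return s[0] / s[-1]
```

Evaluating this ratio over a grid of frequencies reproduces the kind of directional-gain spread visible in figure 4.4.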
With the step response, it is observed that for a bounded input, the identified process model produces a bounded output. The model is thus stable.

4.6 Conclusion

The identification, validation and analysis of a process model for the Benfield process were discussed in this chapter. The identification process consists of three steps, executed sequentially as many times as necessary to provide a process model suitable for further analysis and implementation. The identification process first involves the preprocessing of the data used for system identification and validation. This preprocessing involves the removal of trends and means, as well as the removal of possible low- and high-frequency disturbances that may influence the identification process. The second step in the identification process is the estimation of system parameters for different model structures. The model structure, with associated estimated parameters, that gives the best validation results (step three) in terms of percentage fit and residual analysis is selected.

The identified system model was analysed. The step and impulse responses were considered, from which it was concluded that the identified process is stable. Considering the poles and zeros of the identified system, it was concluded that tight control will be difficult to achieve at low frequencies, due to the presence of RHP zeros near the origin. The singular value decomposition of the identified system indicates that the system is ill-conditioned at low frequencies. Different combinations of inputs have a strong effect on the output, and the individual MVs and CVs are highly interactive.

Figure 4.5: The poles and zeros for the individual MV and CV pairs.
Figure 4.6: The step response of the identified Benfield process model.

Figure 4.7: The impulse response of the identified Benfield process model.

Chapter 5
A Subspace SID and FD Methodology for the Benfield Process

5.1 Introduction

In this chapter, the relevance and development of a subspace linear system identification methodology and a fault detection methodology for the Benfield process are discussed. This chapter introduces and elaborates on the subspace methodology implemented to identify the Benfield process, which operates in a closed-loop environment under Robust Multivariate Predictive Control Technology (RMPCT). Additional to the development of the subspace identification methodology is the development and discussion of the fault detection methodology used to detect abrupt and incipient faults based on system parametric discrepancies. Prior to the development of the corresponding subspace system identification methodology and the fault detection methodology, the assumptions and requirements under which these methodologies are developed and implemented are stipulated in section 5.3. Section 5.4.1 elaborates on the requirements of the fault detection methodology. The successful implementation of these requirements will contribute to the efficiency, effectiveness and robustness of the system identification and fault detection methodologies. Section 5.4.1 also discusses the assumptions under which the fault detection methodology is developed. Section 5.3.1 elaborates on the subspace identification methodology, which has been generalised for a multiple-input, multiple-output system.
A discussion on how to guarantee stability of the identified multiple-input, multiple-output system is found in section 5.3.2. A discussion of the developed fault detection methodology is found in section 5.4.2, where the hypothesis test used to evaluate parametric discrepancies is stated.

5.2 A Subspace Approach for System Identification

State-space based approaches are typically better suited to model and represent MIMO systems. It was concluded from the literature survey done in chapter 2 that the application of prediction error methods to state-space models becomes very complex in terms of generating a numerically robust canonical realisation. The alternative, a full parametrisation of the state-space model, would involve a huge number of parameters [28]. Subspace identification approaches make use of a state-space based representation, where the main advantage is their low computational demand, since well-defined and well-known linear algebra tools (QR and singular value decomposition) are used in the identification process. Subspace based approaches also deal well with MIMO systems, and are known to have good numerical robustness [28].

A main difficulty in the identification of systems that operate in closed-loop lies in the fact that any correlation between the past inputs and the disturbances results in biased estimates, which degrade model integrity. Conventional subspace identification methods (N4SID, MOESP and CVA) are inferior to prediction error methods when the data used for system identification are obtained in closed-loop. The non-parsimonious models used for projection are directly responsible for inflated parameter variance estimates [79]. A solution to this problem is to transform these non-parsimonious models into parsimonious models by removing the non-causal terms in the model.
This is accomplished by partitioning the state-space model row-wise, where the partitioned equations are guaranteed to be causal [79]. This methodology was adopted by Katayama and Tanaka [8] to remove non-causal terms. It was decided to implement the subspace methodology proposed by Katayama and Tanaka [8] for SISO closed-loop identification. The proposed method of Katayama and Tanaka [8] uses extended state-space models that are guaranteed causal. The condition of no correlation between the past inputs and disturbances was also addressed by Katayama and Tanaka [8], where the condition is made absolute by performing an orthogonal decomposition on the data sets. A merit of the proposed subspace method of Katayama and Tanaka [8] is that it can easily be applied to MIMO systems in the manner of the direct approach, and that it does not include the model reduction step needed in the joint input-output approach.

5.3 Generalisation of the 2-Stage ORT Subspace Method for MIMO Systems

The subspace closed-loop system identification method proposed by Katayama and Tanaka [8] was developed and tested for SISO systems, so it is necessary to generalise the methodology for MIMO systems. Figure 5.1 illustrates the closed-loop system used in the subspace identification procedure.

Figure 5.1: Closed-loop system [8].

The plant depicted in figure 5.1 is expressed as [8]:

  y(t) = P(z)u(t) + H_p(z)η_y(t),  (5.1)

where P(z) and H_p(z) are the plant and the measurement noise filter, respectively. The input and output of the plant P(z) are u ∈ ℝ^m and y ∈ ℝ^p respectively, so that P(z) is a p×m transfer matrix, with m the number of inputs and p the number of outputs.
The control effort exerted by the controller C(z) to control the plant P(z) is expressed as:

  u(t) = r_2(t) + C(z)[r_1(t) − y(t)] + H_c(z)η_u(t),  (5.2)

where r_1(t) ∈ ℝ^p is a reference or setpoint signal. The exogenous input r_2(t) ∈ ℝ^m is a dither signal used to excite the plant dynamics for system identification. A minimum-phase transfer matrix, H_c(z), filters a white noise source, which results in coloured noise inflicted on the control effort signal expressed by equation 5.2.

Katayama and Tanaka [8] formulated the following assumptions, under which the derived subspace approach is valid:

A1: The feedback system is well posed in the sense that (u, y) are determined uniquely if all the external signals are given.

A2: The feedback system is internally stable.

A3: The exogenous inputs (r_1, r_2) are sufficiently persistently exciting, and are uncorrelated with the white noises η_u(t) and η_y(t).

A4: There is no feedback from (u, y) to (r_1, r_2).

A5: The exogenous inputs and noises (r_1, r_2, u, y) are second-order jointly stationary processes with zero mean.

Assumption A2 can be relaxed for the exogenous input r_1, since we are only interested in identifying the plant model P(z).

5.3.1 Subspace identification method

The subspace method proposed by Katayama and Tanaka [8] is based on the Two Stage method originally proposed by Van den Hof and Schrama [88]. Based on the Two Stage method, Katayama and Tanaka [8] developed a joint input-output subspace method for identifying closed-loop systems based on the orthogonal decomposition method (ORT), consequently called the Two Stage ORT subspace method. The Two Stage ORT method uses two consecutive orthogonal decompositions to identify a system in closed-loop. The first orthogonal decomposition is used solely for data pretreatment.
Instead of estimating unmeasured disturbances before estimating the plant model, as was done by Ljung and Qin [79], the measured data are projected onto a Hilbert space generated by the second-order random variables of the exogenous inputs and the joint input-output signals. This projection filters out the stochastic data components, which are directly related to the unmeasured disturbances. It is, however, noted that since the projection onto the finite data space ℝ^[0,T] cannot completely remove the stochastic components from the joint input-output process, the projected data do contain some stochastic residuals, or noise. In order to cope with these residuals, the orthogonal decomposition (ORT) method is employed to identify the system matrices.

Subspace methods make use of three different types of matrices to compute and estimate the extended observability matrix, Γ̂, and the system states, X̂. The first type of matrix used in subspace methods is the block Hankel matrix, which is defined as follows for the exogenous inputs (r_1, r_2), the inputs u and the outputs y respectively:

        [ r(0)      r(1)      ···  r(N−1)    ]
        [ r(1)      r(2)      ···  r(N)      ]
        [   ⋮         ⋮        ⋱      ⋮      ]
  R  =  [ r(k−1)    r(k)      ···  r(k+N−2)  ]  = [R_past; R_future] ∈ ℝ^{2k(p+m)×N},  (5.3)
        [ r(k)      r(k+1)    ···  r(k+N−1)  ]
        [ r(k+1)    r(k+2)    ···  r(k+N)    ]
        [   ⋮         ⋮        ⋱      ⋮      ]
        [ r(2k−1)   r(2k)     ···  r(2k+N−2) ]

  U = [U_past; U_future], built analogously from u(0), …, u(2k+N−2), ∈ ℝ^{2km×N},  (5.4)

  Y = [Y_past; Y_future], built analogously from y(0), …, y(2k+N−2), ∈ ℝ^{2kp×N},  (5.5)

where k is defined as the depth of the Hankel matrix; k is a predetermined value chosen such that k > n, where n is the order of the system [8]. The variables p and m are the output and input dimensionality of the system respectively.

The second type of matrix used in subspace methods is the extended observability matrix, Γ_k, of order k, which is defined as follows:

  Γ_k = [C; CA; ⋮; CA^{k−1}] ∈ ℝ^{kp×n}.  (5.6)

When all states of a process are observable, the pair (A, C) is observable. For a state to be observable, it is assumed that the initial state x_0 can be determined uniquely, provided the input is completely known. By constructing a kth-order extended observability matrix, the estimation of the (A, C) matrices becomes more accurate.

The third type of matrix used in subspace methods is known as the block Toeplitz matrix, Ψ_k, defined as follows:

        [ D           0     ···   0 ]
  Ψ_k = [ CB          D           0 ]  ∈ ℝ^{kp×km}.  (5.7)
        [   ⋮               ⋱     ⋮ ]
        [ CA^{k−2}B   ···   CB    D ]

The Toeplitz matrix is used to estimate the (B, D) matrices of the process. The Hankel, extended observability and Toeplitz matrices can easily be derived by extending the input-output matrix equation Y_t = CX_t + DU_t for k steps, using the state equation X_{t+1} = AX_t + BU_t, under the assumption that the input remains relatively constant (U_{t=k} ≈ U_{t=k+1}) over the extended period.
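The block Hankel construction of equations 5.3-5.5 can be sketched as follows (a minimal NumPy sketch; the function name `block_hankel` and the samples-along-rows data layout are assumptions for illustration):

```python
import numpy as np

def block_hankel(w, k, N):
    """Block Hankel matrix of depth 2k with N columns (eqs. 5.3-5.5):
    block-row i holds w(i), ..., w(i+N-1) for i = 0..2k-1, so the top
    k*dim rows form the 'past' block and the bottom k*dim rows the
    'future' block. Requires at least 2k + N - 1 samples of w."""
    w = np.atleast_2d(np.asarray(w, float))
    if w.shape[0] < w.shape[1]:
        w = w.T                      # ensure samples run along axis 0
    dim = w.shape[1]                 # signal dimension (m or p)
    H = np.zeros((2 * k * dim, N))
    for i in range(2 * k):
        H[i * dim:(i + 1) * dim, :] = w[i:i + N, :].T
    return H
```

Calling `block_hankel` on r, u and y with the same k and N yields the matrices R, U and Y of equations 5.3-5.5.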
From equations 5.3-5.7, the estimated extended state input-output matrix equation can be defined as follows [8]:

  Y_future = Γ_k X_k + Ψ_k U_future.  (5.8)

Given the block Hankel matrices, the extended observability matrix, Γ, is determined by first computing the linear orthogonal (LQ) decomposition of the Hankel matrices as follows [8]:

  [R]   [M_11    0      0  ] [Q_1^T]
  [U] = [M_21   M_22    0  ] [Q_2^T],  (5.9)
  [Y]   [M_31   M_32   M_33] [Q_3^T]

where M_11 ∈ ℝ^{2k(p+m)×2k(p+m)}, M_22 ∈ ℝ^{2km×2km} and M_33 ∈ ℝ^{2kp×2kp} are lower triangular (this reduces the non-causal terms of the estimated system model [89], making the model parsimonious), and where Q_i^T Q_j = Iδ_ij [8]. From Q_1^T = M_11^{−1} R in equation 5.9, it can be seen that the deterministic components (U^d, Y^d), which are linearly related to the exogenous inputs R, are given by:

  [U^d; Y^d] ≜ [M_21; M_31] Q_1^T,  with  U^d = [U_fd; U_pd],  Y^d = [Y_pd; Y_fd],  (5.10)

where M_21 ∈ ℝ^{2km×2k(p+m)} and M_31 ∈ ℝ^{2kp×2k(p+m)} [8], and the subscripts fd and pd denote the future and past deterministic components. From a successive orthogonal decomposition of the deterministic data components,

  [U_fd]   [L_11    0      0      0  ] [S_1^T]
  [U_pd] = [L_21   L_22    0      0  ] [S_2^T],  (5.11)
  [Y_pd]   [L_31   L_32   L_33    0  ] [S_3^T]
  [Y_fd]   [L_41   L_42   L_43   L_44] [S_4^T]

where L_11, L_22 ∈ ℝ^{km×km} and L_33, L_44 ∈ ℝ^{kp×kp} are lower triangular and S_i, i = 1, …, 4 satisfy S_i^T S_j = Iδ_ij, it follows from equations 5.8 and 5.11 that the future output Y_fd can be expressed as [8]:

  Y_fd = Γ_k X_k + Ψ_k L_11 S_1^T = L_41 S_1^T + L_42 S_2^T + L_43 S_3^T + L_44 S_4^T.
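The LQ decompositions of equations 5.9 and 5.11 can be computed from a QR factorisation of the transposed, stacked data matrix; a minimal sketch (the function name is an assumption):

```python
import numpy as np

def lq_decomposition(W):
    """LQ decomposition used in eqs. 5.9 and 5.11: W = L Q^T, with L
    lower triangular (for W with at least as many columns as rows) and
    Q^T having orthonormal rows. Computed via the QR factorisation of
    W^T, since W^T = Q R implies W = R^T Q^T."""
    Q, R = np.linalg.qr(np.asarray(W, float).T)
    return R.T, Q.T   # L = R^T, and the rows of Q.T are orthonormal
```

Stacking [R; U; Y] row-wise, applying `lq_decomposition`, and partitioning L conformably yields the M_ij blocks of equation 5.9; repeating the step on the deterministic components yields the L_ij blocks of equation 5.11.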
(5.12)

Post-multiplying expression 5.12 by S_2 gives L_42 = Γ_k X_k S_2, and, further assuming that X_k S_2 has full rank (refer to [90] for the theoretical justification), the extended observability matrix satisfies [8]:

  Im(Γ_k) = Im(L_42),  (5.13)

where the singular value decomposition of L_42,

  SVD(L_42) = [U_1  U_2] [Σ_1  0; 0  Σ_2] [V_1^T; V_2^T] ≈ U_1 Σ_1 V_1^T,  (5.14)

leads to the estimate of the extended observability matrix defined as:

  Γ_k = U_1 √Σ_1.  (5.15)

Only the most significant singular values Σ_1, up to the nth order, are retained to approximate the extended observability matrix defined by equation 5.15; the least significant singular values Σ_2 are discarded in equation 5.14. The estimated system matrices (Â, B̂, Ĉ, D̂) can be determined from the estimated extended observability matrix by making use of its shift-invariance property. Refer to Appendix B for an in-depth discussion of the shift-invariance property. The estimate Ĉ can be determined as follows:

  Ĉ = Γ_k(1 : p, 1 : n),  (5.16)

where n is the order of the estimated plant (determined by the number of significant singular values in Σ_1) and p the number of outputs. The estimate Â can be determined as follows:

  Â = Γ_{k−1}^† Γ_k↑,  (5.17)

where the dagger † denotes the pseudo-inverse and ↑ denotes the shift-invariance operator. Since U_2^T Γ_k = 0 and U_2^T L_42 = 0, pre-multiplying equation 5.12 by U_2^T and post-multiplying by S_1 yields [8]:

  U_2^T Ψ_k(B, D) L_11 = U_2^T L_41.  (5.18)

The term Ψ_k(B, D) is linear with respect to the (B, D) parameters, given the estimate of the extended observability matrix. The estimates of the (B, D) parameters can easily be obtained by the least squares method.
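The extraction of Γ_k, Ĉ and Â in equations 5.15-5.17, together with the zero-appended, stability-guaranteed variant of section 5.3.2 (equation 5.21), can be sketched as follows (illustrative NumPy; the function names are assumptions):

```python
import numpy as np

def estimate_A_C(L42, n, p, stable=False):
    """Sketch of eqs. 5.15-5.17: keep the n dominant singular values of
    L42 to form Gamma_k = U1 * sqrt(Sigma1), then read C and A off the
    extended observability matrix via shift invariance. With stable=True,
    a p x n block of zeros is appended to the shifted matrix before the
    least-squares solve (eq. 5.21), forcing the spectral radius of the
    estimate inside the unit circle."""
    U, s, _ = np.linalg.svd(np.asarray(L42, float))
    Gamma = U[:, :n] * np.sqrt(s[:n])              # eq. 5.15
    C = Gamma[:p, :]                               # eq. 5.16: first p rows
    if stable:
        shifted = np.vstack([Gamma[p:, :], np.zeros((p, n))])
        A = np.linalg.pinv(Gamma) @ shifted        # eq. 5.21
    else:
        A = np.linalg.pinv(Gamma[:-p, :]) @ Gamma[p:, :]   # eq. 5.17
    return A, C

def spectral_radius(A):
    """rho(A) = max_i |lambda_i(A)| (eq. 5.20)."""
    return np.max(np.abs(np.linalg.eigvals(np.atleast_2d(A))))
```

Note that Γ_k is only recovered up to a similarity transform, so Ĉ and Â represent the plant in an arbitrary state basis; the shift-invariance relation itself is basis-independent.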
Refer to Appendix B for further algorithmic detail on how the (B, D) parameters can be estimated by first estimating the Toeplitz matrix.

5.3.2 Guaranteed estimated plant stability

The system matrix, Â, is estimated by using the shift-invariance property of the estimated observability matrix:

  Â = Γ̂_{k−1}^† Γ̂_k↑.  (5.19)

For the estimated system matrix to be stable, it is necessary that its spectral radius lies within the unit circle:

  ρ(Â) ≜ max_i |λ_i(Â)| ≤ 1.  (5.20)

The spectral radius of the estimated system matrix is not guaranteed to lie within the unit circle when the Two Stage ORT method is used. The estimated plant model may thus be unstable, which would render it inadequate for fault monitoring. A possible solution that guarantees a stable system matrix was proposed by Maciejowski [91]. The cost of this guarantee is a loss of estimation accuracy, but in some applications that cost is outweighed by the advantage of guaranteed stability. Applications that can take advantage of this guaranteed stability are subspace algorithms that run online and unsupervised, such as adaptive control or fault monitoring [91]. Maciejowski [91] proves that, by appending a block of zeros to the shifted extended observability matrix before applying the shift-invariance property, the estimated system matrix is guaranteed to be stable. The guaranteed-stable system matrix can be estimated as follows [91]:

  Â = Γ̂_k^† [Γ̂_k↑; 0_{p×n}].  (5.21)

5.4 A Parametric Fault Detection Methodology

The utilisation of advanced methods for process supervision, fault detection and fault diagnosis is becoming increasingly important for many technical processes.
The efficiency with which these advanced process monitoring methods are implemented translates directly into improved process control reliability, safety and efficiency.

Classic approaches that have been used for process monitoring in the past include trend monitoring of measurable output variables. The disadvantage of these classical methods is that they do not provide deeper insight into the faults, and there is no way of isolating faults or detecting them in advance [12]. More recent solutions to fault detection and process monitoring are based on model-based platforms, where the input and output signals and the dynamics of process models are used to detect faults. Well-known model-based fault detection methods include parameter estimation, parity equations and state observers [56].

This dissertation investigates and tests the hypothesis that efficient and reliable fault detection of a process is possible by considering and monitoring process parameters. It is necessary to define the requirements of such a parametric fault detection methodology, as well as the assumptions under which such a fault detection method will operate.

5.4.1 Assumptions and Requirements

Prior to the development of a fault detection methodology, it is necessary to state some requirements and assumptions for such a fault detection method. The requirements of such a method are as follows:

R1: The developed fault detection methodology must not be computationally intensive, because it must be able to run online in real time.

R2: The method must not be dependent on the acquisition of large data sample bundles to detect a fault. It is thus required of the fault detection method to monitor the process by recursively calculating the process states or parameters.
R3: The developed fault detection methodology must have a fast parameter tracking and convergence capability.

R4: The parameter tracking and convergence capability must, however, be robust enough that its efficiency and accuracy are not jeopardised by the unwanted influence of unmeasured and measured disturbances.

R5: The time in which a fault is detected must be minimised. It is thus essential to combine fault detection with event forecasting and to consider fault prediction in scenarios where parameter drift causes slight or no obvious change in the measured system outputs.

The assumptions of such a method are as follows:

A1: A sufficiently accurate system model (at least 65% process output prediction accuracy) is available to serve as a model-based platform to estimate and predict the process states and parameters used for fault detection.

A2: Process parameters are considered to be time invariant; an abrupt change of a system parameter, or continuous parameter drift, will be considered and detected as a system fault.

A3: Measured and unmeasured noise models are either available, or the influence of these noise sources on system behaviour is minimised. This assumption limits the number of false fault detection alarms.

5.4.2 A Parametric Fault Detection Method using Kalman Filtering

Process monitoring and fault detection methods using process parameters have been studied extensively since the early '80s [26]. However, due to the difficulty of identifying accurate system models from engineering first principles, parameter monitoring fault detection methods have not been the predominant choice in process fault detection. The estimation of process parameters using the least squares estimation method is also computationally intensive and is thus not a feasible solution for online process parameter monitoring.
Subspace methods have proven to be computationally efficient, and no a-priori process knowledge is necessary to estimate a system model. Subspace system identification methods thus allow the user to identify black-box models, which can be used to monitor processes. The challenge in fault detection with subspace methods lies in how to monitor and evaluate the vast number of system parameters efficiently and accurately. Complete re-identification of the process using subspace methods, as would be needed to track parameter changes, is also not a feasible solution to fault detection, because of the vast amount of data samples required, which must moreover contain well-excited process dynamics. An elegant solution to the fault detection problem using subspace identification methods is therefore to identify a system process model using a subspace method, and then to update the parameters of the subspace model periodically as new data become available. By updating the process parameters, without complete system re-identification, the user is able to track the parameter changes that are relevant to the fault detection problem.

Unmeasured and Measured Noise Attenuation

A requirement of a reliable and effective fault detection methodology is that it must be robust and not generate false fault detection alarms. One cause of false fault detection alarms is unmeasured noise sources inflicted on the process. These noise sources must either be estimated and incorporated into the noise model, or their influence must be attenuated. An elegant solution that partially diminishes the influence of unmeasured disturbances is to take the orthogonal projection of the measured data onto a Hilbert space [8], thus partially removing the stochastic noise components from the data used for parameter updates and fault detection.
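In finite-data terms, the projection described above amounts to projecting the rows of a data matrix onto the row space spanned by the exogenous inputs; a minimal sketch of this ORT-style pretreatment (the function name and the pseudo-inverse formulation are assumptions for illustration):

```python
import numpy as np

def deterministic_projection(W, R):
    """Orthogonal projection of the rows of the data matrix W onto the
    row space of the exogenous-input matrix R:
        W_d = W R^T (R R^T)^+ R.
    The projected W_d retains the part of W linearly related to R; the
    stochastic component orthogonal to R is (partially) removed."""
    W = np.asarray(W, float)
    R = np.asarray(R, float)
    return W @ R.T @ np.linalg.pinv(R @ R.T) @ R
```

Applying this projection to the measured U_i, Y_i before the parameter update reduces the chance that unmeasured disturbances trigger false fault alarms.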
This solution is similar to the data preprocessing proposed by Katayama and Tanaka [8].

Extension of Kalman Filtering Theory for Parameter Estimation

The re-identification of accurate system parameters for a process that operates in steady state is a very time-consuming operation. For processes in the chemical process industry, it can take hours, if not days, to fully identify all the process dynamics through sufficient process excitation. A feasible alternative to re-identifying the process is to retain the previous process model structure and only update the model parameters as new data become available.

The least squares estimation method is the preferred choice of parameter estimation in the Prediction Error Method (PEM) [30, 7, 26]. Complementary alternatives to the least squares estimation method include the recursive least squares estimation method, which is computationally efficient and suited to real-time process parameter estimation, and the weighted least squares estimation method, which caters for data outliers and enhances the robustness of the parameter estimation process [44, 46]. Section 2.3.4 gives a complete overview of the mentioned methods with their corresponding advantages and weaknesses.

The Kalman filter is very closely related to the recursive least squares estimation method [28]. However, the primary use of the Kalman filter is as a process state observer, not a parameter observer. Modifications and extensions of the Kalman filter algorithm include the extended Kalman filter (EKF) algorithm [57, 92] and the unscented Kalman filter (UKF) [69]. The EKF appends the states of the process that need to be estimated with unidentified process parameters for additional estimation, where the UKF is the extension of the EKF to nonlinear systems that have discontinuities.
An in-depth discussion is found in section 2.3.4. Although the primary use of the EKF is to enlarge the state vector with unknown parameters for estimation, the estimation of the process states can become a computational burden if not truly needed [92]. In this dissertation, only the estimation of the system parameters needed for fault detection is considered. A fault detection method incorporating the EKF is proposed, which only estimates the system parameters as new data vectors U_i, Y_i become readily available. For completeness' sake, the state-space system of a process can be defined as [92]:

  x(k+1) = Ax(k) + Bu(k) + v(k)
  y(k) = Cx(k) + Du(k) + w(k),  (5.22)

where u(k), y(k) and x(k) are the input, output and process states respectively, and v(k), w(k) are independent stochastic variables that act as unmeasured noise sources on the process dynamics. To reduce the probability of unmeasured disturbances causing false fault detection alarms, the data vectors U_i, Y_i generated by the system equation 5.22 are projected onto a Hilbert space generated by second-order random variables [8]. Since the stochastic components y_s(k), u_s(k), v(k), w(k) are orthogonal to the defined Hilbert space, equation 5.22 can be redefined as follows:

  x_d(k+1) = Ax_d(k) + Bu_d(k)
  y_d(k) = Cx_d(k) + Du_d(k).  (5.23)

The deterministic system state-space equation 5.23 can formally be redefined for the parameter estimation problem as follows [28]:

  θ(k+1) = θ(k) + V(k)
  Y(k) = X(k)θ(k),  (5.24)

where V(k) is an n-dimensional matrix representing the time variance of the parameters, which is modelled as a random walk or drift [28]. However, this drift or random walk is regarded as static and time invariant over a short period of time (at least on the order of the settling time of the process).
For the state-space representation (equation 5.23), the parameters estimated by the EKF are defined as:

  θ(k) = [A(θ)  B(θ); C(θ)  D(θ)],   Y(k) = [x_d(k+1); y_d(k)],   X(k) = [x_d(k); u_d(k)].  (5.25)

The extended Kalman filter used for estimating the process parameters can formally be defined as follows [92, 28]:

  θ̂(k) = θ̂(k−1) + γ(k)e(k)
  e(k) = Y(k) − θ̂(k−1)X(k)
  γ(k) = P(k−1)X(k) / (X^T(k)P(k−1)X(k) + λ)
  P(k) = (I − γ(k)X^T(k))P(k−1) + V,  (5.26)

where θ̂(k), X(k) and e(k) are the system parameters, the estimated system states and the parameter prediction error respectively. The variance matrix of the parameters, V, is updated periodically as newly estimated parameters at times t = k, k+1, k+2, ... become readily available. This allows the EKF to track each parameter individually with its own forgetting factor, which is a significant advantage over the recursive least squares algorithm [28]. If no knowledge about the speed of the time-variant behaviour is available, the covariance matrix V can simply be set to V = ςI, 0 < ς ≪ 1 [28]. The latter implies a uniform tracking speed for all the parameters until tracking-speed statistics have been determined for the parameters. The adaptation matrix γ(k), which determines the amount of parameter adjustment, is known as the Kalman gain.

Parameter Matrix Norm Measurement and Fault Detection Hypothesis

The system parameters θ(k) = [A(θ)  B(θ); C(θ)  D(θ)] can be updated online in real time. This is accomplished by solving equation 5.26 recursively for each newly available data sample, U_i, Y_i, until a new set of system parameters converges to an updated parameter set θ̂(k+1) = [Â(θ)  B̂(θ); Ĉ(θ)  D̂(θ)]. This parameter set is in essence a prediction of what the parameters of the system will be in the
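One recursion of equation 5.26, written for a single output row of the parameter matrix, can be sketched as follows (illustrative NumPy; the function name, the choice λ = 1 and the default V are assumptions, and the garbled † symbols of the source are read as transposes, consistent with the standard recursive least squares form):

```python
import numpy as np

def ekf_parameter_update(theta, P, X, Y, lam=1.0, V=None):
    """One recursion of eq. 5.26 for one parameter row-vector theta:
    prediction error e(k), Kalman gain gamma(k), parameter update and
    covariance update with random-walk term V (uniform tracking speed
    when V = varsigma * I)."""
    theta, X = np.asarray(theta, float), np.asarray(X, float)
    if V is None:
        V = 1e-6 * np.eye(len(X))             # assumed small random-walk term
    e = Y - theta @ X                         # prediction error e(k)
    gain = (P @ X) / (X @ P @ X + lam)        # Kalman gain gamma(k)
    theta_new = theta + gain * e              # parameter update
    P_new = (np.eye(len(X)) - np.outer(gain, X)) @ P + V
    return theta_new, P_new, e
```

Iterating this update over incoming regressor/output pairs tracks the parameters recursively, without re-running the full subspace identification.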
This parameter set Ĉ(θ) D̂(θ) is in essence a prediction of what the parameters of the system will be in the Department of Electrical, Electronic and Computer Engineering 98 Chapter 5 A Subspace SID and FD Methodology for the Benfield Process following time step given the data sample, U i , Y i . To detect a fault in the predicted parameter set, it is thus necessary to compare the predicted parameter set, θ̂(k + 1), with the previous parameter set, θ̂(k). This can be accomplished by considering appropriate matrix measures, which will accentuate any fundamental parameter matrix differences. The k•k∞ matrix norm is proposed for detecting discrepancies between consecutive parameter sets. The k•k∞ matrix norm used on the parameter sets can formally be defined as follows: dθ̂ = dt θ̂(ki+1 ) − θ̂(ki ) ∞ ki+1 − ki . (5.27) From equation 5.27, the rate of change for a parameter set is calculated using the k•k∞ matrix norm, where an abrupt change or continuous drift will be classified as a fault. The rate of change for consecutive parameter set matrix norms needs to be evaluated, before a sound decision can be made whether a fault has occurred. This can be done by formulating a hypothesis test, which evaluates the rate of change for the parametric matrix norm. The hypothesis test is defined as follow: dθ̂ ' 0 → No fault dt (5.28) dθ̂ H1 : > ξ → Fault . dt The threshold value, ξ, determines the aggressiveness of the fault detection evaluation criteria, and is set accordingly to three (this depends on the unidentified measurement and process noise present). By choosing a large threshold value, ξ, the fault detection method will be robust against false alarms caused by noise interference. However, small changes in the process dynamics, reflected in the process parameters, might not be detected. By choosing a small threshold value, ξ, the fault detection method will be sensitive to parameter changes but the probability of false fault detection alarms will increase. 
The threshold value, \(\xi\), is the percentage change on the previous maximum system parameter deviation error. A typical fault will thus be detected if there is a sudden error deviation of 3% or more on the previous maximum calculated parameter error deviation.

5.5 Conclusion

In this chapter, the development of a multivariable linear subspace identification methodology for the Benfield process, operating in a closed-loop environment under Robust Multivariate Predictive Control Technology (RMPCT), was discussed. The developed subspace identification methodology is based on the work of Katayama and Tanaka [8], who developed a subspace identification methodology for SISO systems. The multivariable subspace identification methodology was developed under the same assumptions stated by Katayama and Tanaka [8] for the SISO case. A weakness of the SISO subspace identification methodology of Katayama and Tanaka [8] is that the stability of the identified system model cannot be guaranteed. This shortcoming was addressed by implementing a solution proposed by Maciejowski [91], which guarantees system stability. The guaranteed stability is achieved by first appending the shifted extended observability matrix with a block of zeros, before applying the shift invariance property of the estimated extended observability matrix. The method of Maciejowski [91] guarantees the estimation of a stable system matrix; the cost of this benefit is a loss of estimation accuracy, but in some applications that cost is outweighed by the advantage of guaranteed stability.
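The zero-appending construction can be sketched directly: instead of solving the shift-invariance relation \(\Gamma_{\uparrow} A = \Gamma_{\downarrow}\), the shifted matrix is padded with a block of zeros and the full observability matrix is used on the left. This is a minimal sketch assuming \(\Gamma\) stacks block rows \(C, CA, \ldots, CA^{f-1}\) with \(p\) output rows per block; the function name is illustrative:

```python
import numpy as np

def stable_A_from_observability(Gamma, p):
    """Estimate A from the extended observability matrix Gamma with the
    zero-appending trick of Maciejowski [91]: the down-shifted matrix is
    padded with a p x n zero block before the least-squares solve, which
    keeps the spectral radius of the estimate within the unit boundary."""
    n = Gamma.shape[1]
    shifted = np.vstack([Gamma[p:, :], np.zeros((p, n))])
    A, *_ = np.linalg.lstsq(Gamma, shifted, rcond=None)
    return A
```

Even when the observability matrix is built from an unstable system, the least-squares solution against the zero-padded target has all eigenvalues on or inside the unit circle, at the cost of some bias in the pole estimates.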
Possible applications that can take advantage of this guaranteed stability are subspace algorithms that run online and unsupervised, such as adaptive control or fault monitoring [91]. In this chapter a model-based fault detection methodology, used to monitor parametric discrepancies in the system model, was also developed. The accurate identification of a system using a multivariable linear subspace identification methodology requires a vast amount of sampled data, where the data must be rich and informative. It is thus not feasible to repeatedly re-identify a subspace model of the Benfield process operating in steady state for use in fault detection. The model-based solution to fault detection proposed in this chapter makes use of the extended Kalman filter to estimate the subspace parameters, instead of the system states, as new measured system data \(U_i, Y_i\) become available. The estimated system parameters are thus updated with each new data sample \(U_i, Y_i\), where discrepancies between the system parameters are monitored by considering the infinity matrix norm. A fault detection hypothesis test was defined which evaluates the infinity matrix norms and determines whether a fault has occurred or not. A threshold value, \(\xi\), is used to adjust the aggressiveness of the fault detection methodology. By choosing a large threshold value, \(\xi\), the fault detection method will be robust against false alarms caused by noise interference, but small changes in the process dynamics, reflected in the process parameters, might not be detected. By choosing a small threshold value, \(\xi\), the fault detection method will be sensitive to parameter changes, but the probability of false fault detection alarms will increase.
Preprocessing of measured data samples \(U_i, Y_i\) was considered to reduce the probability of false alarms. Preprocessing includes the projection of the measured data on a defined Hilbert space, orthogonal to the stochastic data components (disturbances present in the data). This idea is similar to the data preprocessing proposed by Katayama and Tanaka [8] prior to system identification.

Chapter 6
Subspace SID and FD Methodology Evaluation: Simulations

6.1 Introduction

This chapter validates the subspace system identification and fault detection methodologies that were proposed and developed in chapter 5. The evaluation and validation of these methodologies are accomplished by extensive simulation and analysis of the simulation results. Prior to the experimental results of each developed methodology, the experimental setup of the simulation environment will be discussed. In section 6.2 a discussion follows on the environmental factors that will dominantly determine the experimental outcome of the subspace methodology testing. The environmental factors include the controller dynamics, the nominal process, the measured and unmeasured disturbances, and the persistent excitation signals used for system identification. Four experiments will be conducted for the subspace methodology. These experiments will determine the capability of the subspace identification methodology in identifying the model of a process operating in a closed-loop environment with white noise (flat power spectrum; see section 6.3.1) interference, as well as coloured noise interference (see section 6.3.2). Further experiments that verify the robustness of the subspace identification methodology and the stability of the identified models will be conducted, with the results discussed in sections 6.3.3 and 6.3.4 respectively.
For the fault detection methodology, the experimental environment is defined prior to any experiments in section 6.4. A discussion on the types of faults, and a motivation on how these faults are applicable to a real operating process, follows in section 6.4.1. Experiments for the fault detection methodology include the detection of abrupt faults (see section 6.5.1), the detection of incipient faults (see section 6.5.2) and the validation of the robustness of the fault detection methodology (see section 6.5.3).

6.2 Experimental Setup: Subspace Identification

To evaluate the subspace identification methodology, it is necessary to define a closed-loop environment that can be used for the validation and evaluation of the proposed method. Figure 6.1 illustrates the closed-loop environment used to test the proposed subspace methodology. From figure 6.1, it is necessary to define and discuss the four principal factors that will determine the dynamics of the environment and the outcome of the experiment. These are:

• The RMPCT controller, \(C(G_0/P, M)\), as discussed in section 3.4.2.
• The nominal plant model, \(G_0(\hat{\theta})\), that was identified and analysed in section 3.3.
• Unmeasured and measured disturbances that act on the measured and controlled variables.
• The persistent excitation signals, \(PE_1\) and \(PE_2\), used for the identification process.

6.2.1 Controller Dynamics and Simulation Configuration

One particular MPC technology, called Robust Multivariate Predictive Control Technology (RMPCT) and developed by Honeywell, is a control strategy used for systems that are known to be complex and difficult to control. Synfuel has implemented an RMPCT control philosophy to control the Benfield process. The advantage of RMPCT is that it permits the constraints on some or all controlled variables to be de-activated during optimisation.
This allows the optimisation process to find an unconstrained solution within a reasonable time [85].

Figure 6.1: Closed-loop environment used for subspace SID evaluation.

The RMPCT control philosophy is based on MPC principles, where it uses a finite impulse response (FIR) model form, one for each controlled-manipulated variable pair. RMPCT uses two different kinds of optimisation functions, namely error and profit optimisation. Since RMPCT is closely related to MPC, an MPC controller will be used to control the nominal plant model. Features of the MPC control algorithm that can be configured for closed-loop control are the prediction horizon, \(P\), and the control horizon, \(M\). Additional features which can be specified for the MPC control algorithm include structured weights on the inputs, \(\Upsilon_l^u\), and input increments, and weights on the outputs, \(\Upsilon_l^y\). The stability of the closed-loop system primarily depends on the prediction horizon, \(P\), the control horizon, \(M\), and the input and output weighting matrices, \(\Upsilon_l^u\) and \(\Upsilon_l^y\) respectively. The aggressiveness of the control action is decreased by decreasing the control horizon relative to the prediction horizon. Tuning the input weighting matrix, \(\Upsilon_l^u\), also has the effect of making the control action less aggressive [84]. For the simulation case study, the execution time of the controller will be 15 s, which is similar to the configuration of the controller used by Sasol, Synfuel [85]. The prediction horizon, \(P\), and control horizon, \(M\), will be chosen such that a stable closed-loop system results. The actual prediction horizon and control horizon values used by Sasol will differ from the values chosen, since it is impossible to replicate a simulation environment that is subjected to all the factors that have an influence on the control philosophy.
Table 6.1 defines the configuration parameters used in the simulation case study.

Table 6.1: MPC configuration parameter settings.

  Parameter            Symbol          Value
  Prediction horizon   P               10 samples
  Control horizon      M               2 samples
  Execution time       T_k             15 s
  Input weights        \(\Upsilon_l^u\)   [1 1 1 1]
  Output weights       \(\Upsilon_l^y\)   [1 1 1 1]

6.2.2 Nominal Model of Benfield Process

In chapter 4, a nominal model of the Benfield process operating in a closed-loop system was identified and validated by using raw plant data as well as open-loop step response data. Only the Potassium Carbonate wash stage was identified and modelled. The plant which will be used for simulation is a linear multivariable plant with four controlled variables (four inputs) and four manipulated variables (four outputs). The controlled variables and manipulated variables are defined in table 3.1. The process model of the Potassium Carbonate wash stage that will be used for simulation is:

\[
\begin{bmatrix} y_{MV_1} \\ y_{MV_2} \\ y_{MV_3} \\ y_{MV_4} \end{bmatrix} =
\begin{bmatrix} \frac{0.0275}{10.3s+1} \\[4pt] \frac{0.0535(1.59s+1)}{2.04s^2+3.35s+1} \\[4pt] \frac{0.0541(1.38s+1)}{0.855s^2+2.54s+1} \\[4pt] \frac{0.0538(2.44s+1)}{1.54s^2+2.49s+1} \end{bmatrix} u_{CV_1}
+ \begin{bmatrix} \frac{-0.00511(33.4s^2+5.74s+1)}{166s^3+42.2s^2+10.9s+1}e^{-3s} \\[4pt] \frac{0.00865}{1.27s^2+2.64s+1} \\[4pt] \frac{0.0052}{1.55s+1} \\[4pt] 0 \end{bmatrix} u_{CV_2}
+ \begin{bmatrix} \frac{-0.0129(89.7s^2+11.7s+1)}{345s^3+92.5s^2+18s+1}e^{-3s} \\[4pt] \frac{0.0121}{3.8s+1}e^{-8s} \\[4pt] \frac{0.02}{3.36s+1}e^{-6s} \\[4pt] \frac{0.0159(2.29s+1)}{1.43s^2+3.08s+1} \end{bmatrix} u_{CV_3}
+ \begin{bmatrix} \frac{-0.0176}{24.9s+1}e^{-5s} \\[4pt] 0 \\[4pt] 0 \\[4pt] 0 \end{bmatrix} u_{CV_4}. \tag{6.1}
\]

Although the validation results of the identified model of the Potassium Carbonate wash stage (using raw plant data) were comparable for the identified model obtained by Sasol (64% average fit) and the identified model obtained using the System Identification Toolbox of MATLAB (63% average fit; see section 4.4 for a discussion), it was decided to use the identified model of Sasol, which had a 1% improvement on the validation fit.
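As a quick sanity check on the entries of equation 6.1, a single transfer function can be simulated in isolation; the sketch below (assuming scipy is available) uses the first entry, the first-order lag from \(u_{CV_1}\) to \(y_{MV_1}\), and verifies that its step response settles at the DC gain of 0.0275:

```python
import numpy as np
from scipy import signal

# First entry of equation 6.1: gain 0.0275, time constant 10.3 s.
G11 = signal.TransferFunction([0.0275], [10.3, 1.0])

t = np.linspace(0.0, 100.0, 500)   # ~10 time constants
t, y = signal.step(G11, T=t)       # unit step response; settles at the DC gain
```

The same pattern applies to the other non-zero entries; the delayed entries additionally need the dead time \(e^{-Ls}\) handled by shifting the response in time.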
6.2.3 Measured and Controlled Variable Disturbances

To create a realistic closed-loop simulation environment, it is important to introduce applicable noise sources to the system. These noise sources can be either measured disturbances or unmeasured disturbances. The spectra of these noise sources can also vary between white noise spectra and coloured noise spectra. It has been shown that the best linear unbiased estimate of the true parameters, \(\hat{\theta}_{Opt}\), can be found when the noise acting on the system is white [67]. If the noise is not white, it is intuitively clear that the noise properties have to be considered in the estimation. With the least squares estimation method the system parameters can be estimated when coloured noise is introduced to the system, as follows [28]:

\[
\hat{\theta} = \left( X^{\dagger}\Omega^{-1}X \right)^{-1} X^{\dagger}\Omega^{-1}y, \tag{6.2}
\]

where \(\Omega\) is the covariance matrix of the coloured noise, and \(X\) and \(y\) are the regression matrix and the output of the system respectively. Although equation 6.2 can improve the estimation quality considerably if the noise is highly correlated, the major problem in practice is to determine the noise covariance matrix, \(\Omega\) [28]. The strength of the subspace identification methodology proposed in section 5.3.1 is that no a-priori information about the noise, even for coloured noise, is necessary. The projection of measured variables on a subspace orthogonal to stochastic noise components (proposed by Katayama and Tanaka [8]) partially removes the deteriorating influence that noise has on the estimation procedure. To further attenuate the influence of unwanted noise signals, the estimation of the system parameters is accomplished by implementing the ORT method. It would be beneficial to consider the influence of both coloured noise and white noise on the closed-loop system.
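The generalised least squares estimator of equation 6.2 is a one-line weighted solve once \(\Omega\) is known; a minimal sketch (the function name is illustrative, and \(\dagger\) is taken as the transpose):

```python
import numpy as np

def gls_estimate(X, y, Omega):
    """Generalised least squares (equation 6.2): weight the residuals by
    the inverse of the coloured-noise covariance Omega."""
    Oi = np.linalg.inv(Omega)
    return np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)
```

On noise-free data the estimate is exact for any positive-definite \(\Omega\); the practical difficulty noted above is obtaining \(\Omega\) in the first place, which is precisely what the orthogonal-projection approach avoids.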
It is further assumed that all noise sources are unmeasured, since any measured noise source would automatically be incorporated into the modelling of the system as well as the control of the system. The coloured noise model in figure 6.1 is given by [8]:

\[
H_p(z) = \frac{z^3 - 1.56z^2 + 1.045z - 0.3338}{z^3 - 2.35z^2 + 2.09z - 0.6675}, \tag{6.3}
\]

where it is assumed that coloured noise is only present on the measured outputs. Figure 6.2 illustrates the Bode plot of \(H_p(z)\), which shows that the noise spectrum has a sharp peak around 0.75 rad/s. Figure 6.3 illustrates the FFT of the coloured noise model, which clearly shows the influence of noise in the low frequency band, where the Benfield process is especially sensitive to noise interference.

Figure 6.2: Bode plot of the coloured noise model.
Figure 6.3: FFT of the coloured noise model.

6.2.4 Persistent Excitation Signals

Persistently exciting signals on the reference inputs (\(PE_1\) and \(PE_2\), illustrated in figure 6.1) are necessary to excite all the plant dynamics sufficiently, so that the measured data sets are informative enough for system identification. Multivariate pseudo-random binary signals (PRBS) have been proposed and used by De Klerk [7]. However, PRBS is inappropriate for nonlinear dynamic systems [28]: the amplitude of a PRBS is uniform, which is not suitable to excite all nonlinear plant dynamics. Since the subspace identification method is a linear SID method, PRBS should suffice for process identification. Nevertheless, an amplitude-modulated PRBS (APRBS) signal will be used for identifying the process dynamics. The objective is to generate data sets that are as rich and informative as possible for identification.
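An APRBS is simply a piecewise-constant signal whose hold times and amplitudes are both randomised; a minimal sketch (the function name and parameters are illustrative, not the exact signal generator used for the experiments):

```python
import numpy as np

def aprbs(n, t_min, t_max, a_min, a_max, rng=None):
    """Amplitude-modulated PRBS: each segment holds a random amplitude in
    [a_min, a_max] for a random duration in [t_min, t_max] samples, so both
    the frequency content and the amplitude range of the plant are excited."""
    if rng is None:
        rng = np.random.default_rng()
    u = np.empty(n)
    i = 0
    while i < n:
        hold = int(rng.integers(t_min, t_max + 1))
        u[i:i + hold] = rng.uniform(a_min, a_max)
        i += hold
    return u
```

Unlike a plain PRBS, which switches between only two levels, the generated signal visits many amplitude levels, which is what makes it suitable for exciting amplitude-dependent dynamics.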
By considering the cross-correlation of the residuals, it can be determined whether the subspace SID method is appropriate to model and identify the Benfield process. Synfuel has used structured open-loop step tests to excite the plant dynamics [21], similar to the APRBS signals used for system identification in the simulation environment. The Benfield process is additionally naturally excited by disturbances on the rich (loaded with CO2) Potassium Carbonate solution feed. However, this will not be included in the simulation, since the amplitude-modulated PRBS used for identification will provide sufficient process excitation.

6.3 Validation of the Subspace Methodology: Simulations

Structured simulation tests were conducted in order to validate and test the 2-ORT subspace SID methodology. This section describes the different experiments that were conducted for the closed-loop system illustrated by figure 6.1.

6.3.1 Experiment 1: System Identification with White Noise Interference

Objective

The objective of this experiment is to investigate the robustness of the 2-ORT subspace identification method when the closed-loop system is subjected to white noise interference. Persistently exciting signals (noise signals with zero mean) will be applied to both reference inputs, while unmeasured white noise sources will be introduced on both the measured variables and the controlled variables respectively.

Experimental parameters

Parameters that are of importance for experiment 1 are tabulated in table 6.2.

Table 6.2: Experiment 1: Parameter configuration setup.

  Description   Value   Unit
  PE1           -10     dB
  PE2           -10     dB
  ηu            -30     dB
  ηy            -30     dB
  Hp(z)         1       Gain
  Hc(z)         1       Gain

From table 6.2 one will notice that the gain of Hc(z) is one.
This allows an additional white noise disturbance to be present on the controlled variables, which is not the case in the coloured noise simulation case study, where a coloured noise source is used.

Graphs of measurement

Apart from using only the 2-ORT subspace method to identify the Benfield process, it would be insightful to implement the well-known N4SID and ARX system identification methods in order to compare these methods. Table 6.3 tabulates the validation fit results for the three identification methodologies for a closed-loop system as illustrated in figure 6.1.

Table 6.3: Validation fit results for 2-ORT, N4SID and ARX with white noise interference.

       ARX      N4SID     2-ORT
  y1   75.85%   28.21%    71.60%
  y2   75.86%   3.72%     77.04%
  y3   75.70%   0.3897%   74.22%
  y4   76.40%   0.6069%   77.84%

The validation fit for the 2-ORT subspace identification method is illustrated in figure 6.4. This validation fit spans 1000 samples, where the sampling interval equals the controller execution rate, Ts = 15 s.

Figure 6.4: Validation fit for the 2-ORT subspace identification method.

Residual analysis bears information about the quality of the identified model. The quality of the identified model is determined by how accurately the model can predict or estimate actual system outputs. The residual analysis results for the identified Benfield process model are illustrated in Appendix C, where the autocorrelation and cross-correlation of the residuals were considered.

Description of results

From the validation fit results tabulated in table 6.3, one concludes that the bias estimation problem experienced with traditional subspace methods implemented for closed-loop system identification was addressed.
The 2-ORT subspace method has managed to identify the Benfield process operating in a closed-loop system environment, and has estimated system parameters that are accurate and unbiased under closed-loop plant operation. The 2-ORT subspace method performed just as well as the traditional ARX identification method, where PEM was used to estimate the system parameters. The advantage of using the 2-ORT subspace method is that it carries a much smaller computational burden to estimate the system matrices. Also, no a-priori knowledge of the system structure or system order is necessary, which is not the case with the traditional system identification methods. The residual analysis results of the identified Benfield process, as illustrated in figure 6.5 (see Appendix C for additional residual analysis results), clearly indicate that the identified model is a reasonably good representation of the real process. This conclusion is drawn from the fact that the autocorrelation and cross-correlation results of the residuals all lie within or close to the 99.9% confidence interval boundaries.

Figure 6.5: The auto-correlation and cross-correlation of residuals and MV1 with output CV2.

6.3.2 Experiment 2: System Identification with Coloured Noise Interference

Objective

The objective of this experiment is to investigate the influence of coloured noise on the system identification process. In the discussion in section 6.2.3 it was stated that with white noise interference it is possible to obtain unbiased parameter estimation results. Coloured noise will lead to biased parameter estimates, and thus render the identified process model unsuitable for further use.
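The coloured noise for this experiment can be generated by filtering white noise through \(H_p(z)\) of equation 6.3; a minimal sketch (assuming scipy is available; the coefficient ordering follows `scipy.signal.lfilter`, descending powers of \(z\)):

```python
import numpy as np
from scipy import signal

# Coefficients of the coloured noise model Hp(z) (equation 6.3).
b = [1.0, -1.56, 1.045, -0.3338]    # numerator
a = [1.0, -2.35, 2.09, -0.6675]     # denominator: poles at 0.75 and 0.8 +/- 0.5j

rng = np.random.default_rng(0)
w = rng.standard_normal(20000)      # white driving noise
v = signal.lfilter(b, a, w)         # coloured measurement noise

# The coloured sequence is strongly correlated at lag 1, unlike the input.
rho1 = np.corrcoef(v[:-1], v[1:])[0, 1]
```

The resonant pole pair at \(0.8 \pm 0.5j\) produces the sharp low-frequency spectral peak described in section 6.2.3, which is exactly the correlation structure that biases the least squares and N4SID estimates below.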
In practice it is usually assumed that the disturbances inflicted on the system have a flat noise spectrum, but when coloured noise is present, further identification of accurate noise models is necessary for accurate system identification results. This experiment will verify the effectiveness of projecting the measured data on a subspace which is orthogonal to the stochastic noise components, thereby removing the interference of any noise. Again, it would be insightful to compare the 2-ORT subspace identification method with the traditional ARX method, using PEM to estimate system parameters, as well as the N4SID subspace method.

Experimental parameters

Parameters that are of importance for experiment 2 are tabulated in table 6.4. A discussion of the coloured noise filter, Hp(z), is found in section 6.2.3. Coloured noise is introduced in the frequency band where the Benfield process is the most sensitive to interference.

Table 6.4: Experiment 2: Parameter configuration setup.

  Description   Value                                                            Unit
  PE1           -0.1                                                             dB
  PE2           -0.1                                                             dB
  ηu            -25                                                              dB
  ηy            -25                                                              dB
  Hp(z)         (z³ - 1.56z² + 1.045z - 0.3338)/(z³ - 2.35z² + 2.09z - 0.6675)   Gain
  Hc(z)         1                                                                Gain

Graphs of measurements

Table 6.5 tabulates the validation fit results for the three identification methodologies, similar to experiment 1, for a closed-loop system as illustrated in figure 6.1.

Table 6.5: Validation fit results for 2-ORT, N4SID and ARX with coloured noise interference.

       ARX        N4SID      2-ORT
  y1   (-∞, ∞)    (-∞, ∞)    69.67%
  y2   (-∞, ∞)    (-∞, ∞)    64.67%
  y3   (-∞, ∞)    (-∞, ∞)    66.76%
  y4   (-∞, ∞)    (-∞, ∞)    67.13%

The validation fit for the 2-ORT subspace identification method, with coloured noise interference, is illustrated in figure 6.6. This validation fit spans 1000 samples, where the sampling interval equals the controller execution rate, Ts = 15 s.
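The validation fits quoted in tables 6.3 and 6.5 are percentage fits between measured and simulated outputs. The text does not restate the formula, but the System Identification Toolbox convention (normalised root-mean-square error, as in MATLAB's `compare`) is the natural assumption here; a sketch:

```python
import numpy as np

def fit_percent(y, y_hat):
    """Percentage validation fit, assuming the MATLAB NRMSE convention:
    100 * (1 - ||y - y_hat|| / ||y - mean(y)||).
    100% is a perfect fit; 0% is no better than predicting the mean,
    and a divergent model drives the fit towards minus infinity."""
    y = np.asarray(y, dtype=float)
    y_hat = np.asarray(y_hat, dtype=float)
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat) / np.linalg.norm(y - y.mean()))
```

Under this convention the \((-\infty, \infty)\) entries in table 6.5 correspond to simulated outputs that diverge, so no finite fit can be reported.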
The residual analysis results for the identified Benfield process model, where coloured noise is present on the measured outputs, are illustrated in figure 6.7 (see Appendix C for the complete residual analysis results). The autocorrelation and cross-correlation of the residuals were considered.

Description of results

The results in table 6.5 clearly indicate the devastating effect that coloured noise has on the identification process. For table 6.5, the power of the persistent excitation signals was increased, and the power of the coloured noise source was also increased slightly, so that an acceptable signal-to-noise ratio was obtained for simulation.

Figure 6.6: Validation fit for the 2-ORT subspace identification method.
Figure 6.7: The auto-correlation and cross-correlation of residuals and MV1 with output CV2.

The 2-ORT subspace identification method was able to produce acceptable model validation results. The traditional system identification methods, N4SID and ARX, failed to produce any consistent or reliable results. The coloured noise present resulted in biased estimates for the PEM framework used with the ARX identification methodology. The N4SID methodology suffered inflated variances and biased parameter estimates, since closed-loop data were used for identification, where the presence of coloured noise further deteriorated the parameter estimation results. Even with an increase in persistent excitation power and a drastic decrease in the power of the coloured noise source, both the N4SID and ARX methods were unable to produce stable models. The coloured noise present on the measured outputs results in measured data sets that produce biased parameter estimates.
These biased parameters lead to unstable systems. The only solution to prevent biased parameters is to introduce white noise power levels that are substantially higher than the noise levels of the coloured noise. This diminishes the effect of the coloured noise source, where one portion of the frequency band of the plant dynamics is excited more, with higher gains, than other sections of the frequency band. White noise leads to process dynamic excitation over the entire frequency band of plant operation, thus preventing the estimation of biased parameters. The projection of measured data on a subspace that is orthogonal to the stochastic components in the measured data is thus a feasible solution to prevent coloured noise from producing biased parameters. From the residual analysis results illustrated by figures C.16-C.30, the deteriorating effects of coloured noise can clearly be observed. The autocorrelation of the residuals needs to have a flat spectrum for an estimated model to be unbiased. It can thus be concluded that the estimated model is biased, since the autocorrelation of the residuals lies outside the 99.9% confidence boundaries. However, the validation fit is still acceptable, which indicates that the estimated model can be used for fault detection. The cross-correlation of the residuals indicates the independence between residuals and past inputs. The model estimated by the 2-ORT method is good, since the cross-correlation results of the residuals and past inputs lie within the 99.9% confidence region.

6.3.3 Experiment 3: Signal-to-Noise Ratio Robustness

Objective

The objective of this experiment is to determine the robustness and efficiency of the 2-ORT subspace identification method for the Benfield process operating in closed loop under different levels of persistent excitation power and noise levels.
In this experiment, it will be assumed that the noise levels remain constant, while the power levels of the reference signals are adjusted accordingly.

Experimental parameters

The experimental parameters are tabulated in table 6.6. It is assumed that white noise will act as a disturbance on both the measured variables and the controlled variables.

Table 6.6: Experiment 3: Parameter configuration setup.

  Description   Value     Unit
  PE1           [-20,9]   dB
  PE2           [-20,9]   dB
  ηu            -30       dB
  ηy            -30       dB
  Hp(z)         1         Gain
  Hc(z)         1         Gain

Graphs of measurement

Figure 6.8 illustrates the average percentage validation fit for the four measured variable outputs, \([y_{MV_1}\; y_{MV_2}\; y_{MV_3}\; y_{MV_4}]^T\). The model to be identified was subjected to constant white noise interference on both the controlled variables and measured variables, while the power of the reference signals used to excite the plant dynamics was varied according to table 6.6.

Description of results

From figure 6.8 it can be concluded that the 2-ORT subspace identification method is robust against white noise interference on both the controlled variables and measured variables respectively. The 2-ORT subspace identification method consistently identifies valid and stable process models. It is further observed that the 2-ORT subspace identification method is able to produce accurate model validation fits, with at least a 50% validation fit, for signal-to-noise ratios (SNR) satisfying equation 6.4:

\[
SNR = \frac{PE_i}{N_0} > 4.5. \tag{6.4}
\]

6.3.4 Experiment 4: Identified System Stability Investigation

Objective

The objective of this experiment is to verify the stability of the identified system. The stability of the identified system is guaranteed by the proposed method of Maciejowski [91]. The stability of the system will be evaluated by considering the spectral radius (see expression 5.20) of the system matrix.
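The stability check used throughout this experiment reduces to one eigenvalue computation per model update; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def spectral_radius(A):
    """Spectral radius rho(A) = max |eig(A)| (expression 5.20); the
    identified discrete-time model is stable when rho(A) < 1."""
    return float(max(abs(np.linalg.eigvals(A))))
```

Monitoring this scalar for each newly identified system matrix is what distinguishes case 1 from case 2 in the results below.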
Figure 6.8: Robustness and model identification consistency of the 2-ORT subspace identification method.

Experimental parameters

In this experiment, the closed-loop system will be subjected to excessive levels of unwanted unmeasured noise on both the controlled variables and measured variables. Excessive noise can cause the 2-ORT subspace identification method to identify models that are unstable, rendering them unsuitable for fault detection. The 2-ORT subspace identification method will be used to identify and update a nominal identified model with each new available data set \(U_N, Y_N\). If there is no improvement on the nominal model, then the new data set \(U_N, Y_N\) will be discarded, since it is not informative enough to provide new insight into unidentified dynamics. The eigenvalues of the system matrix of the nominal model will be monitored for signs of unstable poles. The parameters used for experiment 4 are tabulated in table 6.7.

Table 6.7: Experiment 4: Parameter configuration setup.

  Description   Value   Unit
  PE1           10      dB
  PE2           10      dB
  ηu            -0.1    dB
  ηy            -0.1    dB
  Hp(z)         1       Gain
  Hc(z)         1       Gain
  N             2000    Samples

Graphs of measurement

Figure 6.9 illustrates the spectral radius for the two case scenarios: case 1 is the case where no system matrix stability monitoring is done, while in case 2 the extended observability matrix was derived as proposed by Maciejowski [91] to guarantee system stability. Figure 6.10 illustrates the corresponding average validation fit percentage for case 1 and case 2 respectively.

Description of results

From figure 6.9 it can be concluded that system stability is guaranteed when the extended observability matrix is appended with a block of zeros, as proposed by Maciejowski [91].
The spectral radius for case 1 indicates that unstable models are unavoidable; in case 2, however, the maximum pole location always lies within the unit circle.

Figure 6.9: Spectral radius inspection for nominal model updates.

The drawback of guaranteed stability can be observed in figure 6.10. With guaranteed system stability, the average validation fit is not as good as when there is no restriction on pole locations. In case 1, where there is no restriction on pole locations, the validation fits are slightly better (78.172%) compared to the validation fits when there is a restriction on pole locations (77.796%, case 2). For fault detection and monitoring, this cost is outweighed by the benefit of guaranteed system stability.

Figure 6.10: Average validation fit percentage for nominal model updates.

6.4 Experimental Setup: Fault Detection

Although the fault detection methodology was developed as a separate unit, it relies on and functions closely with the 2-ORT subspace identification methodology. The 2-ORT subspace identification methodology needs a vast amount of measured data to estimate accurate system models, and it can take a considerable amount of time before an identified model is updated with system parameters that are more accurate and result in a better data validation fit. The fault detection methodology, on the other hand, can update the system parameters using the extended Kalman filter with each new available measured data sample, \(u_k, y_k\). However, it is unable to accurately estimate the initial parameter values, the system order and the structure.

Figure 6.11: Fault detection and subspace SID system overview.
The role of the 2-ORT subspace identification methodology is thus to identify and determine accurate system parameters, system order and structure, and to update the system parameters with each measured-data validation fit improvement. The task of the fault detection methodology is to track the system parameters between the periodic updates by the 2-ORT subspace identification methodology. Figure 6.11 illustrates the interconnected relationship between the fault detection methodology and the 2-ORT subspace identification methodology.

6.4.1 Classification of Faults for Simulation

In order to test the proposed fault detection methodology, it is necessary to generate system faults artificially. As discussed in section 2.3.2, faults can be classified as abrupt faults (stepwise), incipient faults (drift) or intermittent faults [56]. Early-stage faults are referred to as incipient faults and are inherently difficult to detect and isolate. The presence of incipient faults is often unnoticeable in system measurements, which means that traditional fault detection methods are less likely to successfully detect and isolate them [10]. Incipient faults will most likely be detected by the 2-ORT subspace identification methodology, since a substantial drift in parameters will only be picked up after a while. To simulate incipient faults, pole and zero locations will be adjusted in small decrements or increments such that there is no immediate change in the measured system outputs. In practice, the poles and zeros of a process can deviate from their initial locations due to physical process deterioration. One example of physical process deterioration is corrosion caused by the carbonate solution. The Benfield plants have mainly been built from carbon steel and thus require corrosion protection.
However, over time the corrosive nature of the carbonate solution changes the process dynamics, which in turn influences the system control performance [85]. A more abrupt form of physical process deterioration is residue build-up in pipes, which constricts process flow. Abrupt faults can be malfunctioning sensors, valves or actuators. Valves operating near their operating boundaries may also result in unstable process dynamics. Biased sensors and biased actuators can likewise cause abrupt faults. Abrupt faults can be introduced artificially by changing pole locations (replicating the unpredictable and unstable dynamics of valves near operating boundaries). Sudden changes in the gain will also contribute to abrupt faults (biased sensors and biased actuators).

6.5 Validation of the Fault Detection Methodology: Simulations

Structured simulation tests were conducted to validate and test the fault detection methodology. This section describes the different experiments that were conducted for the closed-loop system setup, illustrated by figure 6.1, and the fault detection methodology, using the configuration illustrated by figure 6.11.

6.5.1 Experiment 5: Fault detection of abrupt faults

Objective

The objective of this experiment is to introduce abrupt process faults to the plant dynamics. Abrupt faults will be generated by a sudden change in pole or zero locations, or an abrupt change in process gains. These abrupt faults replicate faults associated with valves that operate near operational boundary limits, or with valves and actuators that are biased.

Experimental parameters

In this experiment, the closed-loop system will be subjected to excessive levels of unwanted unmeasured noise on both the controlled variables and the measured variables.
The aim is to determine how effective the fault detection methodology is in the presence of excessive levels of unwanted unmeasured noise. The experimental parameters are tabulated in table 6.8.

Table 6.8: Experiment 5: Parameter configuration setup.

  Description   Value          Unit
  PE1           -1             dB
  PE2           -1             dB
  ηu            -15            dB
  ηy            -19            dB
  Hp(z)         1
  Hc(z)         1
  NSID          2000
  KN            20
  λ             0.9
  V             10^-5
  ξ             100            Δerr %
  terr          30; 75; 120    ×10² s

The threshold value, ξ, is configured to be robust against false alarms. A fault will thus only be detected if there is an error deviation of at least 100% on the previous maximum error between the nominal parameters and the estimated parameters. The parameter λ is configured to provide acceptable robustness against disturbances, but is not set to the maximum of 1, thus allowing the extended Kalman filter to track parameter changes quickly. The covariance properties of the parameters are initially unknown; an initially small covariance factor, V = 10^-5, is therefore set. After every NSID = 2000 consecutive samples, the 2-ORT subspace methodology will attempt to identify and update the system parameters if it manages to identify previously unidentified dynamics, resulting in more accurate model parameters. In this re-identification process, possible parameter drift that could not be picked up by the Kalman filter will be detected. The Kalman filter recursively calculates a new set of parameters (the number of recursive steps is arbitrarily chosen as KN = 20) with each newly available set of measured system variables, ui, yi, before determining the parameter deviation via the infinity matrix norm measurement. The simulation time, t = 15000 s, with a sampling time of Ts = 15 s, results in 1000 samples to be analysed for faults.
An artificial process fault will be generated at terr = 3000 s, which replicates a 50% gain attenuation due to a biased actuator or valve on the CO2 (CV1) feed. At terr = 7500 s, a process pole will move from p = -3.8 to p = -4.3, and at terr = 12000 s, a process zero will move from z = -0.54 to z = -0.41.

Graphs of measurement

Figure 6.12 illustrates fault detection when the process dynamics have deviations in gain, pole and zero locations. Figures 6.13 and 6.14 illustrate the measured output variable behaviour of the process as abrupt process faults occur, and the corresponding states of the system used for the parameter estimation, respectively.

Figure 6.12: Infinity matrix norm error detection with abrupt process faults.
Figure 6.13: Measured system output behaviour with abrupt faults.
Figure 6.14: Process state behaviour with abrupt process faults.

Description of results

From figure 6.12, it can be concluded that the fault detection methodology is successful in picking up abrupt changes in pole/zero locations as well as process gain changes. However, it is important to note that faults that affect the process dynamics more strongly will be detected earlier and more easily. An example is the detection of the gain change at terr = 3000 s: abrupt gain changes in the CO2 (CV1) feed result in erratic behaviour in both the measured outputs and the states. Small deviations in pole/zero locations, on the other hand, are in some cases unnoticeable, which reduces the efficiency with which an abrupt fault can be detected. The risk of pole/zero location changes is that the system can become unstable, which results in a loss of product quality.
In the simulation case study, abrupt faults were generated at terr = 3000 s, 7500 s and 12000 s, where the plant was configured to be fault-free again after each abrupt process fault. This was done only to illustrate each process fault vividly. Two major contributing factors that determine the efficiency and accuracy of the fault detection methodology are the influence of unmeasured disturbances, and the accuracy of the nominal model that is used for fault detection. A nominal model that does not represent all the applicable system dynamics of the process plant will be insensitive to process changes, rendering the fault detection methodology useless. Sporadically high power levels of unmeasured disturbances can cause the process to behave abnormally for short periods, which can cause false alarms. Through trial and error, and by investigating previous process behaviour, it is necessary to configure the threshold value, ξ, optimally, to limit the number of false alarms while still being sensitive enough to slight process parameter deviations.

6.5.2 Experiment 6: Fault detection of incipient faults

Objective

The objective of this experiment is to evaluate the effectiveness of the proposed fault detection methodology in detecting incipient faults. Incipient faults are classified as faults that have a small effect on the measured output variables and states of a system operating in steady state. It is thus very difficult to detect these faults, since both the output and state estimates are used in estimating the new process parameters used for fault detection. The robustness of the fault detection methodology is also addressed, where it is necessary to distinguish between disturbances that result in momentary measured-variable deviations and true incipient faults.
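The small output effect that defines an incipient fault can be illustrated with a toy first-order model (a hypothetical stand-in, assuming numpy; it is not the Benfield dynamics): a small drift of the discrete-time pole changes the step response only slightly, yet the parameter itself shifts by an amount that a parametric monitor can register.

```python
import numpy as np

def step_response(a, b=1.0, n=50):
    """Unit-step response of the toy model y_k = a*y_{k-1} + b*u_k."""
    y, out = 0.0, []
    for _ in range(n):
        y = a * y + b
        out.append(y)
    return np.array(out)

nominal = step_response(0.900)
drifted = step_response(0.902)   # incipient fault: the pole drifts slightly

# The output deviation stays small relative to the operating level, so
# output-based detection struggles; the parameter deviation (0.002 on
# the pole) is exactly what the parametric monitor tracks instead.
rel_output_dev = np.max(np.abs(drifted - nominal)) / np.max(nominal)
print(rel_output_dev)  # a few percent at most
```

This is why the dissertation monitors parameter discrepancies rather than raw outputs: the same fault that is barely visible in the measurements is explicit in the identified parameters.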
Foaming and flooding are caused by differential pressure deviations. To prevent foaming and flooding, the fault detection methodology must be sensitive enough to detect small unwanted deviations in the differential bed pressures.

Experimental parameters

In this experiment it is necessary to distinguish between true incipient faults and the effects of unmeasured disturbances on the measured output variables. It is thus essential for this experiment that the nominal model, used for estimating system parameters from newly available data sets, ui, yi, is very accurate. Table 6.9 tabulates the parameters used for this experiment.

Table 6.9: Experiment 6: Parameter configuration setup.

  Description   Value    Unit
  PE1           1        dBW
  PE2           1        dBW
  ηu            -35      dBW
  ηy            -10      dBW
  Hp(z)         1
  Hc(z)         1
  NSID          2000
  KN            20
  λ             0.9
  V             10^-5
  ξ             65       Δerr %
  terr          105      ×10² s

As shown in table 6.9, a fault will be artificially generated at terr = 10500 s. The aim of the fault is to cause slight deviations of the differential bed pressures, which are the major cause of flooding and foaming.

Graphs of measurement

Figure 6.15 illustrates the detection of an incipient fault resulting from slight deviations of the differential bed pressures. The slight deviations of the differential bed pressures (CV1-CV3) are illustrated in figure 6.16, while figure 6.17 illustrates the system states of the process.

Description of results

An incipient fault was generated at terr = 10500 s. It is observed from figures 6.16 and 6.17 that the effects of the incipient fault at terr = 10500 s are visible in the measured system outputs and states of the process. The deviations of the differential bed pressures as observed from figure 6.16 are very small, and the CO2 slip (CV1) was not even visibly affected; nevertheless, such deviations may cause unwanted flooding and foaming.
The difficulty of detecting incipient faults is illustrated by figure 6.15. The threshold value, ξ, used for detecting incipient faults needs to be much smaller than that used for detecting abrupt faults. With severe noise interference, it might become impossible to differentiate between a fault and temporary disturbances on the system parameters.

Figure 6.15: Infinity matrix norm error detection with incipient process faults.
Figure 6.16: Measured system output behaviour with incipient faults.
Figure 6.17: Process state behaviour with incipient process faults.

6.5.3 Experiment 7: False alarm robustness test

Objective

The objective of this experiment is to determine how robust the fault detection methodology is against false alarms. A factor that contributes to false alarms is an inaccurate nominal process model used for parameter estimation. Sporadic unmeasured disturbances, which have not been accounted for, can temporarily force the process into unstable states, causing false alarms. It is also necessary to consider the threshold value, ξ, which governs the trade-off between false alarm robustness and sensitivity to incipient faults.

Experimental parameters

The simulation parameters are tabulated in table 6.10. The interference of unmeasured disturbances on the measured outputs and controlled variables is limited. The reference signal is adjusted so that the behaviour of the fault detection methodology can be inspected under severe system state and measured variable fluctuations.

Table 6.10: Experiment 7: Parameter configuration setup.
  Description   Value    Unit
  ηu            -35      dBW
  ηy            -15      dBW
  Hp(z)         1
  Hc(z)         1
  NSID          2000
  KN            20
  λ             0.9
  V             10^-5
  ξ             100      Δerr %

Graphs of measurement

Figure 6.18 illustrates the infinity matrix parametric norm for reference adjustments applied to the process. The measured output variables are illustrated in figure 6.19, where the reference setpoint for each CVi is adjusted every 300 seconds. Figure 6.20 illustrates the process states of the system as it returns to steady state after each reference adjustment.

Figure 6.18: Infinity matrix norm error detection.
Figure 6.19: Measured system outputs.
Figure 6.20: Process states.

Description of results

From figure 6.18, a flat error-time graph is observed, even under severe but normal measured output and system state fluctuations. The fault detection methodology is thus robust against normal process output and state fluctuations, as expected. It should, however, be noted that the nominal model used for estimating system parameters had a validation fit of 80%. A poorer fit would reduce the accuracy of the newly updated system parameters, and inaccurate process parameters hamper accurate parameter tracking, increasing the probability of false fault detection alarms. For the simulated scenario, a threshold value of ξ = 100 should be sufficient and robust against possible sporadic false alarms. This threshold value indicates that any matrix norm measurement exceeding the previous maximum error value by more than 100% will cause an alarm. In the case of figure 6.18, a fault will typically be detected for norm amplitudes exceeding 2.5.
With decreasing nominal model accuracy, the threshold value needs to be increased, since parametric fluctuations due to poor parameter estimation models may result in false fault detections. The appropriate configuration of the threshold value ensures that the fault detection system is robust against temporary sporadic process behaviour. It thus increases the tolerance against false process faults caused by unmeasured disturbances temporarily inflicted on the system, or by severe controller switching (which is typical of MPC applications). A typical false process fault is abrupt stepping on the process inputs: an inaccurate model would not be able to update accurate process parameters from data measured while the process is not in steady state. However, such a fault is temporary and will be rejected as a false alarm through the incorporation of a threshold value.

6.6 Conclusion

In this chapter, experiments were conducted to validate and evaluate the efficiency of the 2-ORT subspace identification methodology and the fault detection methodology proposed in Chapter 5. Simulation environments were defined for both methodologies that replicate the real-time environmental conditions of an operational plant as closely as possible. From the simulation experiments for the 2-ORT subspace identification methodology, it was concluded that the proposed subspace methodology outperforms classic subspace identification methods (N4SID), which fail to identify and produce stable and accurate models of plants operating in closed-loop environments.
The prediction error method, traditionally used for closed-loop system identification, estimated accurate, unbiased system parameters, but was unable to outperform the proposed 2-ORT subspace identification methodology in terms of closed-loop validation data fits. The identification results for the PEM and the 2-ORT subspace method were almost identical; however, the computational burden of estimating system parameters with the PEM methodology far exceeded that of the 2-ORT subspace methodology. A strength of the 2-ORT subspace identification methodology is its ability to produce guaranteed stable models. This is a major advantage over the traditional PEM method when the closed-loop system is subjected to unmeasured coloured noise interference. It was observed in experiment 6.3.2 that the traditional PEM methodology failed to produce any stable models due to coloured noise interference, whereas the 2-ORT subspace identification methodology still managed to produce stable models. The proposed fault detection methodology proved to be robust against unmeasured disturbances, and through various experiments it was concluded that the fault detection methodology is able to detect both abrupt and incipient faults. Extended Kalman filtering is an effective and efficient way to track the estimated system parameters, obtained by the 2-ORT subspace identification methodology, using newly available data sets ui, yi. The drift of system parameters that is too subtle to be directly detected by the fault detection method is addressed by re-identifying the system parameters (using the 2-ORT subspace identification methodology) and updating the initial parameters used by the fault detection method. This updating is done periodically, whenever the 2-ORT subspace identification methodology manages to estimate system parameters that result in improved data validation fits.
Experimental outcomes for both the 2-ORT subspace identification methodology and the fault detection methodology proved to be very favourable. However, it should be noted that in practice it is rare to obtain similar results and conditions. Challenges and requirements that need to be addressed are the availability of measured data sets of reference inputs, and the persistent excitation signals applied to these reference inputs. Sporadic unmeasured disturbances can also result in poor identification results, and in false fault detection alarms when critical threshold values are exceeded. The next chapter deals with identification and process monitoring using real-time process data.

Chapter 7: Subspace SID and FD Methodology Validation: Real Process Data

7.1 Introduction

This chapter evaluates and validates the 2-ORT subspace identification methodology and fault detection methodology proposed in Chapter 5. Real process data were obtained from the Benfield East process, operational at Secunda, South Africa, while the process was operating under normal process conditions in a closed-loop environment. The objective of this chapter is thus to evaluate how capable the 2-ORT subspace identification methodology is of identifying an accurate and reliable process model when limited process excitation is available and the process is operating in a closed-loop environment. Another objective is to implement the identified process model as a prediction model for fault detection, and to validate the efficiency with which the fault detection methodology detects faults by monitoring parametric discrepancies. Validation of the 2-ORT subspace identification methodology is accomplished by using the 2-ORT method to identify two process models.
One process model will be identified using open-loop test data, while the second will be identified using closed-loop data. The identified models will be compared with each other by considering the validation fits and residual analysis of each model in section 7.2.2. Analysis of the process models is discussed in section 7.2.3, where the pole and zero locations of the models, as well as the step responses and Bode plots, are considered. The fault detection methodology is evaluated and validated by monitoring real-time measured process data, also obtained from the Benfield East process. These data were measured in August and September 2004. Scenarios that depict process foaming (section 7.3.1) and process flooding (section 7.3.1), each deemed an operational problem, are investigated, together with the efficiency with which the fault detection methodology detects these faults.

7.2 Subspace Identification Methodology Validation

This section evaluates and validates the developed 2-ORT subspace identification methodology, as proposed in Chapter 5. Directly measured raw plant data were used for the identification process. Both measured open-loop data and measured closed-loop data were used for the identification of two separate process models, which will be compared. The measured open-loop data were obtained from the process while structured step-testing was conducted to excite the appropriate plant dynamics. No structured step-testing was allowed for the process operating in a closed-loop environment, resulting in limited excitation of the process dynamics. From Chapter 4, it was concluded that the best and most appropriate way to identify the process dynamics is to isolate each individual MV-CV pair and identify each pair separately. It was further noted in Chapter 4 that the corresponding MVs must be isolated with reference to each other.
This is necessary since the process that needs to be identified is not decoupled, which implies that the corresponding CVs are excited by all the MVs simultaneously. This approach works well when structured open-loop step-testing is conducted, since only one MV is adjusted and controlled at a time. In the closed-loop process environment, all the MVs are adjusted simultaneously in order to control the process optimally; a huge data set thus needs to be incorporated for the identification process in closed-loop. The persistently exciting exogenous inputs, (r1, r2), are set-points or test signals used for identification and were previously defined in section 5.3.1. Katayama and Tanaka [8] require that these exogenous inputs satisfy persistent excitation conditions and that they be uncorrelated with the white noise. For the simulation case study conducted in Chapter 6, it was possible to adhere to these requirements, but unfortunately they are not met on a real operating process plant. The Benfield process operates in steady state for long periods of time, where natural excitation due to unmeasured disturbances is limited. Furthermore, no or only limited structured step-testing is allowed on the Benfield process, since step-testing influences the product quality. Not adhering to the exogenous input requirements implies that it is not possible to fully identify the non-linear dynamics of the process. However, since a linear subspace identification methodology is used, identifying non-linear dynamics would not contribute to the system model, as they cannot be accurately modelled by a linear structure. The question that thus arises is what the purpose of the identified process model is, and how the process model will be used.
In this case, the identified process model will be used for predicting process behaviour for the purpose of fault detection. This implies that the accuracy of the identified process model is not as critical as when the model is used for control purposes; however, this statement can only be confirmed once the fault detection results are analysed.

7.2.1 Subspace Identification using the 2-ORT subspace methodology

Structured step-testing on open-loop process operation

The data segments that were used to identify the initial process model in section 4.4, the data segments used to validate the identified initial process model, as well as the models obtained from Sasol, will also be used in this section. The motivation for this choice is to prevent biased results, which can arise from data sets that are more or less informative (the information content of a data set depends on how well the process dynamics were excited). It would furthermore be insightful to compare the 2-ORT subspace identification results for the corresponding MV-CV pairs with the results obtained from the traditional identification procedures of the MATLAB Identification Toolbox. The parameters that can be configured for the 2-ORT subspace identification methodology are the Hankel matrix depth, k, and the order of the identified system model, n. The order of the identified model is determined automatically, in that only the most significant singular values of the extended observability matrix, up to the nth order, are used to determine the final system matrices (for a more in-depth explanation refer to section 5.3.1). The Hankel matrix depth, k, is a user-defined value [8].
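These two configuration parameters can be made concrete with a small sketch (assuming numpy; the data are random placeholders, not Benfield measurements): the measured signal is stacked into a block Hankel matrix of depth k, and the model order n is read off as the number of significant singular values. The 1% significance cut-off below is an arbitrary illustrative choice.

```python
import numpy as np

def block_hankel(w, k):
    """Stack k block rows of a signal w (N samples x m channels)
    into a (k*m) x (N-k+1) block Hankel matrix."""
    N, m = w.shape
    cols = N - k + 1
    return np.vstack([w[i:i + cols].T for i in range(k)])

rng = np.random.default_rng(2)
y = rng.standard_normal((200, 2))       # placeholder output data
H = block_hankel(y, k=10)               # depth k is user defined
s = np.linalg.svd(H, compute_uv=False)  # singular value spectrum
n = int(np.sum(s > 0.01 * s[0]))        # order: count of significant values
print(H.shape, n)
```

In the dissertation, k is tuned per MV-CV pair (see table 7.1) rather than fixed globally, since different input-output channels need different depths for an optimal validation fit.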
To obtain the best possible identification results, the Hankel matrix depth, k, was iteratively adjusted until an optimal validation fit was obtained for each corresponding identified MV-CV pair.

Table 7.1: MV-CV identification results.

          MV1          MV2          MV3          MV4
  CV1   47/10        45/15        45/11        44/14
        56%/44%      65%/63%      60%/53%      60%/51%
  CV2   42/10        50/13        43/19        50/20
        6.3%/6.2%    2.9%/2.6%    0.4%/0%      4.2%/7.8%
  CV3   43/18        46/9         41/14        43/18
        16.6%/0%     33%/26%      30%/22.2%    22%/7.9%
  CV4   40/13        40/9         45/12        37/20
        4.8%/8.5%    2.8%/1%      7.4%/1.6%    4.2%/1.1%

Table 7.1 tabulates the results, which contain the choice of Hankel matrix depth, k, the optimal system order, n, as well as the validation fit obtained using the 2-ORT subspace method, SID2-ORT, and the system identification procedures of the MATLAB Identification Toolbox, SIDMatlab. The results in each identified MV-CV block are interpreted as follows:

  k/n
  SID2-ORT / SIDMatlab        (7.1)

From table 7.1, the 2-ORT subspace identification method produces models of higher accuracy than the models identified using the MATLAB System Identification Toolbox (which represents standard system identification and parameter estimation methods such as N4SID, ARX, ARMAX and PEM). It should also be mentioned that no additional data preprocessing was necessary for the data sets used with the 2-ORT subspace identification method, except for the orthogonal projection used to project out stochastic disturbances. The data sets used with the MATLAB System Identification Toolbox, however, needed extensive preprocessing (removal of periodic trends and means, and appropriate filtering). Not including exogenous signals in the 2-ORT subspace identification process did not have the severely detrimental effect on the identification that had been expected; however, this is only because the structured step-tests used were sufficient to excite the process dynamics.
Closed-loop process operation

No structured step-testing was conducted on the process operating in a closed-loop environment. This implies that it can take very long for the process to be sufficiently excited and to generate data sets that are informative enough for identification purposes. It is due to the large data sets necessary for system identification that traditional system identification methodologies are ill-suited to identifying the process. The 2-ORT subspace identification methodology, however, is well suited to handling large data sets efficiently. The parameters that can be configured for the 2-ORT subspace identification methodology are the Hankel matrix depth, k, and the order of the identified system model, n. For closed-loop identification, a snapshot identification procedure was used to identify the process. Snapshot identification implies that portions of the identified plant that produce acceptable validation results are isolated and used to build up the final process model. The data blocks used for identification each consisted of 10000 samples, which is equivalent to two days of full-time process operation.

7.2.2 Validation of the identified model

Open-loop process operation

The evaluation of the identified model is accomplished by considering the validation fit using step-test data and raw plant data of the process under normal operating conditions. It is furthermore insightful to investigate the auto-correlation of the residuals, and the cross-correlation between the residuals and past inputs.
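These whiteness checks can be sketched as follows (assuming numpy; white-noise placeholders stand in for the actual residuals and inputs): the normalised auto- and cross-correlations are compared against the ±1.96/√N band that defines the 95% confidence region.

```python
import numpy as np

def xcorr(a, b, max_lag=20):
    """Normalised correlation of two sequences for lags 0..max_lag."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt(np.dot(a, a) * np.dot(b, b))
    return np.array([np.dot(a[:len(a) - tau], b[tau:]) / denom
                     for tau in range(max_lag + 1)])

rng = np.random.default_rng(3)
N = 1000
eps = rng.standard_normal(N)   # placeholder residuals eps(t)
u = rng.standard_normal(N)     # placeholder past input

band = 1.96 / np.sqrt(N)       # 95% confidence region for whiteness
r_ee = xcorr(eps, eps)         # auto-correlation of the residuals
r_ue = xcorr(u, eps)           # cross-correlation input vs residuals
print(np.sum(np.abs(r_ee[1:]) > band), np.sum(np.abs(r_ue) > band))
```

A model passes the cross-correlation test when r_ue stays inside the band for all lags, indicating the residuals carry no information explainable by past inputs; residual auto-correlation outside the band indicates that part of the output could still have been predicted from past data.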
As mentioned previously, to prevent the validation results from being biased, it is required that the validation data used to evaluate the identified system model be the same as the data used in the validation of the initial identified process model in Chapter 4. For evaluation purposes, it is necessary to consider the validation fit results obtained for the initial Benfield process model, as discussed previously in section 4.4. It was concluded in section 4.4 that the validation fit obtained for the initial Benfield process model, using open-loop step-test data, was a 7% improvement on a similar validation fit using the process models produced by Synfuels. However, the identified initial Benfield process model was not able to produce validation fit results as good as those produced by the process models provided by Synfuels when closed-loop process data were used. A reason for the discrepancy in the validation results using closed-loop raw plant data can be attributed to the fact that, even though step-testing was conducted in open-loop, it is impossible to remove partial process feedback. This partial feedback results in biased parameter estimates and inflated parameter variances when traditional subspace identification methods (N4SID, CVA, MOESP) and parameter estimation methods (PEM) are implemented. Figure 7.1 illustrates the validation fit between the predicted outputs obtained from the identified 2-ORT subspace model and real process data. The average validation fit (64%) obtained using open-loop step-test data with partial process feedback is the same as the result obtained (see section 4.4) when identifying the Benfield process using traditional identification and parameter estimation methods.
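The validation fit percentages quoted throughout this section are assumed here to follow the standard normalised-error definition (as used by, for example, MATLAB's compare function): fit = 100(1 - ||y - ŷ|| / ||y - ȳ||), so 100% is a perfect prediction and 0% is no better than predicting the mean.

```python
import numpy as np

def fit_percent(y, y_hat):
    """Normalised validation fit: 100*(1 - ||y - y_hat|| / ||y - mean(y)||)."""
    return 100.0 * (1.0 - np.linalg.norm(y - y_hat)
                    / np.linalg.norm(y - np.mean(y)))

y = np.array([1.0, 2.0, 3.0, 4.0])
print(fit_percent(y, y))                        # perfect prediction: 100.0
print(fit_percent(y, np.full(4, np.mean(y))))   # mean predictor: 0.0
```

Under this definition a fit of 64% means the prediction error norm is 36% of the output's deviation from its mean on the validation data.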
However, the traditional methods proved inferior in identifying and estimating accurate system parameters, which is confirmed when closed-loop raw process data are used to validate the identified model. Figure 7.2 illustrates the validation fit results obtained using closed-loop validation data. An average validation fit of 64% was obtained. The 2-ORT subspace methodology produced validation fits that are superior to the fits obtained from the initial identified Benfield process model. The 2-ORT subspace methodology is thus capable of identifying and estimating accurate system parameters when closed-loop system data are used. Residual analysis bears information about the quality of the identified model. As discussed in section 4.4.2, it is pragmatic to consider the relationship between the residuals of the identified model and the past inputs. The cross-correlation between the past inputs and the process residuals gives an indication of whether the identified model is only applicable to a certain set of inputs, or whether it is an accurate representation of the real process for a wide variety of inputs. The cross-correlation between the past inputs and process residuals is illustrated in figure 7.3 (see furthermore Appendix D, figures D.1-D.15). From figure 7.3, it is concluded that the 2-ORT subspace identification methodology managed to identify an accurate system model which is valid for a wide variety of system inputs. This conclusion is based on the fact that the cross-correlation results all lie within the 95% confidence boundary region.

Figure 7.1: Validation fit using open-loop step-test data with partial process feedback.
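The correlation checks described above can be sketched as follows, with synthetic white residuals standing in for the plant data; the ±1.96/√N band is the standard 95% confidence boundary assumed here.

```python
import numpy as np

def xcorr(a, b, max_lag):
    """Normalised correlation of a(t) with b(t + tau), tau = 0..max_lag."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    n = len(a)
    return np.array([a[:n - tau] @ b[tau:] / n for tau in range(max_lag + 1)])

rng = np.random.default_rng(1)
n = 2000
u = rng.standard_normal(n)        # past input (synthetic)
eps = rng.standard_normal(n)      # residuals of a "good" model (white)

bound = 1.96 / np.sqrt(n)         # 95% confidence boundary
r_ee = xcorr(eps, eps, 25)        # auto-correlation of the residuals
r_ue = xcorr(u, eps, 25)          # cross-correlation input/residuals

# For white residuals uncorrelated with the input, nearly all lags
# (apart from r_ee[0] = 1) should fall inside the +/- bound region.
inside = np.abs(np.concatenate([r_ee[1:], r_ue])) < bound
print(round(inside.mean(), 2))
```

Lags falling outside the band indicate either residual structure left unmodelled (auto-correlation) or input dynamics missing from the model (cross-correlation).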
Figure 7.2: Validation fit using closed-loop raw process data.

Figure 7.3: The auto-correlation and cross-correlation of residuals and MV1 with output CV1.

Deficiency in the identified model can also be found by considering the auto-correlation among the residuals, as defined by expression 4.8. If the correlation among the residuals is not small for τ ≠ 0, then part of ε(t) could have been predicted from past data [30]. From figure 7.3 (see furthermore Appendix D, figures D.1-D.15), it is observed that the auto-correlation among the residuals is small for τ ≠ 0, but since the correlation of the residuals does not lie entirely within the 95% confidence boundary region, the outputs could have been predicted better.

Closed-loop process operation

Similar to the open-loop model validation tests, validation data fits were also conducted on the models which were identified using measured closed-loop data. Residual analysis tests were also conducted, where the auto-correlation of the process residuals and the cross-correlation between residuals and past inputs were considered. Figure 7.4 illustrates the closed-loop validation fit of the model which was identified in a closed-loop environment. Comparing the average percentage validation fit with that of the open-loop validation fit in figure 7.2, one observes a 2% validation fit improvement (66%). However, the identified CO2-slip (CV1) produced models that are less accurate than those that were obtained in the open-loop identification.
It is important to elaborate on the discrepancy between the process prediction and the real process output, CV1, at t=10000s. It is this discrepancy that resulted in a poor validation fit for CV1. As mentioned earlier, it is not possible to isolate MV-CV pairs (that is, to isolate the data segments in which only one MV is manipulated) when closed-loop process data are used for identification. The use of large data sets to identify a complete process model can result in process dynamics being modelled and included in incorrect sections of the process models; this is the risk of modelling a non-diagonal system. From figure 7.4, some of the process dynamics of CV2-CV4 have been included in the models obtained for CV1, which resulted in spurious behaviour for CV1 around t=10000s. In the closed-loop environment, the only excitation present for identifying the CO2-slip dynamics is the natural disturbance excitation produced by the raw CO2 gas feed. It would thus be very difficult to obtain accurate CO2-slip models if no additional excitation is introduced into the closed-loop environment. A measured closed-loop data sample with a size of 1000 was used in validating the identified models (approximately 4 hours of process operating time). To validate the quality of the models that have been identified, it is further necessary to consider the auto-correlation between the process residuals, as well as the cross-correlation between the process residuals and past inputs. The auto-correlation between the process residuals is illustrated in figure 7.5 (see furthermore Appendix D, figures D.16-D.30).

Figure 7.4: Validation fit using closed-loop raw process data.
The cross-correlation between the process residuals and past inputs is also illustrated in figure 7.5 (see furthermore Appendix D, figures D.16-D.30). From figure 7.5, the cross-correlation results are acceptable, considering the lack of process excitation. Almost all the cross-correlation results lie within the 95% confidence boundary region, which implies that the identified model is valid for a variety of process inputs. Deficiency in the identified model can also be determined by considering the auto-correlation among the residuals themselves. The auto-correlation between the residuals, as depicted in figure 7.5, shows that the process outputs could have been predicted better. The only way to predict the outputs better is to excite the plant dynamics more thoroughly and model the process dynamics more accurately. However, for fault detection the validation fit results and the residual analysis results are acceptable, and in some cases very good, which implies that the models that have been identified should be sufficient for fault detection. The latter will be confirmed in section 7.3.

Figure 7.5: The auto-correlation and cross-correlation of residuals and MV1 with output CV1.

7.2.3 Analysis of the identified model

Poles and Zeros of the Identified Process

RHP zeros and RHP poles are common in many multivariable systems [87]. Figure 7.6 illustrates the pole and zero locations of the identified Benfield process, for the models identified using closed-loop data and open-loop data respectively. From figure 7.6, one notices that fewer poles and zeros were used to model the process where closed-loop data, with limited process excitation, were used.
This explains and confirms the auto-correlation results between the process residuals (figures D.16-D.30), where it was concluded that the process outputs could have been predicted better. Naturally, better prediction models would include more poles and zeros to describe the process dynamics more accurately. From the pole locations for both the open-loop and closed-loop models in figure 7.6, it can be concluded that model stability is guaranteed, since all the identified poles lie within the unit circle boundary region.

Figure 7.6: The poles and zeros of the identified process model.

One can also conclude, from the right-half-plane zeros on the unit boundary, that the process delays have been identified, as was the case for the initial Benfield process model in Chapter 4.

Step Response of the Identified Process

The step response of the identified Benfield process is illustrated in figure 7.7. From the step response, it is observed that for a bounded input, the identified process model produces a bounded output. The model is thus stable.

Bode Plots

It is insightful to compare the Bode plots of the models that were identified using closed-loop data and open-loop data. Figure 7.8 illustrates the magnitude Bode plots for the corresponding models. The Bode plots for the two models are surprisingly close; the only discrepancy can be observed for MV3-CV1 and MV3-CV4, where the low-frequency gains differ. This explains the validation fit difference between CV1 and CV4 in figures 7.2 and 7.4.
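The stability conclusion above amounts to checking that the spectral radius of the identified A matrix lies inside the unit circle, since the poles of a discrete-time state-space model are the eigenvalues of A. A minimal check, with made-up matrices:

```python
import numpy as np

def is_stable(A, tol=1e-9):
    """A discrete-time state-space model is stable when the spectral
    radius of A (largest eigenvalue magnitude) is below one; the
    poles of the model are exactly the eigenvalues of A."""
    return bool(np.max(np.abs(np.linalg.eigvals(A))) < 1.0 - tol)

# Illustrative system matrices (made up for the example)
A_stable = np.array([[0.9, 0.1, 0.0],
                     [0.0, 0.7, 0.2],
                     [0.0, 0.0, 0.5]])
A_unstable = np.array([[1.05, 0.0],
                       [0.0, 0.3]])

print(is_stable(A_stable), is_stable(A_unstable))   # True False
```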
Figure 7.7: Step response of the identified process model.

Figure 7.8: Bode plot of the identified process model.

7.3 Fault Detection Validation

This section evaluates the fault detection methodology as proposed in section 5.4. To validate the efficiency and effectiveness of the fault detection methodology, real-time process data taken from the Benfield process operating in a closed-loop environment are used to monitor the process behaviour. The process model that was identified and validated in section 7.2, from real-time, closed-loop measured process data, is used to predict process behaviour. The proposed fault detection methodology is validated by how well it can detect both flooding and foaming faults. Foaming is deemed an operational problem because it limits mass transfer and thereby leads to CO2-slip run-away [85]. It is difficult to distinguish between flooding and foaming by just looking at operational data. However, it has been concluded through observation studies that foaming typically occurs when one bed's differential pressure starts fluctuating rapidly. On the other hand, flooding is usually associated with a severe differential pressure increase over two or more of the process beds. A threshold which can also indicate the possibility of severe flooding and foaming is when differential bed pressures exceed, or tend towards, 20 kPa [2].

7.3.1 Detection of Process Foaming

Foaming occurs when the differential bed pressure of only one of the beds rises sharply compared to the other differential pressures.
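The foaming/flooding distinction described above can be expressed as a simple heuristic classifier. The rules follow the text (one rapidly fluctuating bed versus a severe rise over two or more beds, with the 20 kPa criticality threshold), but the fluctuation and rise thresholds below are illustrative assumptions, not plant values.

```python
import numpy as np

CRITICAL_KPA = 20.0   # bed pressures tending towards 20 kPa are critical

def classify(dp, fluct_thresh=1.0, rise_thresh=3.0):
    """Classify a window of differential bed pressures (samples x beds):
    foaming  -> exactly one bed fluctuates rapidly;
    flooding -> a severe pressure rise over two or more beds."""
    fluct = dp.std(axis=0) > fluct_thresh      # rapid fluctuation per bed
    rise = (dp[-1] - dp[0]) > rise_thresh      # severe increase per bed
    critical = bool((dp >= CRITICAL_KPA).any())
    if rise.sum() >= 2:
        return "flooding", critical
    if fluct.sum() == 1:
        return "foaming", critical
    return "normal", critical

# Synthetic window: bed 2 fluctuates wildly, the other beds are steady
t = np.arange(200)
dp = np.stack([8 + 0.05 * np.sin(t / 9),
               8 + 4.0 * np.sin(t / 3),      # the fluctuating bed
               9 + 0.05 * np.cos(t / 7)], axis=1)
print(classify(dp))   # -> ('foaming', False)
```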
Figure 7.9 illustrates a scenario where foaming occurred while the Benfield process was under normal operation. The foaming is due to excessive fluctuations in the differential bed pressure, as observed from CV3. Differential bed pressure fluctuations start around t=2700s, which already indicates a risk of potential foaming. At t=4200s foaming occurs, since the early warnings were not acted on quickly enough. Figure 7.10 illustrates the fault detection monitoring results obtained when monitoring the process behaviour depicted in figure 7.9. From figure 7.10, the infinity matrix norm detects parameter discrepancies as soon as the early differential bed pressure fluctuations appear at t=2700s. The Maximum-Minimum graph serves as a threshold detector that indicates the difference between the maximum and minimum process parameter fluctuation error over a specified time period. The Maximum-Minimum graph allows the user to evaluate and interpret the seriousness of the infinity matrix norm errors.

Figure 7.9: Foaming under normal closed-loop process operation.

Figure 7.10: Foaming under normal closed-loop process operation. Graph a illustrates the infinity norm parameter deviation results, while graph b illustrates the Maximum-Minimum parameter fluctuation measurement as determined from graph a.

Figure 7.11: Flooding under normal closed-loop process operation.

It is observed from the Maximum-Minimum graph that early fault detection starts at t=3000s, and that the severity of the symptom increases drastically at the actual foaming incident. From figure 7.10, the process operator could have prevented foaming by taking early preventive action as soon as t=3200s.
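The infinity matrix norm measure and the Maximum-Minimum threshold detector described above can be sketched as follows, on a synthetic run in which the parameters drift after sample 60; the window length and threshold are assumptions for the example, not plant settings.

```python
import numpy as np

def infinity_norm_error(theta_nom, theta_est):
    """Infinity matrix norm (maximum absolute row sum) of the
    deviation between nominal and estimated parameter matrices."""
    return np.linalg.norm(theta_est - theta_nom, ord=np.inf)

def max_min(errors, window):
    """Maximum-minus-minimum of the norm error over a sliding window:
    a simple severity measure for parameter fluctuations."""
    e = np.asarray(errors, float)
    return np.array([np.ptp(e[max(0, i - window + 1):i + 1])
                     for i in range(len(e))])

# Synthetic monitoring run: the parameters drift sharply after
# sample 60, mimicking the onset of a fault such as foaming.
rng = np.random.default_rng(2)
theta_nom = np.eye(4)
errors = []
for t in range(100):
    drift = 0.0 if t < 60 else 0.05 * (t - 60)
    theta_est = theta_nom + 0.01 * rng.standard_normal((4, 4)) + drift
    errors.append(infinity_norm_error(theta_nom, theta_est))

severity = max_min(errors, window=20)
alarm = severity > 0.5          # user-defined threshold
print(int(np.argmax(alarm)))    # first sample flagged as a fault
```

Raising the threshold makes the detector more robust to noise at the cost of later alarms, which mirrors the tuning trade-off discussed in the text.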
7.3.2 Detection of Process Flooding

Flooding occurs when more than one bed shows a rapid change in its differential bed pressure. Figure 7.11 illustrates real process behaviour under normal conditions, where differential bed pressure fluctuations led to flooding at t=2700s. The flooding in figure 7.11 was associated with the trends of CV2 and CV3, which fluctuate rapidly. One will also notice that the undetected fault at t=2700s led to further spurious fluctuations of CV2 at t=5250s, and by t=7500s the threat of foaming was inevitable. Figure 7.12 illustrates the results of the fault detection methodology monitoring the process behaviour. It is observed from the infinity matrix norm graph in figure 7.12 that, at the start of the severe fluctuations and rise in differential bed pressures, the infinity matrix norm detects severe parameter discrepancies. The Maximum-Minimum graph can be interpreted as a warning to the process operator as soon as t=2700s.

Figure 7.12: Flooding under normal closed-loop process operation. Graph a illustrates the infinity norm parameter deviation results, while graph b illustrates the Maximum-Minimum parameter fluctuation measurement as determined from graph a.

No, or delayed, reaction to the faults leads to another, even more severe, fault at t=7500s, which is clearly observed from the infinity matrix norm graph in figure 7.12.

7.4 Conclusion

In this chapter the 2-ORT subspace identification methodology proved to be computationally efficient and effective in identifying process models when only limited excitation of the process dynamics is available, which implies that large data sets must be used for identification.
The 2-ORT subspace identification methodology is capable of identifying a process operating in a closed-loop environment, where unbiased parameter estimates were obtained by using orthogonal projection for data pretreatment. The models identified using closed-loop data are just as accurate as the models identified using open-loop data. The fault detection methodology was able to detect faults which include process foaming and flooding. The sensitivity of the fault detection methodology depends directly on the accuracy of the identified model. The Kalman filter accurately estimates and converges to a new set of parameters. However, an accurate initial parameter estimate, as obtained from the 2-ORT subspace identification methodology, is essential. The infinity matrix norm is an efficient matrix measurement tool for evaluating parameter deviations. A user-defined threshold value can be defined and used to evaluate detected faults and adjust the robustness of the fault detection methodology. A user-defined threshold value, together with a Maximum-Minimum representation (see figure 7.12) of parametric discrepancies, is a user-friendly tool which can help a process operator make critical decisions. From the residual analysis it was concluded that a better model could have been estimated from the closed-loop data. The identification of a more accurate model is only possible with the inclusion of structured closed-loop step-testing. The lack of structured closed-loop step-testing resulted in poorly identified models for the CO2 slip. However, it was observed that the fault detection methodology is robust and can utilise poorly identified models efficiently to detect process faults.
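The parameter-tracking role of the Kalman filter summarised above can be sketched for the linear-in-parameters case, where the extended Kalman filter update reduces to the standard Kalman recursion over a random-walk parameter model. The toy two-parameter regressor below is an illustration, not the Benfield model.

```python
import numpy as np

# Random-walk parameter model: theta_{t+1} = theta_t + w_t, with the
# measurement y_t = phi_t . theta_t + v_t.  For an output that is
# linear in the parameters the EKF update reduces to this Kalman
# recursion; for a nonlinear output, phi_t becomes the Jacobian.
rng = np.random.default_rng(3)
theta_true = np.array([0.8, -0.3])   # made-up "plant" parameters
theta = np.zeros(2)                  # initial estimate (from subspace ID)
P = np.eye(2)                        # parameter covariance
Q, R = 1e-6 * np.eye(2), 0.01        # process / measurement noise

y_prev, u_prev = 0.0, 0.0
for t in range(500):
    u = rng.standard_normal()
    phi = np.array([y_prev, u_prev])          # regressor of past data
    y = phi @ theta_true + 0.1 * rng.standard_normal()

    P = P + Q                                 # time update (random walk)
    K = P @ phi / (phi @ P @ phi + R)         # Kalman gain
    theta = theta + K * (y - phi @ theta)     # measurement update
    P = P - np.outer(K, phi) @ P

    y_prev, u_prev = y, u

print(np.round(theta, 2))
```

As in the methodology, the recursion tracks slow parameter drift between the periodic re-initialisations supplied by the subspace identification.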
Chapter 8
Subspace SID and FD Methodology Verification

8.1 Introduction

In this chapter, the proposed 2-ORT subspace identification methodology and the fault detection methodology are verified using recently measured process data. The process data were taken in the months of August, September and October 2008, from the Benfield East process, operating in closed-loop with an RMPCT controller. The 2-ORT subspace identification methodology is used to identify the Benfield process, and a discussion on the validation fit and residual analysis results follows in section 8.2.2. The fault detection methodology is verified by detecting process and instrumental faults previously identified by Sasol. A discussion on the results follows in section 8.3.

8.2 Subspace Identification Methodology Verification

The 2-ORT subspace identification methodology needs to be verified. The subspace methodology is verified using recently measured process data of the Benfield process, operating under normal operating conditions in closed-loop. In Chapter 7, the subspace methodology was validated using data that were measured when the Benfield process was initially commissioned. Those data were measured in the months of August and September 2004. The data used for the verification were measured in the months of August, September and October 2008. For the verification of the 2-ORT subspace identification methodology, it is necessary to conduct a validation data fit of the estimated process model, as well as a residual analysis.

8.2.1 Critical Evaluation of Data Sets used for Identification

Prior to process identification, it is necessary to investigate and evaluate the data sets that are used for identification. The effectiveness of the 2-ORT subspace identification methodology relies on data sets that are sufficiently informative.
Data sets are considered informative when they contain measured data that represent the excited process dynamics sufficiently. Evaluating the data measured over the three months in 2008, it was decided that the best way to identify the Benfield process is to isolate MV-CV pairs and to identify each MV-CV pair individually, using the 2-ORT subspace identification methodology. In Chapter 7, the process was identified using the measured data of all the MVs and CVs simultaneously. The latter was necessary since the data sets were relatively small (not exceeding 5000 samples), and it was extremely difficult to isolate individual MV excitations. The small data sets were the result of unmeasured disturbances on the process, as well as initial instrumental faults (faulty sensors, and actuators not correctly calibrated or installed), which is typical for a newly commissioned plant. The stability of the Benfield process has increased dramatically over the last four years due to active monitoring and process maintenance. The only viable course to follow in the identification of the Benfield process is thus to isolate each MV individually, once it has been naturally stepped in such a way that the process dynamics are excited. Natural stepping of the process does not guarantee sufficient excitation of the process dynamics. It is thus necessary to use very large data sets, which increase the possibility of natural excitation created by unmeasured disturbances and non-linear switching of the RMPCT controller.

8.2.2 Validation Fits of Identified Process Models

The 2-ORT subspace identification methodology was verified by conducting three process validation data fits. For each process validation data fit, 2000 samples (just over 8 hours of process operation) of the Benfield process, operating in closed-loop with the RMPCT controller, were used.
A process validation data fit was conducted once in August, once in September and once in October; in the morning, afternoon and evening respectively, each spanning 8 hours. Figures 8.1-8.3 illustrate the validation fits for the three months respectively. It is concluded from these figures that the models that have been identified are very accurate in predicting the process behaviour. Furthermore, it can be concluded that by isolating MV-CV pairs and using larger data sets, it is possible to capture more of the excited process dynamics, thereby identifying accurate process models.

Figure 8.1: Validation fit using raw closed-loop process data, measured in August, 2008.

Figure 8.2: Validation fit using raw closed-loop process data, measured in September, 2008.

Figure 8.3: Validation fit using raw closed-loop process data, measured in October, 2008.

8.2.3 Residual Analysis

To validate the quality of the models that have been identified, it is further necessary to consider the auto-correlation between the process residuals, as well as the cross-correlation between the process residuals and past inputs. The auto-correlation between the process residuals is illustrated in figure 8.4, as is the cross-correlation between the process residuals and past inputs (see Appendix E, figures E.1-E.15, for more residual analysis results).

Figure 8.4: The auto-correlation and cross-correlation of residuals and MV1 with output CV1.
From figure 8.4, the cross-correlation results are very good. All the cross-correlation results lie within the 95% confidence boundary region, which implies that the identified models are valid for a variety of process inputs. Deficiency in the identified model can also be determined by considering the auto-correlation among the residuals themselves. The auto-correlation between the residuals, as depicted in figure 8.4, shows that the process outputs have been predicted accurately.

8.2.4 Identified System Analysis

Isolating and identifying MV-CV pairs results in identified models that are very accurate in predicting model outputs. However, the number of poles and zeros (process parameters) increases dramatically. The clear advantage is that a better process model is identified, but a disadvantage is that the increase in process parameters creates a greater computational burden in the monitoring of the process parameters. In Chapter 7, the 2-ORT subspace identification methodology managed to identify the Benfield process model using only 15 states to describe the process behaviour. This came at the cost of predicting the process output at an average fit of only 64%. However, only 361 parameters ((s + y) × (s + u)) had to be monitored for fault detection, where s, u and y are the number of states, inputs and outputs, respectively. The identified models producing the accurate predictions depicted in figures 8.1-8.3 required 77841 parameters, where s=275, u=4 and y=4. The drastic increase in process states is due to the fact that individual MV-CV pairs were isolated for identification. The latter is one of the disadvantages of piecewise MV-CV pair identification. However, the identified models are very accurate, as concluded from the validation data fits. Figure E.16 in Appendix E illustrates the poles and zeros.
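The parameter counts quoted above follow directly from the size of the stacked system matrices [[A, B], [C, D]]:

```python
# Number of monitored parameters: the entries of the stacked system
# matrices [[A, B], [C, D]], i.e. (s + y) x (s + u) for s states,
# u inputs and y outputs.
def n_parameters(s, u, y):
    return (s + y) * (s + u)

print(n_parameters(15, 4, 4))    # -> 361 (full MIMO model, Chapter 7)
print(n_parameters(275, 4, 4))   # -> 77841 (isolated MV-CV pair models)
```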
From the pole locations, one can conclude that the identified system is stable.

8.3 Fault Detection Methodology Verification

This section verifies the effectiveness and efficiency with which the proposed fault detection methodology is able to detect faults. Faults that were isolated include foaming and flooding (which are regarded as process faults), and sensor faults (which are classified as instrumental faults). Faults, as identified by Sasol, were used to verify whether the fault detection methodology was able to detect actual process faults. The fault detection methodology verification process incorporates the process model as identified in section 8.2.2. Raw plant data, as measured from the Benfield process operating in closed-loop with an RMPCT controller, are monitored for abnormal situations.

8.3.1 Detection of Process Flooding and Foaming

As discussed in section 7.3, foaming and flooding are deemed operational problems, because they limit mass transfer and thereby lead to CO2-slip run-away [85]. The factors that cause foaming and flooding can be divided into internal process upsets (the presence of hydrocarbons heavier than CO2, cyclic hydrocarbons, DEA degradation products, organic acids and other oxygenated components serve as foaming agents) and external abnormal, unidentified process disturbances (abnormal fluctuations in the CO2-rich gas feed). In section 7.3, the fault detection methodology was validated and used to monitor the Benfield process, where it detected foaming and flooding caused by internal process upsets. In the verification of the fault detection methodology, foaming and flooding were caused by external process disturbances. Figure 8.5 illustrates the corresponding MVs and CVs that were monitored to detect possible process faults. Figure 8.6 illustrates the detection of the fault at t=3100s.
It can already be concluded from the parameter error fluctuations before the fault was detected (t<3100s) that the process exhibits unstable and unpredictable behaviour. The latter is concluded from the fact that the controlled CVs exhibit abnormal fluctuations while the controlled inputs remain constant. One will also notice that the Maximum-Minimum graph in figure 8.6 illustrates how the parameter error deviations increase over time; action should have been taken by the process operator at t=3750s to prevent possible flooding and foaming at t=4500s. Preventive measures include dosing anti-foam agents. Figure 8.7 illustrates the MVs and CVs which were monitored for process faults, as detected in figure 8.8. The fault was identified as foaming with the possibility of flooding (flooding is usually caused by sporadically fluctuating differential bed pressures which are not controlled to acceptable levels). At t=6000s (see CV2-CV4 in figure 8.7), the differential bed pressures showed abnormal deviations from their normal operating levels. These deviations resulted in a steep increase in the infinity matrix norm parametric error measure (as seen in graph a of figure 8.8). From the Maximum-Minimum graph in figure 8.8, the process operator could have taken the necessary precautions to prevent possible process upsets as soon as t=5500s.

Figure 8.5: Measured MVs and CVs. Abnormal differential bed pressures, observed from the CV fluctuations, resulted in foaming [9].

Figure 8.6: Detection of foaming due to abnormal differential bed pressure fluctuations.
Graph a illustrates the infinity matrix norm measure for estimated parameter deviations, while graph b depicts the severity of the parameter deviation by illustrating the difference between the maximum and minimum parameter error over a predefined period of monitoring.

Figure 8.7: Measured MVs and CVs. Abnormal differential bed pressures, observed from the CV fluctuations, resulted in foaming.

Figure 8.8: Detection of foaming and the possibility of flooding. Graph a illustrates the infinity matrix norm measure for estimated parameter deviations, while graph b depicts the severity of the parameter deviation by illustrating the difference between the maximum and minimum parameter error over a predefined period of monitoring.

Figure 8.9: Measured MVs and CVs. An abnormal spike in MV3 illustrates the degradation of a sensor, where complete sensor failure is imminent.

8.3.2 Detection of Sensor Faults

Early detection of sensor faults, before complete sensor failure, is necessary to prevent critical process upsets. Figure 8.9 illustrates the monitored MVs and CVs, where a sensor fault was detected at t=3800s. From figure 8.10, one will notice the parameter error fluctuations (t > 4125s) that started after the sensor fault was detected. Parameter error fluctuations have previously led to foaming and flooding, as illustrated by figures 8.5 and 8.7. The fault detection methodology was thus not only able to identify a sensor fault, but was also able to detect parameter fluctuations, which have previously led to foaming and flooding.
Figure 8.10: Detection of sensor failure.

8.4 Conclusion

In this chapter the proposed fault detection methodology and 2-ORT subspace identification methodology were verified by identifying and monitoring the Benfield process using recent process data, measured in the months of August, September and October 2008. Process and instrumental faults, as identified by Sasol, were used as a benchmark for the fault detection methodology. Considering the validation data fit results and the residual analysis results, it was concluded that the 2-ORT subspace identification methodology is able to identify process models that are very accurate in predicting model outputs. Accurate models were identified by using large data sets and isolating MVs for individual MV-CV model identification. The latter led to an increase in process parameters, which increases the computational burden for the fault detection methodology. The 2-ORT subspace identification methodology and the fault detection methodology together form a feasible monitoring solution that can be implemented on the real Benfield process. The computational burden of estimating a new set of parameters is not a problem, due to the slow process dynamics and sampling rate of the Benfield process. However, processes with faster dynamics, where higher sampling rates are used, require the estimation process to be optimised to make it a feasible real-time solution. The fault detection methodology was able to detect the foaming, flooding and sensor faults previously identified by Sasol. By monitoring the results of the fault detection methodology, it was also possible to predict potential future process upsets.

Chapter 9
Conclusions and Further Research

9.1 Conclusive Summary

This section provides a summary of the work in this dissertation.
Accurate process monitoring is necessary to increase the economic feasibility and availability of a process plant. Such a plant, which needs active monitoring, is the Benfield process.

9.1.1 The Benfield Process

The Benfield process is a thermally regenerated cyclic solvent process, implemented by Sasol Synfuels, which removes CO2 from tail gas. The efficient and effective removal of CO2 from the tail gas results in an increase in profitability margins, as the CO2-clean tail gas is further refined and used for a variety of gas products further down the gas circuit. The Benfield unit consists of two phases, each containing two functional stages: a Potassium Carbonate Solution wash stage, which removes the bulk of the CO2, and a Diethanolamine Solution wash stage, which trims the CO2 levels down to below the 40 ppm level. Each stage consists of a wash column, in which the CO2 is absorbed by the particular wash solution, and a regeneration column, in which the CO2 is stripped from the solution using pressurised steam. For this dissertation, the focus was placed mainly on the Potassium Carbonate Solution wash stage, where the wash column was considered for active monitoring.

9.1.2 Operating Philosophy and Operational Problems

The current operating philosophy for the Benfield process is to keep it simultaneously hydraulically and CO2 loaded, as far as possible, so as to make optimal profit from the gas circuit products. This operating philosophy requires 100% availability and utilisation [85]. The most common operational problems experienced with the Benfield process that could influence plant availability and utilisation include the following: foaming, flooding, corrosion, insufficient regeneration, poor mass transfer, abnormally high and low bed differential pressures, pump cavitation and fluctuations in the CO2 slip feed to the cold separation unit [2].
Foaming and flooding are detected by erratic bed differential pressures and column levels. Foaming is deemed an operational problem because it limits mass transfer and thereby leads to CO2 slip run-away. Particulate matter, which typically includes rust particles, dirt and activated carbon particles, serves as a catalyst for potential foaming. Severe and uncontrolled foaming usually leads to flooding.

9.1.3 Benfield Process Control Solution

The Benfield process operates in a closed-loop environment and is controlled by an advanced process control solution, namely Robust Multivariable Predictive Control Technology (RMPCT). The RMPCT controller is derived from the Model Predictive Control (MPC) architecture, where a prediction horizon and a control horizon are defined such that the process is controlled optimally. The RMPCT control philosophy uses a finite impulse response (FIR) model for each of the controlled-manipulated variable pairs. RMPCT uses two different kinds of optimisation functions, namely error and profit optimisation. With optimisation, RMPCT permits the constraints on some or all controlled variables to be de-activated.

9.1.4 Process Monitoring

On-line process monitoring with fault detection can provide stability and efficiency for a wide range of industrial processes. With continuous on-line fault detection, it is possible to detect abnormal and undesired process states and process parameters, which ultimately increases plant performance [11]. A prerequisite for efficient and accurate on-line monitoring is an accurate process model, as identified from process data. This identified process model is utilised to predict process behaviour, so that faults can be detected and corrected, ultimately increasing plant performance.
9.1.5 Process Model Identification

In the last decade, researchers have shown a renewed interest in closed-loop system identification. The use of closed-loop system identification is motivated by distinct advantages over open-loop identification. Closed-loop system identification can be classified into three distinct approaches, as defined by Ljung [30]: the direct approach, the indirect approach and the joint input-output approach. Identification formulations in the 1980s and 1990s were traditionally centred around the Prediction Error Method (PEM) [93] paradigm, proposed by Ljung [30]. The advantage of PEM is that convergence and asymptotic variance are well established [61], whereas its disadvantages are a rather complex parametrisation and non-convex optimisation. The extension of PEM to closed-loop systems was introduced in the late 1990s with the proposed ASYM method [31]. The early 1990s witnessed the birth of a new type of linear system identification algorithm, called the subspace method. Subspace methods originated at the intersection of system theory, geometry and numerical linear algebra [1]. Linear subspace identification methods are concerned with systems and models that can be represented as state-space models. State-space approaches are typically better suited to model multiple-input, multiple-output (MIMO) systems. The main difficulty in applying prediction error methods to state-space models is to find a numerically robust canonical realization, since the alternative, a full parametrisation of the state-space model, would involve a huge number of parameters [28]. Subspace identification is regarded as an alternative to PEM identification methods. Subspace identification yields a multivariable system model without the need for special parametrisation, which requires significant prior knowledge and non-convex optimisation [45].
Most subspace methods fail (the unbiased estimate property of subspace methods is lost) when used with closed-loop system data, even with large data sets [7, 45]. With closed-loop data, consistency of the first least squares estimation breaks down. Two solutions, proposed by Qin and Ljung [79] and Katayama and Tanaka [8] respectively, address the problem of the lost unbiased estimate property of subspace methods when used with closed-loop data. Qin and Ljung [79] revealed that typical subspace identification algorithms actually use non-parsimonious model formulations, with extra terms in the model that appear to be non-causal. These extra terms are included to conveniently perform subspace projection, but are the cause of inflated variance in the estimates, and partially responsible for the loss of closed-loop identifiability. Qin and Ljung [79] proposed a subspace method in which these non-causal terms are removed from the projection, making the model parsimonious; the terms are instead estimated separately and incorporated into the identification of the process model. Katayama and Tanaka's [8] proposed method is based on two successive orthogonal decompositions (ORT). The first LQ decomposition, used for data pre-processing, calculates the deterministic components of the joint input-output process. The second decomposition uses the ORT method to compute the system matrices. The LQ decomposition proposed by Katayama and Tanaka [8] is used to eliminate the non-causal terms, instead of estimating these terms, as was done earlier by Qin and Ljung [79]. The proposed method of Katayama and Tanaka [8] was implemented for closed-loop identification of the Benfield process.
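The first (data pre-processing) decomposition can be sketched numerically as follows (a minimal illustration under my own naming, not the dissertation's implementation; the joint input-output data W is projected onto the row space of the exogenous reference data R):

```python
import numpy as np

def lq(M):
    """LQ decomposition: M = L @ Q with L lower-triangular, computed via a
    QR decomposition of the transpose."""
    Q, R = np.linalg.qr(M.T)
    return R.T, Q.T

def deterministic_component(R_block, W_block):
    """Sketch of the first decomposition of the ORT method: project the
    joint input-output data W onto the row space of the exogenous
    references R, discarding the stochastic (noise-driven) component."""
    m = R_block.shape[0]
    L, Q = lq(np.vstack([R_block, W_block]))
    # L is block lower-triangular: the rows of W split into a part
    # explained by the references (returned) and an orthogonal residual.
    return L[m:, :m] @ Q[:m, :]
```

For noise-free data generated as W = T R, the projection recovers W exactly; with additive noise, the orthogonal residual carries the stochastic component that the method discards before the second decomposition.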
It was concluded through extensive simulations that the models identified using the 2-ORT subspace identification method were in some cases unstable and not suitable for use in fault detection. A solution proposed by Maciejowski [91] suggests an alternative way in which the system matrix can be estimated from the extended observability matrix, resulting in guaranteed system stability. The proposed method of Maciejowski [91] guarantees the estimation of a stable system matrix. The cost of this benefit is a loss of estimation accuracy, but in some applications this is outweighed by the advantage of guaranteed stability. Applications that can take advantage of this guaranteed stability are subspace algorithms that run on-line and unsupervised, such as adaptive control or fault monitoring [91]. Maciejowski [91] proves that by using the shift invariance property of the extended observability matrix, and appending a block of zeros to the shifted extended observability matrix, the estimated system matrix is guaranteed to be stable.

9.1.6 A Parametric Fault Detection Approach

Solutions to fault detection and process monitoring are based on model-based platforms, where input and output signals and the dynamics of process models are used to detect faults. Well-known model-based fault detection methods include parameter estimation, parity equations and state observers [56]. Process monitoring and fault detection methods using process parameters have been studied extensively since the early 1980s [26]. However, due to the difficulty of identifying accurate system models from engineering first principles, parameter monitoring fault detection methods have not been the predominant choice in process fault detection.
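Maciejowski's construction can be sketched as follows (an illustrative implementation with my own naming, assuming the extended observability matrix Gamma stacks C, CA, CA^2, ...):

```python
import numpy as np

def stable_A_from_observability(Gamma, l):
    """Sketch of the guaranteed-stability estimate of the system matrix A.
    Instead of solving the shift-invariance relation
        Gamma[:-l, :] @ A = Gamma[l:, :]
    directly, the down-shifted observability matrix is appended with a
    block of zeros and the least-squares problem is solved with the FULL
    Gamma on the left; the resulting A has spectral radius at most 1."""
    n = Gamma.shape[1]
    rhs = np.vstack([Gamma[l:, :], np.zeros((l, n))])
    A, *_ = np.linalg.lstsq(Gamma, rhs, rcond=None)
    return A

# Illustration: observability matrix built from an UNSTABLE pair (A, C)
A_true = np.array([[1.2, 0.0], [0.1, 0.5]])   # eigenvalue 1.2, outside unit circle
C = np.array([[1.0, 0.3]])
blocks = [C]
for _ in range(9):
    blocks.append(blocks[-1] @ A_true)
Gamma = np.vstack(blocks)
A_hat = stable_A_from_observability(Gamma, l=1)
print(max(abs(np.linalg.eigvals(A_hat))))      # spectral radius, at most 1
```

Intuitively, the appended zero block forces the least-squares fit to pull the tail of the predicted observability matrix towards zero, which bounds every eigenvalue of the estimate by one; accuracy is traded for this guarantee, exactly as noted above.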
The estimation of process parameters using the least squares estimation method is also computationally intensive and is thus not a feasible solution for on-line process parameter monitoring. Subspace methods have proven to be computationally efficient, where no a-priori process knowledge is necessary to estimate a system model. Subspace system identification methods thus allow the user to identify black-box models, which can be used to monitor processes. The challenge in fault detection with subspace methods is to monitor and evaluate the vast number of system parameters efficiently and elegantly. Re-identification of the process using subspace methods, necessary to track parameter changes, is also not a feasible solution to fault detection, because of the vast number of data samples required, which must contain well-excited process dynamics. The proposed fault detection solution in this dissertation utilises the initial process parameters as identified by the subspace identification methodology, and updates and monitors these parameters periodically to detect possible process upsets and faults. General Kalman filter theory, which is normally used to estimate system states, was extended such that it can be used to estimate system parameters. The infinity matrix norm is an effective measure, implemented to detect deviations between the initial parameters and the new parameters estimated by the extended Kalman filter.

9.1.7 Validation and Verification of Methodologies

The proposed 2-ORT subspace identification methodology and fault detection methodology were validated by identifying and monitoring a real process.
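The parameter-tracking idea can be sketched as follows (a minimal illustration with hypothetical names; for a model that is linear in its parameters, the extended Kalman filter reduces to the ordinary Kalman recursion shown here, with the parameters modelled as a random walk):

```python
import numpy as np

def kf_parameter_update(theta, P, phi, y, q=1e-6, r=1.0):
    """One Kalman-filter update treating the parameters as a random-walk
    state: theta_{k+1} = theta_k + w_k, y_k = phi_k^T theta_k + v_k."""
    P = P + q * np.eye(len(theta))            # time update (random walk)
    S = phi @ P @ phi + r                     # innovation variance
    K = P @ phi / S                           # Kalman gain
    theta = theta + K * (y - phi @ theta)     # measurement update
    P = P - np.outer(K, phi @ P)              # covariance update
    return theta, P

# Track the parameters of y_k = a*y_{k-1} + b*u_{k-1} from simulated data
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
theta = np.zeros(2)
P = np.eye(2)
y_prev = 0.0
for _ in range(500):
    u = rng.standard_normal()
    y = a_true * y_prev + b_true * u
    phi = np.array([y_prev, u])
    theta, P = kf_parameter_update(theta, P, phi, y)
    y_prev = y
```

In the proposed scheme, the subspace stage would supply the initial theta and P; the infinity norm of the difference between theta and its reference value is then the deviation measure monitored for faults.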
Data used for identification and monitoring purposes was measured under closed-loop process operation, using data obtained at the time when the Benfield process was initially commissioned. The Benfield process was commissioned in August 2004, when the process was characterised by many unmeasured disturbances as well as process and instrumental failures. The latter characteristics enabled the fault detection methodology to be tested and validated thoroughly. The proposed methodologies were also verified using measured data obtained during the months of August, September and October 2008. With continuous plant monitoring and maintenance procedures, plant operators were able to minimise and isolate unmeasured disturbances and process upsets. With a stable plant and minimal plant upsets, it was possible to obtain very large and informative data sets, which could be utilised in the identification process. These informative data sets led to identified models that are very accurate in predicting system behaviour, which is critical for effective fault monitoring.

9.2 Critical Review of Own Work

This dissertation evaluated the hypothesis that process faults can be detected by monitoring process parameters, updated periodically by re-identifying the process model. This hypothesis required the development of an efficient and effective identification methodology that can be integrated with a fault detection methodology which monitors and updates parameter changes. This work was carried out for the Benfield process at Sasol. It was further noted that it would be advantageous if such an identification and fault detection methodology could be executed in real time, thus conducting on-line process monitoring.
The first task and objective was to propose and implement a system identification methodology that can identify process models operating in a closed-loop environment with unmeasured disturbances and limited excitation. A subspace approach was adopted, since it makes use of well-defined system theory, geometry and numerical linear algebra, which are efficient tools when large data sets are manipulated to identify process models. A subspace identification solution that caters for single-input, single-output (SISO) systems was proposed by Katayama and Tanaka [8]; this was further adapted so that it can be used to identify multiple-input, multiple-output (MIMO) system models. The subspace SISO SID solution can be used for MIMO systems if the process is known to be diagonal, which was not the case for the Benfield process. However, due to limited process excitation in the verification process, it was impossible to isolate scenarios where all the MVs were excited simultaneously, and individual MV-CV pair-wise identification therefore had to be conducted. Models identified by the proposed 2-ORT MIMO subspace identification methodology tended to be unstable. A solution to this problem was to alter the way in which the system matrices are derived from the extended observability matrix. The proposed solution resulted in stable models; however, process dynamics were lost, since some of the unstable poles were not included in the process model. If the process indeed has unstable poles, then excluding these poles from the system model may jeopardise the efficiency of the fault detection methodology. This was not a problem for the Benfield process, since all the faults used as a benchmark to test the fault detection methodology were detected, and the poles identified were all stable.
One can conclude from the validation and verification results that the proposed 2-ORT MIMO subspace identification methodology was very successful in identifying the Benfield process operating in closed-loop. Accurate models were identified every time, as confirmed by validation data fits and residual analysis. Compared to traditional subspace methods, the 2-ORT MIMO subspace identification methodology is superior in the sense that it successfully identified unbiased system parameters. In comparison to the PEM framework, it is superior in the sense that it can efficiently utilise large data sets and identify system models with limited process excitation. A prerequisite of the 2-ORT MIMO subspace identification methodology is persistently exciting reference signals to excite the plant dynamics. This is not practically feasible in a real-time operating plant, and was thus not adhered to. However, it would be insightful to conduct further comparative studies to determine the effect of the inclusion of these reference signals. It was concluded that it is not practically feasible to re-identify process models, using the subspace methodology, to update process parameters for fault detection: subspace methods require a vast amount of measured process data, which makes the turnaround time for parameter updating too long for effective fault detection. A proposed solution was to adapt Kalman filter theory to estimate system parameters instead of states. A natural prerequisite for the extended Kalman filter is accurate initial parameter estimates, which the subspace method is able to produce periodically.
The extended Kalman filter is an elegant way of estimating new parameters, but it might be difficult to isolate parameter changes, since there is no controlled manner or procedure in which these parameters are updated. The extended Kalman filter is thus a good solution for fault detection, but may not be suitable for fault isolation. The extended Kalman filter approach is also computationally intensive when there are many parameters to be estimated. It was concluded from verification and validation results that by monitoring parameter deviations it is possible to detect process faults such as foaming, flooding and sensor faults. The infinity matrix norm is an effective way to determine whether there is a global parameter change; however, it is not very sensitive to small parameter deviations. The fault detection methodology is thus regarded as a robust fault detection tool, which produces a minimum number of false alarms.

9.3 Directions of Future Work

Closed-loop identification has proven to be the most economically feasible method of identifying process models. However, limited excitation of process dynamics, due to limited or no structured step-testing on the plant, sometimes makes closed-loop identification infeasible. Subspace identification methods address this problem, since subspace methodologies can utilise large data sets and extract system dynamics information. Addressing the problem of the non-causal terms found in the formulation of subspace methods, used for projection, has extended the use of subspace SID to the identification of processes operating in closed loop. Katayama and Tanaka [8] proposed an LQ projection procedure used to pre-treat data and remove stochastic and non-causal components. Theoretical work conducted by Katayama and Tanaka [8] showed that it is possible to limit and prevent bias in the estimated parameters by using the LQ projection procedure.
However, through simulations it was concluded that the efficiency of this method relies strongly on the type of unmeasured disturbances present. Subjected to a coloured noise source, it is very difficult to estimate accurate system parameters. More theoretical work needs to be done to understand the estimated parameter variance properties of subspace methods when subjected to coloured noise environments. The assumption of a white noise environment is too idealised to develop methods of practical significance. Monitoring parameter deviations is a feasible way of detecting faults. The next step would be to isolate the parameters that cause the most severe deviations and link the deviations to a cause. The isolation of process faults would increase plant availability and utilisation, which will increase profitability margins. Further research also needs to be done in fault prediction. The timely prediction of faults that include foaming and flooding (in reference to the Benfield process) will result in more efficient mass transfer rates and regeneration cycles, ultimately increasing plant economic profitability. The extended Kalman filter approach to estimating system parameters is an effective tool. However, the computational burden increases dramatically with an increase in the number of system parameters, which are necessary for accurate system models. This can become a bottleneck when very large parameter sets are necessary to describe the dynamics of a plant. Further optimisation and reformulation of the implementation of the extended Kalman filter need to be addressed.

Bibliography

[1] W. Favoreel, B. de Moor, and P. van Overschee, “Subspace state space system identification for industrial processes,” Journal of Process Control, vol. 10, pp. 149–155, 2000.
[2] P.
Koller, “Detailed design specification: Benfield east,” Tech. Rep. PK6956 Rev 2, Sasol Technology, 2005.
[3] U. Forssell and L. Ljung, “Closed-loop identification revisited,” Automatica, vol. 35, pp. 1215–1241, 1999.
[4] M. Basseville, “On-board component fault detection and isolation using the statistical local approach,” Automatica, vol. 34, no. 11, pp. 1391–1415, 1998.
[5] R. Isermann and P. Balle, “Trends in the application of model-based fault detection and diagnosis of technical processes,” Control Engineering Practice, vol. 5, no. 5, pp. 709–719, 1998.
[6] J. C. Agüero and G. C. Goodwin, “Virtual closed loop identification: A subspace approach,” in Proc. of the 43rd IEEE Conference on Decision and Control, (Atlantis, Paradise Island, Bahamas), pp. 14–17, December 2004.
[7] E. de Klerk and I. Craig, “Multivariable closed-loop system identification of plants under model predictive control,” in Proc. of the 13th IFAC Symposium on System Identification (P. van den Hof, B. Wahlberg, and S. Weiland, eds.), vol. 1, (Rotterdam, The Netherlands), pp. 411–419, IFAC, Elsevier, August 2003.
[8] T. Katayama and H. Tanaka, “An approach to closed loop subspace identification by orthogonal decomposition,” Automatica, vol. 43, 2007.
[9] O. van der Westhuizen. Synfuels, Sasol, private communication, October 2008.
[10] S. Simani and C. Fantuzzi, “Dynamic system identification and model-based fault diagnosis of an industrial gas turbine prototype,” Control Engineering Practice, vol. 16, pp. 341–363, 2006.
[11] P. Kampjarvi, M. Sourander, T. Komulainen, N. Vatanski, M. Nikus, and S. Jamsa-Jounela, “Fault detection and isolation of an on-line analyzer for an ethylene cracking process,” Control Engineering Practice, vol. 16, pp. 1–10, 2008.
[12] R. Isermann, Fault-diagnosis systems: An introduction from fault detection to fault tolerance. Springer, 1 ed., 2005.
[13] R. Patton, P. Frank, and R. Clark, Issues of fault diagnosis for dynamic systems. Springer, 1 ed., 2000.
[14] J.
Gertler, Fault detection and diagnosis in engineering systems. New York: Marcel Dekker, 1998.
[15] M. Basseville and I. Nikiforov, Detection of abrupt changes: Theory and application. Englewood Cliffs: Prentice-Hall, 1993.
[16] J. Chen and R. Patton, Robust model-based fault diagnosis for dynamic systems. Dordrecht: Kluwer Academic, 1999.
[17] J. Korbicz, J. Koscielny, Z. Kowalczuk, and W. Cholewa, Fault diagnosis: Models, artificial intelligence, applications. Springer, 2004.
[18] S. Simani and R. Patton, “Fault diagnosis of an industrial gas turbine prototype using a system identification approach,” Control Engineering Practice, vol. 9, pp. 1016–1024, 2007.
[19] A. Deshpande, C. Patwardhan, and S. Narasimhan, Assessment and Future Directions of Nonlinear Model Predictive Control, vol. 358 of Lecture Notes in Control and Information Sciences. 2007.
[20] D. G. Chapel, C. L. Mariz, and J. Ernest, “Recovery of CO2 from flue gases: Commercial trends,” October 2000.
[21] M. Tolsma and H. Barnard. Synfuels, Sasol, private communication, October 2008.
[22] V. Venkatasubramanian, R. Rengaswamy, K. Yin, and S. Kavuri, “A review of process fault detection and diagnosis part I: Quantitative model-based methods,” Computers and Chemical Engineering, vol. 27, pp. 293–311, 2003.
[23] S. Kavuri, V. Venkatasubramanian, R. Rengaswamy, and K. Yin, “A review of process fault detection and diagnosis part II: Qualitative model-based methods,” Computers and Chemical Engineering, vol. 27, pp. 312–326, 2003.
[24] R. Rengaswamy, V. Venkatasubramanian, K. Yin, and S. Kavuri, “A review of process fault detection and diagnosis part III: Process history based methods,” Computers and Chemical Engineering, vol. 27, pp. 327–346, 2003.
[25] R. Patton, P. Frank, and R. Clark, Fault Diagnosis in Dynamic Systems: Theory and Application.
Series in Systems and Control Engineering, 66 Wood Lane End, Hemel Hempstead, Hertfordshire: Prentice Hall, 1989.
[26] R. Isermann, “Process fault detection based on modelling and estimation methods: a survey,” Automatica, vol. 20, no. 4, pp. 387–404, 1984.
[27] S. Katipamula and M. Brambley, “Methods for fault detection, diagnostics, and prognostics for building systems: a review, part I,” International Journal of HVAC&R Research, vol. 11, pp. 23–47, January 2003.
[28] O. Nelles, Nonlinear System Identification. Springer, 2001.
[29] K. Patan and T. Parisini, “Identification of neural dynamic models for fault detection and isolation: the case of a real sugar evaporation process,” Journal of Process Control, vol. 15, pp. 67–79, 2005.
[30] L. Ljung, System Identification: Theory for the User. Prentice Hall, 1999.
[31] Y. Zhu, “Multivariable process identification for MPC: the asymptotic method and its application,” Journal of Process Control, vol. 8, pp. 101–115, April 1998.
[32] P. van den Hof and R. de Callafon, “Multivariable closed-loop identification: From indirect identification to dual-Youla parametrization,” in Proc. of the 35th Conf. on Decision and Control, 1996.
[33] M. Viberg, “Subspace based methods for the identification of linear time invariant systems,” Automatica, vol. 31, no. 12, pp. 1835–1851, 1995.
[34] Z. Zhang, J. Fan, and H. Hua, “Simulation and experiment of a blind subspace identification method,” Journal of Sound and Vibration, vol. 311, 2008.
[35] S. Simani, C. Fantuzzi, and S. Beghelli, “Diagnosis techniques for sensor faults of industrial processes,” IEEE Trans. on Control Systems Tech., vol. 8, pp. 848–856, September 2000.
[36] T. Söderström, “Accuracy analysis of the Frisch scheme for identifying errors-in-variables systems,” IEEE Transactions on Automatic Control, vol. 52, pp. 985–997, June 2007.
[37] X. Wu and G.
Campion, “Fault detection and isolation of systems with slowly varying parameters: simulation with a simplified aircraft turbo engine model,” Mechanical Systems and Signal Processing, vol. 18, 2004.
[38] S. Rajaraman, J. Hahn, and M. Mannan, “A methodology for fault detection, isolation, and identification for nonlinear processes with parametric uncertainties,” Ind. Eng. Chem. Res., vol. 43, pp. 6774–6786, September 2004.
[39] L. Palma, F. Coito, and R. da Silva, “Diagnosis of parametric faults based on identification and statistical methods,” in Proc. of the 44th IEEE Conference on Decision and Control and the European Control Conference, (Seville, Spain), December 2005.
[40] J. Rossiter, Model-Based Predictive Control: A Practical Approach. New York: CRC Press, 2003.
[41] J. Prakash, S. Narasimhan, and S. Patwardhan, “Integrating model based fault diagnosis with model predictive control,” Ind. Eng. Chem. Res., vol. 44, pp. 4344–4360, 2005.
[42] J. H. Lee and M. Morari, “Model predictive control: Past, present and future,” Comput. Chem. Eng., vol. 23, pp. 667–682, 1999.
[43] H. Zeiger and A. McEwen, “Approximate linear realizations of given dimensions via Ho’s algorithm,” IEEE Transactions on Automatic Control, vol. 19, p. 153, 1974.
[44] D. Brillinger, Time Series: Data Analysis and Theory. San Francisco, CA: Holden-Day, 1981.
[45] S. B. Jorgensen and J. Lee, “Recent advances and challenges in process identification,” tech. rep., Georgia Institute of Technology, Atlanta, Georgia, USA, 2002.
[46] Y. Zhu and F. Butoyi, “Case studies on closed loop identification for MPC,” Control Engineering Practice, vol. 10, pp. 403–417, 2002.
[47] C. Koung and J. MacGregor, “Design of identification experiments for robust control: a geometric approach for bivariate processes,” Ind. Eng. Chem. Res., vol. 32, pp. 1658–1666, 1993.
[48] S. J.
Qin, “An overview of subspace identification,” Computers and Chemical Engineering, vol. 30, pp. 1502–1513, 2006.
[49] M. Gevers, L. Ljung, and P. van den Hof, “Asymptotic variance expressions for closed loop identification and their relevance in identification for control,” Selected Topics in Identification, vol. 9, pp. 9–15, 1996.
[50] A. Esmaili, J. F. MacGregor, and P. A. Taylor, “Direct and two-step methods for closed loop identification: a comparison of asymptotic and finite data set performance,” Journal of Process Control, vol. 10, pp. 525–537, 2000.
[51] M. Jelali, “An overview of control performance assessment technology and industrial applications,” Control Engineering Practice, vol. 14, pp. 441–466, 2006.
[52] W. Luyben, B. Tyreus, and M. Luyben, Plant-wide Process Control. New York: McGraw-Hill, 1999.
[53] N. Thornhill and A. Horch, “Advances and new directions in plant-wide disturbance detection and diagnosis,” Control Engineering Practice, vol. 15, 2007.
[54] M. Paulonis and J. Cox, “A practical approach for large-scale controller performance assessment, diagnosis, and improvement,” Journal of Process Control, vol. 13, pp. 155–168, 2003.
[55] L. Desborough and R. Miller, “Increasing customer value of industrial control performance monitoring: Honeywell’s experience,” vol. 98 of AIChE Symposium Series, pp. 153–186, 2002.
[56] R. Isermann, “Model-based fault-detection and diagnosis: status and applications,” Annual Reviews in Control, vol. 29, pp. 71–85, 2005.
[57] A. Ghaffari, J. Roshanian, and M. Tayefi, “Time-varying transfer function extraction of an unstable launch vehicle via closed loop identification,” Aerospace Science and Technology, vol. 11, pp. 238–244, 2007.
[58] M. A.-G. Shahin, Fault Diagnosis and Performance Recovery Based on the Dynamic Safety Margin. PhD thesis, University of Mannheim, Mannheim, 2006.
[59] J. Patton, P. Frank, and R. Clark, Issues of Fault Diagnosis for Dynamic Systems. Springer, 2000.
[60] L. Mendonça, J. Sousa, and J. S. da Costa, “An architecture for fault detection and isolation based on fuzzy methods,” Expert Systems with Applications, 2008.
[61] L. Ljung, “Asymptotic variance expression for identified black-box transfer function models,” IEEE Trans. Auto. Cont., vol. 30, pp. 834–844, 1985.
[62] W.-C. Yu and N.-Y. Shih, “Bi-loop recursive least squares algorithm with forgetting factors,” IEEE Signal Processing Letters, vol. 13, pp. 505–508, August 2006.
[63] Z. Chao, H. Hua-sheng, B. Wei-min, and Z. Luo-ping, “Robust recursive estimation of auto-regressive updating model parameters for real-time flood forecasting,” Journal of Hydrology, vol. 349, pp. 376–382, 2008.
[64] I. Eker, “Open-loop and closed loop experimental on-line identification of a three-mass electromechanical system,” Mechatronics, vol. 14, pp. 549–565, 2004.
[65] C. Alexander and R. Trahan, “A comparison of traditional and adaptive control strategies for systems with time delay,” ISA Trans., vol. 40, pp. 353–368, 2001.
[66] J. Zhou, “Classical theory of errors and robust estimation,” Acta Geodaetica et Cartographica Sinica, pp. 115–120, 1989.
[67] T. Söderström and P. Stoica, System Identification. Series in System and Control Engineering, NY, USA: Prentice Hall, 1989.
[68] N. Araki, M. Okada, and Y. Konishi, “Parameter identification and swing-up control of an acrobot system,” IEEE Transactions on Automatic Control, pp. 1040–1045, 2005.
[69] S. Julier, J. Uhlmann, and H. F. Durrant-Whyte, “A new method for the nonlinear transformation of means and covariances in filters and estimators,” IEEE Transactions on Automatic Control, vol. 45, no. 3, pp. 477–482, 2000.
[70] T. Lefebvre, H. Bruyninckx, and J. D. Schutter, “Comment on ‘A new method for the nonlinear transformation of means and covariances in filters and estimators’,” IEEE Transactions on Automatic Control, vol. 47, pp. 1406–1408, August 2002.
[71] P. V. Overschee and B. D. Moor, “N4SID: subspace algorithms for the identification of combined deterministic and stochastic systems,” Automatica, vol. 30, no. 1, pp. 75–93, 1994.
[72] M. Verhaegen and P. Dewilde, “Subspace identification, part I: the output-error state space model identification class of algorithms,” International Journal of Control, vol. 56, pp. 1187–1210, 1992.
[73] W. Larimore, “Canonical variate analysis in identification, filtering and adaptive control,” Proc. of the 29th Conference on Decision and Control, vol. 90, pp. 596–604, 1990.
[74] M. Viberg, “Subspace methods in system identification,” Proc. of the 10th IFAC Symposium on System Identification, vol. 1, no. 94, pp. 1–12, 1994.
[75] P. van Overschee and B. de Moor, “Closed-loop subspace identification algorithm,” in Proc. of the 36th IEEE CDC, pp. 1848–1853, 1997.
[76] D. Bauer, “Subspace algorithms,” in A Proceedings volume from the 13th IFAC Symposium on System Identification (P. van den Hof, B. Wahlberg, and S. Weiland, eds.), vol. 2, (Rotterdam, The Netherlands), pp. 993–1004, August 2003.
[77] P. Mathieu and M. Mohammed, “Closed loop identification method using a subspace approach,” in A Proceedings volume from the 13th IFAC Symposium on System Identification (P. van den Hof, B. Wahlberg, and S. Weiland, eds.), vol. 1, (Rotterdam, The Netherlands), pp. 423–428, August 2003.
[78] S. Qin and L. Ljung, “Parallel implementation of subspace identification with parsimonious models.” IFAC Symposium on System Identification, 2003.
[79] S. Qin and L. Ljung, “Closed loop subspace identification with innovation estimation,” in A Proceedings volume from the 13th IFAC Symposium on System Identification (P. van den Hof, B. Wahlberg, and S. Weiland, eds.), vol. 2, (Rotterdam, The Netherlands), pp. 861–866, August 2003.
[80] M.
Jansson, “Subspace identification and ARX modeling,” in A Proceedings volume from the 13th IFAC Symposium on System Identification (P. van den Hof, B. Wahlberg, and S. Weiland, eds.), vol. 4, (Rotterdam, The Netherlands), pp. 1585–1590, August 2003.
[81] J. S. Conner and D. E. Seborg, “Assessing the need for process re-identification,” Ind. Eng. Chem. Res., vol. 44, pp. 2767–2775, 2005.
[82] A. AlGhazzawi and B. Lennox, “Monitoring a complex refining process using multivariate statistics,” Control Engineering Practice, vol. 16, pp. 294–307, 2008.
[83] Honeywell, “Benfield process.” Internet, 2008.
[84] E. de Klerk, “Closed-loop identification of plants under model predictive control,” dissertation, University of Pretoria, Faculty of Engineering, 2003.
[85] E. Jefferies and L. van den Merwe, “Operating envelope document: Process operability review,” Tech. Rep. Rev 1, Sasol Technology, 2006.
[86] A. Krishnan, K. A. Kosanovich, M. R. DeWitt, and M. B. Creech, “Robust model predictive control of an industrial solid phase polymerizer,” in Proceedings of the American Control Conference, 1998.
[87] S. Skogestad and I. Postlethwaite, Multivariable Feedback Control: Analysis and Design. Wiley, 2nd ed., 2005.
[88] P. van den Hof and R. Schrama, “An indirect method for transfer function estimation from closed loop data,” Automatica, vol. 29, no. 6, pp. 1523–1527, 1993.
[89] W. Lin, Closed-loop Subspace Identification and Fault Diagnosis with Optimal Structured Residuals. PhD thesis, The University of Texas at Austin, Austin, May 2005.
[90] T. Katayama, H. Kawauchi, and G. Picci, “Subspace identification of closed loop systems by the orthogonal decomposition method,” Automatica, vol. 41, pp. 863–872, November 2005.
[91] J. M. Maciejowski, “Guaranteed stability with subspace methods.” July 1995.
[92] V.
Panuska, “A new form of the extended Kalman filter for parameter estimation in linear systems with correlated noise,” IEEE Transactions on Automatic Control, vol. AC-25, pp. 229–235, April 1980.
[93] J. W. MacArthur and C. Zhan, “A practical global multi-stage method for fully automated closed loop identification of industrial processes,” Journal of Process Control, vol. 17, pp. 770–786, 2007.
[94] M. Verhaegen, “Identification of the deterministic part of MIMO state space models given in innovations form from input-output data,” Automatica, vol. 30, no. 1, pp. 61–74, 1994.

Appendix A
Residual Analysis of Nominal Benfield Process Model

Figures A.1–A.15 illustrate the auto-correlation of the residuals of the identified Benfield process, validated against closed-loop raw plant data, for each individual CV output of the identified Benfield process. Figures A.1–A.15 also illustrate the cross-correlation between each CV output and each MV input.

[Figures A.1–A.6: auto-correlation and cross-correlation plots of the residuals for input MV1 with outputs CV2, CV3 and CV4, and for input MV2 with outputs CV1, CV2 and CV3.]
[Figures A.7–A.15: auto-correlation and cross-correlation plots of the residuals for input MV2 with output CV4, input MV3 with outputs CV1–CV4, and input MV4 with outputs CV1–CV4.]

Appendix B
Complementary Subspace Theory

Shift-Invariance Property of the Extended Observability and Controllability Matrices

The system matrix, A, can be determined by exploiting the shift-invariance property of either the extended observability matrix, Γ, or the extended controllability matrix, Ξ.
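As a minimal numerical sketch of this shift-invariance idea (using an assumed two-state example system for illustration, not the Benfield model), A can be recovered by least squares from the truncated and shifted block rows of Γ:

```python
import numpy as np

# Assumed example system, purely for demonstration of the technique.
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
p = C.shape[0]  # output dimension
k = 6           # observability depth (assumed)

# Extended observability matrix Gamma = [C; CA; CA^2; ...; CA^(k-1)]
Gamma = np.vstack([C @ np.linalg.matrix_power(A_true, i) for i in range(k)])

# Shift invariance: dropping the last block row of Gamma and multiplying
# by A yields Gamma with its first block row dropped.
Gamma_down = Gamma[:-p, :]   # rows C .. CA^(k-2)
Gamma_up   = Gamma[p:, :]    # rows CA .. CA^(k-1)

# Least-squares estimate of A via the pseudo-inverse.
A_est = np.linalg.pinv(Gamma_down) @ Gamma_up
```

For an observable pair (C, A) the pseudo-inverse solution is exact, so `A_est` matches `A_true` to numerical precision.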
The extended observability matrix and extended controllability matrix are defined as follows [91]:

$$\Gamma \stackrel{\text{def}}{=} \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \end{bmatrix}, \qquad \Xi \stackrel{\text{def}}{=} \begin{bmatrix} B & AB & A^2B & \cdots \end{bmatrix}. \tag{B.1}$$

Shift invariance implies that

$$\Gamma A = \Gamma^{\uparrow}, \tag{B.2}$$

and

$$A\,\Xi = \Xi^{\leftarrow}, \tag{B.3}$$

where

$$\Gamma^{\uparrow} = \begin{bmatrix} CA \\ CA^2 \\ CA^3 \\ \vdots \end{bmatrix}, \tag{B.4}$$

and

$$\Xi^{\leftarrow} = \begin{bmatrix} AB & A^2B & A^3B & \cdots \end{bmatrix}. \tag{B.5}$$

Approximating the pair (B, D)

Let the following LQ factorisation be given:

$$\begin{bmatrix} U_f \\ U_p \\ Y_p \\ Y_f \end{bmatrix} = \begin{bmatrix} L_{11} & 0 & 0 & 0 \\ L_{21} & L_{22} & 0 & 0 \\ L_{31} & L_{32} & L_{33} & 0 \\ L_{41} & L_{42} & L_{43} & L_{44} \end{bmatrix} \begin{bmatrix} R_1^T \\ R_2^T \\ R_3^T \\ R_4^T \end{bmatrix}, \tag{B.6}$$

then the Toeplitz matrix, Ψ, can be estimated as follows [94]:

$$\hat{\Psi} = \begin{bmatrix} L_{31} & L_{42} \end{bmatrix} \begin{bmatrix} L_{11} & L_{22} \end{bmatrix}^{\dagger}. \tag{B.7}$$

From the estimated Toeplitz matrix and the estimated extended observability matrix, the pair of matrices (B, D) satisfies the following relation [94]:

$$\begin{bmatrix} I_p & 0 \\ 0 & \hat{\Gamma}\left(1:p(k-1),\,1:n\right) \\ I_p & 0 \\ 0 & \hat{\Gamma}\left(1:p(k-2),\,1:n\right) \\ \vdots & \vdots \\ I_p & 0 \end{bmatrix} \begin{bmatrix} D \\ B \end{bmatrix} = \begin{bmatrix} \hat{\Psi}\left(1:pk,\,1:m\right) \\ \hat{\Psi}\left(p+1:pk,\,m+1:2m\right) \\ \vdots \\ \hat{\Psi}\left(p(k-1)+1:pk,\,m(k-1)+1:mk\right) \end{bmatrix}, \tag{B.8}$$

where p is the output dimension, m is the input dimension, n is the order of the system, and k is the depth of the Toeplitz and observability matrices.

Appendix C
Residual Analysis: Simulation Case Study

White Noise Interference

Figures C.1–C.15 illustrate the auto-correlation of the residuals of the identified Benfield process, validated by closed-loop raw plant data, for each individual CV output of the identified Benfield process. Figures C.1–C.15 also illustrate the cross-correlation between each CV output and each MV input.

[Figures C.1–C.2: auto-correlation and cross-correlation plots of the residuals for input MV1 with outputs CV1 and CV3.]
[Figures C.3–C.14: auto-correlation and cross-correlation plots of the residuals for input MV1 with output CV4, input MV2 with outputs CV1–CV4, input MV3 with outputs CV1–CV4, and input MV4 with outputs CV1–CV3.]
[Figure C.15: auto-correlation and cross-correlation plot of the residuals for input MV4 with output CV4.]

Colored Noise Interference

Figures C.16–C.30 illustrate the auto-correlation of the residuals of the identified Benfield process, validated by closed-loop raw plant data, for each individual CV output of the identified Benfield process. Figures C.16–C.30 also illustrate the cross-correlation between each CV output and each MV input.

[Figures C.16–C.23: auto-correlation and cross-correlation plots of the residuals for input MV1 with outputs CV1, CV3 and CV4, input MV2 with outputs CV1–CV4, and input MV3 with output CV1.]
[Figures C.24–C.30: auto-correlation and cross-correlation plots of the residuals for input MV3 with outputs CV2–CV4 and input MV4 with outputs CV1–CV4.]

Appendix D
Identified System Validation Results: Real Process Data

Residual Analysis

Figures D.1–D.15 illustrate the auto-correlation of the residuals of the identified Benfield process, validated by closed-loop raw plant data, for each individual CV output of the identified Benfield process. Figures D.1–D.15 also illustrate the cross-correlation between each CV output and each MV input. The models used for the residual analysis in figures D.1–D.15 were identified using open-loop structured step-test data and the 2-ORT subspace identification methodology. Figures D.16–D.30 illustrate the auto-correlation of the residuals of the identified Benfield process, validated by closed-loop raw plant data, for each individual CV output of the identified Benfield process.
Figures D.16–D.30 also illustrate the cross-correlation between each CV output and each MV input. The models used for the residual analysis in figures D.16–D.30 were identified using closed-loop data and the 2-ORT subspace identification methodology.

[Figures D.1–D.10: auto-correlation and cross-correlation plots of the residuals for input MV1 with outputs CV2–CV4, input MV2 with outputs CV1–CV4, and input MV3 with outputs CV1–CV3.]
[Figures D.11–D.15: auto-correlation and cross-correlation plots of the residuals for input MV3 with output CV4 and input MV4 with outputs CV1–CV4. Figures D.16–D.22: auto-correlation and cross-correlation plots of the residuals for input MV1 with outputs CV2–CV4 and input MV2 with outputs CV1–CV4.]
[Figures D.23–D.30: auto-correlation and cross-correlation plots of the residuals for input MV3 with outputs CV1–CV4 and input MV4 with outputs CV1–CV4.]

Appendix E
Identified System Verification Results: Real Process Data

Residual Analysis

Figures E.1–E.15 illustrate the auto-correlation of the residuals of the identified Benfield process, validated by closed-loop raw plant data, for each individual CV output of the identified Benfield process. Figures E.1–E.15 also illustrate the cross-correlation between each CV output and each MV input. The models used for the residual analysis in figures E.1–E.15 were identified using closed-loop process data and the 2-ORT subspace identification methodology.
[Figures E.1–E.12: auto-correlation and cross-correlation plots of the residuals for input MV1 with outputs CV2–CV4, input MV2 with outputs CV1–CV4, input MV3 with outputs CV1–CV4, and input MV4 with output CV1.]
[Figures E.13–E.15: auto-correlation and cross-correlation plots of the residuals for input MV4 with outputs CV2, CV3 and CV4.]

Identified Pole-Zero Plots: Closed-Loop and Open-Loop SID

Figure E.16 illustrates the poles and zeros of the identified plant, where closed-loop raw plant data were used for system identification.

[Figure E.16: pole-zero plot of the identified process model.]
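The stability requirement that underlies such pole-zero plots, namely that every pole of the identified A matrix lies inside the unit circle, can be checked numerically. A small sketch with an assumed example matrix (not the actual identified Benfield system matrices):

```python
import numpy as np

# Assumed example system matrix, for illustration only.
A_identified = np.array([[0.85, 0.10],
                         [-0.05, 0.70]])

# For a discrete-time system the poles are the eigenvalues of A; the
# system is stable when the spectral radius (largest pole magnitude)
# is strictly less than 1.
poles = np.linalg.eigvals(A_identified)
spectral_radius = np.max(np.abs(poles))
print(f"spectral radius = {spectral_radius:.3f}, "
      f"stable = {spectral_radius < 1.0}")
```

For this example matrix the poles are 0.8 and 0.75, so the spectral radius is 0.8 and the stability test passes.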
