
where the elements of A and B are the partial derivatives ∂f_i/∂x_j (i, j = 1, ..., n) and ∂f_i/∂u_j (i = 1, ..., n, j = 1, ..., m), respectively, with the partial derivatives being evaluated at (x_ss, u_ss). In view of Eq. 4.56, Eq. 4.57 can now be written in compact form as

f(x, u) = A(x - x_ss) + B(u - u_ss) + r(x - x_ss, u - u_ss) .   (4.59)

Neglecting the higher-order terms r(x - x_ss, u - u_ss) and defining the deviation variables

x̄ = x - x_ss ,   ū = u - u_ss   (4.60)

Eq. (4.44) can be written as

dx̄/dt = Ax̄ + Bū .   (4.61)

The output equation is developed in a similar manner:

ȳ = Cx̄ + Dū   (4.62)

where the elements of C and D are the partial derivatives ∂h_i/∂x_j with i = 1, ..., p and j = 1, ..., n, and ∂h_i/∂u_j with i = 1, ..., p and j = 1, ..., m, respectively. Hence, the linearized equations are of the same form as the original state-space equations in Eq. 4.49. Linearization of discrete time nonlinear models follows the same procedure and yields linear difference equations similar to Eq. (4.50).
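For illustration, a linearized model can also be obtained numerically when analytical differentiation of f and h is inconvenient. The sketch below is a minimal example, not a method from the text: the model functions, steady state values, and perturbation size are placeholders, and A, B, C, D are approximated by central finite differences around (x_ss, u_ss).

```python
import numpy as np

def linearize(f, h, x_ss, u_ss, eps=1e-6):
    """Finite-difference approximation of A, B, C, D at the steady state (x_ss, u_ss).

    f(x, u) returns dx/dt (length n); h(x, u) returns y (length p).
    """
    n, m = len(x_ss), len(u_ss)
    p = len(h(x_ss, u_ss))
    A, B = np.zeros((n, n)), np.zeros((n, m))
    C, D = np.zeros((p, n)), np.zeros((p, m))
    for j in range(n):                               # perturb each state variable
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x_ss + dx, u_ss) - f(x_ss - dx, u_ss)) / (2 * eps)
        C[:, j] = (h(x_ss + dx, u_ss) - h(x_ss - dx, u_ss)) / (2 * eps)
    for j in range(m):                               # perturb each input
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_ss, u_ss + du) - f(x_ss, u_ss - du)) / (2 * eps)
        D[:, j] = (h(x_ss, u_ss + du) - h(x_ss, u_ss - du)) / (2 * eps)
    return A, B, C, D

# Hypothetical two-state example; f and h are placeholders, not a model from the text.
f = lambda x, u: np.array([-x[0] + u[0], x[0] - 2.0 * x[1]])
h = lambda x, u: np.array([x[1]])
A, B, C, D = linearize(f, h, x_ss=np.array([1.0, 0.5]), u_ss=np.array([1.0]))
```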

Subspace State-Space Models

Subspace state-space models are developed by using techniques that determine the largest directions of variation in the data and build models that describe the data. PCA and PLS are subspace methods used for steady-state data. They could be used to develop models for dynamic relations by augmenting the appropriate data matrices with lagged values of the variables. In recent years, dynamic model development techniques that rely on subspace concepts have been proposed [315, 316, 613, 621]. Subspace methods are introduced in this section to develop state-space models for process monitoring and closed-loop control.

Consider a simple state-space model without external inputs u_k:

x_{k+1} = Fx_k + He_k
y_k = Cx_k + e_k   (4.63)

where x_k is the state variable vector of dimension n at time k and y_k is the observation vector with p output measurements. The stochastic input e_k is the serially uncorrelated innovation vector having the same dimension as y_k and covariance E[e_k e_{k-ℓ}^T] = Δ if ℓ = 0, and 0 otherwise. This representation would be useful for process monitoring activities where "appropriate" state variables (usually the first few state variables) are used to determine if the process is operating as expected. The statistics used in statistical process monitoring (SPM) charts assume no correlation over time between measurements. If state-space models are developed such that the state variables and residuals are uncorrelated at zero lag, the statistics can be safely applied to these calculated variables instead of measured process outputs. Several techniques, such as balanced realization [21], PLS realization [416], and canonical variate realization [315, 413], can be used for developing these models. Negiz and Cinar [413] have proposed the use of state variables developed with canonical variate analysis based realization to implement such SPM techniques for multivariable continuous processes.

Subspace algorithms generate the process model by successive approximation of the memory or the state variables of the process, determining successively the functions of the past that have the most information for predicting the future [316]. Canonical variates analysis (CVA), a subspace algorithm, is used to develop state-space models [315] where the first state variable contains the largest amount of information about the process dynamics, and the second state variable is orthogonal to the first (does not repeat the information explained in the previous state variable) and describes the largest amount of the remaining process variation. The first few significant state variables can often be used to describe the greatest variation in the process. The system order n is determined by inspecting the dominant singular values (SVs) of a covariance matrix (the ratio of a specific SV to the sum of all the SVs [21]) generated by singular value decomposition (SVD), or by an information theoretic approach such as the Akaike Information Criterion (AIC) [315].

The Hankel matrix (Eq. 4.65) is used to develop subspace models. It expresses the covariance between future and past stacked vectors of output measurements. If the stacked vectors of future (y_{kJ}) and past (y_{kK}) data are given as

y_{kJ} = [y_k^T  y_{k+1}^T  ...  y_{k+J-1}^T]^T   and   y_{kK} = [y_{k-1}^T  y_{k-2}^T  ...  y_{k-K}^T]^T   (4.64)

the Hankel matrix (note that H_{JK} is different than the H matrix in Eq. (4.63)) is

H_{JK} = E[y_{kJ} y_{kK}^T] = [ Λ_1     Λ_2      ...  Λ_K
                                Λ_2     Λ_3      ...  Λ_{K+1}
                                ...     ...           ...
                                Λ_J     Λ_{J+1}  ...  Λ_{J+K-1} ]   (4.65)

where Λ_ℓ is the autocovariance of y_k's which are ℓ time periods apart and E[·] denotes the expected value of a stochastic variable. K and J are the past and future window lengths. The non-zero singular values of the Hankel matrix determine the order of the system, i.e., the dimension of the state variable vector. The non-zero and dominant singular values of H_{JK} are chosen by inspection of the singular values or by metrics such as AIC.
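As a sketch of Eq. (4.65), assuming the outputs have been mean-centered and are stored row-wise in a NumPy array, the block Hankel matrix can be assembled from sample autocovariances and its singular values inspected to choose the order. Window lengths and variable names below are illustrative.

```python
import numpy as np

def autocovariance(Y, lag):
    """Sample autocovariance Lambda_lag = E[y_k y_{k-lag}^T] for mean-centered Y (N x p)."""
    N = Y.shape[0]
    return Y[lag:].T @ Y[:N - lag] / (N - lag)

def hankel_matrix(Y, J, K):
    """Block Hankel matrix H_JK of Eq. (4.65); block (i, j) is Lambda_{i+j+1}."""
    p = Y.shape[1]
    H = np.zeros((J * p, K * p))
    for i in range(J):
        for j in range(K):
            H[i * p:(i + 1) * p, j * p:(j + 1) * p] = autocovariance(Y, i + j + 1)
    return H

# Order selection by inspecting the dominant singular values of H_JK.
Y = np.random.randn(500, 2)          # placeholder for an N x p mean-centered output series
H = hankel_matrix(Y, J=10, K=10)
sv = np.linalg.svd(H, compute_uv=False)
print(sv / sv.sum())                 # ratio of each SV to the sum of all SVs
```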

CV (canonical variate) realization requires that the covariances of future and past stacked observations be conditioned against any singularities by taking their square roots. The Hankel matrix is scaled by using R_J = E[y_{kJ} y_{kJ}^T] and R_K = E[y_{kK} y_{kK}^T] defined in Eq. (4.67). The scaled Hankel matrix (Ĥ_{JK}) and its singular value decomposition are given as

Ĥ_{JK} = R_J^{-1/2} H_{JK} R_K^{-1/2} = UΣV^T   (4.66)

where

U_{Jp×n} contains the n left singular vectors of Ĥ_{JK}, Σ_{n×n} contains the singular values (SVs), and V_{Kp×n} contains the n right singular vectors of the decomposition. The subscripts associated with U, Σ and V denote the dimensions of these matrices. The SVD matrices in Eq. 4.66 include only the SVs and singular vectors corresponding to the n state variables retained in the model. The full SV matrix Σ has dimension Jp × Kp and it contains the SVs in descending order. If the process noise is small, all SVs smaller than the nth SV are effectively zero and the corresponding state variables are excluded from the model.

The state variables are given as

x_k = Σ^{1/2} V^T R_K^{-1/2} y_{kK} .   (4.68)

Once x_k (or x(t)) is known, F, G (or A, B), C, and Δ can be constructed [413]. The covariance matrix of the state vector based on the CV decomposition, E[x_k x_k^T] = Σ, reveals that the x_k are independent at zero lag.
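A minimal sketch of the CV realization in Eqs. (4.66)-(4.68), again with illustrative names and mean-centered data: the past and future stacked matrices are formed from the output series, their sample covariances play the roles of R_J and R_K, and the retained singular vectors of the scaled Hankel matrix give the state sequence.

```python
import numpy as np

def cva_states(Y, J, K, n):
    """CV realization sketch: states x_k = Sigma^(1/2) V^T R_K^(-1/2) y_kK (Eq. 4.68)."""
    N, p = Y.shape
    ks = range(K, N - J + 1)
    # stacked future y_kJ = [y_k; ...; y_{k+J-1}] and past y_kK = [y_{k-1}; ...; y_{k-K}]
    Yf = np.array([Y[k:k + J].ravel() for k in ks])
    Yp = np.array([Y[k - K:k][::-1].ravel() for k in ks])
    M = Yf.shape[0]
    H_JK = Yf.T @ Yp / M                          # Hankel matrix, Eq. (4.65)
    R_J, R_K = Yf.T @ Yf / M, Yp.T @ Yp / M       # covariances of future and past stacks

    def inv_sqrt(R):
        """Symmetric inverse square root via eigendecomposition."""
        w, Q = np.linalg.eigh(R)
        return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

    H_hat = inv_sqrt(R_J) @ H_JK @ inv_sqrt(R_K)  # scaled Hankel matrix, Eq. (4.66)
    U, s, Vt = np.linalg.svd(H_hat)
    Sigma_n = np.diag(s[:n])                      # retained singular values
    Vn = Vt[:n].T                                 # retained right singular vectors
    X = (np.sqrt(Sigma_n) @ Vn.T @ inv_sqrt(R_K) @ Yp.T).T  # Eq. (4.68), one state per row
    return X, s

# Hypothetical use: X, sv = cva_states(Y, J=10, K=10, n=3)
```

Because the sample covariance of the resulting state sequence is approximately Σ (diagonal), monitoring statistics of the Hotelling T² type can, for example, be applied directly to the first few state variables, in line with the SPM use discussed above.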

The second subspace state-space model includes external inputs:

x_{k+1} = Fx_k + Gu_k + H_1w_k
y_k = Cx_k + Du_k + H_2v_k   (4.69)

where F, G, C, D, H_1 and H_2 are system matrices, and w and v are normally distributed, zero-mean noise vectors. It can be developed using CV or other methods such as N4SID [613].

The subspace state-space modeling framework has been used to develop batch-to-batch process monitoring and control techniques that utilize information from previous batches along with measurements from the ongoing batch (Section 6.6).

4.3.3 State Estimators

A state estimator is a computational algorithm that deduces the state of a system by utilizing knowledge about system and measurement dynamics, initial conditions of the system, and assumed statistics of measurement and system noises [182]. State estimators can be classified according to the set of state variables estimated. Full-order state estimators estimate all n state variables of the process. Minimum-order state estimators estimate only the unmeasurable state variables. Reduced-order state estimators estimate some of the measured state variables in addition to all unmeasurable state variables.

The estimator is designed to minimize the estimation error in a well-defined statistical sense by using all measurement information and prior knowledge about the process. The accuracy of the estimates is affected by errors in process models used. Three estimation problems can be listed: filtering, smoothing, and prediction (Fig. 4.5). In filtering, the time at which the estimate is desired coincides with the latest measurement time.

Figure 4.5. Estimation problems (filtering, smoothing, prediction); shaded regions indicate the span of available data.

In smoothing, the time of the estimate falls within the span of measurement data available. The state of the process at some prior time is estimated based on all measurements collected up to the current time. In prediction, the time of the estimate occurs after the last available measurement. The state of the process at some future time is estimated. The discussion in this section focuses on filtering (Fig. 4.6), and in particular on the Kalman filtering technique.

An estimate x̂ of a state variable x is computed using the measured outputs y. An unbiased estimate x̂ has the same expected value as that of

Figure 4.6. Kalman filtering technique.

the variable being estimated (x). A minimum variance (unbiased) estimate has an error variance that is less than or equal to that of any other unbiased estimate. A consistent estimate x̂ converges to the true value of x as the number of measurements increases. Kalman filters are unbiased, minimum variance, consistent estimators. They are also optimal estimators in the sense that they minimize the mean square estimation error.

Discrete Kalman Filter

Consider a discrete time system with the state equation

x_k = F_{k-1}x_{k-1} + w_{k-1}   (4.70)

where x_k is an abbreviation for x(t_k), and the subscript of F_{k-1} indicates that it is time dependent (F(t_{k-1})). Note that the time index is shifted back by 1 with respect to the discrete time state-space model description in Eq. (4.50) to emphasize the filtering problem. w_k is a zero-mean, white (Gaussian) sequence with covariance Q_k, and the system is not subjected to external inputs (unforced system, G(t_k) = 0). The measured output equation is

y_k = C_k x_k + v_k   (4.71)

where v_k is a vector of random noise with zero mean and covariance R_k corrupting the output measurements y_k. Given the prior estimate of x_k, denoted by x̂_k^-, a recursive estimator is sought to compute an updated estimate x̂_k^+ based on the measurements y_k. The recursive estimator uses only the most recent values of the measurements and prior estimates, avoiding the need for a growing storage of past values. The updated estimate is a weighted sum of x̂_k^- and y_k:

x̂_k^+ = K'_k x̂_k^- + K_k y_k   (4.72)

where K'_k and K_k are as yet unspecified time-varying weighting matrices. Expressing the estimates as the sum of the unknown true values and the estimation errors denoted by x̃_k,

x̂_k^+ = x_k + x̃_k^+ ,   x̂_k^- = x_k + x̃_k^-   (4.73)

and inserting the equation for x̂_k^- and Eq. (4.71) into Eq. (4.72), the estimation error x̃_k^+ becomes:

x̃_k^+ = (K'_k + K_k C_k - I)x_k + K'_k x̃_k^- + K_k v_k .   (4.74)

Consider the expected value (E[·]) of Eq. (4.74). By definition E[v_k] = 0. If E[x̃_k^-] = 0 as well, then the estimator (Eq. (4.72)) will be unbiased for any given x_k if K'_k + K_k C_k - I = 0. Hence, substituting K'_k = I - K_k C_k in Eq. (4.72):

x̂_k^+ = (I - K_k C_k)x̂_k^- + K_k y_k   (4.75)

that can be rearranged as

x̂_k^+ = x̂_k^- + K_k (y_k - C_k x̂_k^-) .   (4.76)

The corresponding estimation error is derived from Eqs. (4.71), (4.73) and (4.76) as

x̃_k^+ = (I - K_k C_k)x̃_k^- + K_k v_k .   (4.77)

The error covariance matrix P_k changes when new measurement information is used.

P_k^+ = E[x̃_k^+ x̃_k^{+T}] = (I - K_k C_k)P_k^-(I - K_k C_k)^T + K_k R_k K_k^T   (4.78)

where P_k^- and P_k^+ are the prior and updated error covariance matrices, respectively [182].

From Eq. (4.76), the updated estimate is equal to the prior estimate corrected by the error in predicting the last measurement, and the magnitude of the correction is determined by the "gain" K_k. If the criterion for choosing K_k is to minimize a weighted scalar sum of the diagonal elements of the error covariance matrix P_k^+, the cost function J_k could be

J_k = E[x̃_k^{+T} S x̃_k^+]   (4.79)

where S is a positive semidefinite matrix. If S = I, J_k = trace[P_k^+], which is equivalent to minimizing the length of the estimation error vector. The optimal choice of K_k is derived by taking the partial derivative of J_k with respect to K_k and equating it to zero:

K_k = P_k^- C_k^T (C_k P_k^- C_k^T + R_k)^{-1}   (4.80)

Substituting Eq. (4.80) in Eq. (4.78) provides a simpler expression for P_k^+ [182]:

P_k^+ = (I - K_k C_k)P_k^-   (4.81)

The equations derived so far describe the state estimate and error covariance matrix behavior across a measurement. The extrapolation of these entities between measurements is

x̂_k^- = F_{k-1}x̂_{k-1}^+
P_k^- = F_{k-1}P_{k-1}^+F_{k-1}^T + Q_{k-1}   (4.82)

Discrete Kalman filter equations are summarized in Table 4.1.

Table 4.1. Summary of discrete Kalman filter equations

Process model:                   x_k = F_{k-1}x_{k-1} + w_{k-1},   w_k ~ N(0, Q_k)
Measurement model:               y_k = C_k x_k + v_k,              v_k ~ N(0, R_k)
State estimate extrapolation:    x̂_k^- = F_{k-1}x̂_{k-1}^+
Error covariance extrapolation:  P_k^- = F_{k-1}P_{k-1}^+F_{k-1}^T + Q_{k-1}
State estimate update:           x̂_k^+ = x̂_k^- + K_k(y_k - C_k x̂_k^-)
Error covariance update:         P_k^+ = (I - K_k C_k)P_k^-
Kalman gain:                     K_k = P_k^- C_k^T (C_k P_k^- C_k^T + R_k)^{-1}
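The recursion in Table 4.1 can be implemented in a few lines. The sketch below is a minimal illustration (time-invariant F, C, Q and R for simplicity; all names are placeholders) that alternates the extrapolation and update steps.

```python
import numpy as np

def kalman_filter(Y, F, C, Q, R, x0, P0):
    """Discrete Kalman filter for x_k = F x_{k-1} + w_{k-1}, y_k = C x_k + v_k."""
    x_post, P_post = x0, P0
    estimates = []
    for y in Y:                                   # one measurement vector per time step
        # extrapolation between measurements
        x_prior = F @ x_post
        P_prior = F @ P_post @ F.T + Q
        # Kalman gain, Eq. (4.80)
        K = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + R)
        # update across the measurement, Eqs. (4.76) and (4.81)
        x_post = x_prior + K @ (y - C @ x_prior)
        P_post = (np.eye(len(x0)) - K @ C) @ P_prior
        estimates.append(x_post)
    return np.array(estimates)

# Hypothetical use: x_hat = kalman_filter(Y, F, C, Q, R, x0=np.zeros(2), P0=np.eye(2))
```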
