\[ B(q) = b_1 + b_2 q^{-1} + \cdots + b_{n_u} q^{-(n_u - 1)} \tag{4.33} \]

Often the inputs may have a delayed effect on the output. If there is a delay of nk sampling times, Eq. (4.30) is modified as

\[ y(t) + f_1 y(t-1) + \cdots + f_{n_y} y(t - n_y) = b_1 u(t - n_k) + b_2 u(t - (n_k + 1)) + \cdots + b_{n_u} u(t - (n_u + n_k - 1)) . \tag{4.35} \]

The disturbance term can be expressed in the same way
\[ w(t) = H(q, \theta) e(t) \tag{4.36} \]

where e(t) is white noise and

\[ H(q) = \frac{C(q)}{D(q)}, \qquad C(q) = 1 + c_1 q^{-1} + \cdots + c_{n_c} q^{-n_c}, \qquad D(q) = 1 + d_1 q^{-1} + \cdots + d_{n_d} q^{-n_d} . \tag{4.37} \]
The model (Eq. 4.28) can be written as
\[ y(t) = G(q, \theta) u(t) + H(q, \theta) e(t) \tag{4.38} \]

where the parameter vector \(\theta\) contains the coefficients \(b_i\), \(c_i\), \(d_i\), and \(f_i\) of the transfer functions \(G(q,\theta)\) and \(H(q,\theta)\). The model structure is described by five parameters: \(n_y\), \(n_u\), \(n_c\), \(n_d\), and \(n_k\). Since the model is based on polynomials, its structure is finalized when these parameter values are selected. The structural parameters and the polynomial coefficients are determined by fitting candidate models to data and minimizing a criterion based on reduction of prediction error and parsimony of the model.

The model represented by Eq. (4.38) is known as the Box-Jenkins (BJ) model, named after the statisticians who proposed it [79]. It has several special cases:

• Output error (OE) model. When the properties of disturbances are not modeled and the noise model H(q) is chosen to be identity (\(n_c = 0\) and \(n_d = 0\)), the noise source w(t) is equal to e(t), the difference (error) between the actual output and the noise-free output.

• AutoRegressive Moving Average model with eXogenous inputs (ARMAX). If the same denominator is used for G and H,

\[ A(q) = F(q) = D(q) = 1 + a_1 q^{-1} + \cdots + a_{n_a} q^{-n_a} . \tag{4.39} \]

Hence Eq. (4.38) becomes
\[ A(q) y(t) = B(q) u(t) + C(q) e(t) \]

where A(q)y(t) is the autoregressive (regressing on previous values of the same variable y(t)) term, C(q)e(t) is the moving average of white noise e(t), and B(q)u(t) represents the contribution of external inputs. Use of a common denominator is reasonable if the dominating disturbances enter the process together with the inputs.

• AutoRegressive model with eXogenous inputs (ARX). A special case of ARMAX is obtained by letting C(q) = 1 (nc = 0).

These models are used for prediction of the output given the values of inputs and outputs at previous sampling times. Since white noise cannot be predicted, its current value e(t) is excluded from the prediction equations. Predicted values are denoted by a caret over the variable symbol, for example \(\hat{y}(t)\). To emphasize that predictions are based on a specific parameter set \(\theta\), the notation is further extended to \(\hat{y}(t \,|\, \theta)\).
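As a minimal illustration of such a predictor, consider a first-order ARX structure; the coefficient values below are assumptions for the example, not values from the text:

```python
# One-step-ahead prediction for a first-order ARX model:
#   y(t) + a1*y(t-1) = b1*u(t-1) + e(t)
# Since white noise e(t) cannot be predicted, the predictor drops it:
#   yhat(t | theta) = -a1*y(t-1) + b1*u(t-1)
# The coefficients a1, b1 below are illustrative values.

def predict_arx(y_prev: float, u_prev: float, a1: float, b1: float) -> float:
    """Return yhat(t | theta) given y(t-1) and u(t-1)."""
    return -a1 * y_prev + b1 * u_prev

yhat = predict_arx(y_prev=1.0, u_prev=0.5, a1=-0.8, b1=0.4)
print(yhat)  # -(-0.8)*1.0 + 0.4*0.5 = 1.0
```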

The computation of the parameters \(\theta\) is usually cast as a minimization problem: select the values of \(\theta\) that minimize the prediction errors \(\varepsilon(t, \theta) = y(t) - \hat{y}(t \,|\, \theta)\) for a given set of data over a time period. For N data points,
\[ \hat{\theta}_N = \arg\min_{\theta} \frac{1}{N} \sum_{t=1}^{N} \varepsilon^2(t, \theta) \]
where "arg min" denotes the minimizing argument. This criterion has to be extended to prevent overfitting of the data. A larger model with many parameters may fit the data used for model development very well, but it may give large prediction errors when new data are used. Several criteria have been proposed to balance model fit and model complexity. Two of them are given here to illustrate how they balance accuracy and parsimony:
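For a first-order ARX model this minimization reduces to linear least squares. A minimal sketch, using noise-free simulated data with assumed "true" parameters so the fit recovers them exactly:

```python
# Least-squares fit of a first-order ARX model
#   y(t) = -a1*y(t-1) + b1*u(t-1) + e(t)
# Minimizing (1/N) * sum eps(t, theta)^2 over theta = (a1, b1) reduces
# to a 2x2 normal-equation system. The "true" values a1=-0.7, b1=0.5
# used to generate the data are illustrative assumptions.

def fit_arx(y, u):
    # Regressor phi(t) = [-y(t-1), u(t-1)]; solve (Phi^T Phi) theta = Phi^T y
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(y)):
        p1, p2 = -y[t - 1], u[t - 1]
        s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
        r1 += p1 * y[t]; r2 += p2 * y[t]
    det = s11 * s22 - s12 * s12
    a1 = (r1 * s22 - r2 * s12) / det
    b1 = (r2 * s11 - r1 * s12) / det
    return a1, b1

# Simulate noise-free data from the assumed true model
u = [1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0]
y = [0.0]
for t in range(1, len(u)):
    y.append(0.7 * y[t - 1] + 0.5 * u[t - 1])

a1, b1 = fit_arx(y, u)
print(round(a1, 6), round(b1, 6))  # -0.7 0.5
```

With noisy data the same normal equations give the minimizing argument of the averaged squared prediction error rather than an exact recovery.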

• Akaike's Information Criterion (AIC)

\[ \mathrm{AIC} = N \log V_N(\hat{\theta}) + 2d \]

where \(V_N(\hat{\theta})\) is the minimized mean-square prediction error and d is the number of parameters estimated (dimension of \(\theta\)).

• Final Prediction Error (FPE)

\[ \mathrm{FPE} = V_N(\hat{\theta}) \, \frac{1 + d/N}{1 - d/N} \]
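A small numerical sketch of how both criteria penalize complexity, assuming the common forms AIC = N log V + 2d and FPE = V(1 + d/N)/(1 - d/N); the loss values for the two candidate models are hypothetical:

```python
import math

# Model-order selection with AIC and FPE. The exact expressions vary
# slightly across references; these are the common textbook forms,
# stated here as an assumption:
#   AIC = N*log(V) + 2*d,   FPE = V * (1 + d/N) / (1 - d/N)
# where V is the mean-square prediction error and d = dim(theta).

def aic(V: float, d: int, N: int) -> float:
    return N * math.log(V) + 2 * d

def fpe(V: float, d: int, N: int) -> float:
    return V * (1 + d / N) / (1 - d / N)

N = 100
# Hypothetical losses: the larger model fits the data slightly better ...
small = {"V": 0.050, "d": 2}
large = {"V": 0.049, "d": 8}

# ... but both criteria still prefer the parsimonious model.
print(aic(small["V"], small["d"], N) < aic(large["V"], large["d"], N))  # True
print(fpe(small["V"], small["d"], N) < fpe(large["V"], large["d"], N))  # True
```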

Model development (also called system identification) involves several critical activities, including design of experiments and collection of data, data pretreatment, model fitting, model validation, and assessment of the acceptability of the model for its intended use. A vast literature has developed over the last 50 years on various aspects of model identification [212, 354, 346, 500, 558]. A schematic diagram in Figure 4.4 [347] illustrates the links between these critical activities; the ovals represent human activities and decision-making steps, and the rectangles represent computer-based computations and decisions.

4.3.2 State-Space Models

State variables are the minimum set of variables necessary to describe the state of a system completely. In quantitative terms, given the values of the state variables \(x(t_0)\) at time \(t_0\) and the values of the inputs \(u(t)\) for \(t > t_0\), the values of the outputs \(y(t)\) can be computed for \(t > t_0\). Various types of state-space models are introduced in this section. Recall the models derived from first principles in Chapter 2. The process variables used in these models can be subdivided into measured and unmeasured variables; all process variables can be included in the set of state variables, while the measured variables can form the set of output variables. This way, the model can be used to compute all process variables based on measured values of the output variables and the state-space model.

Classical state-space models are discussed first. They provide a versatile modeling framework that can be linear or nonlinear, continuous or discrete time, to describe a wide variety of processes. State variables can be defined based on physical variables, mathematical solution convenience, or ordered importance in describing the process. Subspace models are discussed in the second part of this section. They order state variables according to the magnitude of their contributions in explaining the variation in data. State-space models also provide the structure for developing state estimators, where one can estimate corrected values of state variables given process input and output variables and estimated values of process outputs. State estimators are discussed in the last part of this section.

State-space models relate the variation in state variables over time to their values in the immediate past and to inputs with differential or difference equations. Algebraic equations are then used to relate output variables to state variables and inputs at the same time instant. Consider a system of first-order differential equations (Eq. 4.44) describing the change in state variables and a system of output equations (Eq. 4.45) relating the outputs to state variables:

\[ \frac{dx(t)}{dt} = \dot{x}(t) = f(x(t), u(t)) \tag{4.44} \]

\[ y(t) = h(x(t), u(t)) \tag{4.45} \]

For a specific time \(t_0\), \(\dot{x}(t_0)\) can be computed using Eq. (4.44) if \(x(t)\) and \(u(t)\) are known at time \(t_0\). For an infinitesimally small interval \(\delta t\), one can compute \(x(t_0 + \delta t)\) using
\[ x(t_0 + \delta t) = x(t_0) + \delta t \cdot f(x(t_0), u(t_0)) . \tag{4.46} \]

Then, the output \(y(t_0 + \delta t)\) can be computed using \(x(t_0 + \delta t)\) and Eq. (4.45). Equation (4.46) is Euler's method for the solution of Eq. (4.44) if \(\delta t\) is a small number. This computation sequence can be repeated to compute values of \(x(t)\) and \(y(t)\) for \(t > t_0\) if the corresponding values of \(u(t)\) are given for future times \(t_0 + 2\delta t, \cdots, t_0 + k\delta t\). The model composed of Eqs. (4.44)-(4.45) is called the state-space model, the vector \(x(t)\) the state vector, and its components \(x_i(t)\) the state variables. The dimension n of \(x(t)\) (Eq. (4.27)) is the model order.
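The repeated Euler update of Eq. (4.46) can be sketched as follows; the dynamics f(x, u) = -x + u, the step size, and the input sequence are illustrative assumptions:

```python
# Euler's method (Eq. 4.46) applied step by step to a scalar state
# equation xdot = f(x, u). The dynamics below are an assumed example:
#   f(x, u) = -x + u   (first-order linear system)

def f(x: float, u: float) -> float:
    return -x + u  # assumed dynamics, not from the text

def euler_simulate(x0, u_seq, dt):
    """Repeat x <- x + dt * f(x, u) for each input value."""
    x = x0
    traj = [x0]
    for u in u_seq:
        x = x + dt * f(x, u)
        traj.append(x)
    return traj

# Step input u = 1 from rest; x approaches the steady state x = 1
traj = euler_simulate(x0=0.0, u_seq=[1.0] * 50, dt=0.1)
print(round(traj[-1], 4))  # 0.9948, i.e. 1 - 0.9**50
```

Smaller \(\delta t\) values track the exact solution more closely at the cost of more steps.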

State-space models can also be developed for discrete time systems. Let the current time be denoted as \(t_k\) and the next time instant where input values become available as \(t_{k+1}\). The equivalents of Eqs. (4.44)-(4.45) in discrete time are
\[ x(t_{k+1}) = f(x(t_k), u(t_k)) \qquad k = 0, 1, 2, \cdots \tag{4.47} \]
\[ y(t_k) = h(x(t_k), u(t_k)) \tag{4.48} \]

For the current time \(t_0 = t_k\), the state at time \(t_{k+1} = t_0 + \delta t\) is now computed by using the difference equations (4.47)-(4.48). Usually, the time interval \(\delta t = t_{k+1} - t_k\) between two discrete times is a constant value equal to the sampling time.

The functional relations f(x, u) and h(x, u) in Eqs. (4.44)-(4.45) or Eqs. (4.47)-(4.48) were not restricted so far. They could be nonlinear. For the sake of easier mathematical solutions, if justifiable, they can be restricted to be linear. The linear continuous-time model is represented as
\[ \dot{x}(t) = A x(t) + B u(t) \]
\[ y(t) = C x(t) + D u(t) . \tag{4.49} \]

The linear discrete-time model is
\[ x(t_{k+1}) = F x(t_k) + G u(t_k) \qquad k = 0, 1, 2, \cdots \]
\[ y(t_k) = C x(t_k) + D u(t_k) . \tag{4.50} \]

The notation in Eq. (4.50) can be simplified by using \(x_k\) or x(k) to denote \(x(t_k)\):

\[ x_{k+1} = F x_k + G u_k \qquad k = 0, 1, 2, \cdots \]
\[ y_k = C x_k + D u_k . \tag{4.51} \]
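The recursion in Eq. (4.51) can be sketched with small hand-written matrix helpers; the matrix values below are illustrative, not from the text:

```python
# Simulating the linear discrete-time model (Eq. 4.51):
#   x_{k+1} = F x_k + G u_k,   y_k = C x_k + D u_k
# Matrices are plain nested lists; their values are assumed examples.

def mat_vec(M, v):
    """Matrix-vector product for list-of-lists matrices."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vec_add(a, b):
    return [x + y for x, y in zip(a, b)]

F = [[0.9, 0.1],
     [0.0, 0.8]]
G = [[0.0],
     [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]

x = [0.0, 0.0]          # initial state
ys = []
for k in range(3):
    u = [1.0]           # constant unit input
    y = vec_add(mat_vec(C, x), mat_vec(D, u))
    ys.append(y[0])
    x = vec_add(mat_vec(F, x), mat_vec(G, u))

print(ys)  # [0.0, 0.0, 0.1]
```

Note the one-sample delay from input to output here because D = 0 and the input must first pass through the state.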

Matrices A and B are related to matrices F and G as
\[ F = e^{AT}, \qquad G = \left( \int_0^T e^{A\tau} \, d\tau \right) B \]
where the sampling interval \(T = t_{k+1} - t_k\) is assumed to be equal for all values of k. The dimensions of these matrices are \(n \times n\) for A and F, \(n \times m\) for B and G, \(p \times n\) for C, and \(p \times m\) for D, where n is the number of state variables, m the number of inputs, and p the number of outputs.
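In the scalar case the integral has a closed form, which gives a minimal check of these relations; the values of a, b, and T are illustrative assumptions:

```python
import math

# Zero-order-hold discretization of a scalar system xdot = a*x + b*u.
# This is the scalar case of F = e^{AT}, G = (int_0^T e^{A tau} dtau) B:
#   F = exp(a*T),   G = b * (exp(a*T) - 1) / a      (for a != 0)

def discretize_scalar(a: float, b: float, T: float):
    F = math.exp(a * T)
    G = b * (math.exp(a * T) - 1.0) / a
    return F, G

# Assumed example: a = -1, b = 1, sampling time T = 0.1
F, G = discretize_scalar(a=-1.0, b=1.0, T=0.1)
print(round(F, 6), round(G, 6))  # 0.904837 0.095163
```

For matrix-valued A the same relations are usually evaluated numerically with a matrix exponential routine rather than in closed form.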

These models are called linear time-invariant models. Mild nonlinearities in the process can often be described better by making the matrices in model equations (4.49) or (4.50) time dependent. This is indicated by symbols such as \(A(t)\) or \(F_k\).

Disturbances are inputs to a process. Some disturbances can be measured; others arise and their presence is only recognized because of their influence on process and/or output variables. The state-space model needs to be augmented to incorporate the effects of disturbances on state variables and outputs. Following Eq. (4.28), the state-space equations can be written as
\[ \dot{x}(t) = f(x(t), u(t), w(t)) \]
\[ y(t) = h(x(t), u(t), w(t)) \]

where w(t) denotes disturbances. It is necessary to describe w(t) in order to compute how the state variables and outputs behave. If the disturbances are known and measured, their description can be appended to the model. For example, the linear state-space model can be written as
\[ \dot{x}(t) = A x(t) + B u(t) + E_1 w_1(t) \]
\[ y(t) = C x(t) + D u(t) + E_2 w_2(t) \]
where \(w_1(t)\) and \(w_2(t)\) are disturbances affecting the state variables and outputs, respectively, and \(E_1\) and \(E_2\) are the corresponding coefficient matrices. This model structure can also be used to incorporate modeling uncertainties (represented by \(w_1(t)\)) and measurement noise (represented by \(w_2(t)\)).

Another alternative is to develop a model for unknown disturbances that describes w(t) as the output of a dynamic system with a known input \(u_w(t)\) that has a simple functional form:
\[ \dot{x}_w(t) = f_w(x_w(t), u_w(t)) \]
\[ w(t) = h_w(x_w(t), u_w(t)) \]

where the subscript w indicates state variables, inputs, and functions of the disturbance(s). Typical choices for the input form are an impulse, white noise, or infrequent random step changes. Use of fixed impulse and step changes leads to deterministic models, while white noise or random impulse and step changes yield stochastic models [347]. The disturbance model is appended to the state and output model to build an augmented dynamic model with known inputs.
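As a sketch of this augmentation, a step-like disturbance can be carried as an extra state that simply holds its value; the scalar dynamics and coefficient values below are assumptions for illustration:

```python
# Augmenting a scalar discrete-time model with a step-like disturbance
# state (a held step, the deterministic version of a random walk):
#   x_{k+1} = f*x_k + g*u_k + w_k
#   w_{k+1} = w_k              (disturbance model: held step)
# The augmented state is (x, w); an unknown step entering at k = 5 is
# then propagated by the model without changing its equations.

def step_augmented(xw, u, f=0.9, g=1.0):
    x, w = xw
    return (f * x + g * u + w, w)

xw = (0.0, 0.0)
ys = []
for k in range(10):
    if k == 5:
        xw = (xw[0], 0.2)   # unknown step disturbance arrives
    ys.append(xw[0])        # output y_k = x_k
    xw = step_augmented(xw, u=0.0)

print(round(ys[-1], 6))  # 0.6878: the state drifts toward w/(1-f) = 2.0
```

A stochastic version would drive the disturbance state with white noise or random impulses instead of a fixed step.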

Sometimes a nonlinear process can be modeled by linearizing it around a known operating point. If the nonlinear terms are expanded using the linear terms of a Taylor series and the equations are written in terms of deviations of process variables (the so-called deviation variables) from the operating point, a linear model is obtained. The model can then be expressed in state-space form [438, 541].

Consider the general state-space equations (Eqs. 4.44-4.45) and assume that there is a stable stationary solution (a steady state) at \(x = x_{ss}\), \(u = u_{ss}\).

If f(x, u) has continuous partial derivatives in the neighborhood of the stationary solution \(x = x_{ss}\), \(u = u_{ss}\), then for \(\ell = 1, \cdots, n\):

\[ f_\ell(x, u) = f_\ell(x_{ss}, u_{ss}) + \frac{\partial f_\ell}{\partial x_1}(x_{ss}, u_{ss})(x_1 - x_{ss,1}) + \cdots + \frac{\partial f_\ell}{\partial u_m}(x_{ss}, u_{ss})(u_m - u_{ss,m}) + r_\ell(x - x_{ss}, u - u_{ss}) \tag{4.57} \]

where all partial derivatives are evaluated at \((x_{ss}, u_{ss})\) and \(r_\ell\) denotes the higher-order terms that yield nonlinear expressions, which are assumed to be negligible. Consider the Jacobian matrices A and B that have the partial derivatives in Eq. (4.57) as their elements:
\[ A = \left[ \frac{\partial f_i}{\partial x_j} \right]_{(x_{ss}, u_{ss})}, \qquad B = \left[ \frac{\partial f_i}{\partial u_j} \right]_{(x_{ss}, u_{ss})} . \]
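These Jacobians can also be approximated numerically; a minimal sketch for a scalar example, where the nonlinearity f(x, u) = -x^3 + u and the steady state (x_ss, u_ss) = (1, 1) are assumed for illustration:

```python
# Numerical linearization around a steady state: approximate the
# Jacobians A = df/dx and B = df/du of Eq. (4.57) by central finite
# differences. The dynamics below are an assumed scalar example with
# steady state f(1, 1) = 0.

def f(x: float, u: float) -> float:
    return -x**3 + u

def jacobians(f, x_ss, u_ss, h=1e-6):
    """Central-difference estimates of df/dx and df/du at (x_ss, u_ss)."""
    A = (f(x_ss + h, u_ss) - f(x_ss - h, u_ss)) / (2 * h)
    B = (f(x_ss, u_ss + h) - f(x_ss, u_ss - h)) / (2 * h)
    return A, B

A, B = jacobians(f, x_ss=1.0, u_ss=1.0)
print(round(A, 4), round(B, 4))  # -3.0 1.0 (analytically: -3*x_ss**2 and 1)
```

The resulting A and B define the linear deviation-variable model valid near the operating point.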
