y(t) = Σ_{i=1}^{M} θ_i p_i(t) + e(t),  t = 1, …, N

where M = Σ_{i=0}^{l} m_i with m_i = m_{i−1} · (n_y m + i − 1)/i and m_0 = 1, N is the time series data length, p_1(t) = 1, p_i(t) are monomials of degree up to l composed of various combinations of z_1(t) to z_n(t) (n = m × n_y), e(t) are the residuals, and θ_i are the unknown model parameters to be estimated.

A new methodology has been proposed for developing multivariable additive NARX (Nonlinear AutoRegressive with eXogenous inputs) models based on subspace modeling concepts [122]. The model structure is similar to that of a Generalized Additive Model (GAM) and is estimated with a nonlinear Canonical Variate Analysis (CVA) algorithm called CANALS. The system is modeled by partitioning the data into two groups of variables: the first is a collection of "future" outputs; the second is a collection of past inputs and outputs, and "future" inputs. Future outputs are then predicted in terms of past and present inputs and outputs. This approach is similar to linear subspace state-space modeling [316, 415, 613]. The appeal of linear and nonlinear subspace state-space modeling is the ability to develop models that predict the output over a future window (window length selected by the user) with a well-established procedure that minimizes trial and error and iteration. An illustrative example of such modeling is presented in [122], based on a simulated continuous chemical reactor that exhibits multiple steady states in the outputs for a fixed level of the input.
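The past/future partitioning at the heart of subspace and CVA identification can be illustrated with a minimal numpy sketch. This is not the CANALS algorithm of [122]; it is a hedged linear analogue on a hypothetical first-order SISO system: stack past inputs/outputs and future outputs into block-Hankel matrices, then compute the canonical correlations between the two groups, whose decay hints at the state dimension.

```python
import numpy as np

def block_hankel(x, lags, start, cols):
    """Stack `lags` shifted copies of series x into a (lags, cols) matrix."""
    return np.vstack([x[start + i : start + i + cols] for i in range(lags)])

rng = np.random.default_rng(0)
N, p, f = 300, 5, 5                      # series length, past/future window lengths
u = rng.standard_normal(N)               # input
y = np.zeros(N)                          # simple first-order system, for illustration only
for t in range(1, N):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.05 * rng.standard_normal()

cols = N - p - f
past = np.vstack([block_hankel(u, p, 0, cols),    # past inputs
                  block_hankel(y, p, 0, cols)])   # past outputs
future = block_hankel(y, f, p, cols)              # future outputs

def whiten(M):
    """Orthonormalize the rows of M (center, then scale by singular values)."""
    M = M - M.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(1.0 / s) @ U.T @ M

# Singular values of the whitened cross-product = canonical correlations
cc = np.linalg.svd(whiten(past) @ whiten(future).T, compute_uv=False)
print(np.round(cc[:3], 3))
```

A sharp drop after the first few canonical correlations suggests a low-dimensional state suffices to link past data to future outputs, which is the criterion subspace methods exploit.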

Models with a small number of monomials are usually adequate to describe the dynamic behavior of most real processes. Methods have been developed for the combined structure selection and parameter estimation problem based on Gram-Schmidt orthogonalization [94]. The selection of monomials is carried out by balancing the reduction in residuals against the increase in model complexity. Criteria such as the Akaike Information Criterion (AIC) are used to guide the termination of the modeling effort. A variant of AIC is given in Eq. 4.42

AIC(k) = N ln(E^T E / N) + 2k   (4.42)

where E = (e(1) ⋯ e(N))^T and k is the number of parameters θ in the model. AIC and similar criteria balance minimization of the prediction error (residuals) against model complexity (parsimony). The addition of new monomials to the model is terminated when AIC is minimized.
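The combined structure selection and parameter estimation idea can be sketched as a greedy forward selection: each candidate monomial is orthogonalized against the already-selected terms (Gram-Schmidt), the term giving the largest residual reduction is added, and selection stops when AIC no longer decreases. This is a simplified illustration on synthetic data, not the specific algorithm of [94]; the candidate set and signal are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
z = rng.standard_normal((N, 3))
# Hypothetical "true" process: constant + linear term + one interaction
y = 1.0 + 2.0 * z[:, 0] - 1.5 * z[:, 1] * z[:, 2] + 0.1 * rng.standard_normal(N)

# Candidate monomials up to degree 2: constant, linear terms, pairwise products
names, cands = ["1"], [np.ones(N)]
for i in range(3):
    names.append(f"z{i}"); cands.append(z[:, i])
for i in range(3):
    for j in range(i, 3):
        names.append(f"z{i}*z{j}"); cands.append(z[:, i] * z[:, j])
P = np.column_stack(cands)

def aic(res, k):
    return N * np.log(res @ res / N) + 2 * k

selected, Q = [], np.empty((N, 0))
res, best_aic = y.copy(), np.inf
while True:
    best = None
    for j in range(P.shape[1]):
        if j in selected:
            continue
        # Gram-Schmidt: strip out components along already-selected terms
        w = P[:, j] - Q @ (Q.T @ P[:, j])
        nw = np.linalg.norm(w)
        if nw < 1e-10:
            continue
        w = w / nw
        err_red = (w @ res) ** 2          # residual reduction if this term is added
        if best is None or err_red > best[0]:
            best = (err_red, j, w)
    if best is None:
        break
    _, j, w = best
    new_res = res - w * (w @ res)
    new_aic = aic(new_res, len(selected) + 1)
    if new_aic >= best_aic:               # AIC minimized: terminate selection
        break
    selected.append(j); Q = np.column_stack([Q, w])
    res, best_aic = new_res, new_aic

print([names[j] for j in selected])
```

The three terms actually present in the synthetic signal dominate the residual reduction and are picked up first; the 2k penalty in AIC then discourages spurious additions.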

A common problem with polynomial models is the explosion of the predicted variable magnitude. This may be caused by assigning large values to some predictors that are raised to high powers, or by the existence of unstable equilibrium points. This type of behavior necessitates censoring of the predicted variable value. One censoring method is based on embedding the whole prediction term as the argument of a sigmoid function, as is done in ANNs. Consequently, the predicted value approaches a lower or upper limit as the magnitude of the argument increases. Other censoring methods rely on fixed upper and lower limits, or on upper and lower limits that are linear functions of the value predicted by the uncensored polynomial model.
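The two simplest censoring schemes mentioned above can be sketched as follows; the limits (−10, 10) and the tanh-based sigmoid scaling are illustrative choices, not values from the text.

```python
import numpy as np

def censor_sigmoid(raw, lo=-10.0, hi=10.0):
    """Embed the raw polynomial prediction in a sigmoid: saturates at (lo, hi)."""
    return lo + (hi - lo) * 0.5 * (1.0 + np.tanh(raw / (hi - lo)))

def censor_clip(raw, lo=-10.0, hi=10.0):
    """Hard censoring with fixed upper and lower limits."""
    return np.clip(raw, lo, hi)

raw = np.array([-1e6, -1.0, 0.0, 1.0, 1e6])   # exploding polynomial outputs
print(censor_sigmoid(raw))                     # saturates smoothly near -10 and 10
print(censor_clip(raw))                        # hits exactly -10 and 10 at the extremes
```

The sigmoid variant keeps the censored prediction differentiable, which matters if the censored model is itself embedded in a gradient-based estimation loop; hard clipping is simpler but introduces kinks.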

Threshold Models provide a nonlinear description by using different submodels in different ranges of a variable. A piecewise linear model with AR and MA terms takes the form

y(t) = a_0^(j) + Σ_{i=1}^{n} a_i^(j) y(t−i) + Σ_{i=0}^{m−1} b_i^(j) e(t−i)   (4.177)

where the appropriate parameter set (a^(j), b^(j)) is selected based on y(t − d) ∈ R_j, j = 1, …, l. Here R_j = (r_{j−1}, r_j] with the linearly ordered real numbers r_0 < r_1 < ⋯ < r_l called the threshold parameters, and d is the delay parameter [24]. The identification of threshold models involves estimation of the model parameters and selection of d and r_j. The threshold model (Eq. 4.177) can be reduced to an AR structure by setting b_0^(j) = 1 and b_i^(j) = 0, i = 1, …, m − 1. External input variables can also be incorporated, and the condition for selection of parameter sets may be based on the input variables. The submodels may also be nonlinear functions such as NARX and NARMAX models.
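A minimal simulation of the regime-switching idea, assuming a self-exciting threshold AR model of order 1 with two regimes; the threshold at 0, the delay d = 1, and the coefficient values are invented for illustration.

```python
import numpy as np

thresholds = [0.0]            # r_1; regimes R_1 = (-inf, 0], R_2 = (0, inf)
d = 1                         # delay parameter
coeffs = {0: (0.2, 0.9),      # regime j -> (a_0^(j), a_1^(j))
          1: (-0.2, 0.5)}

rng = np.random.default_rng(2)
y = np.zeros(300)
for t in range(1, 300):
    j = int(y[t - d] > thresholds[0])    # select parameter set from y(t - d)
    a0, a1 = coeffs[j]
    y[t] = a0 + a1 * y[t - 1] + 0.1 * rng.standard_normal()

print(y[:5].round(3))
```

Although each regime is linear, the switching produces asymmetric dynamics (slow decay below the threshold, fast decay above it) that no single AR model can reproduce.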

Models Based on Spline Functions. Spline functions provide a nonparametric nonlinear regression method based on piecewise polynomial fitting. A spline function is a piecewise polynomial in which polynomials of degree q join at the knots K_i, i = 1, …, k, and satisfy continuity conditions for the function itself and for its q − 1 derivatives [658]. Often continuity of the first and second derivatives is enough; hence cubic splines (q = 3) have been popular. One-sided and two-sided power univariate basis functions for representing qth-order splines are

b_q(y − K_i) = (y − K_i)_+^q  and  b_q^±(y − K_i) = [±(y − K_i)]_+^q   (4.178)

where the subscript + indicates that the term is evaluated for positive values of the argument; the basis function has a value of zero for negative values of the argument.
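The one-sided truncated power basis can be assembled and fitted by ordinary least squares. The target function, knot locations, and grid below are illustrative assumptions; the basis construction itself follows the (y − K_i)_+^q definition above.

```python
import numpy as np

def truncated_power_basis(y, knots, q=3):
    """Columns: 1, y, ..., y^q, then the truncated powers (y - K_i)_+^q."""
    cols = [y ** p for p in range(q + 1)]
    cols += [np.where(y > K, (y - K) ** q, 0.0) for K in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x)                     # hypothetical smooth target
B = truncated_power_basis(x, knots=[0.25, 0.5, 0.75], q=3)
theta, *_ = np.linalg.lstsq(B, target, rcond=None)
fit = B @ theta
print(float(np.max(np.abs(fit - target))))         # worst-case cubic spline error
```

Because each truncated power term and its first q − 1 derivatives vanish at the knot, the fitted curve is automatically C^{q−1} continuous; with q = 3 this gives the familiar cubic spline smoothness. (In practice the truncated power basis can be ill-conditioned for many knots; B-spline bases are preferred numerically.)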

The multivariate adaptive regression splines (MARS) method [169] is an extension of the recursive partitioning method. Friedman [169] describes the evolution of the method and presents algorithms for building MARS models. An introductory-level discussion with applications in chemometrics is presented by Sekulic and Kowalski [539]. Spline fitting is generalized to higher dimensions and multivariable systems by generating basis functions that are products of univariate spline functions

B_m(x) = Π_{r=1}^{R_m} [s_{r,m} (x_{v(r,m)} − K_{r,m})]_+^q

where R_m is the maximum number of allowable variable interactions, x_{v(r,m)} denote the predictor variables, s_{r,m} = ±1, and K_{r,m} are the knot locations. The final model is of the form

f(x) = θ_0 + Σ_{m=1}^{M} θ_m B_m(x)
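The product-of-hinges construction can be sketched directly. This is not Friedman's adaptive forward/backward algorithm; the basis terms, knots, and synthetic data below are fixed by hand purely to show how a two-variable interaction (R_m = 2) is represented with q = 1 truncated linear functions.

```python
import numpy as np

def hinge(x, knot, sign):
    """Two-sided truncated linear basis [s(x - K)]_+ (the q = 1 case)."""
    return np.maximum(sign * (x - knot), 0.0)

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(400, 2))
# Hypothetical target: a pure interaction of two hinge functions plus noise
y = np.maximum(X[:, 0] - 0.2, 0) * np.maximum(-(X[:, 1] + 0.1), 0) \
    + 0.01 * rng.standard_normal(400)

# Basis: constant, two univariate hinges, and their product (an interaction term)
B = np.column_stack([
    np.ones(len(X)),
    hinge(X[:, 0], 0.2, +1),
    hinge(X[:, 1], -0.1, -1),
    hinge(X[:, 0], 0.2, +1) * hinge(X[:, 1], -0.1, -1),   # R_m = 2 product term
])
theta, *_ = np.linalg.lstsq(B, y, rcond=None)
print(theta.round(2))
```

Least squares recovers a coefficient near 1 on the product term and near 0 elsewhere, showing how the interaction basis captures structure that no additive combination of the univariate hinges can represent.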
