A classical problem in optimal control theory, the linear-quadratic problem, is instrumental in identifying optimal feedback control strategies for both linear and nonlinear systems. Since bioprocesses are, without exception, nonlinear systems, we consider the nonlinear optimal control problem described by Eqs. 7.1 and 7.3-7.9. Let u*(t) and x*(t) be open-loop trajectories of u(t) and x(t), respectively, for a particular initial condition, x(0) = x_0, at which the necessary conditions for open-loop optimality, Eqs. 7.11, 7.13, 7.14 and 7.20, are satisfied, with the Hamiltonian H defined in Eq. 7.10. It is assumed here that x(0) is specified and t_f and x(t_f) are unspecified. After a second-order expansion of the objective function J in Eq. 7.3 around the optimal open-loop trajectories of the state variables and the manipulated inputs, with the constraints in Eq. 7.1 adjoined and the necessary conditions for optimality listed above employed, the variation δJ can be expressed as
\[
\delta J = \frac{1}{2}\int_0^{t_f}\left[(\delta\mathbf{u})^T\mathbf{R}\,\delta\mathbf{u} + 2(\delta\mathbf{x})^T\mathbf{Q}\,\delta\mathbf{u} + (\delta\mathbf{x})^T\mathbf{P}\,\delta\mathbf{x}\right]dt \tag{7.130}
\]
with
\[
\delta\mathbf{x}(t) = \mathbf{x}(t) - \mathbf{x}^*(t), \qquad \delta\mathbf{u}(t) = \mathbf{u}(t) - \mathbf{u}^*(t), \qquad \delta J = J - J^*. \tag{7.131}
\]
Notice that the definitions of P, Q and R are the same as those in Eqs. 7.79. The matrices P(t), Q(t), R(t) and S_f are evaluated at the optimal trajectories of x and u, viz., x(t) = x*(t) and u(t) = u*(t). The vector of state variables x considered here includes the n process variables which influence the process kinetics and up to (a + b + 1) additional state variables, the time-variance of which is described by Eqs. 7.23, 7.24 and 7.26. P, R and S_f are symmetric matrices. One can then work with the following perturbation equations, obtained from Eq. 7.1 via linearization around the open-loop optimal policy [u(t) = u*(t), x(t) = x*(t)] for the fixed initial condition stated in Eq. 7.1, viz., x(0) = x_0.
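The perturbation equations announced here do not appear in this excerpt. A sketch consistent with the surrounding definitions (the Jacobians A and B being those of Eqs. 7.79 and 7.104, evaluated along the open-loop optimal trajectories) is

\[
\frac{d(\delta\mathbf{x})}{dt} = \mathbf{A}(t)\,\delta\mathbf{x} + \mathbf{B}(t)\,\delta\mathbf{u}, \qquad \delta\mathbf{x}(0) = \mathbf{x}(0) - \mathbf{x}_0,
\]

where the exact form and numbering in the original text may differ slightly.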
The equation above represents the process behavior for initial conditions in a close neighborhood of x_0. The definitions of the system matrices A and B are the same as those in Eqs. 7.79 and 7.104. The variation in the objective function in Eq. 7.130 can be arranged in the following quadratic form
The objective of optimal feedback control is then to minimize the degradation in process performance (δJ < 0) due to perturbations in x and u. Maximization of δJ then requires solution of Eq. 7.133 and the associated adjoint variable equations. The boundary conditions for δx(t) are provided at t = 0, while those for the adjoint variables λ(t) are known at t = t_f. The solution to the resulting two-point boundary value problem can be conveniently expressed using the Riccati transformation, wherein the adjoint variables and the corresponding state variables are related as [498, 560]
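The transformation referred to here is not reproduced in this excerpt; in its standard sweep form it is the linear relation

\[
\boldsymbol{\lambda}(t) = \mathbf{S}(t)\,\delta\mathbf{x}(t),
\]

where S(t) is the n × n symmetric matrix introduced next. Differentiating this relation and substituting the linearized state and adjoint equations is what yields the matrix Riccati equation, Eq. 7.137.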
For the objective functional in Eq. 7.135, the variation of the n × n matrix S(t) with t is described by the following Riccati equation
\[
\frac{d\mathbf{S}}{dt} = -\mathbf{S}\mathbf{A} - \mathbf{A}^T\mathbf{S} + (\mathbf{S}\mathbf{B}+\mathbf{Q})\mathbf{R}^{-1}(\mathbf{Q}^T+\mathbf{B}^T\mathbf{S}) - \mathbf{P}, \qquad \mathbf{S}(t_f) = \mathbf{S}_f. \tag{7.137}
\]
The solution to Eq. 7.137 is then employed to relate the manipulated inputs to the state variables via the following perturbation feedback control law [84, 498]
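As a concrete illustration, the backward sweep of Eq. 7.137 and the resulting time-varying gain can be sketched numerically. The system matrices and weights below are hypothetical (not from the text), and since Eqs. 7.138-7.139 are not reproduced in this excerpt, the gain expression K(t) = R^{-1}(Q^T + B^T S) with feedback law δu = -K δx is assumed as the standard form associated with a Riccati equation of this structure.

```python
import numpy as np

# Hypothetical time-invariant linearization (A, B) and weights, in the
# book's notation: P = state weight, Q = cross weight, R = input weight.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
P = np.eye(2)                 # state weighting matrix
Q = np.zeros((2, 1))          # state-input cross-weighting matrix
R = np.array([[1.0]])         # input weighting matrix
S_f = np.eye(2)               # terminal condition S(t_f) = S_f
t_f, n_steps = 5.0, 5000
dt = t_f / n_steps

def riccati_rhs(S):
    # dS/dt = -S A - A^T S + (S B + Q) R^{-1} (Q^T + B^T S) - P   (Eq. 7.137)
    M = S @ B + Q
    return -S @ A - A.T @ S + M @ np.linalg.solve(R, M.T) - P

# Sweep backward from t_f to 0 with a small explicit Euler step.
S = S_f.copy()
K_traj = [np.linalg.solve(R, Q.T + B.T @ S)]   # K(t) = R^{-1}(Q^T + B^T S)
for _ in range(n_steps):
    S = S - dt * riccati_rhs(S)
    K_traj.append(np.linalg.solve(R, Q.T + B.T @ S))
K_traj.reverse()              # K_traj[0] is now the gain at t = 0

# Perturbation feedback law: delta_u = -K(t) delta_x
delta_x = np.array([0.1, -0.05])
delta_u = -K_traj[0] @ delta_x
```

Note that the right-hand side of Eq. 7.137 preserves symmetry of S exactly, so the sweep can be checked by verifying that S remains symmetric.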
For implementation of the feedback control policy outlined in Eqs. 7.137-7.139, knowledge of the optimal open-loop control policies is required. If the initial condition x(0) is altered, the entire nonlinear open-loop optimal control policy must be recalculated, since nonlinear optimal control problems, such as the ones encountered with bioprocesses, depend nonlinearly on the initial process conditions. A set of optimal open-loop control policies over a range of nominal initial conditions x_0 must therefore be calculated and stored prior to implementation of optimal feedback control. The corresponding trajectories of controller gains, K(t), based on the solution of the Riccati equation, Eq. 7.137, should likewise be calculated and stored. On-line feedback control can then be implemented by identifying the closest initial condition (among the stored values) to the actual initial condition and using the corresponding trajectory of the proportional controller gain matrix, K(t), for feedback control. The procedure described here is useful for designing optimal proportional controllers with time-varying gains. Besides the proportional action, the other two controller actions, viz., the derivative and integral actions, can be built in with certain modifications of the problem considered earlier [135, 498]. For example, integral action can be added by inclusion of the time derivative of u in the objective function J, or by augmenting the state variables by p auxiliary state variables z(t) with
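The nearest-initial-condition lookup described above can be sketched as follows; the stored nominal initial conditions and gain trajectories are placeholders for values that would be precomputed off-line from the open-loop optimal control problem and Eq. 7.137.

```python
import numpy as np

# Hypothetical precomputed data: nominal initial conditions x0 and labels
# standing in for the corresponding off-line gain trajectories K(t).
stored_x0 = np.array([[1.0, 0.0],
                      [0.5, 0.5],
                      [0.0, 1.0]])
stored_K_traj = ["K_traj_0", "K_traj_1", "K_traj_2"]  # placeholder labels

def select_gain_trajectory(x0_actual):
    """Pick the gain trajectory whose nominal x0 is closest in
    Euclidean distance to the actual initial condition."""
    dist = np.linalg.norm(stored_x0 - x0_actual, axis=1)
    return stored_K_traj[int(np.argmin(dist))]

chosen = select_gain_trajectory(np.array([0.9, 0.1]))
```

In practice the distance metric could be weighted to reflect the relative sensitivity of the process to each state variable; plain Euclidean distance is used here only for illustration.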
In Eq. 7.140, M is an appropriate weight matrix and the p auxiliary variables correspond to those state variables for which integral action is desired. The state variable vector x(t) then would comprise the n process variables which influence the process kinetics, up to (a + b + 1) auxiliary state variables which satisfy Eqs. 7.23, 7.24 and 7.26, and p auxiliary variables which satisfy Eq. 7.140. Derivative control action can similarly be incorporated through a different transformation.
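Eq. 7.140 itself is not reproduced in this excerpt. A common augmentation consistent with the description, in which the p auxiliary variables integrate the deviations of the selected state variables, is

\[
\frac{d\mathbf{z}}{dt} = \mathbf{C}\,\delta\mathbf{x}, \qquad \mathbf{z}(0) = \mathbf{0},
\]

where C is a p × n selection matrix picking out the state variables for which integral action is desired, with a term of the form ½ z^T M z added to the objective function; the exact form in the original text may differ.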