Dynamic Optimization of Batch Process Operations

The discussion of optimal operation of fermentation processes in Chapter 7 focused on the search for open-loop optimal trajectories (Section 7.2) and on regulation of process operation to track reference trajectories while rejecting disturbances by using optimal feedback control (Section 7.5) and model predictive control (MPC) (Section 7.6). These techniques rely on the availability of reliable dynamic models for the process and disturbances. Industrial practice involves following recipes developed in the laboratory that are modified to accommodate changes in equipment and scale. An 'educated trials' approach based on experience and heuristics is often used for recipe adjustment. The recipes and reference profiles are often non-optimal, since the search is usually limited to the vicinity of a known 'acceptable' recipe and the reference profiles are somewhat conservative to assure feasible process operation in spite of process disturbances.

From an industrial perspective, there is a need to improve the performance of batch processes in spite of incomplete and/or inaccurate process models, few online measurements and estimates of process variables, large uncertainties (model inaccuracies, process disturbances, variations in raw material properties), and important operational and safety constraints. In a series of papers, Bonvin and co-workers assessed the industrial perspective on batch process operation and the batch process optimization problem [71, 73, 348], and proposed an 'Invariant-Based Optimization' approach that does not require an accurate process model [73].

Industrial Perspective and Practice The operational objectives of batch processes are high productivity, reproducible product quality, process and product safety, and short time to market. These objectives could be posed as an optimization problem, but the implementation of optimization through mathematical modeling and optimization techniques is not widespread. The popular approach is to develop recipes in the laboratory for safe implementation in a production environment, and then to empower plant personnel to adjust the process based on heuristics and experience for incremental improvements from batch to batch [622]. Various organizational and technical reasons are cited for this practice [73].

The organizational reasons hindering the adoption of a rigorous dynamic optimization approach include process registration and validation, low levels of interaction between the design of individual steps in multi-step processes, and the separation between design and control tasks [73]. Process registration and validation with regulatory agencies such as the U.S. Food and Drug Administration is mandatory in the production of active compounds in pharmaceutical and food processing. Because this is a time-consuming and costly task, it is performed simultaneously with the research and development work on a new process. Consequently, the main operational parameters are fixed within conservative limits at an early stage of process development. Changes in process operation may require revalidation and registration, a costly venture. The second reason is related to the use of different design teams for different steps of the process. While each step is optimized by introducing the appropriate conservatism to account for uncertainty, the process as a whole may become too conservative. The third reason stems from the practice of treating design and control as separate tasks, a legacy from the times when automatic process control was considered mostly as the installation of control hardware and the tuning of individual controllers. This prevents the use of systems science, control theory, and optimization tools to develop a better design that is easier to optimize and control.

A number of technical reasons have influenced the administrative decisions that favor this conservative optimization practice. These include the lack of reliable first-principles models; the absence of on-line quality measurements; uncertainty due to variations in feedstock properties, assumptions made during process scale-up, and modeling errors; and constraints caused by equipment limitations and by operational limits on variables and end-points [73]. These reasons also hint that improvements in model development and measurements, coupled with powerful optimization techniques, may generate significant improvements in batch process operation, productivity, product quality, and safety.

Dynamic Optimization Problem Batch process optimization is a dynamic optimization problem that involves dynamic and static constraints. Various types of optimization problems are formulated depending on the assumptions made about uncertainty and the availability of measurement information, and on the method used for updating the optimal values of the inputs [73]. If it is assumed that there is no uncertainty, the problem reduces to nominal optimization and the computational load is lighter, but the computed solution may not be feasible when implemented in a real application, which invariably has some uncertainty. Uncertainty necessitates the adoption of conservative operation (control) strategies. Uncertainty is taken into account in robust optimization by considering the range of possible values for the uncertain parameters; the optimization is then performed either over worst-case scenarios (selecting the best solution for the worst conditions assures that the solution remains feasible under better scenarios) or by using expected values. The availability of process measurements reduces uncertainty, and consequently less conservative process operation strategies can be adopted. The various types of dynamic optimization problems and their major disadvantages are classified in [73] (Figure 9.2).

[Figure 9.2 near here: a classification tree of dynamic optimization scenarios]

Problem: Dynamic Optimization
Uncertainty: Nominal Optimization (discards uncertainty) or Optimization under Uncertainty
Information: No Measurements, leading to Robust Optimization (conservative), or Measurements, leading to Measurement-based Optimization
Input Calculation: Model-based Repeated Optimization or Model-free Implicit Optimization
Methodology: Fixed Model (accuracy of model), Refined Model (persistency of excitation), Evolution/Interpolation (curse of dimensionality), Reference Tracking (what to track for optimality)

Figure 9.2. Dynamic optimization scenarios with, in parentheses, the corresponding major disadvantage [73].
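To make the nominal versus robust (worst-case) distinction in Figure 9.2 concrete, the following Python sketch optimizes a single scalar decision variable for a hypothetical batch performance index with one uncertain parameter. The index, parameter range, and bounds are illustrative assumptions, not taken from [73]; the point is only that the worst-case solution is more conservative than the nominal one.

```python
# A sketch, under assumed numbers, of nominal vs. worst-case (robust) optimization
# of a single decision variable u (e.g., a constant feed level) when one model
# parameter theta is uncertain. The performance index and parameter range are
# hypothetical and only illustrate the conservatism of the worst-case solution.
import numpy as np
from scipy.optimize import minimize_scalar

def cost(u, theta):
    # Negative batch performance index: benefit theta*u, quadratic penalty on u.
    return -(theta * u - 0.5 * u**2)

theta_nom = 1.0                        # nominal parameter value (assumed)
theta_set = np.linspace(0.7, 1.3, 31)  # assumed uncertainty range for theta

# Nominal optimization: uncertainty is discarded, optimize for theta_nom only.
nominal = minimize_scalar(lambda u: cost(u, theta_nom),
                          bounds=(0.0, 2.0), method="bounded")

# Robust optimization: minimize the worst (largest) cost over the uncertainty set.
robust = minimize_scalar(lambda u: max(cost(u, th) for th in theta_set),
                         bounds=(0.0, 2.0), method="bounded")

print(f"nominal u* = {nominal.x:.3f}, worst-case robust u* = {robust.x:.3f}")
```

For this illustrative index the robust input settles below the nominal one, reflecting the conservatism attributed to robust optimization in Figure 9.2.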

If quality measurements at the end of a batch run are available, they could be used to determine the optimal operating policy of the next batch. Consider the $k$th run of a batch process, where process measurements from the previous $k-1$ batches and measurements up to the current time $t_i$ of the $k$th batch are available. The optimal input policy for the remaining time interval $[t_i, t_f]$ of the $k$th batch can be determined by solving the optimization problem:

$$
\begin{aligned}
\min_{\mathbf{u}^k(t),\; t \in [t_i, t_f]} \quad & J \\
\text{such that} \quad & \dot{\mathbf{x}}^k = \mathbf{F}(\mathbf{x}^k, \boldsymbol{\theta}, \mathbf{u}^k) + \mathbf{d}^k(t), \qquad \mathbf{x}^k(0) = \mathbf{x}_0 \\
& \mathbf{y}^k = \mathbf{H}(\mathbf{x}^k, \boldsymbol{\theta}) + \mathbf{v}^k(t) \\
& \mathbf{S}(\mathbf{x}^k, \boldsymbol{\theta}, \mathbf{u}^k) \le 0, \qquad \mathbf{T}(\mathbf{x}^k(t_f), \boldsymbol{\theta}) \le 0 \\
& \text{given } \mathbf{y}^j(i), \quad i = 1, \ldots, N \text{ for } j = 1, \ldots, k-1, \quad \text{and} \quad i = 1, \ldots, l \text{ for } j = k
\end{aligned}
$$

where $J$ is the performance index of the batch, the superscript $k$ denotes the $k$th batch run, and $\mathbf{x}^k(t)$, $\mathbf{u}^k(t)$, $\mathbf{y}^k(t)$, $\mathbf{d}^k(t)$, and $\mathbf{v}^k(t)$ denote the state, input, output, disturbance, and measurement noise vectors, respectively. $\mathbf{S}(\cdot)$ is a vector of path constraints, and $\mathbf{T}(\cdot)$ is a vector of terminal constraints. $\mathbf{y}^j(i)$ denotes the $i$th measurement vector collected during the $j$th batch run, $N$ is the total number of measurements during a run, and $l$ is the number of measurements collected up to time $t_i$ of the current run. The optimization utilizes information from the previous $k-1$ batch runs and measurements up to time $t_i$ of the current batch to reduce the uncertainty in the parameter vector $\boldsymbol{\theta}$ and to determine the optimal input policy for the remainder of the current batch run $k$.
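A minimal numerical sketch of this remaining-batch problem is given below, under strong simplifying assumptions: a hypothetical one-state model stands in for F, the cost J is taken as the negative terminal amount x(t_f), the input on [t_i, t_f] is parameterized as piecewise constant, and a total-feed limit plays the role of a simple constraint. None of the specific values are from the text; they only illustrate how such a problem can be posed and solved numerically.

```python
# A minimal numerical sketch of the remaining-batch problem above, with hypothetical
# stand-ins: a one-state model x' = theta*u - 0.1*x plays the role of F, the cost J
# is the negative terminal amount x(t_f), the input on [t_i, t_f] is piecewise
# constant, and a total-feed limit acts as a simple constraint.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

t_i, t_f = 2.0, 10.0   # current time in the batch and final batch time (assumed)
n_seg = 8              # number of piecewise-constant input segments
theta = 0.8            # current estimate of the uncertain parameter (assumed)
x_ti = 1.5             # current state estimate at time t_i (assumed)
edges = np.linspace(t_i, t_f, n_seg + 1)

def terminal_state(u_segments):
    """Integrate the hypothetical model from t_i to t_f and return x(t_f)."""
    def rhs(t, x):
        idx = int(np.clip(np.searchsorted(edges, t, side="right") - 1, 0, n_seg - 1))
        return [theta * u_segments[idx] - 0.1 * x[0]]
    sol = solve_ivp(rhs, (t_i, t_f), [x_ti], rtol=1e-8)
    return sol.y[0, -1]

res = minimize(lambda u: -terminal_state(u),             # minimize J = -x(t_f)
               x0=np.full(n_seg, 0.5),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * n_seg,               # input bounds
               constraints=[{"type": "ineq",              # total feed <= 4.0 (assumed)
                             "fun": lambda u: 4.0 - np.sum(u) * (t_f - t_i) / n_seg}])
print("optimal remaining input profile:", np.round(res.x, 3))
```

In a run-to-run setting, the state estimate x_ti and the parameter estimate theta would be refreshed from the measurements of the previous batches and of the current batch up to t_i before this problem is solved.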

The optimization approaches that rely on process measurements to update the inputs can be divided into two main groups: model-based techniques and model-free techniques [73]. Model-based techniques use the mathematical model of the batch process to predict the evolution of the run, compute the cost sensitivity with respect to input variations, and update the inputs. Measurement information is used to improve the estimates of the state variables and parameters. The estimation and optimization tasks are repeated over time (as frequently as at each sampling time), yielding a significant computational burden. In this repeated optimization approach, the model can be fixed or refined during the batch run and its optimization. If the model is fixed, a higher level of model accuracy is necessary. If the model parameters are known accurately and the uncertainty is caused by disturbances, a fixed model can yield satisfactory results. If model refinement, such as estimation of model parameters, is carried out during the run, the initial model need not be highly accurate. The tradeoff is a heavier computational burden and the addition of persistent excitation to the input signals in order to generate data rich in dynamic information for more reliable model identification. Unfortunately, the requirement for sufficient excitation in the inputs may conflict with their optimal values.
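The parameter-refinement step of this repeated optimization can be sketched as follows: a least-squares re-estimation of a single hypothetical model parameter from the measurements collected so far in the run, after which the remaining-batch optimization (as in the earlier sketch) would be re-solved with the updated estimate. The model, applied inputs, and noise level are illustrative assumptions.

```python
# A sketch of the model-refinement step: re-estimate a single hypothetical parameter
# theta by least squares from the measurements collected so far in the run; the
# remaining-batch optimization would then be re-solved with theta_hat.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
dt = 0.5
u_applied = np.full(12, 0.8)   # inputs applied so far in the current batch (assumed)
theta_true = 0.9               # "plant" value, unknown to the optimizer

# Simulated plant measurements: x[k+1] = x[k] + dt*(theta*u[k] - 0.1*x[k]) + noise.
x = [1.0]
for u in u_applied:
    x.append(x[-1] + dt * (theta_true * u - 0.1 * x[-1]))
y_meas = np.array(x[1:]) + 0.02 * rng.standard_normal(len(u_applied))

def residuals(theta, y, u_seq, x_init=1.0):
    """Prediction errors of the simple discrete model for a candidate theta."""
    x_pred, out = x_init, []
    for u in u_seq:
        x_pred = x_pred + dt * (theta[0] * u - 0.1 * x_pred)
        out.append(x_pred)
    return np.asarray(out) - y

theta_hat = least_squares(residuals, x0=[0.5], args=(y_meas, u_applied)).x[0]
print(f"refined estimate theta_hat = {theta_hat:.3f} (plant value {theta_true})")
```

Note that the inputs applied here were not chosen to excite the process; in practice the identifiability of theta depends on how informative the applied input profile is, which is exactly the conflict with input optimality mentioned above.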

Model-free optimization relies on measurements from batch runs to update the inputs toward optimality without using a model or an explicit numerical optimization procedure. These implicit optimization schemes use either the deviation from a reference trajectory or measurement information to update the inputs. The reference-based techniques update the inputs by using feedback controllers to track the reference trajectories. Reference (optimal) trajectories are usually computed using a nominal model (see Section 7.2). Uncertainty in the model may cause significant deviation of the actual (unknown) optimal trajectories from the nominal ones computed with the model. Data-based techniques compute the inputs directly from measurement information from past and current batch runs. A reliable historical database is needed to implement this approach.
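As an illustration of the reference-tracking variant, the sketch below tracks a precomputed nominal trajectory with a simple PI feedback law, so the inputs are updated from measurements without a model or an explicit optimizer. The plant, reference trajectory, and tuning values are hypothetical.

```python
# A sketch of model-free reference tracking: a precomputed nominal trajectory is
# tracked with a simple PI feedback law, so the inputs are updated from measurements
# without a model or an explicit optimizer. Plant, reference, and gains are assumed.
import numpy as np

dt, n_steps = 0.1, 100
t = np.arange(n_steps) * dt
x_ref = 1.0 - np.exp(-0.5 * t)        # nominal reference trajectory (assumed given)
Kp, Ki = 2.0, 0.5                     # PI tuning (assumed)

x, integral = 0.0, 0.0
for k in range(n_steps):
    error = x_ref[k] - x
    integral += error * dt
    u = np.clip(Kp * error + Ki * integral, 0.0, 2.0)  # feedback input with bounds
    x = x + dt * (0.7 * u - 0.3 * x)                    # "true" plant, unknown to the controller

print(f"final tracking error: {abs(x_ref[-1] - x):.4f}")
```

The quality of the result is limited by how close the nominal reference is to the true (unknown) optimal trajectory, which is the main weakness of this scheme noted in Figure 9.2.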

The type of measurement available (off-line, taken at the end of the batch run, or on-line, taken during the progress of the batch) indicates the type of optimization sought. Off-line end-of-batch measurements lead to batch-to-batch optimization, where process knowledge obtained in earlier batches enables updating the operating strategy of the current run, approaching an optimal solution as information from additional batch runs is used. The availability of on-line measurements during the run enables the use of an on-line optimization approach. On-line measurement-based optimization schemes have many similarities to model predictive control. The Iterative Learning Control approach (Section 7.6) integrates MPC and batch-to-batch optimization [327, 328, 682]. The integrated methodology is capable of eliminating errors that persist from previous runs and of responding to new disturbances that occur in the current run [100, 323, 327]. The differences between measurement-based optimization and MPC are discussed, and an extensive list of references for measurement-based optimization studies is given, in [73]. Table 9.1 summarizes the classification of measurement-based optimization methods in [73] and provides additional references.
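A minimal sketch of the batch-to-batch update idea follows: the input profile of run k+1 is corrected with the tracking error recorded during run k (a simple P-type iterative learning update), so errors that repeat from batch to batch are progressively removed. The first-order plant, reference trajectory, and learning gain are hypothetical.

```python
# A sketch of the batch-to-batch (iterative learning) update: the input profile of
# run k+1 is corrected with the tracking error logged during run k, so errors that
# repeat from batch to batch are progressively removed. The first-order "plant",
# reference trajectory, and learning gain are hypothetical.
import numpy as np

dt, n_steps, n_batches = 0.1, 50, 8
t = np.arange(n_steps) * dt
y_ref = 1.0 - np.exp(-0.6 * t)        # desired output trajectory (assumed given)
L_gain = 5.0                           # learning gain (assumed)

def run_batch(u_profile):
    """Simulate one batch of the hypothetical plant and return its output profile."""
    y, y_out = 0.0, []
    for u in u_profile:
        y = y + dt * (0.8 * u - 0.4 * y)
        y_out.append(y)
    return np.array(y_out)

u = np.zeros(n_steps)                  # first batch starts with no prior knowledge
for k in range(n_batches):
    e = y_ref - run_batch(u)           # tracking error of run k
    u = u + L_gain * e                 # P-type update: u^{k+1}(t) = u^k(t) + L e^k(t)

print("max tracking error after final batch:",
      float(np.max(np.abs(y_ref - run_batch(u)))))
```

This batch-to-batch correction removes repeating errors but, unlike the integrated MPC/ILC schemes cited above, it cannot by itself react to a new disturbance that appears within the current run.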

An invariant-based optimization approach is proposed in [73] to identify the important characteristics of the optimal trajectories of a batch run that are invariant under uncertainty and to provide them as references to feedback controllers. The method consists of three steps: state-dependent parameterization of the inputs, selection of signals that are invariant under uncertainty, and tracking of the invariants by using process measurements. The state-dependent parameterization is related to the characteristics of the optimal solution: the switching times of the inputs (related to the concept of process landmarks) and the types of input arcs that occur between switching times. The two types of input arcs are singular arcs, where the input lies in the interior of the feasible region, and nonsingular arcs, where the inputs are determined by a path constraint.

Table 9.1. MBO methods specifically designed to compensate uncertainty [73].

| Methodology                         | Batch-to-batch optimization (off-line measurements) | On-line optimization (on-line measurements)   |
|-------------------------------------|-----------------------------------------------------|-----------------------------------------------|
| Model-based, fixed model            | [131, 683, 684]                                     | [2, 6, 380]                                   |
| Model-based, refined model          | [152, 157, 178, 317, 365, 369, 497]                 | [100, 142, 177, 323, 321, 430, 529]           |
| Model-free, evolution/interpolation | [108, 687]                                          | [155, 307, 486, 537, 596, 673]                |
| Model-free, reference tracking      | [536, 562]                                          | [158, 186, 312, 532, 559, 585, 602, 612, 623] |

The structure of the optimal solution is determined by the type and sequence of arcs and by the switching times. This structure can be identified from the experiential knowledge of plant personnel, from analytical expressions for the optimal inputs, or by inspection of the solution obtained from numerical optimization. Uncertainty affects the numerical values of the optimal inputs, but the necessary conditions for optimality remain invariant. This fact is exploited to identify the invariants and the measurements needed to track them by feedback. The proposed approach is effective when the optimization potential stems from meeting path and/or terminal constraints of a batch run [73].
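The following sketch illustrates the constraint-tracking part of this idea under simple assumptions: when the optimal input is known to keep a measured variable on its path-constraint bound, that active constraint is the invariant, and feedback can hold the process on it without re-solving the optimization. The dynamics, bound, and gain below are hypothetical.

```python
# A sketch of invariant tracking when the optimum rides a path constraint: feedback
# drives a measured constrained variable c to its bound c_max and holds it there,
# without re-solving the optimization. Dynamics, bound, and gain are hypothetical.
import numpy as np

dt, n_steps = 0.05, 200
c_max = 2.0        # path-constraint bound, i.e., the invariant to be tracked
Kc = 1.5           # feedback gain used to track the active constraint (assumed)

c, u = 0.5, 0.2
for k in range(n_steps):
    u = np.clip(u + Kc * (c_max - c) * dt, 0.0, 5.0)  # integrate the constraint error
    c = c + dt * (1.2 * u - 0.6 * c)                  # "true" constrained variable

print(f"constrained variable settles at {c:.3f} (bound {c_max})")
```

Because only the measured constraint is tracked, the scheme needs no accurate process model; plant-model mismatch changes the input that keeps the constraint active, but not the fact that activating the constraint is (near-)optimal.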
