Figure 4.12. CPCA and HPCA methods [640, 665]. The X data matrix is divided into b blocks (X1, X2, ..., Xb), with block b having mxb variables.

Wold et al. [665] applied this approach to modeling process data (hundreds of variables) from a catalytic cracker.

Another PLS algorithm, called multiblock PLS (MBPLS), has been introduced to deal with data blocks [629, 640, 666]. The algorithm can handle many types of pathway relationships between the blocks. It is logically specified from left to right: left-end blocks only predict, right-end blocks are predicted but do not predict, and interior blocks both predict and are predicted. The main difference between this method and HPLS is that in MBPLS each X block is used in a PLS cycle with the Y block to calculate the block scores tb, while in HPLS tb is calculated as in CPCA. The basic methodology is illustrated in Figure 4.14 for the case of a single Y block and two X blocks, and the algorithm is given below (a code sketch follows the list):

1. Start by selecting one column of Y, yj, as the starting estimate for u.

2. Perform part of a PLS round on each of the blocks X1 and X2 to get (w1, t1) and (w2, t2), as in Eq. 4.21 of the PLS algorithm in Section 4.2.4.

3. Collect all the score vectors t1, t2 in the consensus matrix T (or composite block).

4. Make one round of PLS with T as X (Eqs. 4.21-4.23) to get a loading vector v and a score vector tc for the T matrix, as well as a loading vector q and a new score vector u for the Y matrix.

5. Return to step 2 and iterate until convergence of u.

6. Compute the loadings p1 = X1^T t1 / (t1^T t1) and p2 = X2^T t2 / (t2^T t2) for the X1 and X2 matrices.

7. Compute the residual matrices E1 = X1 - t1 p1^T, E2 = X2 - t2 p2^T, and F = Y - tc q^T.

8. Calculate the next set of latent vectors by replacing X1, X2, and Y with their residual matrices E1, E2, and F, and repeating from step 1.
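The iteration above can be summarized, for a single latent dimension and an arbitrary number of X blocks, in the NumPy sketch below. The function name mbpls_component, the starting column of Y, and the convergence settings are illustrative choices rather than details taken from the cited references, and practical points such as block scaling and choosing the number of components are omitted.

import numpy as np

def mbpls_component(X_blocks, Y, tol=1e-10, max_iter=500):
    # One latent dimension of a multiblock PLS model (illustrative sketch).
    # X_blocks : list of (I x Jb) predictor blocks X1, X2, ...
    # Y        : (I x M) response matrix.
    u = Y[:, [0]]                          # step 1: one column of Y as starting u
    for _ in range(max_iter):
        w_blocks, t_blocks = [], []
        for Xb in X_blocks:                # step 2: partial PLS round per block
            w = Xb.T @ u
            w /= np.linalg.norm(w)
            t_blocks.append(Xb @ w)
            w_blocks.append(w)
        T = np.hstack(t_blocks)            # step 3: consensus (composite) matrix
        v = T.T @ u                        # step 4: PLS round between T and Y
        v /= np.linalg.norm(v)
        t_c = T @ v                        # consensus score
        q = Y.T @ t_c / (t_c.T @ t_c)
        u_new = Y @ q / (q.T @ q)
        converged = np.linalg.norm(u_new - u) < tol * np.linalg.norm(u_new)
        u = u_new
        if converged:                      # step 5: iterate until u converges
            break
    # step 6: block loadings; step 7: residual (deflated) blocks
    p_blocks = [Xb.T @ t / (t.T @ t) for Xb, t in zip(X_blocks, t_blocks)]
    E_blocks = [Xb - t @ p.T for Xb, t, p in zip(X_blocks, t_blocks, p_blocks)]
    F = Y - t_c @ q.T
    return w_blocks, t_blocks, p_blocks, v, t_c, q, u, E_blocks, F

For the two-block case of Figure 4.14, X_blocks is simply [X1, X2]; further latent dimensions are obtained by calling the function again on the returned residual blocks E_blocks and F, as in step 8.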

This algorithm has been applied to monitoring a polymerization reactor [355], where the process data are divided into blocks, each block representing a section of the reactor. Increased sensitivity of the multivariate charts is reported for this reactor. The reason for the improved sensitivity is that the charts for individual blocks assess the magnitude of deviations relative to normal operating conditions in that part of the process only, and not with respect to variation in all variables of the process.
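To make the block-wise charting concrete (a sketch of the general idea, not code from [355]), the squared prediction error (SPE) can be computed separately from each block's residual matrix, giving one monitoring chart per process section, so a fault in one section is not diluted by in-control variation elsewhere. Control limits would be estimated from normal operating data, which is omitted here.

import numpy as np

def block_spe(E_blocks):
    # Per-block squared prediction error for each observation (row).
    # E_blocks: list of (I x Jb) residual matrices, one per process block.
    # Returns an (I x B) array; column b is the SPE chart for block b.
    return np.column_stack([np.sum(E ** 2, axis=1) for E in E_blocks])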

Figure 4.13. HPLS method [640, 665]. The X data matrix is divided into b blocks (X1, X2, ..., Xb), with block b having mxb variables, while only one Y block containing my variables is present.
Figure 4.14. Multiblock PLS algorithm [355, 629].

Another application of the same algorithm has been reported for wet granulation and tableting [638], where an improvement (with respect to the ordinary PLS method) in the prediction of a number of pharmaceutical tablet properties was obtained.

When extra information is available (such as feed conditions, initial conditions, raw material qualities, etc.), it should be incorporated in the multiway MBPLS framework. A general block interaction for a typical batch process is depicted in Figure 4.15. In this typical multiblock multiway regression case (multiblock MPLS), the blocks are the matrix Z (I x N) containing the set of initial conditions used for each batch, the three-way array X (I x J x K) of measurements made on each variable over time in each batch, and Y (I x M) containing quality measurements made on the batches. Kourti et al. have presented an implementation of this MBPLS technique [297] by dividing the process data into two blocks based on different polymerization phases and also incorporating a matrix of initial conditions. An improvement in the interpretation of the multivariate charts and in fault detection sensitivity for the individual phases is reported. The ability to relate the detected faults to initial conditions was another benefit of multiblock modeling that included the relations between initial conditions and final product quality.
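A common way to bring these blocks into the multiblock PLS framework is to unfold the three-way array X batch-wise into an I x JK matrix and to use Z and the unfolded X, optionally split into phase blocks, as the predictor blocks for Y. The sketch below illustrates that arrangement under assumed conventions; the unfolding direction, the helper names, and the phase-split argument are illustrative and not taken from [297].

import numpy as np

def batch_unfold(X3):
    # Batch-wise unfolding: (I x J x K) three-way array -> (I x J*K) matrix.
    I, J, K = X3.shape
    return X3.reshape(I, J * K)

def build_blocks(Z, X3, phase_split_k=None):
    # Assemble predictor blocks for multiblock (multiway) PLS regression on Y.
    # Z: (I x N) initial conditions per batch; X3: (I x J x K) trajectories.
    # phase_split_k: optional time index separating two process phases.
    if phase_split_k is None:
        return [Z, batch_unfold(X3)]
    return [Z,
            batch_unfold(X3[:, :, :phase_split_k]),
            batch_unfold(X3[:, :, phase_split_k:])]

The resulting list, together with Y, can then be passed to a multiblock PLS routine such as the mbpls_component sketch given earlier.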
