Process Validation

Process validation is defined as:

"Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its pre-determined specifications and quality attributes.

(1) USFDA, Guideline on General Principles of Process Validation, May 1987, p. 4.

The key words in this definition are "documented" and "consistently". It is essential that validation be properly documented and conducted under appropriate levels of control. Consistency within a specified matrix of operational ranges, according to a controlled set of manufacturing instructions, is the ultimate goal for a validated process. These operational ranges or limits must be demonstrated to encompass parameter values which do not have a significant impact on product safety. For fermentation, at least, it is this requirement for demonstration which is the most problematic. There is almost an inconsistency between the concept of supporting validation outcomes by analysis of the end product and the "product by process" approach to drug approval. However, there is a limit to how much clinical testing (the ultimate "proof" of product safety) can be done, and validation provides information about the region of the response surface where risk is minimized.

Validation should not be considered a discrete event, conducted once and subsequently archived. GMP requires ongoing monitoring of processes as an essential component of compliance. This monitoring should be applied using classical trend analysis methods, especially to permit identification of a potential problem before process failure and product rejection. Such monitoring also requires periodic scientific review of the data, ideally by members of the Research and Development group responsible for development of the process. Validation thus becomes a company-wide activity.
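As an illustration of the kind of trend analysis referred to above, the following sketch (in Python, with entirely hypothetical parameter names, batch data and limits) computes Shewhart-style control limits from historical batch results and flags a new result which falls outside them.

```python
# Minimal sketch of Shewhart-style trending of an in-process parameter.
# All names, data and limits are illustrative assumptions.
from statistics import mean, stdev

def control_limits(history, k=3.0):
    """Return (lower, centre, upper) limits as mean +/- k standard deviations."""
    centre = mean(history)
    spread = stdev(history)
    return centre - k * spread, centre, centre + k * spread

def flag_drift(history, new_value, k=3.0):
    """Flag a new batch result that falls outside the historical control limits."""
    lower, _, upper = control_limits(history, k)
    return not (lower <= new_value <= upper)

# Example: harvest titres (g/L) from previous acceptable batches.
titres = [4.8, 5.1, 4.9, 5.0, 5.2, 4.7, 5.0, 4.9, 5.1, 5.0]
print(control_limits(titres))   # historical control band
print(flag_drift(titres, 4.2))  # True - investigate before the process fails
```

Trending of this kind does not replace formal specifications; it simply gives early warning that a monitored variable is moving away from its established behaviour.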

In classical terms, validation involves manipulating one variable at a time (be it the preparation of a buffer for an assay, the load to an autoclave, or the concentration of a disinfectant or cleaning agent), demonstrating that the outcome of the procedure is the same, and showing that there is no adverse impact on the product. This approach is manageable where the number of parameters to be evaluated is small and the interactions between variables are inconsequential. For fermentation, such a structured approach rapidly runs into the problem of the number of experiments required, both because the number of variables is large and because they are interdependent. The product of a fermentation step is often a relatively crude mixture of components which is difficult to analyze in a meaningful, quantitative manner. In addition, if final product quality becomes the yardstick, the human resource involvement is unsupportable for any but the largest and most profitable manufacturers. It is therefore necessary to reduce the number of variables to be tested to a minimum whilst retaining an understanding of the manufacturing plant reality. It is possible, of course, to set all operational limits for a process at the practical limits of control imposed by the process equipment. Such an approach has a high safety profile but will also result in a high number of rejected processes. There is thus a trade-off between defining process ranges which reduce the risk of failure and demonstrating that they are acceptable ("safe").
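The scale of this combinatorial problem can be illustrated with a simple calculation. The sketch below is a hypothetical back-of-the-envelope comparison, not taken from the source, contrasting a one-variable-at-a-time plan with a full-factorial design in which each parameter is tested at low, set-point and high values.

```python
# Illustrative count of validation runs for n interacting parameters,
# each tested at 3 levels (low, set-point, high). Figures are hypothetical.

def one_at_a_time_runs(n_params, levels=3):
    # Vary each parameter alone while holding the others at set-point,
    # plus one centre-point run.
    return n_params * (levels - 1) + 1

def full_factorial_runs(n_params, levels=3):
    # Test every combination - required if interactions cannot be ruled out.
    return levels ** n_params

for n in (3, 5, 8, 12):
    print(n, one_at_a_time_runs(n), full_factorial_runs(n))
# 3  ->  7 vs 27
# 5  -> 11 vs 243
# 8  -> 17 vs 6561
# 12 -> 25 vs 531441
```

Even a modest number of interacting fermentation parameters therefore makes exhaustive testing impractical, which is why the number of variables carried into validation must be reduced.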

The preferential entry point to validation for fermentation is the development of the process (3). During this stage, the important parameters are defined and the preferred values determined. The capacity of the purification process to purify product away from host-derived components is normally determined concurrently with fermentation development. Finally, the analytical tools to characterize the product through the process are developed in response to experience with the vagaries of the product itself in the context of the production system. This highlights the prime importance of good documentation during process development. The end result is a body of information surrounding a bench-scale process which provides product of a certain quality. The bench-scale process is then taken to pilot or full manufacturing scale prior to testing in animal models for toxicity and subsequently in Phase 1 clinical trials for human safety. This provides the baseline process profile, with defined transfer points between stages and a series of in-process control analyses, which is the benchmark for validation.

In-process control testing is also an important consideration, since these tests are included in the quality indicators used to demonstrate successful validation. The process development phase (including scale-up) should be used to gather data that allow variables to be eliminated from the list of items requiring validation. During development, there is a tendency to maximize the number of tests in a process, partly on the assumption that this will improve control. These tests tend to be carried through to scale-up, mainly to provide confidence of successful transfer from bench to pilot or production scale. However, each of them must then be considered during validation. The preferred approach is therefore to monitor a large number of variables during process development and to identify those which are critical indicators of process control; the majority of the tests can then be dropped from validation to monitoring status. In this regard, a good method need above all be precise: variability in an analytical method can only compound the difficulties of validating a potentially variable process.
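One way to support the selection of critical indicators is to rank the monitored parameters by how strongly they track a chosen quality outcome across development batches. The sketch below is purely illustrative; the parameter names, batch data and use of a simple Pearson correlation are assumptions rather than a prescribed method.

```python
# Hypothetical screening of monitored in-process parameters: rank each
# candidate by how strongly it tracks a chosen quality outcome (here,
# final yield). Parameter names and batch data are illustrative only.
from statistics import mean

def pearson(xs, ys):
    """Simple Pearson correlation coefficient for two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

monitored = {
    "dissolved_oxygen_pct": [35, 40, 38, 42, 37, 41],
    "feed_rate_mL_per_h":   [50, 52, 49, 55, 51, 54],
    "antifoam_additions":   [2, 3, 2, 2, 3, 2],
}
final_yield_g_per_L = [10.2, 11.0, 10.6, 11.5, 10.4, 11.3]

# Print candidates from strongest to weakest association with yield.
for name, values in sorted(monitored.items(),
                           key=lambda kv: -abs(pearson(kv[1], final_yield_g_per_L))):
    print(f"{name}: |r| = {abs(pearson(values, final_yield_g_per_L)):.2f}")
```

A ranking of this kind is only a starting point for scientific judgement; correlation across a handful of development batches does not by itself establish a parameter as critical.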

Once the critical process parameters have been identified through the development programme, a validation protocol is written. Its goal is to establish a series of experiments to test the limits for these parameters, individually and in combination. There is an important caveat at this point: simple combinatorics dictates that the number of combinations to be tested increases dramatically with the number of individual cases involved. A scientific case may be made to eliminate many of these combinations as insignificant, statistically unlikely, or physiologically implausible. For those parameters which remain, the validation will ideally, but not necessarily, demonstrate the failure limits of the process as well as the safe limits and the operational limits, so that the three form a nested set: the operational (control) limits are encompassed by the range of values shown to have no evident impact on product (safe limits), which in turn lie within the conditions that result in process failure. This nesting provides a high degree of confidence that the process will consistently provide a product which is safe for therapeutic use. The "good scientific argument" scenario may be coupled with a definition of "worst case" conditions to further reduce the number of experiments required. Thus, for example, the upper limit of parameter "A" may have more potential consequence for product quality than the lower limit; it may therefore be possible to define only a lower safe limit, but both safe and failure limits for the high end of the range.

For other parameters, there may be no discrete "failure" limits. Process temperature, for example, within a relatively broad range, may be considered simply to reduce the rate of the biochemical processes occurring within the cell; the process will then arrive at a given point at a time which depends on the temperature. Assuming that this point is the same (i.e. a target biomass for culture harvest), the process can be considered not to have failed over this entire range. In the real world, however, process time in a manufacturing plant is costly, so the process will be operated under conditions designed to obtain the purified product in the shortest reasonable time. The "failure" limits can thus be considered to be those outside of which the process takes too long, or is too short, to fit into the plant schedule, and there is no rational reason to define or validate ranges beyond these. Experience in a manufacturing environment nevertheless quickly shows that process controllers do malfunction, resulting in "drift" of a control variable, and it is essential, from a pragmatic perspective, to have evidence showing that such deviations have no adverse effect on the product.
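The nested relationship between operational, safe and failure limits lends itself to a simple consistency check when drafting the protocol. The following sketch is illustrative only; the parameter names and ranges are assumptions, not values from the source.

```python
# Minimal sketch checking that validation ranges form the nested set
# described in the text: operational within safe within failure.
# Parameter names and ranges are hypothetical examples.

ranges = {
    # parameter: (operational, safe, failure) as (low, high) tuples
    "temperature_C": ((36.5, 37.5), (35.0, 39.0), (30.0, 42.0)),
    "pH":            ((6.9, 7.1),   (6.7, 7.3),   (6.0, 8.0)),
}

def nested(inner, outer):
    """True if the inner (low, high) range lies within the outer range."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

for name, (operational, safe, failure) in ranges.items():
    ok = nested(operational, safe) and nested(safe, failure)
    print(f"{name}: nested limits {'OK' if ok else 'NOT satisfied'}")
```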

Arguably one of the most important sections of the validation protocol is that which defines the success or failure of the process. The product of a fermentation process is, at best, a process intermediate in which the ultimate product exists in a soup of "background" contaminants. It is often difficult, if not impossible with current methods, to evaluate the quality of the product at this stage of the overall manufacturing process, and the quantity, or yield, of product may be an acceptable surrogate for most experiments. It remains impossible to predict the effects of changes in fermentation on the profile of contaminants through the recovery and purification stages to the final drug substance without actually performing the purification. It may not be necessary to carry all validation experimental processes through to final drug substance, since some of them may fail at an early stage, and it may be possible to use normal in-process controls and specifications as acceptance criteria. This latter case is, to a certain extent, based on overcapacity built into the process. Thus, for example, a chromatography step will not generally be conducted at maximum resin capacity; some binding capacity of the resin is left unused with the normal process stream. Changes in the load to the column, which may arise through changes in the upstream fermentation, are accommodated by this excess capacity and the process runs normally thereafter. It is nevertheless good practice to retain archived samples to permit evaluation of at least the preliminary stages of purification for all validation experiments.

The majority of this validation should clearly be conducted at bench scale, insofar as this can be claimed to be representative of the full-scale process; for some manufacturing systems, it may be necessary to conduct some validation at pilot or manufacturing scale. In any case, validation cannot be completed until the process has been conducted successfully in the manufacturing plant on a minimum of three consecutive occasions. It is equally important in this context to consider the conditions under which re-validation is required: changes in the manufacturing facility (including equipment, environment, procedures, and materials) must be evaluated in terms of potential impact on the process and, if necessary, tested before implementation.
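The overcapacity argument made above can be expressed as a simple margin calculation. The figures in the sketch below (resin capacity, column volume, utilisation limit and loads) are hypothetical and serve only to show the form such a check might take when deciding whether an upstream change is absorbed by the capture step.

```python
# Hypothetical check that a changed fermentation load still fits within
# the unused binding capacity of a capture chromatography column.
# All figures are illustrative assumptions.

RESIN_CAPACITY_G_PER_L = 30.0   # dynamic binding capacity of the resin
COLUMN_VOLUME_L = 20.0          # packed bed volume
UTILISATION_LIMIT = 0.8         # never load beyond 80% of total capacity

def load_within_capacity(product_mass_g):
    """True if the product mass loaded stays inside the deliberate margin."""
    usable_capacity_g = RESIN_CAPACITY_G_PER_L * COLUMN_VOLUME_L * UTILISATION_LIMIT
    return product_mass_g <= usable_capacity_g

print(load_within_capacity(400.0))  # True  - normal process stream
print(load_within_capacity(520.0))  # False - exceeds the built-in margin
```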
