Fault Diagnosis

Detection and diagnosis of faults in batch process operations is of great significance for productivity and product quality improvements. Several disciplines including statistics, systems science, signal processing, and computer science have contributed to the development of fault detection and diagnosis (FDD) techniques. FDD systems implement the following tasks [189]:

Fault detection: Indication of abnormal system behavior. This can be achieved by process monitoring techniques discussed in earlier chapters or by a number of other paradigms.

Fault isolation: Determination of the specific cause or location of the fault.

Fault identification: Determination of the magnitude of the fault.

The term "diagnosis" is used to refer to the combined isolation and identification tasks, but it can also be used as a synonym for isolation.

Faults are deviations from normal (expected) behavior in a process or its instruments. Faults may be grouped as sensor, actuator or process faults. Sensor faults are discrepancies between measured and actual values of process variables. Actuator faults are discrepancies between the control command received by an actuator and the actuator output. Process faults include all other faults. They may be additive such as leaks or multiplicative such as deterioration of process equipment like fouling of heat exchange surfaces. In general, additive faults are unknown inputs which are normally zero, and multiplicative faults are abrupt or gradual changes that affect the parameters of the process.
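The additive/multiplicative distinction can be illustrated with a small simulation (a sketch only; the constant-gain process, the leak size, and the fouling rate below are made-up values, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100)
u = np.ones(100)            # constant process input
gain = 2.0                  # nominal process gain (illustrative)

# Nominal output with measurement noise
y_nominal = gain * u + rng.normal(0, 0.05, 100)

# Additive fault: an unknown input (e.g., a leak), normally zero, active from t = 50
leak = np.where(t >= 50, -0.5, 0.0)
y_additive = gain * u + leak + rng.normal(0, 0.05, 100)

# Multiplicative fault: gradual parameter change (e.g., fouling reducing the gain)
gain_drift = gain * (1 - 0.005 * np.clip(t - 50, 0, None))
y_multiplicative = gain_drift * u + rng.normal(0, 0.05, 100)
```

Note that the additive fault shifts the output by a fixed amount regardless of the input, while the multiplicative fault changes how the process responds to its input.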

FDD methods can be classified as model-free and model-based methods. Model-free FDD methods do not utilize a mathematical model of the plant. They are based on limits on variables, physical redundancy, or empirical process knowledge (mental models). Model-based FDD methods use a mathematical model of the process developed from first principles or by data-based empirical techniques. They use either the residuals between measured and estimated values of process variables or recursive estimates of model parameters to implement FDD. MSPM methods discussed in Chapter 6 for determining out-of-control status (fault detection) and contribution plots, or other statistical tools such as discriminant analysis, are also model-based techniques.
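A minimal residual-based detector in the spirit of the model-based methods above might look like the following sketch (the threshold, noise level, and injected sensor bias are illustrative assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def detect_fault(measured, estimated, threshold=3.0, noise_std=0.1):
    """Flag samples whose residual exceeds a threshold (in noise-std units)."""
    residual = measured - estimated
    return np.abs(residual) > threshold * noise_std

# Model-based estimate (e.g., from a first-principles or empirical model)
estimated = np.full(200, 10.0)
measured = estimated + rng.normal(0, 0.1, 200)
measured[120:] += 0.8          # hypothetical sensor bias injected at sample 120

alarms = detect_fault(measured, estimated)
```

The 3-sigma threshold trades off false alarms against missed detections, the same Type I/Type II trade-off that the performance benchmarks below formalize.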

The performance of fault detection and diagnosis methods is characterized by several benchmarks:

Sensitivity: Ability to detect and diagnose faults of a specific size. The minimum fault magnitude that must be detected depends on process needs.

Discrimination power (isolation performance): Ability to discriminate the correct fault(s) when several faults occur simultaneously, masking each other.

Robustness: Ability to detect and diagnose a fault in the presence of noise, disturbances, and modeling errors.

Missed fault detections and false alarms: The number of faults that have not been detected and the number of alarms issued when there were no faults.

Detection and diagnosis speed: Time to detect and diagnose faults after their occurrence.

The first four benchmarks are related to Type I and Type II errors discussed in Section 6.1.
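Counting missed detections and false alarms from labeled data is straightforward. The following sketch assumes per-sample fault labels and alarm flags (both hypothetical); a missed detection corresponds to a Type II error and a false alarm to a Type I error:

```python
def detection_errors(true_fault, alarm):
    """Count missed detections (fault present, no alarm) and
    false alarms (alarm issued, no fault present)."""
    missed = sum(f and not a for f, a in zip(true_fault, alarm))
    false_alarms = sum(a and not f for f, a in zip(true_fault, alarm))
    return missed, false_alarms

true_fault = [0, 0, 1, 1, 1, 0, 0, 1]   # hypothetical fault labels
alarm      = [0, 1, 1, 0, 1, 0, 0, 1]   # hypothetical alarm sequence
# one missed detection (sample 4) and one false alarm (sample 2)
```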

To check the correctness of measurements, additional information is necessary. For example, the correctness of a temperature measurement reported by a sensor can be checked by using readings from a second sensor that measures the same temperature, or by other relevant process information such as readings of other variables and energy balances. This information redundancy is a critical element of FDD. If duplicate sensors are used to measure the same variable and their readings are compared to detect the presence of faults, there is physical redundancy. If a process model is used to estimate process variables and the difference between measured and estimated values forms the basis of diagnosis, there is analytical or functional redundancy. Since physical redundancy necessitates duplication of measurement systems, it is usually more expensive. Furthermore, it is usually focused on FDD of a single variable. Physical redundancy is considered when instantaneous FDD is needed for critical process equipment. Most modern FDD techniques focus on multivariable systems and use analytical redundancy, which can leverage the correlation between various process variables.
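The two redundancy types can be contrasted in a few lines (the temperature values and tolerances below are illustrative, not from the text):

```python
def physical_redundancy_check(reading_a, reading_b, tolerance):
    """Physical redundancy: compare duplicate sensors measuring the same variable."""
    return abs(reading_a - reading_b) > tolerance

def analytical_redundancy_check(measured, model_estimate, tolerance):
    """Analytical redundancy: compare a measurement against a model-based estimate
    (e.g., from an energy balance)."""
    return abs(measured - model_estimate) > tolerance

# Duplicate temperature sensors disagree by 4 degrees: a fault is flagged
duplicate_fault = physical_redundancy_check(80.0, 84.0, tolerance=2.0)

# A model-based estimate agrees with the reading: no fault is flagged
model_fault = analytical_redundancy_check(80.0, 80.5, tolerance=2.0)
```

In practice the analytical check generalizes to many variables at once, which is why multivariable FDD methods favor it.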

One approach for FDD that appeals to plant personnel is to first identify process variables that have significant influence on an out-of-control signal issued by process monitoring tools, and then to reason, based on their process knowledge, about the possible source causes that affect these variables. The influence of process variables can be determined by contribution plots discussed in Section 8.1. The second stage of this indirect FDD approach can be automated by using knowledge-based systems. Many FDD techniques are based on direct pattern recognition and discrimination that diagnose the fault directly from process data and models. Their foundations are built on signal processing, machine learning, and statistics theory. In some techniques, trends in process variables are compared directly to a library of patterns that represent normal and faulty process behavior. The closest match is used to identify the status of the process. Statistical discrimination and classification analysis, and Fisher's discriminant function, are some of the techniques drawn from statistical theory. They are discussed in Section 8.2. Other model-based FDD techniques are based on signal processing and systems science theory, such as Kalman filters, residuals analysis, parity relations, hidden Markov models, and parameter estimation. They are introduced in Section 8.3. Artificial neural networks provide FDD techniques that rely on classification and machine learning fundamentals from statistics and computer science. Knowledge-based systems (KBS) provide another group of FDD techniques that have roots in artificial intelligence. KBSs and their use in integrating and supervising various model-based and model-free FDD techniques are discussed in Section 8.4.
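A direct pattern-matching classifier of the kind described above can be sketched as a nearest-neighbor lookup against a small library of reference trends (the library entries and the observed trend are fabricated for illustration; real libraries would hold trends learned from historical normal and faulty batches):

```python
import numpy as np

# Hypothetical library of reference trends for normal and faulty behavior
library = {
    "normal": np.array([0.0, 0.0, 0.0, 0.0, 0.0]),
    "jump":   np.array([0.0, 0.0, 1.0, 1.0, 1.0]),
    "drift":  np.array([0.0, 0.25, 0.5, 0.75, 1.0]),
}

def classify_trend(observed):
    """Return the library pattern closest to the observed trend
    (Euclidean distance); the closest match labels the process status."""
    return min(library, key=lambda name: np.linalg.norm(observed - library[name]))

observed = np.array([0.05, 0.1, 0.95, 1.05, 0.9])   # resembles a jump fault
```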

Faults can be classified as abrupt (sudden) faults and incipient (slowly developing) faults. Abrupt faults may lead to catastrophic consequences. They need to be detected quickly to prevent compromise of safety, productivity, or quality. Incipient faults are usually associated with maintenance problems (heat exchange surfaces getting covered with deposits) or deviation trends in critical process activities from normal behavior (trends in cell growth in penicillin production). Incipient faults are typically small and consequently more difficult to detect. Multivariate techniques are more useful in their detection (see Chapter 6) since these techniques make use of information from all process measurements and can notice burgeoning trends in many variables and integrate that information to reach a decision. Quick detection may not be as critical for maintenance-related problems, but deviations in critical process activities are usually time critical. The time behavior of faults can be grouped into a few generic types: jump (also called step or bias change), intermittent, and drift (Figure 8.1). Jumps in sensor readings are often caused by bias changes or breakdown. Wrong manual recordings of data entries or loose wire connections that lose contact would result in intermittent erroneous measurements. A measurement instrument that is warming up or an actuator that is wearing out would yield drift faults. Disturbances have the same types of time behavior. These faults and disturbances are usually slow and generate low-frequency signals. In addition to faults, sensors, actuators, and process equipment are subjected to noise. Noise is usually assumed to be a random, zero-mean, high-frequency signal.

Figure 8.1. Typical fault functions [189]: a) jump or bias change, b) intermittent, c) drift.
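The three generic fault time behaviors can be generated synthetically, which is useful for testing FDD algorithms (the onset time and the intermittent on/off pattern below are arbitrary choices for illustration):

```python
import numpy as np

t = np.arange(100)
t0 = 40                                    # fault onset time (arbitrary)

# Jump (step/bias change): constant offset after onset
jump = np.where(t >= t0, 1.0, 0.0)

# Intermittent: fault appears and disappears after onset (e.g., loose contact)
intermittent = np.where((t >= t0) & (t % 10 < 3), 1.0, 0.0)

# Drift: slow ramp after onset (e.g., instrument warming up, actuator wear)
drift = np.clip((t - t0) / (len(t) - t0), 0, None)
```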
