Structures of ANNs

The basic structure of an ANN typically consists of multiple layers of interconnected neurons (or computational units) that nonlinearly relate input data to output data. A nonlinear model of a neuron, which forms the core of an ANN, is characterized by three basic attributes (Figure 4.16):

A set of synaptic weights (or connections), each describing the amount of influence a unit (a synapse or node) has on units in the next layer; a positive weight causes one unit to excite another, while a negative weight causes one unit to inhibit another. The signal x_j at the input synapse j connected to neuron k in Figure 4.16 is multiplied by the weight w_kj (Eq. 4.159).

A linear combiner (or a summation operator) of input signals, weighted by the respective synapses of the neuron.

An activation function that limits the amplitude of the output of a neuron. The amplitude range is usually given as a closed interval [0,1] or [-1,1]. The activation function φ(·) defines the output of a neuron in terms of the activation potential v_k (given in Eqs. 4.160 and 4.161). Typical activation functions include the unit step and sigmoid functions.
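The amplitude-limiting behavior described above can be sketched with two common activation functions (a minimal illustration; the function names are ours, not from the source):

```python
import math

def threshold(v):
    """Unit step function: output limited to {0, 1}."""
    return 1.0 if v >= 0.0 else 0.0

def sigmoid(v):
    """Logistic sigmoid: output limited to the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

# Both functions keep the neuron output within the stated amplitude range,
# regardless of how large the activation potential v becomes.
for v in (-5.0, 0.0, 5.0):
    print(v, threshold(v), round(sigmoid(v), 4))
```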

A neuron k can be described mathematically by the following set of equations [226]:

u_k = \sum_{j=1}^{m} w_{kj} x_j    (4.159)

v_k = u_k + b_k    (including bias)    (4.160)

y_k = \varphi(v_k)    (4.161)

Figure 4.16. A nonlinear model of a single neuron [226].

where x_1, x_2, ..., x_m are the input signals; w_k1, w_k2, ..., w_kj, ..., w_km are the synaptic weights of neuron k; u_k is the linear combiner output of the input signals; b_k is the bias; v_k is the activation potential (or induced local field); φ(·) is the activation function; and y_k is the output signal of the neuron. The bias is an external parameter providing an affine transformation to the output u_k of the linear combiner.
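The set of equations for neuron k can be sketched as a short Python function (an illustrative sketch; the name `neuron_output` and the example values are assumptions, not from the source):

```python
import math

def neuron_output(x, w, b, phi=lambda v: 1.0 / (1.0 + math.exp(-v))):
    """Compute y_k = phi(v_k) for a single neuron.

    x   : input signals x_1 ... x_m
    w   : synaptic weights w_k1 ... w_km
    b   : bias b_k
    phi : activation function (sigmoid by default)
    """
    u = sum(wj * xj for wj, xj in zip(w, x))  # linear combiner output u_k
    v = u + b                                 # activation potential v_k
    return phi(v)

# Hypothetical example values, chosen so that v_k = 0 and y_k = 0.5:
y = neuron_output(x=[0.5, -1.0, 2.0], w=[0.4, 0.3, 0.1], b=-0.1)
```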

Several activation functions are available. The four basic types illustrated in Figure 4.17 are:

1. Threshold Function. Also known as the McCulloch-Pitts model [377]