$$
I_\tau = \sum_{t} P_{ab}\bigl(a = y(t-\tau),\, b = y(t)\bigr)\,
\log_2 \frac{P_{ab}\bigl(a = y(t-\tau),\, b = y(t)\bigr)}
{P_a\bigl(a = y(t-\tau)\bigr)\, P_b\bigl(b = y(t)\bigr)}
$$

where $P_{ab}$ is estimated from the normalized histogram of the joint distribution, and $P_a$ and $P_b$ are the marginal distributions for $y(t-\tau)$ and $y(t)$, respectively. Similar to the correlation coefficient guided selection of the time delay, we should select the $\tau$ where $I_\tau$ attains its first local minimum. Note that, for our example data, the choice of $\tau$ with both methods is around 25 seconds (Figure 5.14). The drift towards zero in both quantities is due to the finite data size of 17,000 samples, with a sampling rate of 2 measurements per second. Although both methods of choosing a time delay give useful guidelines, one should always make a reality check with the data.
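To make the procedure concrete, the following minimal sketch (in Python with NumPy, not part of the original text) estimates $I_\tau$ from the normalized 2-D histogram of the pairs $(y(t-\tau), y(t))$ and scans a range of delays for the first local minimum. The array name y, the bin count, and the scanned delay range are illustrative assumptions rather than values prescribed by the text.

import numpy as np

def mutual_information(y, tau, bins=32):
    # Pair each sample y(t - tau) with y(t).
    a, b = y[:-tau], y[tau:]
    # Joint distribution P_ab from the normalized 2-D histogram.
    p_ab, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab /= p_ab.sum()
    # Marginal distributions P_a and P_b.
    p_a = p_ab.sum(axis=1)
    p_b = p_ab.sum(axis=0)
    # Sum only over occupied bins to avoid log(0).
    nz = p_ab > 0
    return np.sum(p_ab[nz] * np.log2(p_ab[nz] / np.outer(p_a, p_b)[nz]))

def first_local_minimum(values):
    # Index of the first local minimum of a sequence, or None if there is none.
    for i in range(1, len(values) - 1):
        if values[i - 1] > values[i] <= values[i + 1]:
            return i
    return None

# Hypothetical usage: y holds the oxygen-concentration samples.
# taus = np.arange(1, 101)
# mi = [mutual_information(y, t) for t in taus]
# tau_opt = taus[first_local_minimum(mi)]   # delay at the first local minimum of I_tau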


Figure 5.13. (a) Time series data of the blood oxygen concentration of a sleep apnea patient. (b) Selection of a too small time delay ($\tau = 2$) hides the information in the data. (c) Selecting a more appropriate time delay ($\tau = 25$) reveals more detail in the data.


Figure 5.14. The correlation coefficient (solid line) and the mutual information content (dashed line) versus the time delay.

When analyzing chaotic signals, it is always easy to apply some algorithm that produces some result. Yet the results should always be scrutinized before being adopted, as chaotic systems usually defy haute couture solutions. For a review of different methods to select the time delay, see [233] and the references therein. □

According to the embedding theorems of Takens [582] and Sauer et al. [535], if the attractor has a dimension $d$, then an embedding dimension of $m > 2d$ is sufficient to ensure that the reconstruction is a one-to-one embedding. The geometric notion of dimension can be visualized by considering the hyper-volume occupied by a hyper-cube of side $r$ in dimension $d$. This volume is proportional to $r^d$, and we may get a sense of dimension by measuring how the density of points in the phase space scales when we examine small $r$'s. One of the methods to compute a dimension of the attractor is the box counting method [90]. To evaluate $d$, we count the number of boxes of side $r$ necessary to cover all the points in the data set. If we evaluate this number, $N(r)$, for two small values of $r$, then, since $N(r)$ scales as $r^{-d}$, we can estimate $d$ as

$$
d \approx \frac{\log N(r_1) - \log N(r_2)}{\log r_2 - \log r_1}.
$$
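As a rough illustration (not from the original text), the sketch below counts the occupied boxes on a grid of side $r$ and applies the two-point estimate above. The variable names, the embedding parameters, and the box sizes r1 and r2 are assumptions made for demonstration; in practice one would fit the slope of $\log N(r)$ versus $\log(1/r)$ over a range of small $r$ rather than rely on just two values.

import numpy as np

def box_count(points, r):
    # Number of boxes of side r needed to cover the points;
    # points has shape (n_points, dim).
    boxes = np.floor(points / r).astype(int)
    return len(np.unique(boxes, axis=0))

def box_counting_dimension(points, r1, r2):
    # Two-point estimate of d from N(r1) and N(r2), using N(r) ~ r**(-d).
    n1, n2 = box_count(points, r1), box_count(points, r2)
    return np.log(n1 / n2) / np.log(r2 / r1)

# Hypothetical usage with a delay-embedded trajectory of y (tau = 25, m = 3):
# X = np.column_stack([y[i * 25 : len(y) - (3 - 1 - i) * 25] for i in range(3)])
# d_est = box_counting_dimension(X, r1=0.05, r2=0.10)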
