## Holter Data Classification

Noninvasive investigation of the heart action is one of the most important methods for early diagnosis of heart disease. Long-term Holter monitoring is widely applied to patients with heart problems such as arrhythmias. The primary task of computer-aided systems in Holter ECG evaluation is to distinguish between different beat types.

Among the various beat types encountered, the discrimination between Ventricular (V) and Normal (N) beats is of paramount importance.

We have used several methods for classification of the Holter beats; for further details, please refer to the published paper here.

Brief results are summarized below.

## The Best Wavelet Packet Path feature extraction from the QRS complex

### Wavelet transform and wavelet packets decomposition

The wavelet transform (WT) is a time-frequency analysis method that differs from the short-time Fourier transform in that it allows localization of the information in the time-frequency plane. In the discrete wavelet transform (DWT), the scaling and translation parameters are discretized. If the scaling and translation step is 2^j, the sampling of the time-frequency space is called dyadic, which is the most common choice. Multi-Resolution Analysis (MRA) is at the core of wavelet theory; the idea of MRA was developed by Mallat and Meyer. In MRA a function is decomposed, level by level, into an approximation (representing the slow variations of the function) and a detail (representing the rapid variations of the function) by a scaling function and a wavelet function, both of which arise from the multi-resolution analysis.
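The approximation/detail split described above can be sketched with one level of the DWT. The example below uses the Haar scaling/wavelet pair because it is the simplest orthonormal choice; it is an illustrative sketch, not the wavelet or the implementation used in the study.

```python
import math

def haar_dwt(x):
    """One level of the DWT with the Haar pair: the approximation holds
    the slow variations of the signal, the detail the rapid ones."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(x[2 * k] + x[2 * k + 1]) * s for k in range(len(x) // 2)]
    detail = [(x[2 * k] - x[2 * k + 1]) * s for k in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: the orthonormal pair gives perfect reconstruction."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.extend([(a + d) * s, (a - d) * s])
    return x
```

Applying `haar_dwt` repeatedly to the approximation yields the level-by-level MRA decomposition.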

Wavelet packet decomposition was introduced by Coifman, Meyer and Wickerhauser by generalizing the link between multiresolution approximations and wavelets. A space V_j of the multiresolution approximation is decomposed into a lower-resolution space V_{j+1} and a detail space W_{j+1}. This is done by dividing the orthogonal basis {φ_j(t − 2^j n)}, n ∈ ℤ, of V_j into two new orthogonal bases.
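In the packet generalization, every node is split again, not only the approximation. A minimal sketch of building the full packet tree, again with the Haar pair for self-containedness (node paths like `'ad'` mean approximation then detail; this naming is an assumption of the sketch, not the paper's notation):

```python
import math

def haar_dwt(x):
    """One Haar analysis step: approximation and detail coefficients."""
    s = 1.0 / math.sqrt(2.0)
    a = [(x[2 * k] + x[2 * k + 1]) * s for k in range(len(x) // 2)]
    d = [(x[2 * k] - x[2 * k + 1]) * s for k in range(len(x) // 2)]
    return a, d

def wavelet_packet_tree(x, maxlevel):
    """Full wavelet packet decomposition: unlike the plain DWT, every
    node (approximation AND detail) is split again at each level."""
    tree = {'': list(x)}  # path '' is the root signal
    for level in range(maxlevel):
        for path in [p for p in tree if len(p) == level]:
            a, d = haar_dwt(tree[path])
            tree[path + 'a'] = a
            tree[path + 'd'] = d
    return tree
```

Because the Haar pair is orthonormal, the total coefficient energy at any level equals the energy of the original signal.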

### Local Discriminant Basis

The Local Discriminant Basis (LDB) was introduced by Saito and Coifman. The LDB extends the "best-basis" method to select, from a large collection of orthogonal bases consisting of wavelet packets or local trigonometric bases, an orthogonal basis suitable for signal or image classification problems. The best-basis method [3] is one of the main wavelet-based algorithms for dimension reduction. It consists of choosing, from a redundant library of orthogonal functions, a basis that compresses the signal; the library is constructed such that the decomposition algorithm may be obtained from one set of filter coefficients. The original best-basis algorithm selects from such a library the basis minimizing an entropy cost; the LDB algorithm instead selects a basis maximizing a certain discriminant measure among classes.
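The entropy cost used by the original best-basis algorithm rewards sparse (compressible) coefficient vectors. A minimal sketch of that cost, assuming the standard Shannon-entropy-of-energies form (the exact cost in the cited works may differ in normalization):

```python
import math

def entropy_cost(coeffs):
    """Shannon-type additive cost on normalised coefficient energies
    (Coifman-Wickerhauser style): smaller cost = sparser representation."""
    total = sum(c * c for c in coeffs)
    cost = 0.0
    for c in coeffs:
        p = c * c / total  # fraction of total energy in this coefficient
        if p > 0.0:
            cost -= p * math.log(p)
    return cost
```

Best-basis selection then compares, bottom-up, the cost of each parent node against the summed cost of its children and keeps whichever is cheaper; LDB replaces this cost with a class-discriminant measure.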

### ECG beat classification

We implemented the wavelet packet decomposition by means of a filter bank (quadrature mirror filters). The impulse response of the low-pass filter corresponds to the scaling function, meaning that the output of the low-pass filter is the approximation of the signal. In the same way, the impulse response of the high-pass filter corresponds to the wavelet function (the output of the high-pass filter contains the detail of the signal). An example of the decomposition of an N beat by means of the wavelet packet transform is shown in the figure below.
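The filter-bank view can be sketched directly: filter with the low-pass (or high-pass) impulse response, then downsample by two. The Haar pair below is a stand-in for whatever QMF pair the actual system used; note the two filters are orthogonal to each other, which is the quadrature-mirror property exploited here.

```python
import math

s = 1.0 / math.sqrt(2.0)
lo = [s, s]    # low-pass impulse response (Haar scaling function)
hi = [-s, s]   # high-pass impulse response (Haar wavelet function)

def analyse(x, h):
    """Convolve x with filter h, then keep every second sample
    (downsampling by 2) -- one analysis branch of the filter bank."""
    y = [sum(h[j] * x[n - j] for j in range(len(h)) if 0 <= n - j < len(x))
         for n in range(len(x))]
    return y[1::2]
```

`analyse(x, lo)` yields the approximation and `analyse(x, hi)` the detail, matching the butterfly form of the Haar DWT.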

In the proposed iterative feature-extraction algorithm, the wavelet packet tree of the template is computed first; next, the wavelet packet tree of the current beat is built. The relative entropy is computed for each node of these decomposition trees. The node to expand is then found as the one with the maximum relative entropy; this node is split into two nodes at the next level, and of these two nodes the one with the higher relative entropy is selected. An example of the result of the proposed algorithm is shown on the left side of the figure below; the right side shows an example of an LDB result.
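The greedy descent through the packet tree can be sketched as follows. Everything here is an illustrative assumption: the Haar pair, the exact relative-entropy formula (Kullback-Leibler divergence on normalised coefficient energies, with an epsilon guard), and the function names are not taken from the paper.

```python
import math

def _haar_split(x):
    """One packet split: approximation and detail children of a node."""
    s = 1.0 / math.sqrt(2.0)
    a = [(x[2 * k] + x[2 * k + 1]) * s for k in range(len(x) // 2)]
    d = [(x[2 * k] - x[2 * k + 1]) * s for k in range(len(x) // 2)]
    return a, d

def rel_entropy(p, q, eps=1e-12):
    """KL divergence between normalised coefficient energies of two nodes."""
    pe = [c * c for c in p]
    qe = [c * c for c in q]
    ps, qs = sum(pe) + eps, sum(qe) + eps
    return sum((a / ps) * math.log((a / ps + eps) / (b / qs + eps))
               for a, b in zip(pe, qe))

def best_packet_path(template, beat, levels=5):
    """At each level split the current node of template and beat and
    descend into the child where the two differ most (highest relative
    entropy); returns the path string of the node selected per level."""
    path, nodes = '', []
    t, b = list(template), list(beat)
    for _ in range(levels):
        (ta, td), (ba, bd) = _haar_split(t), _haar_split(b)
        if rel_entropy(ta, ba) >= rel_entropy(td, bd):
            path, t, b = path + 'a', ta, ba
        else:
            path, t, b = path + 'd', td, bd
        nodes.append(path)
    return nodes
```

With 128-sample beats and 5 levels, the deepest selected node holds 4 coefficients.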

The features are defined as differences between the template and the current beat in the selected nodes. Because we used decomposition into 5 levels, we obtained 5 nodes and hence 5 features from one template matching. In this case we used four templates, which means that we acquired 20 features.
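The 4 templates × 5 nodes = 20 bookkeeping can be sketched like this. The per-node difference below (squared-error energy along the approximation path, Haar pair) is a hypothetical stand-in for the actual node differences described above:

```python
import math

def node_differences(template, beat, levels=5):
    """One difference value per level between template and beat
    (illustrative: energy of the coefficient difference per node)."""
    s = 1.0 / math.sqrt(2.0)
    t, b, feats = list(template), list(beat), []
    for _ in range(levels):
        t = [(t[2 * k] + t[2 * k + 1]) * s for k in range(len(t) // 2)]
        b = [(b[2 * k] + b[2 * k + 1]) * s for k in range(len(b) // 2)]
        feats.append(sum((u - v) ** 2 for u, v in zip(t, b)))
    return feats

def feature_vector(beat, templates):
    """Concatenate the 5 node differences for each of the 4 templates,
    giving the 20-element feature vector."""
    feats = []
    for tpl in templates:
        feats.extend(node_differences(tpl, beat))
    return feats
```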

### Results

In order to test the proposed approach we used the MIT-BIH arrhythmia database [9]. The database includes 48 half-hour ECG records sampled at 360 Hz. The records were re-sampled to 500 Hz, and only 128 samples around the R peak were used for each beat. The records were divided into two sets. The first set consists of records 100, 103, 105, 111, 113, 117, 121, 123, 200, 202, 210, 212, 213, 214, 219, 221, 222, 228, 231, 233, 234. The second set consists of records 101, 106, 109, 112, 114, 115, 116, 118, 119, 122, 124, 201, 203, 205, 207, 208, 209, 215, 220, 223, 230.
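The preprocessing steps (360 Hz → 500 Hz re-sampling, then a 128-sample window around the R peak) can be sketched as below. Linear interpolation is used only to keep the example dependency-free; a proper band-limited resampler would be used in practice, and the function names are this sketch's own.

```python
def resample_linear(x, fs_in, fs_out):
    """Resample by linear interpolation, e.g. 360 Hz -> 500 Hz
    (simple stand-in for a band-limited resampler)."""
    n_out = int(round(len(x) * fs_out / fs_in))
    out = []
    for i in range(n_out):
        t = i * (len(x) - 1) / (n_out - 1)  # position in input samples
        k = min(int(t), len(x) - 2)
        frac = t - k
        out.append(x[k] * (1.0 - frac) + x[k + 1] * frac)
    return out

def beat_window(signal, r_index, width=128):
    """Cut the 128-sample window centred on the detected R peak."""
    start = max(0, r_index - width // 2)
    return signal[start:start + width]
```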

#### Results for 1st dataset

Beat type | Sensitivity (%) | Specificity (%) | Positive predictive value (%)
---|---|---|---
N | 85.6 | 73.1 | 93.9
V | 93.9 | 89.5 | 47.3
LBBB | 31.4 | 87.4 | 18.35
RBBB | 17.2 | 99.5 | 74.4

#### Results for 2nd dataset

Beat type | Sensitivity (%) | Specificity (%) | Positive predictive value (%)
---|---|---|---
N | 85.6 | 73.1 | 93.9
V | 93.9 | 89.5 | 47.3
LBBB | 31.4 | 87.4 | 18.35
RBBB | 17.2 | 99.5 | 74.4
