Most cited article - PubMed ID 26402681
This paper pursues two goals: to show that various depth sensors can record the breathing rate with the same accuracy as the contact sensors used in polysomnography (PSG), and to demonstrate that breathing signals from depth sensors are as sensitive to breathing changes as PSG records. The breathing signal from depth sensors can be used to classify sleep apnea events with the same success rate as PSG data. Recent developments in computational technology have led to a big leap in the usability of range imaging sensors. New depth sensors are smaller, have a higher sampling rate, better resolution, and higher precision. They are widely used for computer vision in robotics, but they can also serve as non-contact and non-invasive systems for monitoring breathing and its features. The breathing rate can be easily represented as the frequency of the recorded signal. All tested depth sensors (MS Kinect v2, RealSense SR300, R200, D415 and D435) are capable of recording depth data with sufficient precision in depth sensing and sampling frequency in time (20-35 frames per second (FPS)) to capture the breathing rate. Spectral analysis shows a breathing rate between 0.2 Hz and 0.33 Hz, which corresponds to the breathing rate of an adult during sleep. To test the quality of the breathing signal processed by the proposed workflow, a neural network classifier (a simple competitive NN) was trained on a set of 57 whole-night polysomnographic records in which sleep apneas were classified by a sleep specialist. The resulting classifier marks all apnea events with 100% accuracy when compared with the sleep specialist's classification, which is useful for estimating the number of events per hour. When compared with the sleep specialist's classification of polysomnographic breathing signal segments, which is used for calculating the length of each event, the classifier has an F1 score of 92.2% (sensitivity 89.1% and specificity 98.8%). The classifier also proves successful when tested on breathing signals from the MS Kinect v2 and RealSense R200 with simulated sleep apnea events. The whole process can be made fully automatic once automatic chest-area segmentation of the depth data is implemented.
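As an illustration of the spectral-analysis step described in the abstract, the sketch below estimates the breathing rate from a chest-area depth signal. It is not the authors' implementation; the frame rate, the frequency band, and the synthetic test signal are assumptions based only on the figures quoted above (20-35 FPS sampling, breathing at roughly 0.2-0.33 Hz).

```python
# Minimal sketch: breathing-rate estimation from a chest-area depth signal.
# NOT the paper's code; frame rate, band limits, and the synthetic signal
# are assumptions taken from the abstract's stated ranges.
import numpy as np
from scipy.signal import welch

def breathing_rate_hz(depth_signal, fs, band=(0.1, 0.5)):
    """Return the dominant frequency (Hz) of the signal inside `band`.

    depth_signal -- 1-D array of mean chest-area depth per frame
    fs           -- depth sensor frame rate in frames per second
    """
    x = depth_signal - np.mean(depth_signal)              # remove DC offset
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 1024))
    in_band = (freqs >= band[0]) & (freqs <= band[1])      # plausible breathing rates only
    return freqs[in_band][np.argmax(psd[in_band])]

if __name__ == "__main__":
    fs = 30.0                                   # assumed frame rate within the 20-35 FPS range
    t = np.arange(0, 120, 1.0 / fs)             # two minutes of frames
    # Synthetic chest movement: 0.25 Hz breathing plus sensor noise (values in mm).
    depth = 1500 + 5.0 * np.sin(2 * np.pi * 0.25 * t) + np.random.normal(0, 0.5, t.size)
    rate = breathing_rate_hz(depth, fs)
    print(f"Estimated breathing rate: {rate:.2f} Hz ({rate * 60:.1f} breaths/min)")
```

In the same spirit, apnea detection on such a signal would look for segments where the band-limited breathing power drops for several seconds, but the abstract does not detail the competitive-NN features, so that part is left out here.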
- Keywords
- breathing analysis, computational intelligence, depth sensors, human-machine interaction, image processing, signal processing
- MeSH
- Respiratory Rate physiology MeSH
- Adult MeSH
- Respiration MeSH
- Middle Aged MeSH
- Humans MeSH
- Signal Processing, Computer-Assisted MeSH
- Polysomnography methods MeSH
- Sensitivity and Specificity MeSH
- Sleep physiology MeSH
- Sleep Apnea Syndromes physiopathology MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Female MeSH
- Publication Type
- Journal Article MeSH
This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The obtained results verify the correspondence between the breathing frequency estimated from the image and infrared data of the mouth area and that estimated from the thorax movement recorded by the depth sensor. Spectral analysis of the time evolution of the mouth-area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements with Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record multidimensional biomedical data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of the breathing rate was 0.26%, and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The results also show how video frames with depth data can be used to distinguish different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.
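To make the mouth-area spectral analysis concrete, the following sketch estimates the heart rate from the mean intensity of a region of interest (ROI) across infrared or image frames. It is not the paper's implementation; the ROI coordinates, frame rate, frequency band, and synthetic frames are assumptions that only illustrate the general idea of analysing an ROI time series in the frequency domain.

```python
# Minimal sketch: heart-rate estimation from the mean intensity of a mouth ROI
# over a sequence of infrared/image frames. NOT the paper's code; ROI, frame
# rate, band limits, and the synthetic frames below are assumptions.
import numpy as np

def roi_time_series(frames, roi):
    """Mean pixel intensity of the ROI in each frame.

    frames -- array of shape (n_frames, height, width)
    roi    -- (row_start, row_end, col_start, col_end)
    """
    r0, r1, c0, c1 = roi
    return frames[:, r0:r1, c0:c1].mean(axis=(1, 2))

def heart_rate_bpm(series, fs, band=(0.7, 3.0)):
    """Dominant frequency of `series` inside `band`, converted to beats/min."""
    x = series - series.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])      # plausible heart rates only
    return 60.0 * freqs[in_band][np.argmax(spectrum[in_band])]

if __name__ == "__main__":
    fs = 30.0                                   # assumed camera frame rate
    n = int(60 * fs)                            # one minute of frames
    t = np.arange(n) / fs
    # Synthetic 64x64 infrared frames with a faint 1.2 Hz (72 bpm) pulsation in the mouth ROI.
    frames = np.random.normal(100.0, 1.0, (n, 64, 64))
    frames[:, 20:40, 20:40] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
    bpm = heart_rate_bpm(roi_time_series(frames, (20, 40, 20, 40)), fs)
    print(f"Estimated heart rate: {bpm:.1f} bpm")
```

The same ROI-averaging and spectral-peak approach applied to the thorax region of the depth stream yields the breathing frequency used for the cross-sensor comparison described above.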
- Keywords
- MS Kinect data acquisition, big data processing, breathing analysis, computational intelligence, human–machine interaction, image and depth sensors, neurological disorders, visualization
- MeSH
- Audiovisual Recording MeSH
- Time Factors MeSH
- Respiration * MeSH
- Humans MeSH
- Monitoring, Physiologic instrumentation MeSH
- Movement MeSH
- Heart Rate physiology MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Journal Article MeSH