Most cited article - PubMed ID 26499251
Motion tracking and gait feature estimation for recognising Parkinson's disease using MS Kinect
This paper is devoted to proving two goals: to show that various depth sensors can record breathing rate with the same accuracy as the contact sensors used in polysomnography (PSG), and to prove that breathing signals from depth sensors have the same sensitivity to breathing changes as PSG records. The breathing signal from depth sensors can be used for classification of sleep apnea events with the same success rate as with PSG data. The recent development of computational technologies has led to a big leap in the usability of range imaging sensors. New depth sensors are smaller, have a higher sampling rate and better resolution, and offer greater precision. They are widely used for computer vision in robotics, but they can also serve as non-contact and non-invasive systems for monitoring breathing and its features. The breathing rate can be easily represented as the frequency of a recorded signal. All tested depth sensors (MS Kinect v2, RealSense SR300, R200, D415 and D435) are capable of recording depth data with sufficient precision in depth sensing and sampling frequency in time (20-35 frames per second (FPS)) to capture the breathing rate. The spectral analysis shows a breathing rate between 0.2 Hz and 0.33 Hz, which corresponds to the breathing rate of an adult during sleep. To test the quality of the breathing signal processed by the proposed workflow, a neural network classifier (a simple competitive NN) was trained on a set of 57 whole-night polysomnographic records with sleep apnea events classified by a sleep specialist. The resulting classifier can mark all apnea events with 100% accuracy when compared to the classification of a sleep specialist, which is useful for estimating the number of events per hour.
When compared to the classification of polysomnographic breathing signal segments by a sleep specialist, which is used for calculating the length of each event, the classifier achieves an F1 score of 92.2% (sensitivity 89.1% and specificity 98.8%). The classifier also proves successful when tested on breathing signals from MS Kinect v2 and RealSense R200 with simulated sleep apnea events. The whole process can be fully automatic after implementation of automatic chest-area segmentation of the depth data.
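The abstract's core signal-processing step, estimating breathing rate as the dominant frequency of the recorded depth signal via spectral analysis, can be sketched as follows. This is a minimal illustration on a synthetic chest-depth series, not the paper's actual workflow; the function name, the 0.1-0.5 Hz search band, and the synthetic signal are assumptions for the example.

```python
import numpy as np

def breathing_rate_hz(depth_signal, fps):
    """Estimate the dominant breathing frequency (Hz) of a mean chest-depth
    time series from its magnitude spectrum, restricted to a plausible
    sleep-breathing band (0.1-0.5 Hz; assumed for this sketch)."""
    x = np.asarray(depth_signal, dtype=float)
    x = x - x.mean()                            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))           # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 0.5)      # plausible sleep-breathing band
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic chest-depth signal: 0.25 Hz breathing sampled at 30 FPS for 60 s,
# i.e. within the 0.2-0.33 Hz range reported in the abstract.
fps = 30
t = np.arange(0, 60, 1.0 / fps)
rng = np.random.default_rng(0)
signal = 5.0 * np.sin(2 * np.pi * 0.25 * t) + 0.2 * rng.standard_normal(t.size)
print(round(breathing_rate_hz(signal, fps), 2))  # ≈ 0.25
```

With 60 s of data the frequency resolution is 1/60 Hz, comfortably fine enough to resolve the 0.2-0.33 Hz sleep-breathing range reported above.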
- Keywords
- breathing analysis, computational intelligence, depth sensors, human-machine interaction, image processing, signal processing
- MeSH
- Respiratory Rate physiology MeSH
- Adult MeSH
- Respiration MeSH
- Middle Aged MeSH
- Humans MeSH
- Signal Processing, Computer-Assisted MeSH
- Polysomnography methods MeSH
- Sensitivity and Specificity MeSH
- Sleep physiology MeSH
- Sleep Apnea Syndromes physiopathology MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact breathing monitoring and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results obtained verify the correspondence between the breathing frequency evaluated from the image and infrared data of the mouth area and that evaluated from the thorax movement recorded by the depth sensor. Spectral analysis of the time evolution of the mouth-area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements with Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved error of non-contact breathing rate detection was 0.26%, and the error of heart rate estimation from the infrared sensor was 1.47%. The results further show how video frames with depth data can be used to distinguish different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.
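The heart rate estimation described above, spectral analysis of the time evolution of the mouth-area video frames, follows the same dominant-frequency idea but in a faster band. The sketch below is an illustration on a synthetic intensity series, not the paper's method; the function name, the 0.7-3.0 Hz (42-180 bpm) search band, and the synthetic pulse signal are assumptions for the example.

```python
import numpy as np

def heart_rate_bpm(intensity, fps):
    """Estimate heart rate (beats per minute) from the mean image intensity
    of a facial region of interest, searching the magnitude spectrum in a
    0.7-3.0 Hz band (42-180 bpm; assumed for this sketch)."""
    x = np.asarray(intensity, dtype=float)
    x = x - x.mean()                            # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)      # plausible adult heart-rate band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic mouth-area intensity: a 1.2 Hz pulse (72 bpm) sampled at 30 FPS
# for 30 s, with additive noise.
fps = 30
t = np.arange(0, 30, 1.0 / fps)
rng = np.random.default_rng(1)
pulse = 0.8 * np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
print(round(heart_rate_bpm(pulse, fps)))  # ≈ 72
```

Restricting the spectral peak search to a physiological band is what keeps the breathing component (well below 0.7 Hz) from being picked up as the pulse.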
- Keywords
- MS Kinect data acquisition, big data processing, breathing analysis, computational intelligence, human–machine interaction, image and depth sensors, neurological disorders, visualization
- MeSH
- Video Recording MeSH
- Time Factors MeSH
- Respiration * MeSH
- Humans MeSH
- Monitoring, Physiologic instrumentation MeSH
- Movement MeSH
- Heart Rate physiology MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH