This paper pursues two goals: to show that various depth sensors can record breathing rate with the same accuracy as the contact sensors used in polysomnography (PSG), and to prove that breathing signals from depth sensors have the same sensitivity to breathing changes as PSG records. The breathing signal from depth sensors can be used for the classification of sleep apnea events with the same success rate as with PSG data. The recent development of computational technologies has led to a big leap in the usability of range imaging sensors. New depth sensors are smaller, have a higher sampling rate, better resolution, and greater precision. They are widely used for computer vision in robotics, but they can also serve as non-contact and non-invasive systems for monitoring breathing and its features. The breathing rate can be easily represented as the frequency of a recorded signal. All tested depth sensors (MS Kinect v2, RealSense SR300, R200, D415 and D435) are capable of recording depth data with enough precision in depth sensing and sampling frequency in time (20-35 frames per second (FPS)) to capture the breathing rate. The spectral analysis shows a breathing rate between 0.2 Hz and 0.33 Hz, which corresponds to the breathing rate of an adult person during sleep. To test the quality of the breathing signal processed by the proposed workflow, a neural network classifier (a simple competitive NN) was trained on a set of 57 whole-night polysomnographic records with sleep apnea events classified by a sleep specialist. The resulting classifier can mark all apnea events with 100% accuracy when compared to the classification of a sleep specialist, which is useful for estimating the number of events per hour.
When compared to the classification of polysomnographic breathing signal segments by a sleep specialist, which is used for calculating the length of the event, the classifier has an F1 score of 92.2% (sensitivity 89.1% and specificity 98.8%). The classifier also proves successful when tested on breathing signals from MS Kinect v2 and RealSense R200 with simulated sleep apnea events. The whole process can be fully automatic after the implementation of automatic chest area segmentation of depth data.
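The breathing-rate extraction described above amounts to a spectral peak search in the adult sleep-breathing band. A minimal sketch of that idea, not the authors' actual workflow; the function name, band limits, and synthetic chest signal are illustrative assumptions:

```python
import numpy as np

def breathing_rate_hz(signal, fs, band=(0.1, 0.5)):
    """Estimate breathing rate as the dominant spectral peak within a band.

    signal : 1-D array of mean chest depth per frame (metres)
    fs     : sampling rate in frames per second (here 20-35 FPS)
    """
    x = signal - np.mean(signal)                 # remove the static offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(freqs[mask][np.argmax(spectrum[mask])])

# Synthetic chest signal: 0.25 Hz breathing sampled at 30 FPS for 60 s
fs = 30.0
t = np.arange(0, 60, 1.0 / fs)
depth = 1.5 + 0.005 * np.sin(2 * np.pi * 0.25 * t)   # metres
rate = breathing_rate_hz(depth, fs)                  # ≈ 0.25 Hz (15 breaths/min)
```

At 20-35 FPS, a one-minute window gives a frequency resolution of roughly 0.017 Hz, comfortably enough to resolve rates between 0.2 Hz and 0.33 Hz.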
- Keywords
- breathing analysis, computational intelligence, depth sensors, human-machine interaction, image processing, signal processing,
- MeSH
- Respiratory Rate physiology MeSH
- Adult MeSH
- Respiration MeSH
- Middle Aged MeSH
- Humans MeSH
- Signal Processing, Computer-Assisted MeSH
- Polysomnography methods MeSH
- Sensitivity and Specificity MeSH
- Sleep physiology MeSH
- Sleep Apnea Syndromes physiopathology MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results verify the correspondence between the breathing frequency evaluated from the image and infrared data of the mouth area and that evaluated from the thorax movement recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements with Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate between different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.
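Heart rate estimation from the time evolution of the mouth-area frames follows the same spectral principle, just in a higher frequency band. A hedged sketch, not the paper's exact method; the band limits and synthetic intensity signal are assumptions:

```python
import numpy as np

def heart_rate_bpm(intensity, fs, band=(0.8, 3.0)):
    """Dominant frequency of a mean-intensity time series, in beats per minute."""
    x = intensity - np.mean(intensity)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(60.0 * freqs[mask][np.argmax(spectrum[mask])])

# Synthetic mouth-area mean intensity: 1.2 Hz (72 bpm) pulse component + noise
fs = 30.0
t = np.arange(0, 30, 1.0 / fs)
rng = np.random.default_rng(0)
intensity = 100.0 + 0.5 * np.sin(2 * np.pi * 1.2 * t) \
            + 0.1 * rng.standard_normal(t.size)
bpm = heart_rate_bpm(intensity, fs)   # ≈ 72 bpm
```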
- Keywords
- MS Kinect data acquisition, big data processing, breathing analysis, computational intelligence, human–machine interaction, image and depth sensors, neurological disorders, visualization,
- MeSH
- Video Recording MeSH
- Time Factors MeSH
- Respiration * MeSH
- Humans MeSH
- Monitoring, Physiologic instrumentation MeSH
- Movement MeSH
- Heart Rate physiology MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
The paper is devoted to the study of facial region temperature changes using a simple thermal imaging camera and to the comparison of their time evolution with the pectoral area motion recorded by the MS Kinect depth sensor. The goal of this research is to propose the use of video records as an alternative diagnostic of breathing disorders, allowing their analysis in the home environment as well. The methods proposed include (i) specific image processing algorithms for detecting facial parts with periodic temperature changes; (ii) computational intelligence tools for analysing the associated video sequences; and (iii) digital filters and spectral estimation tools for processing the depth matrices. Machine learning applied to thermal imaging camera calibration allowed the recognition of its digital information with an accuracy close to 100% for the classification of individual temperature values. The proposed detection of breathing features was used for the monitoring of physical activities on a home exercise bike. The results include a decrease of breathing temperature and its frequency after a load, with mean values of -0.16 °C/min and -0.72 bpm, respectively, for the given set of experiments. The proposed methods verify that thermal and depth cameras can be used as additional tools for multimodal detection of breathing patterns.
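The reported breathing-temperature decrease of -0.16 °C/min is a linear trend over time. A minimal sketch of how such a slope could be estimated by least squares; the synthetic temperature series is an assumption, not the study's data:

```python
import numpy as np

def trend_per_minute(times_s, values):
    """Least-squares linear trend of a signal, expressed per minute."""
    slope_per_s = np.polyfit(times_s, values, 1)[0]
    return float(60.0 * slope_per_s)

# Synthetic exhaled-air temperature drifting down by 0.16 °C/min after a load
t = np.arange(0.0, 300.0, 1.0)           # 5 minutes sampled at 1 Hz
temp = 34.0 - (0.16 / 60.0) * t
cooling = trend_per_minute(t, temp)      # ≈ -0.16 °C/min
```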
This study presents a novel approach to the application of Unmanned Aerial Vehicle (UAV) imaging for the conjoint assessment of snow depth and the winter leaf area index (LAI), a structural property of vegetation affecting snow accumulation and snowmelt. The snow depth estimation, based on a multi-temporal set of high-resolution digital surface models (DSMs) of snow-free and snow-covered conditions, taken in a partially healthy to insect-induced Norway spruce forest and meadow coverage area within the Šumava National Park (Šumava NP) in the Czech Republic, was assessed over a winter season. The UAV-derived DSMs featured a resolution of 0.73-1.98 cm/pix. By subtracting the DSMs, the snow depth was determined and compared with manual snow probes taken at ground control point (GCP) positions; the root mean square error (RMSE) ranged between 0.08 m and 0.15 m. A comparative analysis of UAV-based snow depth with a denser network of arranged manual snow depth measurements yielded an RMSE between 0.16 m and 0.32 m. LAI assessment, crucial for the correct interpretation of the snow depth distribution in forested areas, was based on downward-looking UAV images taken in the forest regime. To identify the canopy characteristics from downward-looking UAV images, the snow background was used instead of the sky fraction. Two conventional methods for effective winter LAI retrieval, the LAI-2200 plant canopy analyzer and digital hemispherical photography (DHP), were used as a reference. Apparent was the effect of canopy density and ground properties on the accuracy of DSM assessment based on UAV imaging when compared to the field survey. The UAV-based LAI estimates were comparable to values derived from the LAI-2200 plant canopy analyzer and DHP. Comparison with the conventional survey indicated that spring snow depth was overestimated, and spring LAI was underestimated, by the UAV photogrammetry method.
Since the snow depth and LAI parameters are essential for snowpack studies, this combined method will be of great value in the future, simplifying the snow depth and LAI assessment of snow dynamics.
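The core of the snow-depth workflow, differencing co-registered DSMs and validating against snow probes with an RMSE, can be sketched as follows. The toy DSM tiles and probe values are illustrative, not the study's data:

```python
import numpy as np

def snow_depth(dsm_snow, dsm_bare):
    """Per-pixel snow depth as the difference of two co-registered DSMs (metres)."""
    return dsm_snow - dsm_bare

def rmse(estimates, reference):
    """Root mean square error between two sets of depth values."""
    e = np.asarray(estimates) - np.asarray(reference)
    return float(np.sqrt(np.mean(e ** 2)))

# Toy 3x3 DSM tiles (metres above datum)
bare = np.array([[500.10, 500.12, 500.15],
                 [500.20, 500.22, 500.25],
                 [500.30, 500.31, 500.33]])
snow = bare + 0.42                       # uniform 0.42 m snowpack
depth = snow_depth(snow, bare)

# RMSE against manual probes at three GCP positions
probes = [0.40, 0.45, 0.41]
uav = [depth[0, 0], depth[1, 1], depth[2, 2]]
err = rmse(uav, probes)                  # ≈ 0.022 m
```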
- Keywords
- UAV, canopy closure, disturbance, forest, leaf area index, snow depth,
- Publication type
- Journal Article MeSH
This work presents a novel transformer-based method for hand pose estimation, DePOTR. We test the DePOTR method on four benchmark datasets, where DePOTR outperforms other transformer-based methods while achieving results on par with other state-of-the-art methods. To further demonstrate the strength of DePOTR, we propose a novel multi-stage approach working from the full-scene depth image, MuTr. MuTr removes the necessity of having two different models in the hand pose estimation pipeline (one for hand localization and one for pose estimation) while maintaining promising results. To the best of our knowledge, this is the first successful attempt to use the same model architecture in the standard setup and simultaneously in the full-scene image setup while achieving competitive results in both. On the NYU dataset, DePOTR and MuTr reach precision equal to 7.85 mm and 8.71 mm, respectively.
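Precision figures such as the 7.85 mm and 8.71 mm quoted for NYU are commonly reported as the mean per-joint Euclidean error. A minimal sketch of that metric (the array shapes and toy data are assumptions, not the paper's evaluation code):

```python
import numpy as np

def mean_joint_error_mm(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth joints (mm).

    pred, gt : arrays of shape (n_frames, n_joints, 3), in millimetres
    """
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

# Toy example: 2 frames x 14 joints, predictions offset by 6 mm along x
rng = np.random.default_rng(1)
gt = rng.uniform(-100, 100, size=(2, 14, 3))
pred = gt + np.array([6.0, 0.0, 0.0])
err = mean_joint_error_mm(pred, gt)      # 6.0 mm
```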
- Keywords
- hand pose estimation, multi-stage, neural network, transformer,
- MeSH
- Benchmarking MeSH
- Upper Extremity * MeSH
- Hand * diagnostic imaging MeSH
- Electric Power Supplies MeSH
- Knowledge MeSH
- Publication type
- Journal Article MeSH
Although the field of sleep study has greatly developed over recent years, the most common and efficient way to detect sleep issues remains a sleep examination performed in a sleep laboratory. This examination measures several vital signals by polysomnograph during a full night's sleep using multiple sensors connected to the patient's body. Nevertheless, despite being the gold standard, the connection of the sensors and the unfamiliar environment inevitably impact the quality of the patient's sleep and the examination itself. Therefore, with the novel development of accurate and affordable 3D sensing devices, new approaches for non-contact sleep study have emerged. These methods utilize different techniques to extract the same breathing parameters, but with contactless methods. However, to enable reliable remote extraction, these methods require accurate identification of the basic region of interest (ROI), i.e., the patient's chest area. The lack of automated ROI segmentation of 3D time series is currently holding back the development process. We propose an automatic chest area segmentation algorithm that, given a time series of 3D frames containing a sleeping patient, outputs a segmentation image with the pixels that correspond to the chest area. Beyond significantly speeding up the development of the non-contact methods, accurate automatic segmentation can enable a more precise feature extraction. In addition, further tests of the algorithm on existing data demonstrate its ability to improve the sensitivity of a prior solution that uses manual ROI selection. The approach is on average 46.9% more sensitive, with a maximal improvement of 220%, when compared to manual ROI selection. All of this can pave the way for non-contact algorithms to become leading candidates to replace the traditional methods used today.
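One simple way such an automatic chest segmentation can work is to score each depth pixel by its temporal spectral power in the breathing band and keep the strongest pixels. This is a simplified sketch under that assumption, not the proposed algorithm itself:

```python
import numpy as np

def chest_mask(depth_frames, fs, band=(0.1, 0.5), quantile=0.95):
    """Rough chest-area mask from a depth time series.

    Scores each pixel by its spectral power inside the breathing band and
    keeps the top-scoring fraction.  depth_frames: (n_frames, height, width).
    """
    x = depth_frames - depth_frames.mean(axis=0)        # drop the static scene
    power = np.abs(np.fft.rfft(x, axis=0)) ** 2
    freqs = np.fft.rfftfreq(depth_frames.shape[0], d=1.0 / fs)
    score = power[(freqs >= band[0]) & (freqs <= band[1])].sum(axis=0)
    return score >= np.quantile(score, quantile)

# Synthetic scene: 30 s at 10 FPS, 16x16 pixels, breathing only in a 4x4 patch
fs, n = 10.0, 300
t = np.arange(n) / fs
frames = np.full((n, 16, 16), 2.0)
frames[:, 6:10, 6:10] += 0.005 * np.sin(2 * np.pi * 0.3 * t)[:, None, None]
mask = chest_mask(frames, fs)            # True exactly on the moving patch
```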
- Keywords
- 3D data processing, Breathing analysis, Depth sensors, Human-machine interaction, MS Kinect data acquisition, Segmentation,
- MeSH
- Algorithms * MeSH
- Respiration MeSH
- Humans MeSH
- Image Processing, Computer-Assisted methods MeSH
- Polysomnography MeSH
- Sleep MeSH
- Imaging, Three-Dimensional * methods MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
This work focuses on improving a camera system for sensing a workspace in which dynamic obstacles need to be detected. The currently available state-of-the-art solution (MoveIt!) processes data in a centralized manner from cameras that have to be registered before the system starts. Our solution enables distributed data processing and a dynamic change in the number of sensors at runtime. The distributed camera data processing is implemented using a dedicated control unit on which the filtering is performed by comparing the real and expected depth images. Measurements of the processing speed of all sensor data into a global voxel map were compared between the centralized system (MoveIt!) and the new distributed system as part of a performance benchmark. The distributed system is more flexible in terms of sensitivity to the number of cameras, has better framerate stability, and allows the number of cameras to be changed on the fly. The effects of voxel grid size and camera resolution were also compared during the benchmark, where the distributed system showed better results. Finally, the overhead of data transmission in the network was discussed, where the distributed system is considerably more efficient. The distributed system proves to be faster by 38.7% with one camera and 71.5% with four cameras.
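The filtering step, comparing real and expected depth images, can be illustrated as a per-pixel mask of unexpected geometry. The tolerance value and toy images below are assumptions, not the system's actual parameters:

```python
import numpy as np

def obstacle_mask(real_depth, expected_depth, tol=0.05):
    """Mark pixels where the measured depth is closer than the expected
    (static scene plus known robot) depth by more than `tol` metres."""
    valid = real_depth > 0                       # 0 encodes "no measurement"
    return valid & (expected_depth - real_depth > tol)

# Expected: empty workspace wall 3 m away; real: an object appears at 1.2 m
expected = np.full((4, 6), 3.0)
real = expected.copy()
real[1:3, 2:4] = 1.2
mask = obstacle_mask(real, expected)             # True only at the object pixels
```

Pixels flagged by such a mask would then be inserted into the global voxel map, while everything matching the expected depth is filtered out.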
- Keywords
- collaboration, distributed processing, human–robot interaction, obstacles detection, sensors network, workspace monitoring,
- MeSH
- Computer Communication Networks * MeSH
- Publication type
- Journal Article MeSH
BACKGROUND: Analysis of gait features provides important information during the treatment of neurological disorders, including Parkinson's disease. It is also used to observe the effects of medication and rehabilitation. The methodology presented in this paper enables the detection of selected gait attributes by Microsoft (MS) Kinect image and depth sensors to track movements in three-dimensional space. METHODS: The experimental part of the paper is devoted to the study of three sets of individuals: 18 patients with Parkinson's disease, 18 healthy age-matched individuals, and 15 students. The methodological part of the paper includes the use of digital signal-processing methods for rejecting gross data-acquisition errors, segmenting video frames, and extracting gait features. The proposed algorithm describes methods for estimating the leg length, normalised average stride length (SL), and gait velocity (GV) of the individuals in the given sets using MS Kinect data. RESULTS: The main objective of this work involves the recognition of selected gait disorders in both clinical and everyday settings. The results obtained include an evaluation of leg lengths, with a mean difference of 0.004 m in the complete set of 51 individuals studied, and of the gait features of patients with Parkinson's disease (SL: 0.38 m, GV: 0.61 m/s) and an age-matched reference set (SL: 0.54 m, GV: 0.81 m/s). Combining both features allowed for the use of neural networks to classify and evaluate the sensitivity, specificity, and accuracy. The achieved accuracy was 97.2%, which suggests the potential use of MS Kinect image and depth sensors for these applications. CONCLUSIONS: Discussion points include the possibility of using the MS Kinect sensors as inexpensive replacements for complex multi-camera systems and treadmill walking in gait-feature detection for the recognition of selected gait disorders.
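Given heel strikes already detected in the 3D data, the stride length (SL) and gait velocity (GV) features can be sketched as follows. The heel-strike detection is assumed done upstream; the toy trajectory is chosen to reproduce the reference-set values SL = 0.54 m and GV = 0.81 m/s, not taken from the study's data:

```python
import numpy as np

def gait_features(ankle_xy, strike_frames, fs):
    """Mean stride length (m) and gait velocity (m/s).

    ankle_xy      : (n_frames, 2) ground-plane ankle positions (metres)
    strike_frames : indices of successive heel strikes of the same foot
    fs            : frame rate of the depth sensor
    """
    strikes = ankle_xy[strike_frames]
    stride_len = float(np.mean(np.linalg.norm(np.diff(strikes, axis=0), axis=1)))
    duration = (strike_frames[-1] - strike_frames[0]) / fs
    distance = float(np.linalg.norm(strikes[-1] - strikes[0]))
    return stride_len, distance / duration

# Toy walk: ankle advances 0.54 m per stride, one stride per 2/3 s at 30 FPS
fs = 30.0
strikes = np.array([0, 20, 40, 60])
ankle = np.zeros((61, 2))
ankle[:, 0] = np.linspace(0.0, 3 * 0.54, 61)
sl, gv = gait_features(ankle, strikes, fs)   # SL ≈ 0.54 m, GV ≈ 0.81 m/s
```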
- MeSH
- Algorithms MeSH
- Gait * MeSH
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Nerve Net MeSH
- Parkinson Disease physiopathology MeSH
- Aged, 80 and over MeSH
- Aged MeSH
- Case-Control Studies MeSH
- Imaging, Three-Dimensional methods MeSH
- Acceleration MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Aged, 80 and over MeSH
- Aged MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
Objective. This work presents a method for enhanced detection, imaging, and measurement of the thermal neutron flux. Approach. Measurements were performed in a water tank, while the detector was positioned out-of-field of a 20 MeV ultra-high pulse dose rate electron beam. A semiconductor pixel detector Timepix3 with a silicon sensor partially covered by a ⁶LiF neutron converter was used to measure the flux, spatial, and time characteristics of the neutron field. To provide absolute measurements of thermal neutron flux, the detection efficiency calibration of the detectors was performed in a reference thermal neutron field. Neutron signals are recognized and discriminated against other particles such as gamma rays and x-rays. This is achieved by the resolving power of the pixel detector using machine learning algorithms and high-resolution pattern recognition analysis of the high-energy tracks created by thermal neutron interactions in the converter. Main results. The resulting thermal neutron equivalent dose was obtained using a conversion factor (2.13(10) pSv·cm²) from thermal neutron fluence to thermal neutron equivalent dose obtained by Monte Carlo simulations. The calibrated detectors were used to characterize scattered radiation created by electron beams. The results at 12.0 cm depth on the beam axis inside the water, for a delivered dose per pulse of 1.85 Gy (pulse length of 2.4 μs) at the reference depth, showed a flux contribution of 4.07(8) × 10³ particles·cm⁻²·s⁻¹ and an equivalent dose of 1.73(3) nSv per pulse, which is lower by ∼9 orders of magnitude than the delivered dose. Significance. The presented methodology for in-water measurements and identification of characteristic thermal neutron tracks serves for the selective quantification of the equivalent dose from thermal neutrons in out-of-field particle therapy.
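The conversion from thermal-neutron fluence to equivalent dose is a single multiplication by the quoted Monte Carlo derived coefficient. A sketch of that arithmetic; the per-pulse fluence value below is illustrative, chosen to reproduce the reported 1.73 nSv per pulse, and is not a measured quantity from the paper:

```python
# Fluence-to-dose conversion coefficient quoted in the abstract: 2.13 pSv·cm²
CONV_PSV_CM2 = 2.13

def equivalent_dose_nsv(fluence_per_cm2):
    """Equivalent dose in nSv for a given thermal-neutron fluence (cm⁻²)."""
    return fluence_per_cm2 * CONV_PSV_CM2 * 1e-3   # pSv -> nSv

# Illustrative per-pulse fluence (neutrons/cm²) consistent with 1.73 nSv/pulse
dose = equivalent_dose_nsv(812.0)                  # ≈ 1.73 nSv
```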
- Keywords
- 6LiF converter, FLASH electron radiotherapy, Timepix3 pixel detector, equivalent dose, out-of-field dose from neutrons, particle type discrimination, thermal neutrons,
- MeSH
- Algorithms * MeSH
- Electrons * MeSH
- Calibration MeSH
- Neutrons MeSH
- Gamma Rays MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
PURPOSE: The aim of this paper is to investigate the limits of LET monitoring of therapeutic carbon ion beams with miniaturized microdosimetric detectors. METHODS: Four different miniaturized microdosimeters have been used at the 62 MeV/u ¹²C beam of the INFN Southern National Laboratory (LNS) of Catania for this purpose, i.e. a mini-TEPC and a GEM-microdosimeter, both filled with propane gas, and a silicon and a diamond microdosimeter. The ȳ_D (dose-mean lineal energy) values, measured at different depths in a PMMA phantom, have been compared with LET_D (dose-mean LET) values in water, calculated at the same water-equivalent depth with a Monte Carlo simulation setup based on the GEANT4 toolkit. RESULTS: In these first measurements, no detector was found to be significantly better than the others as a LET monitor. The ȳ_D relative standard deviation has been assessed to be 13% for all the detectors. On average, the ratio between ȳ_D and LET_D values is 0.9 ± 0.3, spanning from 0.73 ± 0.08 (in the proximal edge and Bragg peak region) to 1.1 ± 0.3 at the distal edge. CONCLUSIONS: All four microdosimeters are able to monitor the dose-mean LET with 11% precision up to the distal edge. In the distal edge region, the ratio of ȳ_D to LET_D changes. Such variability is possibly due to a dependence of the detector response on depth, since the particle mean path length inside the detectors can vary, especially in the distal edge region.
- MeSH
- Radiotherapy Dosage MeSH
- Equipment Design MeSH
- Phantoms, Imaging MeSH
- Carbon Isotopes therapeutic use MeSH
- Calibration MeSH
- Monte Carlo Method MeSH
- Miniaturization MeSH
- Computer Simulation MeSH
- Polymethyl Methacrylate MeSH
- Radiometry instrumentation MeSH
- Heavy Ion Radiotherapy instrumentation MeSH
- Water MeSH
- Publication type
- Journal Article MeSH
- Comparative Study MeSH
- Names of Substances
- Carbon Isotopes MeSH
- Polymethyl Methacrylate MeSH
- Water MeSH