In today's biometric and commercial settings, state-of-the-art image processing relies solely on artificial intelligence and machine learning, which provide a high level of accuracy. However, these approaches are deeply rooted in abstract, complex "black-box" systems, and when they are applied to forensic image identification, concerns about transparency and accountability emerge. This study explores the impact of two challenging factors in automated facial identification: facial expressions and head poses. The sample comprised 3D faces with nine prototype expressions, collected from 41 participants (13 males, 28 females) of European descent aged 19.96 to 50.89 years. Pre-processing involved converting the 3D models to 2D color images (256 × 256 px). The probes consisted of 9 images per individual with a neutral expression and head poses varying by 5° in both the left-to-right (yaw) and up-and-down (pitch) directions. A second set of 3,610 images per individual, covering viewpoints in 5° increments from -45° to 45° for both head rotations and all facial expressions, formed the targets. Pair-wise comparisons using ArcFace, a state-of-the-art face identification algorithm, yielded 54,615,690 dissimilarity scores. The results indicate that minor head deviations in probes have minimal impact. However, performance diminished as targets deviated from the frontal position. Side-to-side (yaw) movements were less influential than up-and-down (pitch) movements, and downward pitch had less impact than upward pitch. The lowest accuracy occurred at an upward pitch of 45°. Dissimilarity scores were consistently higher for males than for females across all studied factors, and performance diverged particularly for upward movements starting at 15°. Among the tested facial expressions, happiness and contempt performed best, while disgust exhibited the lowest AUC values.
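The pair-wise scoring step described above can be illustrated with a short sketch: each face image is assumed to have already been converted to a fixed-length ArcFace-style embedding, and dissimilarity is taken as one minus cosine similarity. The embedding dimensionality, the random stand-in vectors, and the use of NumPy are illustrative assumptions; the study does not specify its scoring code.

```python
# Sketch of pair-wise dissimilarity scoring between probe and target face
# embeddings; embeddings are assumed to come from an ArcFace-style network
# (random stand-ins are used here).
import numpy as np

def cosine_dissimilarity(probes: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Return an (n_probes, n_targets) matrix of 1 - cosine similarity."""
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    return 1.0 - p @ t.T

rng = np.random.default_rng(0)
probe_embeddings = rng.normal(size=(9, 512))      # 9 probe images per individual
target_embeddings = rng.normal(size=(3610, 512))  # 3,610 target images per individual
scores = cosine_dissimilarity(probe_embeddings, target_embeddings)
print(scores.shape)  # (9, 3610) dissimilarity scores for one probe-target pairing
```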
- MeSH
- Algorithms * MeSH
- Automated Facial Recognition * methods MeSH
- Biometric Identification methods MeSH
- Adult MeSH
- Head Movements physiology MeSH
- Middle Aged MeSH
- Humans MeSH
- Young Adult MeSH
- Face anatomy & histology MeSH
- Image Processing, Computer-Assisted methods MeSH
- Posture physiology MeSH
- Facial Expression * MeSH
- Imaging, Three-Dimensional MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Young Adult MeSH
- Male MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
In the last few years, the classification of cells by machine learning has become frequently used in biology. However, most approaches are based on morphometric (MO) features, which are not quantitative in terms of cell mass. This may result in poor classification accuracy. Here, we study the potential contribution of coherence-controlled holographic microscopy, which enables quantitative phase imaging, to the classification of cell morphologies. We compare our approach with the commonly used method based on MO features. We tested both classification approaches in an experiment with nutritionally deprived cancer tissue cells, employing several supervised machine learning algorithms. Most of the classifiers performed better when quantitative phase features were employed. Based on the results, it can be concluded that the quantitative phase features played an important role in improving the performance of the classification. The methodology could be a valuable aid in refining the monitoring of live cells in an automated fashion. We believe that coherence-controlled holographic microscopy, as a tool for quantitative phase imaging, offers all the preconditions for accurate automated analysis of live cell behavior while enabling noninvasive, label-free imaging with sufficient contrast and high spatiotemporal phase sensitivity.
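As a hedged illustration of the distinction drawn above, the sketch below contrasts morphometric features, which depend only on the segmented cell mask, with quantitative phase features, which weight pixels by the measured phase shift and therefore scale with cell dry mass. The synthetic phase image, the chosen feature names, and the use of scikit-image are assumptions for illustration, not the study's pipeline.

```python
# Sketch: morphometric (MO) features from the cell mask vs. quantitative phase
# (QPI) features from the phase values; the circular "cell" is synthetic.
import numpy as np
from skimage import measure

rng = np.random.default_rng(1)
phase = np.zeros((256, 256))
rr, cc = np.ogrid[:256, :256]
cell = (rr - 128) ** 2 + (cc - 128) ** 2 < 40 ** 2          # hypothetical cell mask
phase[cell] = 1.5 + 0.2 * rng.normal(size=int(cell.sum()))  # phase shift (rad)

props = measure.regionprops(measure.label(cell), intensity_image=phase)[0]
mo_features = {"area": props.area, "perimeter": props.perimeter,
               "eccentricity": props.eccentricity}          # mask-only descriptors
qpi_features = {"integrated_phase": phase[cell].sum(),      # proportional to dry mass
                "mean_phase": phase[cell].mean(),
                "phase_variance": phase[cell].var()}
print(mo_features, qpi_features, sep="\n")
```

Either feature set can then be fed to the supervised classifiers mentioned above (for example via scikit-learn) to reproduce the MO-versus-QPI comparison.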
- MeSH
- Algorithms MeSH
- Cells classification cytology MeSH
- Holography methods MeSH
- Humans MeSH
- Microscopy methods MeSH
- Pattern Recognition, Automated MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
Clinical islet transplantation programs rely on the capacities of individual centers to quantify isolated islets. Current computer-assisted methods require input from human operators. Here we describe two machine learning algorithms for islet quantification: the trainable islet algorithm (TIA) and the nontrainable purity algorithm (NPA). These algorithms automatically segment pancreatic islets and exocrine tissue in microscopic images in order to count individual islets and calculate islet volume and purity. References for islet counts and volumes were generated by the fully manual segmentation (FMS) method, which was validated against the internal DNA standard. References for islet purity were generated by the expert visual assessment (EVA) method, which was validated against the FMS method. The TIA is intended to automatically evaluate micrographs of isolated islets from future donors after being trained on micrographs from a limited number of past donors. Its training ability was first evaluated on 46 images from four donors. Pixel-to-pixel comparison, binary statistics, and islet DNA concentration indicated that the TIA was successfully trained, regardless of the color differences of the original images. Next, the TIA trained on the four donors was validated on an additional 36 images from nine independent donors. The TIA was fast (67 s/image), correlated very well with the FMS method (R² = 1.00 and 0.92 for islet volume and islet count, respectively), and had small relative errors (REs; 0.06 and 0.07 for islet volume and islet count, respectively). Validation of the NPA against the EVA method using 70 images from 12 donors revealed that the NPA had a reasonable speed (69 s/image), an acceptable RE (0.14), and correlated well with the EVA method (R² = 0.88). Our results demonstrate that a fully automated analysis of clinical-grade micrographs of isolated pancreatic islets is feasible. The algorithms described herein will be freely available as a plugin for the Fiji platform.
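A minimal sketch of the downstream quantification step (counting segmented islets and estimating purity from area fractions) is given below, assuming a grayscale micrograph in which islets appear darker than exocrine tissue. The Otsu threshold, the synthetic image, and the simplified purity definition are assumptions; the TIA and NPA themselves are not reproduced here.

```python
# Sketch of islet counting and a simplified purity estimate from a segmented
# micrograph; the image and thresholds are illustrative stand-ins.
import numpy as np
from skimage import filters, measure, morphology

rng = np.random.default_rng(2)
image = rng.normal(loc=0.7, scale=0.05, size=(512, 512))  # stand-in micrograph
image[100:150, 100:160] = 0.30                            # hypothetical islet
image[300:340, 200:260] = 0.35                            # hypothetical islet

islet_mask = image < filters.threshold_otsu(image)            # dark islet pixels
islet_mask = morphology.remove_small_objects(islet_mask, 64)  # drop debris
regions = measure.regionprops(measure.label(islet_mask))

islet_count = len(regions)
islet_area = sum(r.area for r in regions)
# Simplification: purity is taken against the whole field here; the NPA relates
# islet area to total (islet + exocrine) tissue area instead.
purity = islet_area / islet_mask.size
print(islet_count, islet_area, round(purity, 3))
```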
This paper presents the design of two new methods for detecting the speech fundamental frequency (f0) in sustained vowel phonations, and these detection methods, which use cross-correlation to detect f0, are tested. Each algorithm consists of a set of pre-processing and processing steps. The first method is based on the detection of maxima, and the second is based on band-pass filtration. In contrast to other commonly used f0 detection methods, our algorithms are designed with speech pathology detection in mind. They also enable the detection of other voice parameters such as jitter, shimmer, and the harmonic-to-noise ratio (HNR). The results of this study are compared with a database labeled with the help of the Praat algorithm. The maximum-based method achieves a success rate of 88.4% and the band-pass method 83.9%. The detection leads to a fully automated method that robustly detects f0.
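A hedged sketch of the general cross-correlation idea is given below: the fundamental period of a sustained vowel is estimated as the lag of the strongest autocorrelation peak within a plausible pitch range. The synthetic signal, the 60-400 Hz search range, and the frame handling are assumptions for illustration; the paper's maximum-detection and band-pass methods are not reproduced.

```python
# Sketch of autocorrelation-based f0 estimation for a sustained vowel;
# the signal below is synthetic with a known f0 of 120 Hz.
import numpy as np

fs = 16000                                    # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
signal = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 240 * t)

def estimate_f0(frame: np.ndarray, fs: int, f0_min: float = 60, f0_max: float = 400) -> float:
    """Pick the autocorrelation peak inside the plausible pitch-lag range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min, lag_max = int(fs / f0_max), int(fs / f0_min)
    best_lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return fs / best_lag

print(round(estimate_f0(signal, fs), 1))      # ~120 Hz
```

Once f0 and the per-cycle periods are available, cycle-to-cycle measures such as jitter, shimmer, and HNR can be derived from the same analysis.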
Automated insulin delivery systems (AID) represent a major advance in the treatment of type 1 diabetes. These systems automate insulin delivery by integrating continuous glucose monitoring, control algorithms and insulin pump actions. Despite their advances, there is a need to adjust the settings in specific situations, either by using special features or even by manually adjusting the dose. The article provides an overview of the possibilities of adjustments in the insulin dosing for intercurrent disease, alcohol consumption and increased physical activity for four certified automatic insulin delivery systems available in the Czech Republic.
The process of manual species identification is a daunting task, so much so that the number of taxonomists appears to be declining. To assist taxonomists, many methods and algorithms have been proposed for developing semi-automated and fully automated species identification systems. While semi-automated tools require manual intervention by a domain expert, fully automated tools are assumed to be less reliable than manual or semi-automated identification tools. Hence, in this study we investigate the accuracy of fully automated and semi-automated models for species identification. We built fully automated and semi-automated species classification models using a monogenean species image dataset. Morphologically, monogeneans are differentiated based on the characteristics of the haptoral bars, anchors, marginal hooks and reproductive organs (the male and female copulatory organs). Landmarks (in the semi-automated model) and shape morphometric features (in the fully automated model) were extracted from images of four monogenean species and then classified using a k-nearest neighbour classifier and an artificial neural network. In the semi-automated models, a classification accuracy of 96.67 % was obtained using the k-nearest neighbour classifier and 97.5 % using the artificial neural network, whereas in the fully automated models, a classification accuracy of 90 % was obtained using the k-nearest neighbour classifier and 98.8 % using the artificial neural network. Under cross-validation, the semi-automated models achieved 91.2 %, whereas the fully automated models performed slightly better at 93.75 %.
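The two classifiers named above can be illustrated with a short scikit-learn sketch that evaluates a k-nearest neighbour classifier and a small neural network under cross-validation. The feature matrix is a random stand-in for the landmark or shape morphometric features; the hidden-layer size, the number of neighbours, and the dataset dimensions are illustrative assumptions.

```python
# Sketch: k-NN vs. a small artificial neural network with 5-fold cross-validation
# on stand-in morphometric features for four species.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 20))     # e.g. flattened landmark coordinates per specimen
y = rng.integers(0, 4, size=120)   # four monogenean species (stand-in labels)

knn = KNeighborsClassifier(n_neighbors=3)
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)

print("k-NN CV accuracy:", cross_val_score(knn, X, y, cv=5).mean().round(2))
print("ANN  CV accuracy:", cross_val_score(ann, X, y, cv=5).mean().round(2))
```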
OBJECTIVE: Automated behavioral state classification in intracranial EEG (iEEG) recordings may be beneficial for iEEG interpretation and for quantifying sleep patterns, enabling behavioral-state-dependent neuromodulation therapy in next-generation implantable brain stimulation devices. Here, we introduce a fully automated unsupervised framework to differentiate between awake (AW), sleep (N2), and slow wave sleep (N3) states using iEEG alone, validated against expert-scored polysomnography. APPROACH: Data from eight patients undergoing evaluation for epilepsy surgery (age [Formula: see text], three female) with intracranial depth electrodes for iEEG monitoring were included. Spectral power features (0.1-235 Hz) spanning several frequency bands from a single electrode were used to classify the behavioral states of patients into AW, N2, and N3. MAIN RESULTS: Overall, a classification accuracy of 94%, with 94% sensitivity and 93% specificity, was achieved across the eight subjects using multiple spectral power features from a single electrode. Classification performance for N3 sleep was significantly better (95% accuracy, 95% sensitivity, 93% specificity) than for the N2 sleep phase (87% accuracy, 78% sensitivity, 96% specificity). SIGNIFICANCE: Automated, unsupervised, and robust classification of behavioral states based on iEEG data is possible, and it is feasible to incorporate these algorithms into future implantable devices with limited computational power, memory, and number of electrodes for brain monitoring and stimulation.
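A minimal sketch of this kind of pipeline is shown below: log band powers are computed for 30-s epochs of a single iEEG channel and clustered without supervision into three states. The synthetic signal, the sampling rate, the frequency bands, the use of Welch's method, and k-means with three clusters are all illustrative assumptions rather than the paper's exact feature set or classifier.

```python
# Sketch: spectral band-power features per 30-s epoch from one channel,
# clustered into three unsupervised behavioral states.
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

fs = 500                                          # assumed sampling rate (Hz)
rng = np.random.default_rng(4)
ieeg = rng.normal(size=fs * 60 * 10)              # 10 min of stand-in iEEG

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}

def band_powers(segment: np.ndarray) -> list:
    freqs, psd = welch(segment, fs=fs, nperseg=fs * 4)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands.values()]

epoch = fs * 30                                   # 30-s epochs
features = np.array([band_powers(ieeg[i:i + epoch])
                     for i in range(0, len(ieeg) - epoch + 1, epoch)])
states = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(np.log(features))
print(states)                                     # cluster labels per epoch (AW/N2/N3)
```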
- MeSH
- Algorithms MeSH
- Wakefulness physiology MeSH
- Behavior physiology MeSH
- Adult MeSH
- Electrocorticography methods MeSH
- Epilepsy surgery MeSH
- Deep Brain Stimulation MeSH
- Electrodes, Implanted MeSH
- Middle Aged MeSH
- Humans MeSH
- Polysomnography MeSH
- Reproducibility of Results MeSH
- Sleep, Slow-Wave physiology MeSH
- Sleep Stages physiology MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
- Research Support, N.I.H., Extramural MeSH
Purpose: The purpose of this research note is to provide a performance comparison of available algorithms for the automated evaluation of oral diadochokinesis (DDK) using speech samples from patients with amyotrophic lateral sclerosis (ALS). Method: Four different algorithms based on a wide range of signal processing approaches were tested on a sequential motion rate /pa/-/ta/-/ka/ syllable repetition paradigm collected from 18 patients with ALS and 18 age- and gender-matched healthy controls (HCs). Results: For a 10-ms tolerance value, the best temporal detection of syllable position was achieved for ALS patients using a traditional signal processing approach based on a combination of filtering in the spectrogram, Bayesian detection, and polynomial thresholding, with an accuracy rate of 74.4%, and for HCs using a deep learning approach, with an accuracy rate of 87.6%. Compared to HCs, a slow diadochokinetic rate (p < .001) and diadochokinetic irregularity (p < .01) were detected in ALS patients. Conclusions: The approaches using deep learning or multiple-step combinations of advanced signal processing methods provided a more robust solution for estimating oral DDK variables than simpler approaches based on rough segmentation of the signal envelope. The automated acoustic assessment of oral diadochokinesis shows excellent potential for monitoring bulbar disease progression in individuals with ALS.
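For contrast with the advanced methods above, the sketch below implements the simpler envelope-based idea that the study found less robust: syllable positions in a /pa/-/ta/-/ka/ recording are taken as peaks of the smoothed amplitude envelope, and the DDK rate and irregularity follow from the inter-syllable intervals. The synthetic signal, smoothing window, and peak-picking thresholds are illustrative assumptions.

```python
# Sketch of rough envelope-based syllable detection and DDK rate estimation;
# the recording is synthetic (noise bursts every 0.2 s stand in for syllables).
import numpy as np
from scipy.signal import hilbert, find_peaks

fs = 16000
rng = np.random.default_rng(5)
t = np.arange(0, 3, 1 / fs)
signal = rng.normal(scale=0.01, size=t.size)
for onset in np.arange(0.1, 3.0, 0.2):
    idx = (t >= onset) & (t < onset + 0.08)
    signal[idx] += rng.normal(scale=0.3, size=int(idx.sum()))

envelope = np.abs(hilbert(signal))
smooth = np.convolve(envelope, np.ones(800) / 800, mode="same")  # ~50-ms smoothing
peaks, _ = find_peaks(smooth, height=smooth.max() * 0.4, distance=int(0.1 * fs))

intervals = np.diff(peaks) / fs
ddk_rate = 1.0 / intervals.mean()        # syllables per second
irregularity = intervals.std() * 1000    # interval variability (ms)
print(round(ddk_rate, 2), round(irregularity, 2))
```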
- MeSH
- Acoustics MeSH
- Algorithms MeSH
- Amyotrophic Lateral Sclerosis * MeSH
- Bayes Theorem MeSH
- Humans MeSH
- Speech MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
Aims: Previous studies have demonstrated substantial variability in the manual assessment of QRS complex duration (QRSd). Disagreements in QRSd measurements have also been found among several automated algorithms tested on digitized electrocardiogram (ECG) recordings. The aim of our study was to investigate the variability of automated QRSd measurements performed by two commercially available electrocardiographs. Methods and Results: Two GE MAC 5000 (GE-1 and GE-2) electrocardiographs and two Mortara ELI 350 (Mortara-1 and Mortara-2) electrocardiographs were used in the study. Participants were recruited from patients hospitalized in the department of cardiology of a university hospital. Participants underwent up to four recording sessions within a single day, each with a different electrocardiograph, and two to four immediately successive ECG recordings were taken at each session. In 76 patients, 683 ECGs were recorded; the mean QRSd was 109.0 ± 26.1 ms. A QRSd difference ≥10 ms between the first and second intra-session ECG was found in 7, 3, 20, and 14% of ECG pairs for GE-1, GE-2, Mortara-1, and Mortara-2, respectively. No inter-session difference in QRSd was found for either manufacturer. In individual patients, the Mortara electrocardiographs calculated the mean QRSd to be longer by 7.3 ms (95% CI: 6.2-8.5 ms, P < 0.0001), with a 2.1-times (95% CI: 1.9-2.4) greater standard deviation of the mean QRSd (7.1 vs. 3.3 ms, P < 0.001). Conclusion: Electrocardiographs from the two manufacturers measured QRSd with a systematic difference and significantly different levels of precision. This may have important clinical implications for the selection of suitable candidates for cardiac resynchronization therapy.
- MeSH
- Algorithms * MeSH
- Equipment Failure Analysis MeSH
- Equipment Design MeSH
- Diagnosis, Computer-Assisted instrumentation methods MeSH
- Electrocardiography instrumentation methods MeSH
- Humans MeSH
- Reproducibility of Results MeSH
- Pattern Recognition, Automated methods MeSH
- Aged MeSH
- Sensitivity and Specificity MeSH
- Check Tag
- Humans MeSH
- Aged MeSH
- Publication type
- Journal Article MeSH
- Evaluation Study MeSH
- Comparative Study MeSH