The use of artificial intelligence as an assistive detection method in endoscopy has attracted increasing interest in recent years. Machine learning algorithms promise to improve the efficiency of polyp detection and even the optical localization of findings, all with minimal training of the endoscopist. The practical goal of this study is to analyse the Carebot computer-aided diagnosis (CAD) software for colorectal polyp detection using a convolutional neural network. The proposed binary classifier for polyp detection achieves an accuracy of up to 98%, a specificity of 0.99, and a precision of 0.96. We also discuss the need for large-scale clinical data in the development of artificial-intelligence-based models for the automatic detection of adenomas and benign neoplastic lesions.
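The reported metrics follow directly from the classifier's confusion matrix. Below is a minimal sketch of how accuracy, specificity, and precision are derived from such counts; the counts in the example are hypothetical placeholders, not results from the Carebot study.

```python
# Illustrative only: how accuracy, specificity, and precision follow from a
# binary classifier's confusion matrix. The counts below are hypothetical
# placeholders, not results from the Carebot study.

def binary_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Return accuracy, specificity, and precision for a binary detector."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)        # true-negative rate
    precision = tp / (tp + fp)          # positive predictive value
    return {"accuracy": accuracy, "specificity": specificity, "precision": precision}

print(binary_metrics(tp=45, tn=120, fp=2, fn=3))
```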
Manual and semi-automatic identification of artifacts and unwanted physiological signals in large intracerebral electroencephalographic (iEEG) recordings is time-consuming and inaccurate. To date, unsupervised methods to accurately detect iEEG artifacts are not available. This study introduces a novel machine-learning approach for the detection of artifacts in iEEG signals under clinically controlled conditions using convolutional neural networks (CNN) and benchmarks the method's performance against expert annotations. The method was trained and tested on data obtained from St Anne's University Hospital (Brno, Czech Republic) and validated on data from Mayo Clinic (Rochester, Minnesota, USA). We show that the proposed technique can be used as a generalized model for iEEG artifact detection. Moreover, a transfer learning process might be used for retraining of the generalized version to form a data-specific model. The generalized model can be efficiently retrained for use with different EEG acquisition systems and noise environments. The generalized and specialized model F1 scores on the testing dataset were 0.81 and 0.96, respectively. The CNN model provides faster, more objective, and more reproducible iEEG artifact detection compared to manual approaches.
- MeSH
- Artifacts * MeSH
- Electroencephalography methods MeSH
- Humans MeSH
- Brain physiology MeSH
- Neural Networks, Computer * MeSH
- Retrospective Studies MeSH
- Machine Learning * MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
- Research Support, N.I.H., Extramural MeSH
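The retraining (transfer learning) step described above, in which a generalized model is adapted into a data-specific one, can be sketched as follows. This is a hedged illustration only: the architecture, layer names, segment length, and checkpoint path are assumptions, not the authors' implementation.

```python
# Hedged sketch: fine-tuning a pretrained 1-D CNN artifact detector on
# site-specific iEEG segments. Architecture and checkpoint are hypothetical.
import torch
import torch.nn as nn

class ArtifactCNN(nn.Module):
    def __init__(self, n_samples: int = 1500):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), 2)  # artifact vs. physiological

    def forward(self, x):                 # x: (batch, 1, n_samples)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = ArtifactCNN()
# model.load_state_dict(torch.load("generalized_model.pt"))  # hypothetical checkpoint

# Freeze the generic feature extractor; retrain only the classifier head
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def finetune_step(segments: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on site-specific segments of shape (batch, 1, n_samples)."""
    optimizer.zero_grad()
    loss = loss_fn(model(segments), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```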
Genomic regions that encode small RNA genes exhibit characteristic patterns in their sequence, secondary structure, and evolutionary conservation. Convolutional Neural Networks are a family of algorithms that can classify data based on learned patterns. Here we present MuStARD, an application of Convolutional Neural Networks that can learn patterns associated with user-defined sets of genomic regions and scan large genomic areas for novel regions exhibiting similar characteristics. We demonstrate that MuStARD is a generic method that can be trained on different classes of human small RNA genomic loci, without the need for domain-specific knowledge, due to the automated feature and background selection processes built into the model. We also demonstrate the ability of MuStARD to identify functional elements across species by predicting mouse small RNAs (pre-miRNAs and snoRNAs) using models trained on the human genome. MuStARD can be used to filter small RNA-Seq datasets for the identification of novel small RNA loci, both intra- and inter-species, as demonstrated in three use cases of human, mouse, and fly pre-miRNA prediction. MuStARD is easy to deploy and extend to a variety of genomic classification questions. Code and trained models are freely available at gitlab.com/RBP_Bioinformatics/mustard.
- MeSH
- Algorithms MeSH
- Genomics methods MeSH
- Humans MeSH
- RNA, Small Nucleolar genetics MeSH
- MicroRNAs genetics MeSH
- Mice MeSH
- RNA, Untranslated genetics MeSH
- Neural Networks, Computer MeSH
- Software MeSH
- Computational Biology methods MeSH
- Animals MeSH
- Check Tag
- Humans MeSH
- Mice MeSH
- Animals MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
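As a hedged illustration of the general approach described above (a convolutional network scoring fixed-size genomic windows), the sketch below one-hot encodes a DNA window and scores it with a toy 1-D CNN. It covers only the sequence channel; the published MuStARD model also uses secondary structure and conservation, and the real code is at gitlab.com/RBP_Bioinformatics/mustard. All architectural choices here are illustrative assumptions.

```python
# Illustrative only: one-hot encoding of DNA windows and a toy 1-D CNN
# classifier for "small RNA locus" vs. "background". Not the MuStARD model.
import numpy as np
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA sequence as a (4, len) one-hot matrix; N stays all-zero."""
    x = np.zeros((4, len(seq)), dtype=np.float32)
    for i, b in enumerate(seq.upper()):
        if b in BASES:
            x[BASES[b], i] = 1.0
    return x

class LocusCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 32, kernel_size=11, padding=5), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=11, padding=5), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten(),
            nn.Linear(64, 1), nn.Sigmoid(),     # probability of a small RNA locus
        )

    def forward(self, x):                       # x: (batch, 4, window_length)
        return self.net(x)

window = one_hot("ACGT" * 50)                   # toy 200-nt window
score = LocusCNN()(torch.from_numpy(window).unsqueeze(0))
```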
BACKGROUND AND OBJECTIVES: Cardiovascular diseases are critical diseases and need to be diagnosed as early as possible, yet there is a lack of medical professionals in remote areas to diagnose them. Artificial intelligence-based automatic diagnostic tools can help to diagnose cardiac diseases. This work presents an automatic classification method using machine learning to diagnose multiple cardiac diseases from phonocardiogram signals. METHODS: The proposed system uses a convolutional neural network (CNN) model, chosen for its high accuracy and robustness, to automatically diagnose cardiac disorders from heart sounds. To improve accuracy in noisy environments and make the method robust, the proposed method uses data augmentation techniques for training and multi-class classification of multiple cardiac diseases. RESULTS: The model was validated on both heart sound data and augmented data using n-fold cross-validation, and results for all folds are reported in this work. The model achieved an accuracy of up to 98.60% on the test set for diagnosing multiple cardiac diseases. CONCLUSIONS: The proposed model can be ported to computing devices such as computers, single-board computing processors, and Android handheld devices to create a stand-alone diagnostic tool that may help in remote primary health care centres. The proposed method is non-invasive, efficient, robust, and has low time complexity, making it suitable for real-time applications.
- MeSH
- Humans MeSH
- Heart Diseases * diagnostic imaging MeSH
- Neural Networks, Computer MeSH
- Heart Sounds * MeSH
- Machine Learning MeSH
- Artificial Intelligence MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
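The data augmentation mentioned above can be illustrated with simple waveform transforms such as additive noise and time shifting of phonocardiogram segments. The transforms and parameters below are assumptions for illustration, not the authors' exact pipeline.

```python
# Hedged sketch: simple waveform augmentations (additive Gaussian noise and
# circular time shift) for phonocardiogram segments. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def add_noise(pcg: np.ndarray, snr_db: float = 20.0) -> np.ndarray:
    """Add white Gaussian noise at the given signal-to-noise ratio (dB)."""
    signal_power = np.mean(pcg ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    return pcg + rng.normal(0.0, np.sqrt(noise_power), size=pcg.shape)

def time_shift(pcg: np.ndarray, max_fraction: float = 0.1) -> np.ndarray:
    """Circularly shift the segment by up to max_fraction of its length."""
    limit = int(len(pcg) * max_fraction)
    return np.roll(pcg, rng.integers(-limit, limit + 1))

def augment(pcg: np.ndarray) -> np.ndarray:
    return time_shift(add_noise(pcg))

# Example: augment a 2-second segment sampled at 2 kHz (values are placeholders)
segment = rng.normal(size=4000)
augmented = augment(segment)
```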
Liver volumetry is an important tool in clinical practice. The calculation of liver volume is primarily based on Computed Tomography. Unfortunately, automatic segmentation algorithms based on handcrafted features tend to leak segmented objects into surrounding tissues such as the heart or the spleen. Currently, convolutional neural networks are widely used in various applications of computer vision, including image segmentation, and provide very promising results. In our work, we utilize robustly segmentable structures such as the spine, the body surface, and the sagittal plane, which serve as key points for position estimation inside the body. The signed distance fields derived from these structures are calculated and used as additional channels at the input of our convolutional neural network, specifically a U-Net, which is widely used in medical image segmentation tasks. Our work shows that this additional position information improves the results of the segmentation. We test our approach in two experiments on two public datasets of Computed Tomography images. To evaluate the results, we use accuracy, the Hausdorff distance, and the Dice coefficient. Code is publicly available at: https://gitlab.com/hachaf/liver-segmentation.git.
- Publication type
- Journal Article MeSH
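A minimal sketch of the position-encoding idea described above: a signed distance field is derived from a robustly segmentable structure (here a hypothetical binary spine mask) and stacked with the CT slice as an additional input channel for the network. Shapes and names are illustrative.

```python
# Illustrative only: deriving a signed distance field from a binary mask and
# stacking it with the CT image as an extra network input channel.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_field(mask: np.ndarray) -> np.ndarray:
    """Positive distances outside the structure, negative inside."""
    outside = distance_transform_edt(~mask)
    inside = distance_transform_edt(mask)
    return (outside - inside).astype(np.float32)

def build_input(ct_slice: np.ndarray, spine_mask: np.ndarray) -> np.ndarray:
    """Stack the CT slice and its spine SDF into a (2, H, W) network input."""
    sdf = signed_distance_field(spine_mask.astype(bool))
    return np.stack([ct_slice.astype(np.float32), sdf], axis=0)

# Toy example with a 256x256 slice and a hypothetical spine mask
ct = np.zeros((256, 256), dtype=np.float32)
mask = np.zeros((256, 256), dtype=bool)
mask[100:140, 120:136] = True
net_input = build_input(ct, mask)        # shape (2, 256, 256)
```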
PURPOSE: Necrosis quantification in the neoadjuvant setting using pathology slide review is the most important validated prognostic marker in conventional osteosarcoma. Herein, we explored three deep-learning strategies on histology samples to predict outcome for osteosarcoma in the neoadjuvant setting. EXPERIMENTAL DESIGN: Our study relies on a training cohort from New York University (NYU; New York, NY) and an external cohort from Charles University (Prague, Czechia). We trained and validated the performance of a supervised approach that integrates neural network predictions of necrosis/tumor content and compared predicted overall survival (OS) using Kaplan-Meier curves. Furthermore, we explored morphology-based supervised and self-supervised approaches to determine whether intrinsic histomorphologic features could serve as a potential marker for OS in the neoadjuvant setting. RESULTS: Excellent correlation between the trained network and pathologists was obtained for the quantification of necrosis content (R2 = 0.899; r = 0.949; P < 0.0001). OS prediction cutoffs were consistent between pathologists and the neural network (22% and 30% of necrosis, respectively). The morphology-based supervised approach predicted OS; P = 0.0028, HR = 2.43 (1.10-5.38). The self-supervised approach corroborated these findings, with clusters enriched in necrosis, fibroblastic stroma, and osteoblastic morphology associated with better OS [log-2 hazard ratio (lg2 HR); -2.366; -1.164; -1.175; 95% confidence interval, (-2.996 to -0.514)]. Viable/partially viable tumor and fat necrosis were associated with worse OS [lg2 HR; 1.287; 0.822; 0.828; 95% confidence interval, (0.38-1.974)]. CONCLUSIONS: Neural networks can be used to automatically estimate the necrosis-to-tumor ratio, a quantitative metric predictive of survival. Furthermore, we identified alternate histomorphologic biomarkers specific to the necrotic and tumor regions, which could serve as predictors.
- MeSH
- Deep Learning MeSH
- Child MeSH
- Adult MeSH
- Kaplan-Meier Estimate MeSH
- Humans MeSH
- Adolescent MeSH
- Young Adult MeSH
- Bone Neoplasms * mortality pathology MeSH
- Necrosis * MeSH
- Neoadjuvant Therapy * methods MeSH
- Neural Networks, Computer * MeSH
- Osteosarcoma * mortality pathology therapy MeSH
- Prognosis MeSH
- Check Tag
- Child MeSH
- Adult MeSH
- Humans MeSH
- Adolescent MeSH
- Young Adult MeSH
- Male MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
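As a hedged sketch of how tile-level network predictions can be aggregated into the necrosis-to-tumor ratio used for risk grouping above: the class labels, tile counts, and cutoff below are illustrative placeholders (the abstract reports cutoffs of 22-30% necrosis), not the study's pipeline.

```python
# Hedged sketch: aggregating per-tile class predictions from a whole-slide image
# into a necrosis ratio and assigning a risk group by a cutoff. Labels, counts,
# and the cutoff below are illustrative placeholders.
from collections import Counter

def necrosis_ratio(tile_labels: list) -> float:
    """Fraction of necrotic tiles among necrotic + viable-tumor tiles."""
    counts = Counter(tile_labels)
    necrosis = counts.get("necrosis", 0)
    tumor = counts.get("viable_tumor", 0)
    return necrosis / (necrosis + tumor) if (necrosis + tumor) else 0.0

def risk_group(ratio: float, cutoff: float = 0.30) -> str:
    """Dichotomize cases for Kaplan-Meier comparison (higher necrosis after
    neoadjuvant therapy indicates better response; cutoff is illustrative)."""
    return "favorable" if ratio >= cutoff else "unfavorable"

tiles = ["necrosis"] * 40 + ["viable_tumor"] * 60 + ["stroma"] * 25
print(risk_group(necrosis_ratio(tiles)))     # ratio 0.40 -> "favorable"
```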
BACKGROUND: Estimation of the risk of malignancy in pulmonary nodules detected by CT is central in clinical management. The use of artificial intelligence (AI) offers an opportunity to improve risk prediction. Here we compare the performance of an AI algorithm, the lung cancer prediction convolutional neural network (LCP-CNN), with that of the Brock University model, recommended in UK guidelines. METHODS: A dataset of incidentally detected pulmonary nodules measuring 5-15 mm was collected retrospectively from three UK hospitals for use in a validation study. Ground truth diagnosis for each nodule was based on histology (required for any cancer), resolution, stability or (for pulmonary lymph nodes only) expert opinion. There were 1397 nodules in 1187 patients, of which 234 nodules in 229 (19.3%) patients were cancer. Model discrimination and performance statistics at predefined score thresholds were compared between the Brock model and the LCP-CNN. RESULTS: The area under the curve for LCP-CNN was 89.6% (95% CI 87.6 to 91.5), compared with 86.8% (95% CI 84.3 to 89.1) for the Brock model (p≤0.005). Using the LCP-CNN, we found that 24.5% of nodules scored below the lowest cancer nodule score, compared with 10.9% using the Brock score. Using the predefined thresholds, we found that the LCP-CNN gave one false negative (0.4% of cancers), whereas the Brock model gave six (2.5%), while specificity statistics were similar between the two models. CONCLUSION: The LCP-CNN score has better discrimination and allows a larger proportion of benign nodules to be identified without missing cancers than the Brock model. This has the potential to substantially reduce the proportion of surveillance CT scans required and thus save significant resources.
- MeSH
- Algorithms MeSH
- Early Detection of Cancer methods MeSH
- Databases, Factual MeSH
- Adult MeSH
- Risk Assessment MeSH
- Incidence MeSH
- Neoplasm Invasiveness pathology MeSH
- Cohort Studies MeSH
- Middle Aged MeSH
- Humans MeSH
- Multiple Pulmonary Nodules epidemiology pathology physiopathology MeSH
- Cell Transformation, Neoplastic pathology MeSH
- Lung Neoplasms epidemiology pathology physiopathology MeSH
- Neural Networks, Computer * MeSH
- Area Under Curve MeSH
- Predictive Value of Tests MeSH
- Prognosis MeSH
- Retrospective Studies MeSH
- ROC Curve MeSH
- Aged MeSH
- Neoplasm Staging MeSH
- Artificial Intelligence * MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Aged MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Multicenter Study MeSH
- Research Support, Non-U.S. Gov't MeSH
- Validation Study MeSH
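One statistic reported above, the proportion of benign nodules scoring below the lowest cancer nodule score, can be computed as in the following sketch. The scores and labels are hypothetical placeholders, not study data.

```python
# Illustrative only: fraction of benign nodules scoring below the lowest
# malignant-nodule score. Scores and labels are hypothetical placeholders.
import numpy as np

def benign_fraction_below_lowest_cancer(scores: np.ndarray, is_cancer: np.ndarray) -> float:
    """Proportion of benign nodules with a score below min(score | cancer)."""
    lowest_cancer_score = scores[is_cancer].min()
    benign_scores = scores[~is_cancer]
    return float((benign_scores < lowest_cancer_score).mean())

scores = np.array([0.02, 0.10, 0.35, 0.60, 0.88, 0.93])
is_cancer = np.array([False, False, False, True, False, True])
print(benign_fraction_below_lowest_cancer(scores, is_cancer))   # 0.75 in this toy case
```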
This paper aims to address the segmentation and classification of lytic and sclerotic metastatic lesions that are difficult to define in spinal 3D Computed Tomography (CT) images obtained from highly pathologically affected cases. Because the lesions are ill-defined, it is difficult to find relevant image features that would enable their detection and classification by classical methods of texture and shape analysis; the problem is therefore solved by automatic feature extraction provided by a deep Convolutional Neural Network (CNN). Our main contributions are: (i) an individual CNN architecture and pre-processing steps that depend on the patient data and the scan protocol, which enables work with different types of CT scans; (ii) medial axis transform (MAT) post-processing for shape simplification of segmented lesion candidates, with Random Forest (RF) based meta-analysis; and (iii) usability of the proposed method on whole-spine CTs (cervical, thoracic, lumbar), which is not treated in other published methods (they work only with thoracolumbar segments of the spine). Our proposed method has been tested on our own dataset annotated by two mutually independent radiologists and has been compared to other published methods. This work is part of an ongoing complex project dealing with spine analysis and longitudinal studies of spine lesions.
- MeSH
- Middle Aged MeSH
- Humans MeSH
- Spinal Neoplasms diagnostic imaging secondary MeSH
- Neural Networks, Computer * MeSH
- Tomography, X-Ray Computed * MeSH
- Radiographic Image Interpretation, Computer-Assisted methods MeSH
- Aged, 80 and over MeSH
- Aged MeSH
- Imaging, Three-Dimensional * MeSH
- Check Tag
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Aged, 80 and over MeSH
- Aged MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
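A hedged sketch of the medial axis transform (MAT) post-processing with Random Forest meta-analysis described above: skeleton-derived shape descriptors of segmented lesion candidates feed an RF classifier. The specific features and toy masks are illustrative assumptions, not the published configuration.

```python
# Illustrative only: medial axis transform of a 2-D lesion-candidate mask and a
# few skeleton-derived shape features for a Random Forest meta-classifier.
import numpy as np
from skimage.morphology import medial_axis
from sklearn.ensemble import RandomForestClassifier

def mat_features(mask: np.ndarray) -> list:
    """Simple shape descriptors from the medial axis transform (illustrative)."""
    skeleton, distance = medial_axis(mask, return_distance=True)
    radii = distance[skeleton]
    area = float(mask.sum())
    return [
        area,
        float(skeleton.sum()) / max(area, 1.0),       # skeleton length relative to area
        float(radii.mean()) if radii.size else 0.0,   # mean local thickness
        float(radii.max()) if radii.size else 0.0,    # maximum local thickness
    ]

# Hypothetical training data: one feature row per lesion candidate
candidate_masks = [
    np.pad(np.ones((8, 20), dtype=bool), 4),
    np.pad(np.ones((12, 12), dtype=bool), 4),
]
X = np.array([mat_features(m) for m in candidate_masks])
y = np.array([1, 0])                                  # 1 = lesion, 0 = false positive
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```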
Rapid and reliable identification of insects is important in many contexts, from the detection of disease vectors and invasive species to the sorting of material from biodiversity inventories. Because of the shortage of adequate expertise, there has long been an interest in developing automated systems for this task. Previous attempts have been based on laborious and complex handcrafted extraction of image features, but in recent years it has been shown that sophisticated convolutional neural networks (CNNs) can learn to extract relevant features automatically, without human intervention. Unfortunately, reaching expert-level accuracy in CNN identifications requires substantial computational power and huge training data sets, which are often not available for taxonomic tasks. This can be addressed using feature transfer: a CNN that has been pretrained on a generic image classification task is exposed to the taxonomic images of interest, and information about its perception of those images is used in training a simpler, dedicated identification system. Here, we develop an effective method of CNN feature transfer, which achieves expert-level accuracy in taxonomic identification of insects with training sets of 100 images or fewer per category, depending on the nature of the data set. Specifically, we extract rich representations of intermediate to high-level image features from the CNN architecture VGG16 pretrained on the ImageNet data set. This information is submitted to a linear support vector machine classifier, which is trained on the target problem. We tested the performance of our approach on two types of challenging taxonomic tasks: 1) identifying insects to higher groups when they are likely to belong to subgroups that have not been seen previously and 2) identifying visually similar species that are difficult to separate even for experts. For the first task, our approach reached >92% accuracy on one data set (884 face images of 11 families of Diptera, all specimens representing unique species), and >96% accuracy on another (2936 dorsal habitus images of 14 families of Coleoptera, over 90% of specimens belonging to unique species). For the second task, our approach outperformed a leading taxonomic expert on one data set (339 images of three species of the Coleoptera genus Oxythyrea; 97% accuracy), and both humans and traditional automated identification systems on another data set (3845 images of nine species of Plecoptera larvae; 98.6% accuracy). Reanalyzing several biological image identification tasks studied in the recent literature, we show that our approach is broadly applicable and provides significant improvements over previous methods, whether based on dedicated CNNs, CNN feature transfer, or more traditional techniques. Thus, our method, which is easy to apply, can be highly successful in developing automated taxonomic identification systems even when training data sets are small and computational budgets limited. We conclude by briefly discussing some promising CNN-based research directions in morphological systematics opened up by the success of these techniques in providing accurate diagnostic tools.
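The feature-transfer pipeline described above can be sketched as follows: ImageNet-pretrained VGG16 features are pooled into fixed-length vectors and used to train a linear support vector machine on the target taxonomic problem. The layer choice (global average pooling over the final convolutional block) and preprocessing are simplifying assumptions rather than the authors' exact configuration.

```python
# Illustrative only: extracting pretrained VGG16 features and training a linear
# SVM on them. Layer choice and preprocessing are simplified assumptions.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.svm import LinearSVC

# ImageNet-pretrained VGG16 without the classification head; global average
# pooling turns each image into a 512-dimensional feature vector.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: (n, 224, 224, 3) RGB array with values in [0, 255]."""
    return extractor.predict(preprocess_input(images.astype(np.float32)), verbose=0)

def train_identifier(train_images: np.ndarray, train_labels: np.ndarray) -> LinearSVC:
    """Train a linear SVM on the transferred CNN features."""
    features = extract_features(train_images)
    return LinearSVC(C=1.0, max_iter=10000).fit(features, train_labels)

# Usage with hypothetical arrays:
#   clf = train_identifier(images, labels)
#   predictions = clf.predict(extract_features(new_images))
```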
Designing a cranial implant to restore the protective and aesthetic function of the patient's skull is a challenging process that requires a substantial amount of manual work, even for an experienced clinician. While computer-assisted approaches with various levels of required user interaction exist to aid this process, they are usually only validated on either a single type of simple synthetic defect or a very limited sample of real defects. The work presented in this paper aims to address two challenges: (i) to design a fully automatic 3D shape reconstruction method that can address diverse shapes of real skull defects in various stages of healing and (ii) to provide an open dataset for optimization and validation of anatomical reconstruction methods on a set of synthetically broken skull shapes. We propose an application of the multi-scale cascade architecture of convolutional neural networks to the reconstruction task. Such an architecture is able to tackle the trade-off between the output resolution and the receptive field of the model imposed by GPU memory limitations. Furthermore, we experiment with both generative and discriminative models and study their behavior during the task of anatomical reconstruction. The proposed method achieves an average surface error of 0.59 mm for our synthetic test dataset, with as low as 0.48 mm for unilateral defects of parietal and temporal bone, matching state-of-the-art performance while being completely automatic. We also show that the model trained on our synthetic dataset is able to reconstruct real patient defects.
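An average surface error such as the one reported above can be computed from two binary volumes (predicted and ground-truth skull) via Euclidean distance transforms, as in the hedged sketch below; the surface extraction, voxel spacing, and toy volumes are illustrative assumptions, not the paper's evaluation code.

```python
# Illustrative only: average symmetric surface distance between two binary
# volumes, using Euclidean distance transforms. Spacing values are assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def surface(mask: np.ndarray) -> np.ndarray:
    """One-voxel-thick surface of a binary volume."""
    return mask & ~binary_erosion(mask)

def average_surface_distance(pred: np.ndarray, gt: np.ndarray,
                             spacing=(1.0, 1.0, 1.0)) -> float:
    """Mean symmetric distance (in mm) between the surfaces of pred and gt."""
    pred_surf, gt_surf = surface(pred), surface(gt)
    dist_to_gt = distance_transform_edt(~gt_surf, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surf, sampling=spacing)
    distances = np.concatenate([dist_to_gt[pred_surf], dist_to_pred[gt_surf]])
    return float(distances.mean())

# Toy example with two slightly offset cubes in a 64^3 volume
pred = np.zeros((64, 64, 64), dtype=bool); pred[20:40, 20:40, 20:40] = True
gt = np.zeros_like(pred);                  gt[21:41, 20:40, 20:40] = True
print(average_surface_distance(pred, gt, spacing=(0.5, 0.5, 0.5)))
```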