The use of artificial intelligence as an assistive detection method in endoscopy has attracted increasing interest in recent years. Machine learning algorithms promise to improve the efficiency of polyp detection, and even optical localization of findings, all with minimal training of the endoscopist. The practical goal of this study is to analyse the Carebot CAD (computer-aided diagnosis) software for colorectal polyp detection using a convolutional neural network. The proposed binary classifier for polyp detection achieves an accuracy of up to 98%, a specificity of 0.99, and a precision of 0.96. The need for large-scale clinical data in the development of artificial-intelligence-based models for the automatic detection of adenomas and benign neoplastic lesions is also discussed.
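The accuracy, specificity, and precision reported above follow directly from confusion-matrix counts. A minimal sketch, not the Carebot implementation; the counts below are hypothetical, chosen only to exercise the formulas:

```python
def binary_metrics(tp, tn, fp, fn):
    """Compute accuracy, specificity and precision from confusion-matrix counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total       # fraction of all predictions that are correct
    specificity = tn / (tn + fp)       # true-negative rate: correct rejections
    precision = tp / (tp + fp)         # positive predictive value of polyp calls
    return accuracy, specificity, precision

# Hypothetical counts for a small test set
acc, spec, prec = binary_metrics(tp=50, tn=99, fp=1, fn=2)
```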
BACKGROUND AND OBJECTIVES: Cardiovascular diseases are critical and need to be diagnosed as early as possible, yet remote areas often lack the medical professionals to diagnose them. Artificial-intelligence-based automatic diagnostic tools can help to diagnose cardiac diseases. This work presents an automatic classification method using machine learning to diagnose multiple cardiac diseases from phonocardiogram signals. METHODS: The proposed system uses a convolutional neural network (CNN) model, chosen for its high accuracy and robustness, to automatically diagnose cardiac disorders from heart sounds. To improve accuracy in noisy environments and make the method robust, data augmentation techniques are used during training for the multi-class classification of multiple cardiac diseases. RESULTS: The model has been validated on both heart sound data and augmented data using n-fold cross-validation, and results for all folds are reported in this work. The model achieves an accuracy of up to 98.60% on the test set for diagnosing multiple cardiac diseases. CONCLUSIONS: The proposed model can be ported to computing devices such as computers, single-board computing processors, and Android handheld devices to create a stand-alone diagnostic tool that may be of help in remote primary health care centres. The proposed method is non-invasive, efficient, robust, and has low time complexity, making it suitable for real-time applications.
- MeSH
- Humans MeSH
- Heart Diseases * diagnostic imaging MeSH
- Neural Networks, Computer MeSH
- Heart Sounds * MeSH
- Machine Learning MeSH
- Artificial Intelligence MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
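The n-fold cross-validation used in the abstract above partitions the data into disjoint folds, each serving once as the test set. A generic, hypothetical splitting routine (not the authors' code) can be sketched as:

```python
import random

def k_fold_indices(n_samples, k, seed=0):
    """Return (train, test) index pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # fixed seed keeps folds reproducible
    folds = [idx[i::k] for i in range(k)]     # round-robin keeps fold sizes balanced
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = k_fold_indices(n_samples=10, k=5)
```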
BACKGROUND: Estimation of the risk of malignancy in pulmonary nodules detected by CT is central in clinical management. The use of artificial intelligence (AI) offers an opportunity to improve risk prediction. Here we compare the performance of an AI algorithm, the lung cancer prediction convolutional neural network (LCP-CNN), with that of the Brock University model, recommended in UK guidelines. METHODS: A dataset of incidentally detected pulmonary nodules measuring 5-15 mm was collected retrospectively from three UK hospitals for use in a validation study. Ground truth diagnosis for each nodule was based on histology (required for any cancer), resolution, stability or (for pulmonary lymph nodes only) expert opinion. There were 1397 nodules in 1187 patients, of which 234 nodules in 229 (19.3%) patients were cancer. Model discrimination and performance statistics at predefined score thresholds were compared between the Brock model and the LCP-CNN. RESULTS: The area under the curve for LCP-CNN was 89.6% (95% CI 87.6 to 91.5), compared with 86.8% (95% CI 84.3 to 89.1) for the Brock model (p≤0.005). Using the LCP-CNN, we found that 24.5% of nodules scored below the lowest cancer nodule score, compared with 10.9% using the Brock score. Using the predefined thresholds, we found that the LCP-CNN gave one false negative (0.4% of cancers), whereas the Brock model gave six (2.5%), while specificity statistics were similar between the two models. CONCLUSION: The LCP-CNN score has better discrimination and allows a larger proportion of benign nodules to be identified without missing cancers than the Brock model. This has the potential to substantially reduce the proportion of surveillance CT scans required and thus save significant resources.
- MeSH
- Algorithms MeSH
- Early Detection of Cancer methods MeSH
- Databases, Factual MeSH
- Adult MeSH
- Risk Assessment MeSH
- Incidence MeSH
- Neoplasm Invasiveness pathology MeSH
- Cohort Studies MeSH
- Middle Aged MeSH
- Humans MeSH
- Multiple Pulmonary Nodules epidemiology pathology physiopathology MeSH
- Cell Transformation, Neoplastic pathology MeSH
- Lung Neoplasms epidemiology pathology physiopathology MeSH
- Neural Networks, Computer * MeSH
- Area Under Curve MeSH
- Predictive Value of Tests MeSH
- Prognosis MeSH
- Retrospective Studies MeSH
- ROC Curve MeSH
- Aged MeSH
- Neoplasm Staging MeSH
- Artificial Intelligence * MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Aged MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Multicenter Study MeSH
- Research Support, Non-U.S. Gov't MeSH
- Validation Study MeSH
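The area under the curve compared above can be computed rank-wise as the probability that a randomly chosen cancer nodule scores higher than a randomly chosen benign one (the Mann-Whitney formulation). A minimal sketch with hypothetical scores, not the LCP-CNN or Brock scoring itself:

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5   # ties count as half a win
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical scores: cancers should generally rank above benign nodules
example_auc = auc([0.9, 0.7, 0.8], [0.2, 0.1, 0.7])
```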
This paper addresses the segmentation and classification of lytic and sclerotic metastatic lesions that are difficult to define, using spinal 3D Computed Tomography (CT) images obtained from highly pathologically affected cases. As the lesions are ill-defined, and it is consequently difficult to find relevant image features that would enable detection and classification of lesions by classical methods of texture and shape analysis, the problem is solved by automatic feature extraction using a deep Convolutional Neural Network (CNN). Our main contributions are: (i) an individual CNN architecture and pre-processing steps that depend on the patient data and scan protocol, enabling work with different types of CT scans; (ii) medial axis transform (MAT) post-processing for shape simplification of segmented lesion candidates, with Random Forest (RF) based meta-analysis; and (iii) applicability of the proposed method to whole-spine CTs (cervical, thoracic, lumbar), which other published methods do not address (they work with thoracolumbar segments of the spine only). Our proposed method has been tested on our own dataset, annotated by two mutually independent radiologists, and has been compared to other published methods. This work is part of an ongoing complex project dealing with spine analysis and spine lesion longitudinal studies.
- MeSH
- Middle Aged MeSH
- Humans MeSH
- Spinal Neoplasms diagnostic imaging secondary MeSH
- Neural Networks, Computer * MeSH
- Tomography, X-Ray Computed * MeSH
- Radiographic Image Interpretation, Computer-Assisted methods MeSH
- Aged, 80 and over MeSH
- Aged MeSH
- Imaging, Three-Dimensional * MeSH
- Check Tag
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Aged, 80 and over MeSH
- Aged MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
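The Random Forest meta-analysis described above ultimately aggregates per-tree decisions for each lesion candidate. As an illustrative sketch of only that final aggregation step (the real RF also learns from MAT shape features; the votes below are hypothetical):

```python
from collections import Counter

def ensemble_vote(predictions):
    """Aggregate per-tree labels for one lesion candidate by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical tree votes for one segmented candidate
label = ensemble_vote(["lytic", "lytic", "sclerotic"])
```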
Deep learning has recently been utilized with great success in a large number of diverse application domains, such as visual and face recognition, natural language processing, speech recognition, and handwriting identification. Convolutional neural networks, which belong to the family of deep learning models, are a subtype of artificial neural networks inspired by the complex structure of the human brain and often used for image classification tasks. One of the biggest challenges in all deep neural networks is overfitting, which happens when the model performs well on the training data but fails to make accurate predictions for new data fed into the model. Several regularization methods have been introduced to prevent overfitting. In the research presented in this manuscript, the overfitting challenge was tackled by selecting a proper value for the dropout regularization parameter using a swarm intelligence approach. Although swarm algorithms have already been successfully applied in this domain, the available literature indicates that their potential is still not fully investigated. Finding the optimal dropout value manually is a challenging and time-consuming task; this research therefore proposes an automated framework based on a hybridized sine cosine algorithm for tackling this major deep learning issue. The first experiment was conducted over four benchmark datasets: MNIST, CIFAR10, Semeion, and USPS, while the second experiment was performed on a brain tumor magnetic resonance imaging classification task. The obtained experimental results are compared to those generated by several similar approaches. Overall, the proposed method outperforms the other state-of-the-art methods included in the comparative analysis in terms of classification error and accuracy.
- MeSH
- Algorithms MeSH
- Humans MeSH
- Magnetic Resonance Imaging MeSH
- Brain Neoplasms * MeSH
- Neural Networks, Computer * MeSH
- Handwriting MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
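The sine cosine algorithm named above is a population-based metaheuristic; a basic (non-hybridized) variant searching for a dropout rate can be sketched as follows. The objective here is a hypothetical smooth stand-in: in the paper, each evaluation would correspond to training and validating a CNN.

```python
import math, random

def toy_val_error(p):
    """Hypothetical stand-in for validation error as a function of dropout rate."""
    return (p - 0.42) ** 2 + 0.10

def sca_search(objective, n_agents=10, iters=100, a=2.0, seed=1):
    """Basic Sine Cosine Algorithm searching for a dropout rate in [0, 1]."""
    rng = random.Random(seed)
    agents = [rng.random() for _ in range(n_agents)]
    best = min(agents, key=objective)
    for t in range(iters):
        r1 = a - t * a / iters                      # shrinking step: explore -> exploit
        for i, x in enumerate(agents):
            r2 = rng.uniform(0.0, 2.0 * math.pi)
            r3 = rng.uniform(0.0, 2.0)
            trig = math.sin(r2) if rng.random() < 0.5 else math.cos(r2)
            x = x + r1 * trig * abs(r3 * best - x)  # move relative to the best agent
            agents[i] = min(1.0, max(0.0, x))       # keep the rate a valid probability
        cand = min(agents, key=objective)
        if objective(cand) < objective(best):
            best = cand
    return best

dropout_rate = sca_search(toy_val_error)
```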
This study aims to develop a fully automated, imaging-protocol-independent system for pituitary adenoma segmentation from magnetic resonance imaging (MRI) scans that can work without user interaction, and to evaluate its accuracy and utility for clinical applications. We trained two independent artificial neural networks on MRI scans of 394 patients. The scans were acquired according to various imaging protocols over the course of 11 years on 1.5T and 3T MRI systems. The segmentation model assigned a class label to each input pixel (pituitary adenoma, internal carotid artery, normal pituitary gland, background). The slice segmentation model classified slices as clinically relevant (structures of interest in the slice) or irrelevant (anterior or posterior to the sella turcica). We used MRI data from another 99 patients to evaluate the performance of the model during training. We validated the model on a prospective cohort of 28 patients; Dice coefficients of 0.910, 0.719, and 0.240 were achieved for the tumour, internal carotid artery, and normal gland labels, respectively. The slice selection model achieved 82.5% accuracy, 88.7% sensitivity, 76.7% specificity, and an AUC of 0.904. A human expert rated 71.4% of the segmentation results as accurate, 21.4% as slightly inaccurate, and 7.1% as coarsely inaccurate. Our model achieved good results comparable with recent works by other authors on the largest dataset to date and generalized well across various imaging protocols. We discuss future clinical applications and their considerations; models and frameworks for clinical use have yet to be developed and evaluated.
- MeSH
- Adenoma * diagnostic imaging surgery MeSH
- Humans MeSH
- Magnetic Resonance Imaging MeSH
- Pituitary Neoplasms * diagnostic imaging surgery MeSH
- Neural Networks, Computer MeSH
- Image Processing, Computer-Assisted methods MeSH
- Prospective Studies MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
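The Dice coefficients reported above measure overlap between predicted and annotated masks. A minimal sketch on hypothetical pixel sets (real pipelines operate on volumetric label maps):

```python
def dice(pred, truth):
    """Dice similarity between two binary masks given as sets of pixel coordinates."""
    if not pred and not truth:
        return 1.0          # two empty masks agree perfectly by convention
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Hypothetical predicted vs. annotated tumour pixels
pred  = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice(pred, truth)   # 2*3 / (4+4) = 0.75
```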
Objective. Functional specialization is fundamental to neural information processing. Here, we study whether and how functional specialization emerges in artificial deep convolutional neural networks (CNNs) during a brain-computer interfacing (BCI) task. Approach. We trained CNNs to predict hand movement speed from intracranial electroencephalography (iEEG) and delineated how units across the different CNN hidden layers learned to represent the iEEG signal. Main results. We show that distinct, functionally interpretable neural populations emerged as a result of the training process. While some units became sensitive to either iEEG amplitude or phase, others showed bimodal behavior with significant sensitivity to both features. Pruning of highly sensitive units resulted in a steep drop of decoding accuracy not observed for pruning of less sensitive units, highlighting the functional relevance of the amplitude- and phase-specialized populations. Significance. We anticipate that emergent functional specialization as uncovered here will become a key concept in research towards interpretable deep learning for neuroscience and BCI applications.
Designing a cranial implant to restore the protective and aesthetic function of the patient's skull is a challenging process that requires a substantial amount of manual work, even for an experienced clinician. While computer-assisted approaches with various levels of required user interaction exist to aid this process, they are usually only validated on either a single type of simple synthetic defect or a very limited sample of real defects. The work presented in this paper aims to address two challenges: (i) to design a fully automatic 3D shape reconstruction method that can address diverse shapes of real skull defects in various stages of healing; and (ii) to provide an open dataset for optimization and validation of anatomical reconstruction methods on a set of synthetically broken skull shapes. We propose an application of the multi-scale cascade architecture of convolutional neural networks to the reconstruction task. Such an architecture is able to tackle the trade-off between the output resolution and the receptive field of the model imposed by GPU memory limitations. Furthermore, we experiment with both generative and discriminative models and study their behavior during the task of anatomical reconstruction. The proposed method achieves an average surface error of 0.59 mm on our synthetic test dataset, and as low as 0.48 mm for unilateral defects of the parietal and temporal bone, matching state-of-the-art performance while being completely automatic. We also show that the model trained on our synthetic dataset is able to reconstruct real patient defects.
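The average surface error quoted above measures how far the reconstructed implant surface deviates from ground truth. A simplified, hypothetical sketch on point clouds (the paper's metric operates on full surfaces):

```python
import math

def avg_surface_error(pts_a, pts_b):
    """Symmetric average nearest-neighbour distance between two point clouds,
    a simplified stand-in for a mesh-based surface error metric."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(pts_a, pts_b) + one_way(pts_b, pts_a))

# Two hypothetical surface samples 1 mm apart along x
a = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
b = [(1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
err = avg_surface_error(a, b)
```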
Manual and semi-automatic identification of artifacts and unwanted physiological signals in large intracerebral electroencephalographic (iEEG) recordings is time consuming and inaccurate, and to date no unsupervised methods are available that accurately detect iEEG artifacts. This study introduces a novel machine-learning approach for detecting artifacts in iEEG signals under clinically controlled conditions using convolutional neural networks (CNN) and benchmarks the method's performance against expert annotations. The method was trained and tested on data from St Anne's University Hospital (Brno, Czech Republic) and validated on data from Mayo Clinic (Rochester, Minnesota, USA). We show that the proposed technique can be used as a generalized model for iEEG artifact detection; moreover, a transfer learning process can be used to retrain the generalized version into a data-specific model. The generalized model can be efficiently retrained for use with different EEG acquisition systems and noise environments. The generalized and specialized models achieved F1 scores of 0.81 and 0.96, respectively, on the testing dataset. The CNN model provides faster, more objective, and more reproducible iEEG artifact detection than manual approaches.
- MeSH
- Artifacts * MeSH
- Electroencephalography methods MeSH
- Humans MeSH
- Brain physiology MeSH
- Neural Networks, Computer * MeSH
- Retrospective Studies MeSH
- Machine Learning * MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
- Research Support, N.I.H., Extramural MeSH
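The F1 scores reported above combine precision and recall into a single number. A minimal sketch with hypothetical detection counts:

```python
def f1_score(tp, fp, fn):
    """F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical artifact-detection counts on a test set
f1 = f1_score(tp=96, fp=4, fn=4)
```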
BACKGROUND AND OBJECTIVES: The lack of medical facilities in isolated areas leaves many patients without quick and timely diagnosis of cardiovascular diseases, leading to high mortality rates. A deep learning based method for automatic diagnosis of multiple cardiac diseases from phonocardiogram (PCG) signals is proposed in this paper. METHODS: The proposed system, Cardi-Net, combines a deep learning based convolutional neural network (CNN) with power spectrograms, and can extract deep discriminating features of PCG signals from the power spectrogram to identify the diseases. The choice of Power Spectral Density (PSD) lets the model extract highly discriminatory features significant for the multi-class classification of four common cardiac disorders. RESULTS: Data augmentation techniques are applied to make the model robust, and the model undergoes 10-fold cross-validation, yielding an overall accuracy of 98.879% on the test dataset for diagnosing multiple heart diseases from PCG signals. CONCLUSION: The proposed model is completely automatic: no signal pre-processing or feature engineering is required. The conversion time from PCG signal to power spectrogram is very low, ranging from 0.10 s to 0.11 s. This reduces the complexity of the model, making it highly reliable and robust for real-time applications. The proposed architecture can be deployed on the cloud or on a low-cost processor, desktop, or Android app, providing proper access to dispensaries in remote areas.
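The power spectrogram underlying the abstract above is built from short-time power spectra. A minimal single-frame sketch using a naive DFT (the paper's exact spectrogram parameters are not assumed here):

```python
import cmath, math

def periodogram(signal, fs):
    """Naive DFT power spectrum, O(N^2): fine for short frames, while real
    systems would use an FFT. Returns (frequency_hz, power) pairs for the
    non-negative frequency bins."""
    n = len(signal)
    out = []
    for k in range(n // 2 + 1):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signal))
        out.append((k * fs / n, abs(coeff) ** 2 / n))
    return out

# A pure 50 Hz tone sampled at 400 Hz should peak at the 50 Hz bin
fs, n = 400, 80
tone = [math.sin(2 * math.pi * 50 * i / fs) for i in range(n)]
peak_freq = max(periodogram(tone, fs), key=lambda fp: fp[1])[0]
```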