Deep convolutional neural networks
OBJECTIVES: Artificial Intelligence (AI), particularly deep learning, has significantly impacted healthcare, including dentistry, by improving diagnostics, treatment planning, and prognosis prediction. This systematic mapping review explores the current applications of deep learning in dentistry, offering a comprehensive overview of trends, models, and their clinical significance. MATERIALS AND METHODS: Following a structured methodology, relevant studies published from January 2012 to September 2023 were identified through database searches in PubMed, Scopus, and Embase. Key data, including clinical purpose, deep learning tasks, model architectures, and data modalities, were extracted for qualitative synthesis. RESULTS: From 21,242 screened studies, 1,007 were included. Of these, 63.5% targeted diagnostic tasks, primarily with convolutional neural networks (CNNs). Classification (43.7%) and segmentation (22.9%) were the main methods, and imaging data, such as cone-beam computed tomography and orthopantomograms, were used in 84.4% of cases. Most studies (95.2%) applied fully supervised learning, emphasizing the need for annotated data. Pathology (21.5%), radiology (17.5%), and orthodontics (10.2%) were prominent fields, with 24.9% of studies relating to more than one specialty. CONCLUSION: This review explores the advancements in deep learning in dentistry, particularly for diagnostics, and identifies areas for further improvement. While CNNs have been used successfully, it is essential to explore emerging model architectures, learning approaches, and ways to obtain diverse and reliable data. Furthermore, fostering trust among all stakeholders by advancing explainable AI and addressing ethical considerations is crucial for transitioning AI from research to clinical practice. CLINICAL RELEVANCE: This review offers a comprehensive overview of a decade of deep learning in dentistry, showcasing its significant growth in recent years.
By mapping its key applications and identifying research trends, it provides a valuable guide for future studies and highlights emerging opportunities for advancing AI-driven dental care.
- MeSH
- deep learning * MeSH
- humans MeSH
- dentistry * MeSH
- Check Tag
- humans MeSH
- Publication type
- journal articles MeSH
- reviews MeSH
- systematic review MeSH
PURPOSE: Necrosis quantification in the neoadjuvant setting using pathology slide review is the most important validated prognostic marker in conventional osteosarcoma. Herein, we explored three deep-learning strategies on histology samples to predict outcome for osteosarcoma in the neoadjuvant setting. EXPERIMENTAL DESIGN: Our study relies on a training cohort from New York University (NYU; New York, NY) and an external cohort from Charles University (Prague, Czechia). We trained and validated the performance of a supervised approach that integrates neural network predictions of necrosis/tumor content and compared predicted overall survival (OS) using Kaplan-Meier curves. Furthermore, we explored morphology-based supervised and self-supervised approaches to determine whether intrinsic histomorphologic features could serve as a potential marker for OS in the neoadjuvant setting. RESULTS: Excellent correlation between the trained network and pathologists was obtained for the quantification of necrosis content (R2 = 0.899; r = 0.949; P < 0.0001). OS prediction cutoffs were consistent between pathologists and the neural network (22% and 30% of necrosis, respectively). The morphology-based supervised approach predicted OS; P = 0.0028, HR = 2.43 (1.10-5.38). The self-supervised approach corroborated the findings with clusters enriched in necrosis, fibroblastic stroma, and osteoblastic morphology associating with better OS [log-2 hazard ratio (lg2 HR); -2.366; -1.164; -1.175; 95% confidence interval, (-2.996 to -0.514)]. Viable/partially viable tumor and fat necrosis were associated with worse OS [lg2 HR; 1.287; 0.822; 0.828; 95% confidence interval, (0.38-1.974)]. CONCLUSIONS: Neural networks can be used to automatically estimate the necrosis to tumor ratio, a quantitative metric predictive of survival. Furthermore, we identified alternate histomorphologic biomarkers specific to the necrotic and tumor regions, which could serve as predictors.
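The slide-level metric driving the first approach above is the necrosis-to-tumor ratio, dichotomized at the network's reported cutoff of 30% necrosis. The sketch below is purely illustrative (the patch labels, helper names, and aggregation scheme are assumptions, not the authors' pipeline), but it shows how per-patch network predictions could be aggregated into that ratio and mapped to a risk group:

```python
# Hypothetical sketch: aggregate per-patch predictions into a slide-level
# necrosis-to-tumor ratio and dichotomize at the network's reported 30% cutoff.
# Patch label names and helper functions are illustrative, not the authors' code.

def necrosis_ratio(patch_labels):
    """patch_labels: iterable of 'necrosis', 'viable_tumor', or 'other'."""
    necrosis = sum(1 for p in patch_labels if p == "necrosis")
    tumor = sum(1 for p in patch_labels if p in ("necrosis", "viable_tumor"))
    return necrosis / tumor if tumor else 0.0

def risk_group(patch_labels, cutoff=0.30):
    """Higher necrosis after neoadjuvant therapy predicts better survival."""
    return "good_response" if necrosis_ratio(patch_labels) >= cutoff else "poor_response"

patches = ["necrosis"] * 4 + ["viable_tumor"] * 6 + ["other"] * 2
print(necrosis_ratio(patches))  # 0.4
print(risk_group(patches))      # good_response
```

In this toy slide, 4 of 10 tumor-region patches are necrotic (non-tumor "other" patches are excluded from the denominator), giving a 40% ratio, above the 30% cutoff.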
- MeSH
- deep learning MeSH
- child MeSH
- adults MeSH
- Kaplan-Meier estimate MeSH
- humans MeSH
- adolescent MeSH
- young adult MeSH
- bone neoplasms * mortality pathology MeSH
- necrosis * MeSH
- neoadjuvant therapy * methods MeSH
- neural networks * MeSH
- osteosarcoma * mortality pathology therapy MeSH
- prognosis MeSH
- Check Tag
- child MeSH
- adults MeSH
- humans MeSH
- adolescent MeSH
- young adult MeSH
- male MeSH
- female MeSH
- Publication type
- journal articles MeSH
Radiologists use images from X-ray, magnetic resonance imaging, or computed tomography scans to diagnose bone cancer. Manual methods are labor-intensive and may require specialized knowledge, so an automated process for distinguishing between malignant and healthy bone is essential. Cancerous bone has a different texture than bone in unaffected areas. Diagnosing hematological illnesses relies on correctly labeling and categorizing nucleated cells in the bone marrow. However, timely diagnosis and treatment are hampered by the need for pathologists to identify specimens manually, a sensitive and time-consuming process. The ability to evaluate and identify these more complicated illnesses has been significantly bolstered by the development of artificial intelligence, particularly machine and deep learning. Still, much research and development is needed to enhance cancer cell identification and lower false alarm rates. We built a deep learning model for morphological analysis to address this problem. This paper introduces a novel deep convolutional neural network architecture in which hybrid multi-objective and category-based optimization algorithms adaptively optimize the hyperparameters. Using the processed cell images as input, the proposed model is trained with an optimized attention-based multi-scale convolutional neural network to identify the type of cancer cells in the bone marrow. Extensive experiments are run on publicly available datasets, with the results measured and evaluated using a wide range of performance indicators. The overall accuracy of 99.7% was found to be superior to that of previously trained deep learning models.
In eukaryotes, genes produce a variety of distinct RNA isoforms, each with potentially unique protein products, coding potential or regulatory signals such as poly(A) tail and nucleotide modifications. Assessing the kinetics of RNA isoform metabolism, such as transcription and decay rates, is essential for unraveling gene regulation. However, it is currently impeded by lack of methods that can differentiate between individual isoforms. Here, we introduce RNAkinet, a deep convolutional and recurrent neural network, to detect nascent RNA molecules following metabolic labeling with the nucleoside analog 5-ethynyl uridine and long-read, direct RNA sequencing with nanopores. RNAkinet processes electrical signals from nanopore sequencing directly and distinguishes nascent from pre-existing RNA molecules. Our results show that RNAkinet prediction performance generalizes in various cell types and organisms and can be used to quantify RNA isoform half-lives. RNAkinet is expected to enable the identification of the kinetic parameters of RNA isoforms and to facilitate studies of RNA metabolism and the regulatory elements that influence it.
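The abstract does not spell out how classified reads translate into half-lives; a standard first-order decay model is one way to do it (an assumption here, not necessarily RNAkinet's method). If pre-existing molecules decay exponentially, the fraction of an isoform's reads still classified as pre-existing after labeling time t is exp(-k·t), from which the half-life ln(2)/k follows:

```python
import math

# Illustrative first-order decay estimate (an assumption, not RNAkinet's
# internal code): the fraction of reads still pre-existing after labeling
# time t is exp(-k*t), so k = -ln(fraction)/t and t_half = ln(2)/k.
def half_life(pre_existing_fraction, labeling_hours):
    k = -math.log(pre_existing_fraction) / labeling_hours  # decay rate per hour
    return math.log(2) / k

# If half the reads of an isoform are classified as pre-existing after
# 4 h of 5-ethynyl-uridine labeling, the implied half-life is 4 h.
print(half_life(0.5, 4.0))  # 4.0
```

The same call with a pre-existing fraction of 0.25 at 4 h gives a 2 h half-life, since two half-lives have elapsed.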
- Publication type
- journal articles MeSH
The accurate identification of the primary tumor origin in metastatic cancer cases is crucial for guiding treatment decisions and improving patient outcomes. Copy number alterations (CNAs) and copy number variation (CNV) have emerged as valuable genomic markers for predicting the origin of metastases. However, current models that predict cancer type based on CNV or CNA suffer from low AUC values. To address this challenge, we employed a cutting-edge neural network approach utilizing a dataset comprising CNA profiles from twenty different cancer types. We developed two workflows: the first evaluated the performance of two deep neural networks-one ReLU-based and the other a 2D convolutional network. In the second workflow, we stratified cancer types based on anatomical and physiological classifications, constructing shallow neural networks to differentiate between cancer types within the same cluster. Both approaches demonstrated high AUC values, with deep neural networks achieving a precision of 60%, suggesting a mathematical relationship between CNV type, location, and cancer type. Our findings highlight the potential of using CNA/CNV to aid pathologists in accurately identifying cancer origins with accessible clinical tests.
- Publication type
- journal articles MeSH
OBJECTIVE: The aim of this work was to assemble a large annotated dataset of bitewing radiographs and to use convolutional neural networks to automate the detection of dental caries in bitewing radiographs with human-level performance. MATERIALS AND METHODS: A dataset of 3989 bitewing radiographs was created, and 7257 carious lesions were annotated using minimal bounding boxes. The dataset was then divided into 3 parts for the training (70%), validation (15%), and testing (15%) of multiple object detection convolutional neural networks (CNN). The tested CNN architectures included YOLOv5, Faster R-CNN, RetinaNet, and EfficientDet. To further improve the detection performance, model ensembling was used, and nested predictions were removed during post-processing. The models were compared in terms of the F1 score and average precision (AP) with various thresholds of the intersection over union (IoU). RESULTS: The twelve tested architectures had F1 scores of 0.72-0.76. Their performance was improved by ensembling, which increased the F1 score to 0.79-0.80. The best-performing ensemble detected caries with a precision of 0.83, recall of 0.77, F1 score of 0.80, and AP of 0.86 at IoU=0.5. Small carious lesions were predicted with slightly lower accuracy (AP 0.82) than medium or large lesions (AP 0.88). CONCLUSIONS: The trained ensemble of object detection CNNs detected caries with satisfactory accuracy and performed at least as well as experienced dentists (see companion paper, Part II). The performance on small lesions was likely limited by inconsistencies in the training dataset. CLINICAL SIGNIFICANCE: Caries can be automatically detected using convolutional neural networks. However, detecting incipient carious lesions remains challenging.
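The two metrics reported above are straightforward to reproduce: the ensemble's F1 score follows directly from its precision and recall, and IoU is the box-overlap criterion used to decide whether a predicted bounding box matches an annotated lesion. A minimal, generic implementation (not the authors' evaluation code):

```python
# Checking the reported metrics: F1 from the ensemble's precision/recall,
# and the IoU overlap criterion used for matching predicted to annotated boxes.
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2). Returns intersection area over union area."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(round(f1(0.83, 0.77), 2))             # 0.8, matching the reported ensemble F1
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 0.143 (25 / 175)
```

Note that the 0.5 IoU threshold at which the results are reported is a lenient overlap requirement; a pair of 10x10 boxes offset by half their width, as above, would not count as a match.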
- MeSH
- deep learning * MeSH
- humans MeSH
- dental caries susceptibility MeSH
- neural networks MeSH
- dental caries * diagnostic imaging MeSH
- Check Tag
- humans MeSH
- Publication type
- journal articles MeSH
BACKGROUND: Recently, deep neural networks have been successfully applied in many biological fields. In 2020, the deep learning model AlphaFold won the protein folding competition with predicted structures within the error tolerance of experimental methods. However, this solution to the most prominent bioinformatic challenge of the past 50 years was possible only thanks to a carefully curated benchmark of experimentally determined protein structures. In genomics, we face similar challenges (annotation of genomes and identification of functional elements), but we currently lack benchmarks comparable to the protein folding competition. RESULTS: Here we present a collection of curated and easily accessible sequence classification datasets in the field of genomics. The proposed collection is based on a combination of novel datasets constructed by mining publicly available databases and existing datasets obtained from published articles. The collection currently contains nine datasets that focus on regulatory elements (promoters, enhancers, open chromatin regions) from three model organisms: human, mouse, and roundworm. A simple convolutional neural network is also included in the repository and can be used as a baseline model. The benchmarks and the baseline model are distributed as the Python package 'genomic-benchmarks', and the code is available at https://github.com/ML-Bioinfo-CEITEC/genomic_benchmarks . CONCLUSIONS: Deep learning techniques have revolutionized many biological fields, largely thanks to carefully curated benchmarks. For the field of genomics, we propose a collection of benchmark datasets for the classification of genomic sequences with an interface for the most commonly used deep learning libraries, an implementation of a simple neural network, and a training framework that can be used as a starting point for future research.
The main aim of this effort is to create a repository for shared datasets that will make machine learning for genomics more comparable and reproducible while reducing the overhead of researchers who want to enter the field, leading to healthy competition and new discoveries.
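Before any CNN baseline like the one shipped with the benchmarks can be trained, raw DNA sequences must be turned into numeric tensors; the usual choice is one-hot encoding, where each position becomes a length-4 vector over A/C/G/T. The snippet below is a generic sketch of that preprocessing step, not the package's own implementation:

```python
import numpy as np

# Generic one-hot encoding of DNA for a sequence-classification CNN
# (an illustrative sketch, not the genomic-benchmarks package's own code).
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """Encode a DNA string as a (length, 4) float array."""
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in BASES:            # ambiguous bases (e.g. N) stay all-zero
            out[i, BASES[base]] = 1.0
    return out

x = one_hot("ACGTN")
print(x.shape)        # (5, 4)
print(float(x.sum())) # 4.0 -- the N position contributes an all-zero row
```

A 1D convolution sliding over the first axis of such an array then acts as a learnable motif detector, which is why this encoding is the standard input format for genomic CNN baselines.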
Parkinson's disease dysgraphia (PDYS), one of the earliest signs of Parkinson's disease (PD), has been researched as a promising biomarker of PD and as the target of a noninvasive and inexpensive approach to monitoring the progress of the disease. However, although several approaches to supportive PDYS diagnosis have been proposed (mainly based on handcrafted features (HF) extracted from online handwriting or on deep neural networks), it remains unclear which approach provides the highest discrimination power and how these approaches transfer between different datasets and languages. This study aims to compare classification performance based on two types of features: features automatically extracted by a pretrained convolutional neural network (CNN) and HF designed by human experts. Both approaches are evaluated on a multilingual dataset collected from 143 PD patients and 151 healthy controls in the Czech Republic, United States, Colombia, and Hungary. The subjects performed the spiral drawing task (SDT; a language-independent task) and the sentence writing task (SWT; a language-dependent task). Models based on logistic regression and gradient boosting were trained in several scenarios, specifically single language (SL), leave one language out (LOLO), and all languages combined (ALC). We found that the HF slightly outperformed the CNN-extracted features in all considered evaluation scenarios for the SWT. In detail, the following balanced accuracy (BACC) scores were achieved: SL-0.65 (HF), 0.58 (CNN); LOLO-0.65 (HF), 0.57 (CNN); and ALC-0.69 (HF), 0.66 (CNN). However, in the case of the SDT, features extracted by a CNN provided competitive results: SL-0.66 (HF), 0.62 (CNN); LOLO-0.56 (HF), 0.54 (CNN); and ALC-0.60 (HF), 0.60 (CNN). In summary, regarding the SWT, the HF outperformed the CNN-extracted features by over 6% (mean BACC of 0.66 for HF, and 0.60 for CNN).
In the case of the SDT, both feature sets provided almost identical classification performance (mean BACC of 0.60 for HF, and 0.58 for CNN).
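Balanced accuracy, the metric reported throughout the study above, is the mean of the per-class recalls, which makes it robust to unequal patient/control group sizes. A minimal generic implementation (not the authors' evaluation code):

```python
# Balanced accuracy (BACC): the mean of per-class recalls, so a classifier
# that only predicts the majority class scores 0.5 on a two-class problem
# regardless of how imbalanced the groups are.
def balanced_accuracy(y_true, y_pred):
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Toy example: 4 patients (label 1), 2 controls (label 0)
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]
print(balanced_accuracy(y_true, y_pred))  # (0.75 + 0.5) / 2 = 0.625
```

Plain accuracy on the same toy example would be 4/6 ≈ 0.67, inflated by the larger patient group; BACC weights both classes equally.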
- Publication type
- journal articles MeSH
The complex shape of embryonic cartilage represents a true challenge for phenotyping and basic understanding of skeletal development. X-ray computed microtomography (μCT) enables inspecting relevant tissues in all three dimensions; however, most 3D models are still created by manual segmentation, which is a time-consuming and tedious task. In this work, we utilised a convolutional neural network (CNN) to automatically segment the most complex cartilaginous system represented by the developing nasal capsule. The main challenges of this task stem from the large size of the image data (over a thousand pixels in each dimension) and a relatively small training database, including genetically modified mouse embryos, where the phenotype of the analysed structures differs from the norm. We propose a CNN-based segmentation model optimised for the large image size that we trained using a unique manually annotated database. The segmentation model was able to segment the cartilaginous nasal capsule with a median accuracy of 84.44% (Dice coefficient). The time necessary for segmentation of new samples shortened from approximately 8 h needed for manual segmentation to mere 130 s per sample. This will greatly accelerate the throughput of μCT analysis of cartilaginous skeletal elements in animal models of developmental diseases.
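The accuracy figure above is the Dice coefficient, the standard overlap score for segmentation: twice the intersection of the predicted and ground-truth masks divided by the sum of their sizes. A minimal generic implementation on binary masks (illustrative, not the authors' code):

```python
import numpy as np

# Dice coefficient for binary segmentation masks:
# 2|A∩B| / (|A| + |B|), where A is the prediction and B the ground truth.
def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```

Unlike plain pixel accuracy, Dice ignores the (typically vast) true-negative background of a μCT volume, which is why it is the usual choice for scoring sparse structures such as cartilage.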
Objective. Functional specialization is fundamental to neural information processing. Here, we study whether and how functional specialization emerges in artificial deep convolutional neural networks (CNNs) during a brain-computer interfacing (BCI) task. Approach. We trained CNNs to predict hand movement speed from intracranial electroencephalography (iEEG) and delineated how units across the different CNN hidden layers learned to represent the iEEG signal. Main results. We show that distinct, functionally interpretable neural populations emerged as a result of the training process. While some units became sensitive to either iEEG amplitude or phase, others showed bimodal behavior with significant sensitivity to both features. Pruning of highly sensitive units resulted in a steep drop of decoding accuracy not observed for pruning of less sensitive units, highlighting the functional relevance of the amplitude- and phase-specialized populations. Significance. We anticipate that emergent functional specialization as uncovered here will become a key concept in research towards interpretable deep learning for neuroscience and BCI applications.