BACKGROUND: This study develops a deep learning-based automated lesion segmentation model for whole-body 3D 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) with computed tomography (CT) images, agnostic to disease location and site. METHOD: A publicly available lesion-annotated dataset of 1014 whole-body FDG-PET/CT images was used to train, validate, and test (70:10:20) eight configurations with the 3D U-Net as the backbone architecture. The best-performing model on the test set was further evaluated on three unseen cohorts: osteosarcoma or neuroblastoma (OS cohort, n = 13), pediatric solid tumors (ST cohort, n = 14), and adult pheochromocytoma/paraganglioma (PHEO cohort, n = 40). Both lesion-level and patient-level statistical analyses were conducted to validate the performance of the model on the different cohorts. RESULTS: The best-performing 3D full-resolution nnUNet model achieved a lesion-level sensitivity and Dice similarity coefficient (DSC) of 71.70 % and 0.40 for the test set, 97.83 % and 0.73 for the ST cohort, 40.15 % and 0.36 for the OS cohort, and 78.37 % and 0.50 for the PHEO cohort. For the test set and the PHEO cohort, the model missed small-volume and lower-uptake lesions (p < 0.01), whereas no statistically significant differences (p > 0.05) were found in false-positive (FP) and false-negative lesion volume and uptake for the OS and ST cohorts. The predicted total lesion glycolysis was slightly higher than the ground truth because of FP calls, which experts can easily check and reject. CONCLUSION: The developed deep learning-based automated lesion segmentation AI model, which utilizes the 3D full-resolution configuration of the nnUNet framework, showed promising and reliable performance on whole-body FDG-PET/CT images.
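The lesion-level metrics reported in this abstract (sensitivity and DSC) can be sketched as follows. This is a minimal illustration on binary NumPy masks; the function names and the overlap-based detection criterion are assumptions for illustration, not the study's actual implementation:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * overlap / total if total > 0 else 1.0

def lesion_sensitivity(pred_mask, lesion_masks):
    """Fraction of ground-truth lesions touched by at least one predicted voxel."""
    detected = sum(1 for m in lesion_masks if np.logical_and(pred_mask, m).any())
    return detected / len(lesion_masks)
```

A lesion counted as detected whenever any predicted voxel overlaps it is one common lesion-level criterion; stricter variants require a minimum overlap fraction.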
- Keywords
- Artificial intelligence, Deep learning, FDG PET/CT, Oncology,
- MeSH
- Whole Body Imaging * methods MeSH
- Deep Learning * MeSH
- Child MeSH
- Adult MeSH
- Fluorodeoxyglucose F18 * MeSH
- Cohort Studies MeSH
- Middle Aged MeSH
- Humans MeSH
- Adolescent MeSH
- Neoplasms * diagnostic imaging MeSH
- Positron Emission Tomography Computed Tomography * methods MeSH
- Image Processing, Computer-Assisted * methods MeSH
- Check Tag
- Child MeSH
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Adolescent MeSH
- Male MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Validation Study MeSH
- Names of Substances
- Fluorodeoxyglucose F18 * MeSH
Large language models (LLMs) are artificial intelligence (AI)-based computational models designed to understand and generate human-like text. With billions of parameters, LLMs excel at identifying intricate language patterns, enabling remarkable performance across a variety of natural language processing (NLP) tasks. Since the introduction of transformer architectures, their text-generation capabilities have had a growing impact across industries by automating NLP tasks. In healthcare, they assist in diagnosing diseases, personalizing treatment plans, and managing patient data. In the automotive industry, they support predictive maintenance. They also power recommendation systems and consumer-behavior analysis, and in education they support researchers and offer personalized learning experiences. In finance and banking, LLMs are used for fraud detection, customer-service automation, and risk management. Across these industries, LLMs are driving significant advancements by automating tasks, improving accuracy, and providing deeper insights. Despite these advancements, LLMs face challenges such as ethical concerns, biases in training data, and significant computational resource requirements, which must be addressed to ensure fair and sustainable deployment. This study provides a comprehensive analysis of LLMs, their evolution, and their diverse applications across industries, offering researchers valuable insight into their transformative potential and accompanying limitations.
- Keywords
- LLMs, Large Language models, NLP, Transformers,
- MeSH
- Humans MeSH
- Industry * MeSH
- Artificial Intelligence * MeSH
- Large Language Models MeSH
- Natural Language Processing * MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
Accurate segmentation of biomedical time-series, such as intracardiac electrograms, is vital for understanding physiological states and supporting clinical interventions. Traditional rule-based and feature-engineering approaches often struggle with complex clinical patterns and noise. Recent deep learning advancements offer solutions, showing various benefits and drawbacks in segmentation tasks. This study evaluates five segmentation algorithms, from traditional rule-based methods to advanced deep learning models, using a unique clinical dataset of intracardiac signals from 100 patients. We compared a rule-based method, a support vector machine (SVM), a fully convolutional semantic segmentation network (U-Net), a region proposal network (Faster R-CNN), and a recurrent neural network for electrocardiographic signals (DENS-ECG). Notably, Faster R-CNN had never before been applied to 1D signal segmentation. Each model underwent Bayesian optimization to minimize hyperparameter bias. Results indicated that deep learning models outperformed traditional methods, with U-Net achieving the highest segmentation score of 88.9 % (root mean square errors for onset and offset of 8.43 ms and 7.49 ms), closely followed by DENS-ECG at 87.8 %. Faster R-CNN and SVM showed moderate performance, while the rule-based method had the lowest accuracy (77.7 %). U-Net and DENS-ECG excelled in capturing detailed features and handling noise, highlighting their potential for clinical application. Despite greater computational demands, their superior performance and diagnostic potential support further exploration in biomedical time-series analysis.
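The onset/offset root mean square errors quoted above can be computed as sketched below. This is a minimal illustration assuming matched arrays of boundary sample indices and a hypothetical sampling frequency; it is not the study's evaluation code:

```python
import numpy as np

def boundary_rmse(pred_bounds, true_bounds, fs=1000.0):
    """RMSE in milliseconds between predicted and reference segment boundaries.

    pred_bounds / true_bounds: matched sequences of sample indices
    (one entry per segment onset, or per offset).
    fs: sampling frequency in Hz (assumed value, not from the paper).
    """
    err_ms = (np.asarray(pred_bounds, dtype=float)
              - np.asarray(true_bounds, dtype=float)) * 1000.0 / fs
    return float(np.sqrt(np.mean(err_ms ** 2)))
```

Computing onset and offset RMSE separately, as the abstract reports (8.43 ms and 7.49 ms for U-Net), means calling this once per boundary type.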
- Keywords
- DENS-ECG, Electrophysiology Study, Faster R-CNN, Rule-based Delineation, Support Vector Machines, Time-series Segmentation, U-Net,
- MeSH
- Algorithms MeSH
- Bayes Theorem MeSH
- Deep Learning MeSH
- Electrocardiography * methods MeSH
- Humans MeSH
- Neural Networks, Computer MeSH
- Signal Processing, Computer-Assisted * MeSH
- Support Vector Machine MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
In cryo-electron microscopy, accurate particle localization and classification are imperative. Recent deep learning solutions, though successful, require extensive training datasets. The protracted generation time of physics-based models, often employed to produce these datasets, limits their broad applicability. We introduce FakET, a method based on neural style transfer, capable of simulating the forward operator of any cryo transmission electron microscope. It can be used to adapt a synthetic training dataset according to reference data producing high-quality simulated micrographs or tilt-series. To assess the quality of our generated data, we used it to train a state-of-the-art localization and classification architecture and compared its performance with a counterpart trained on benchmark data. Remarkably, our technique matches the performance, boosts data generation speed 750×, uses 33× less memory, and scales well to typical transmission electron microscope detector sizes. It leverages GPU acceleration and parallel processing. The source code is available at https://github.com/paloha/faket/.
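FakET is based on neural style transfer, whose core idea is matching feature statistics between generated and reference data, classically via Gram matrices. The sketch below shows that generic mechanism in NumPy on fixed feature maps; it is an assumption-laden illustration of the technique's principle, not FakET's actual loss or code:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, spatial...) feature map: channel co-activation statistics."""
    c = features.shape[0]
    flat = features.reshape(c, -1)
    return flat @ flat.T / flat.shape[1]

def style_loss(gen_features, ref_features):
    """Mean squared difference between Gram matrices of generated and reference features."""
    g_gen = gram_matrix(np.asarray(gen_features, dtype=float))
    g_ref = gram_matrix(np.asarray(ref_features, dtype=float))
    return float(np.mean((g_gen - g_ref) ** 2))
```

In an actual style-transfer pipeline this loss would be evaluated on learned network activations and minimized by gradient descent over the generated image.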
- Keywords
- CryoEM, CryoET, deep learning, domain adaptation, forward model, machine learning, neural style transfer, surrogate model, synthetic data generation, transmission electron microscope,
- MeSH
- Algorithms MeSH
- Deep Learning * MeSH
- Cryoelectron Microscopy * methods MeSH
- Image Processing, Computer-Assisted methods MeSH
- Software MeSH
- Electron Microscope Tomography * methods MeSH
- Publication type
- Journal Article MeSH
Predicting and quantifying phenotypic consequences of genetic variants in rare disorders is a major challenge, particularly pertinent for 'actionable' genes such as the thyroid hormone transporter MCT8 (encoded by the X-linked SLC16A2 gene), where loss-of-function (LoF) variants cause a rare neurodevelopmental and (treatable) metabolic disorder in males. The combination of deep phenotyping data with functional and computational tests and with outcomes in population cohorts enabled us to: (i) identify the genetic aetiology of divergent clinical phenotypes of MCT8 deficiency, with genotype-phenotype relationships present across survival and 24 out of 32 disease features; (ii) demonstrate a mild phenocopy in ~400,000 individuals with common genetic variants in MCT8; (iii) assess therapeutic effectiveness, which did not differ among LoF categories; (iv) advance structural insights into normal and mutated MCT8 by delineating seven critical functional domains; (v) create a pathogenicity-severity MCT8 variant classifier that accurately predicted pathogenicity (AUC: 0.91) and severity (AUC: 0.86) for 8151 variants. Our information-dense mapping provides a generalizable approach to advance multiple dimensions of rare genetic disorders.
- MeSH
- Deep Learning * MeSH
- Child MeSH
- Adult MeSH
- Phenotype * MeSH
- Genetic Variation MeSH
- Genetic Association Studies MeSH
- Genomics methods MeSH
- Thyroid Hormones metabolism genetics MeSH
- Humans MeSH
- X-Linked Intellectual Disability genetics metabolism MeSH
- Adolescent MeSH
- Loss of Function Mutation MeSH
- Child, Preschool MeSH
- Monocarboxylic Acid Transporters * genetics metabolism MeSH
- Severity of Illness Index MeSH
- Muscular Atrophy genetics metabolism pathology MeSH
- Muscle Hypotonia genetics metabolism MeSH
- Symporters * genetics metabolism MeSH
- Check Tag
- Child MeSH
- Adult MeSH
- Humans MeSH
- Adolescent MeSH
- Male MeSH
- Child, Preschool MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Names of Substances
- Thyroid Hormones MeSH
- Monocarboxylic Acid Transporters * MeSH
- SLC16A2 protein, human MeSH Browser
- Symporters * MeSH
Figure-ground organisation is a perceptual grouping mechanism for detecting objects and boundaries, essential for an agent interacting with the environment. Current figure-ground segmentation methods rely on classical computer vision or deep learning, requiring extensive computational resources, especially during training. Inspired by the primate visual system, we developed a bio-inspired perception system for the neuromorphic robot iCub. The model uses a hierarchical, biologically plausible architecture and event-driven vision to distinguish foreground objects from the background. Unlike classical approaches, event-driven cameras reduce data redundancy and computation. The system has been qualitatively and quantitatively assessed in simulations and with event-driven cameras on iCub in various scenarios. It successfully segments items in diverse real-world settings, showing comparable results to its frame-based version on simple stimuli and the Berkeley Segmentation dataset. This model enhances hybrid systems, complementing conventional deep learning models by processing only relevant data in Regions of Interest (ROI), enabling low-latency autonomous robotic applications.
- MeSH
- Deep Learning MeSH
- Humans MeSH
- Neural Networks, Computer MeSH
- Computer Simulation MeSH
- Robotics * instrumentation methods MeSH
- Animals MeSH
- Check Tag
- Humans MeSH
- Animals MeSH
- Publication type
- Journal Article MeSH
OBJECTIVES: Artificial Intelligence (AI), particularly deep learning, has significantly impacted healthcare, including dentistry, by improving diagnostics, treatment planning, and prognosis prediction. This systematic mapping review explores the current applications of deep learning in dentistry, offering a comprehensive overview of trends, models, and their clinical significance. MATERIALS AND METHODS: Following a structured methodology, relevant studies published from January 2012 to September 2023 were identified through database searches in PubMed, Scopus, and Embase. Key data, including clinical purpose, deep learning tasks, model architectures, and data modalities, were extracted for qualitative synthesis. RESULTS: From 21,242 screened studies, 1,007 were included. Of these, 63.5% targeted diagnostic tasks, primarily with convolutional neural networks (CNNs). Classification (43.7%) and segmentation (22.9%) were the main methods, and imaging data, such as cone-beam computed tomography and orthopantomograms, were used in 84.4% of cases. Most studies (95.2%) applied fully supervised learning, emphasizing the need for annotated data. Pathology (21.5%), radiology (17.5%), and orthodontics (10.2%) were prominent fields, with 24.9% of studies relating to more than one specialty. CONCLUSION: This review explores the advancements in deep learning in dentistry, particularly for diagnostics, and identifies areas for further improvement. While CNNs have been used successfully, it is essential to explore emerging model architectures, learning approaches, and ways to obtain diverse and reliable data. Furthermore, fostering trust among all stakeholders by advancing explainable AI and addressing ethical considerations is crucial for transitioning AI from research to clinical practice. CLINICAL RELEVANCE: This review offers a comprehensive overview of a decade of deep learning in dentistry, showcasing its significant growth in recent years.
By mapping its key applications and identifying research trends, it provides a valuable guide for future studies and highlights emerging opportunities for advancing AI-driven dental care.
- Keywords
- Artificial intelligence, Deep learning, Dentistry, Diagnostic imaging, Neural networks,
- MeSH
- Deep Learning * MeSH
- Humans MeSH
- Dentistry * MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Review MeSH
- Systematic Review MeSH
PURPOSE: Necrosis quantification in the neoadjuvant setting using pathology slide review is the most important validated prognostic marker in conventional osteosarcoma. Herein, we explored three deep-learning strategies on histology samples to predict outcome for osteosarcoma in the neoadjuvant setting. EXPERIMENTAL DESIGN: Our study relies on a training cohort from New York University (NYU; New York, NY) and an external cohort from Charles University (Prague, Czechia). We trained and validated the performance of a supervised approach that integrates neural network predictions of necrosis/tumor content and compared predicted overall survival (OS) using Kaplan-Meier curves. Furthermore, we explored morphology-based supervised and self-supervised approaches to determine whether intrinsic histomorphologic features could serve as a potential marker for OS in the neoadjuvant setting. RESULTS: Excellent correlation between the trained network and pathologists was obtained for the quantification of necrosis content (R2 = 0.899; r = 0.949; P < 0.0001). OS prediction cutoffs were consistent between pathologists and the neural network (22% and 30% of necrosis, respectively). The morphology-based supervised approach predicted OS; P = 0.0028, HR = 2.43 (1.10-5.38). The self-supervised approach corroborated the findings with clusters enriched in necrosis, fibroblastic stroma, and osteoblastic morphology associating with better OS [log-2 hazard ratio (lg2 HR); -2.366; -1.164; -1.175; 95% confidence interval, (-2.996 to -0.514)]. Viable/partially viable tumor and fat necrosis were associated with worse OS [lg2 HR; 1.287; 0.822; 0.828; 95% confidence interval, (0.38-1.974)]. CONCLUSIONS: Neural networks can be used to automatically estimate the necrosis to tumor ratio, a quantitative metric predictive of survival. Furthermore, we identified alternate histomorphologic biomarkers specific to the necrotic and tumor regions, which could serve as predictors.
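The necrosis-to-tumor ratio that the study's network estimates automatically can be illustrated as a simple aggregation over per-tile classifications. The label names and tile-level setup below are hypothetical placeholders, not the study's actual classes or pipeline:

```python
def necrosis_ratio(tile_labels):
    """Percent necrosis among slide tiles classified as tumor or necrosis.

    tile_labels: iterable of per-tile class strings; 'necrosis',
    'viable_tumor', and 'other' are illustrative names only.
    Tiles labeled 'other' (e.g. stroma, background) are excluded
    from the denominator.
    """
    necrotic = sum(1 for t in tile_labels if t == "necrosis")
    relevant = sum(1 for t in tile_labels if t in ("necrosis", "viable_tumor"))
    return 100.0 * necrotic / relevant if relevant else 0.0
```

A survival cutoff such as the 22-30 % range reported above would then be applied to this ratio to stratify patients into response groups.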
- MeSH
- Deep Learning * MeSH
- Adult MeSH
- Kaplan-Meier Estimate MeSH
- Convolutional Neural Networks MeSH
- Humans MeSH
- Adolescent MeSH
- Young Adult MeSH
- Bone Neoplasms * mortality pathology therapy MeSH
- Necrosis pathology MeSH
- Neoadjuvant Therapy MeSH
- Neural Networks, Computer * MeSH
- Osteosarcoma * mortality pathology therapy MeSH
- Prognosis MeSH
- Check Tag
- Adult MeSH
- Humans MeSH
- Adolescent MeSH
- Young Adult MeSH
- Male MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
Purpose To develop a deep learning tool for the automatic segmentation of the spinal cord and intramedullary lesions in spinal cord injury (SCI) on T2-weighted MRI scans. Materials and Methods This retrospective study included MRI data acquired between July 2002 and February 2023. The data consisted of T2-weighted MRI scans acquired using different scanner manufacturers with various image resolutions (isotropic and anisotropic) and orientations (axial and sagittal). Patients had different lesion etiologies (traumatic, ischemic, and hemorrhagic) and lesion locations across the cervical, thoracic, and lumbar spine. A deep learning model, SCIseg (which is open source and accessible through the Spinal Cord Toolbox, version 6.2 and above), was trained in a three-phase process involving active learning for the automatic segmentation of intramedullary SCI lesions and the spinal cord. The segmentations from the proposed model were visually and quantitatively compared with those from three other open-source methods (PropSeg, DeepSeg, and contrast-agnostic, all part of the Spinal Cord Toolbox). The Wilcoxon signed rank test was used to compare quantitative MRI biomarkers of SCI (lesion volume, lesion length, and maximal axial damage ratio) derived from the manual reference standard lesion masks and biomarkers obtained automatically with SCIseg segmentations. Results The study included 191 patients with SCI (mean age, 48.1 years ± 17.9 [SD]; 142 [74%] male patients). SCIseg achieved a mean Dice score of 0.92 ± 0.07 and 0.61 ± 0.27 for spinal cord and SCI lesion segmentation, respectively. There was no evidence of a difference between lesion length (P = .42) and maximal axial damage ratio (P = .16) computed from manually annotated lesions and the lesion segmentations obtained using SCIseg. Conclusion SCIseg accurately segmented intramedullary lesions on a diverse dataset of T2-weighted MRI scans and automatically extracted clinically relevant lesion characteristics. 
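One of the lesion biomarkers extracted from the SCIseg masks, the maximal axial damage ratio, can be sketched as the largest per-slice fraction of the cord cross-section occupied by lesion. The slice-axis convention and function name below are assumptions for illustration, not the Spinal Cord Toolbox implementation:

```python
import numpy as np

def max_axial_damage_ratio(lesion_mask, cord_mask):
    """Maximal axial damage ratio from 3D binary masks.

    lesion_mask / cord_mask: arrays with axis 0 as the slice
    (axial) axis, an assumed orientation. Returns the maximum
    over slices of lesion area divided by cord area.
    """
    ratios = []
    for lesion_slice, cord_slice in zip(lesion_mask, cord_mask):
        cord_area = cord_slice.sum()
        if cord_area:  # skip slices with no cord
            ratios.append(lesion_slice.sum() / cord_area)
    return float(max(ratios)) if ratios else 0.0
```

Lesion volume and lesion length, the other two biomarkers compared with the Wilcoxon signed rank test, reduce similarly to voxel counts and slice extents of the lesion mask.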
Published under a CC BY 4.0 license.
- Keywords
- Convolutional Neural Network (CNN), MR Imaging, Segmentation, Spinal Cord, Supervised Learning, Trauma,
- MeSH
- Deep Learning * MeSH
- Adult MeSH
- Image Interpretation, Computer-Assisted methods MeSH
- Middle Aged MeSH
- Humans MeSH
- Magnetic Resonance Imaging * methods MeSH
- Spinal Cord Injuries * diagnostic imaging pathology MeSH
- Retrospective Studies MeSH
- Aged MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Aged MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
Radiologists use X-ray, magnetic resonance imaging, and computed tomography images to diagnose bone cancer. Manual methods are labor-intensive and may require specialized knowledge, so an automated process for distinguishing malignant from healthy bone is essential. Cancerous bone has a different texture than bone in unaffected areas. Diagnosing hematological illnesses relies on correctly labeling and categorizing nucleated cells in the bone marrow, yet timely diagnosis and treatment are hampered by the sensitive and time-consuming manual identification of specimens by pathologists. The development of artificial intelligence, particularly machine and deep learning, has significantly bolstered our ability to evaluate and identify these more complicated illnesses. Nevertheless, further research and development are needed to enhance cancer cell identification and lower false-alarm rates. To address this problem, we built a deep learning model for morphological analysis. This paper introduces a novel deep convolutional neural network architecture in which hybrid multi-objective and category-based optimization algorithms adaptively optimize the hyperparameters. Using processed cell images as input, the proposed model is trained with an optimized attention-based multi-scale convolutional neural network to identify the type of cancer cells in the bone marrow. Extensive experiments were run on publicly available datasets, with results measured and evaluated using a wide range of performance indicators. The overall accuracy of 99.7% was superior to that of previously trained deep learning models.
- Keywords
- Attention-based multi-scale convolutional neural network, Automated diagnosis, Bone marrow, Deep convolutional neural networks, Radiologists,
- MeSH
- Algorithms MeSH
- Deep Learning * MeSH
- Bone Marrow diagnostic imaging pathology MeSH
- Humans MeSH
- Bone Neoplasms pathology diagnostic imaging diagnosis MeSH
- Neural Networks, Computer * MeSH
- Image Processing, Computer-Assisted methods MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH