Introduction: Arterial brain vessel assessment is crucial for the diagnostic process in patients with cerebrovascular disease. Non-invasive neuroimaging techniques, such as time-of-flight (TOF) magnetic resonance angiography (MRA), are applied in the clinical routine to depict arteries; however, they are assessed only visually. Fully automated vessel segmentation integrated into the clinical routine could accelerate the time-critical diagnosis of vessel abnormalities and might support the identification of valuable biomarkers for cerebrovascular events. In the present work, we developed and validated a new deep learning model for vessel segmentation, coined BRAVE-NET, on a large aggregated dataset of patients with cerebrovascular diseases.
Methods: BRAVE-NET is a multiscale 3-D convolutional neural network (CNN) developed on a dataset of 264 patients from three different studies enrolling patients with cerebrovascular diseases. A context path, dually capturing high- and low-resolution volumes, and deep supervision were implemented. The BRAVE-NET model was compared to a baseline Unet model and to variants with only a context path or only deep supervision, respectively. The models were developed and validated using high-quality manual labels as ground truth. In addition to precision and recall, performance was assessed quantitatively by Dice coefficient (DSC), average Hausdorff distance (AVD), and 95th-percentile Hausdorff distance (95HD), as well as by visual qualitative rating.
Results: BRAVE-NET surpassed the other models for arterial brain vessel segmentation with a DSC = 0.931, AVD = 0.165, and 95HD = 29.153. The BRAVE-NET model was also the most resistant toward false labelings, as revealed by the visual analysis. The performance improvement is attributed primarily to the integration of the multiscale context path into the 3-D Unet and, to a lesser extent, to the deep supervision architectural component.
Discussion: We present a new state-of-the-art of arterial brain vessel segmentation tailored to cerebrovascular pathology. We provide an extensive experimental validation of the model using a large aggregated dataset encompassing a large variability of cerebrovascular disease and an external set of healthy volunteers. The framework provides the technological foundation for improving the clinical workflow and can serve as a biomarker extraction tool in cerebrovascular diseases.
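The overlap and distance metrics reported above (DSC, 95HD) can be computed between a predicted and a ground-truth binary mask along these lines (a minimal NumPy/SciPy sketch over foreground voxel coordinates; the tiny 2-D masks are illustrative, not data from the paper):

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_coefficient(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def hd95(pred, truth):
    """95th-percentile symmetric Hausdorff distance between the
    foreground voxel sets of two binary masks."""
    a = np.argwhere(pred)   # coordinates of foreground voxels
    b = np.argwhere(truth)
    d = cdist(a, b)         # all pairwise distances
    # directed distances: each point to the nearest point of the other set
    d_ab = d.min(axis=1)
    d_ba = d.min(axis=0)
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

pred  = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 3:7] = True
print(round(dice_coefficient(pred, truth), 3))  # 0.75
```

For large 3-D volumes, production implementations compute distances only on surface voxels rather than the full foreground, for memory reasons.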
- Keywords
- UNET, artificial intelligence (AI), cerebrovascular disease (CVD), machine learning, segmentation (image processing),
- Publication type
- Journal Article MeSH
OBJECTIVE: The objective of this study was to develop a deep learning model for automated pituitary adenoma segmentation in MRI scans for stereotactic radiosurgery planning and to assess its accuracy and efficiency in clinical settings.
METHODS: An nnU-Net-based model was trained on MRI scans with expert segmentations of 582 patients treated with the Leksell Gamma Knife over the course of 12 years. The accuracy of the model was evaluated by a human expert on a separate dataset of 146 previously unseen patients. The primary outcome was the comparison of expert ratings between the predicted segmentations and a control group consisting of original manual segmentations. Secondary outcomes were the influence of tumor volume, previous surgery, previous stereotactic radiosurgery (SRS), and endocrinological status on expert ratings; performance in a subgroup of nonfunctioning macroadenomas (measuring 1000-4000 mm³) without previous surgery and/or radiosurgery; the influence of using additional MRI modalities as model input; and the time cost reduction.
RESULTS: The model achieved Dice similarity coefficients of 82.3%, 63.9%, and 79.6% for tumor, normal gland, and optic nerve, respectively. A human expert rated 20.6% of the segmentations as applicable in treatment planning without any modifications, 52.7% as applicable with minor manual modifications, and 26.7% as inapplicable. The ratings for predicted segmentations were lower than for the control group of original segmentations (p < 0.001). Larger tumor volume, history of previous radiosurgery, and nonfunctioning pituitary adenoma were associated with better expert ratings (p = 0.005, p = 0.007, and p < 0.001, respectively). In the subgroup without previous surgery, although expert ratings were more favorable, the association did not reach statistical significance (p = 0.074). In the subgroup of noncomplex cases (n = 9), 55.6% of the segmentations were rated as applicable without any manual modifications, and none were rated as inapplicable. Manually improving inaccurate segmentations instead of creating them from scratch led to a 53.6% reduction in time cost (p < 0.001).
CONCLUSIONS: The majority of predicted segmentations were applicable for treatment planning with either no or minor manual modifications, demonstrating a significant increase in the efficiency of the planning process. The predicted segmentations can be loaded into the planning software used in clinical practice. The authors discuss considerations of the clinical utility of automated segmentation models, their integration within established clinical workflows, and directions for future research.
- Keywords
- Leksell Gamma Knife, automated segmentation, machine learning, pituitary adenoma, pituitary surgery, stereotactic radiosurgery,
- MeSH
- Adenoma * diagnostic imaging radiotherapy surgery MeSH
- Deep Learning * MeSH
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Magnetic Resonance Imaging methods MeSH
- Pituitary Neoplasms * diagnostic imaging radiotherapy surgery MeSH
- Radiosurgery * methods MeSH
- Aged MeSH
- Artificial Intelligence * MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Aged MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
Biocompatibility testing of new materials is often performed in vitro by measuring the growth rate of mammalian cancer cells in time-lapse images acquired by phase contrast microscopes. The growth rate is measured by tracking cell coverage, which requires an accurate automatic segmentation method. However, cancer cells have irregular shapes that change over time, the mottled background pattern is partially visible through the cells, and the images contain artifacts such as halos. We developed a novel algorithm for cell segmentation that copes with these challenges. It is based on temporal differences of consecutive images and a combination of thresholding, blurring, and morphological operations. We tested the algorithm on images of four cell types acquired by two different microscopes, evaluated the precision of segmentation against manual segmentation performed by a human operator, and finally provided a comparison with other freely available methods. We also propose a new, fully automated method for measuring the cell growth rate, based on fitting the coverage curve with the Verhulst population model. The algorithm is fast and shows accuracy comparable with manual segmentation. Most notably, it can correctly separate live from dead cells.
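Fitting the Verhulst (logistic) population model to a measured coverage curve, as described above, can be sketched with `scipy.optimize.curve_fit` (the synthetic data and parameter values are illustrative assumptions, not results from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def verhulst(t, K, P0, r):
    """Logistic (Verhulst) growth: carrying capacity K, initial
    coverage P0, intrinsic growth rate r."""
    return K * P0 * np.exp(r * t) / (K + P0 * (np.exp(r * t) - 1.0))

# synthetic coverage measurements (fraction of the image covered by cells)
t = np.linspace(0, 48, 25)                       # hours
rng = np.random.default_rng(0)
coverage = verhulst(t, 0.9, 0.05, 0.15) + rng.normal(0.0, 0.01, t.size)

# fit; the growth rate r is the quantity of interest for biocompatibility
(K, P0, r), _ = curve_fit(verhulst, t, coverage, p0=(1.0, 0.1, 0.1))
print(f"carrying capacity={K:.2f}, growth rate={r:.3f} / h")
```

In practice the coverage values would come from the segmentation step, one per time-lapse frame, before the curve fit is applied.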
- Keywords
- biocompatibility assessment, cytotoxicity testing, phase contrast microscopy, segmentation, time-lapse,
- MeSH
- Algorithms MeSH
- Artifacts MeSH
- Time-Lapse Imaging * MeSH
- Cytological Techniques instrumentation methods MeSH
- Humans MeSH
- Microscopy * MeSH
- Pattern Recognition, Automated MeSH
- Animals MeSH
- Check Tag
- Humans MeSH
- Animals MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
BACKGROUND: Segmentation of pre-operative low-grade gliomas (LGGs) from magnetic resonance imaging is a crucial step for studying imaging biomarkers. However, segmentation of LGGs is particularly challenging because they rarely enhance after gadolinium administration. Like other gliomas, they have irregular tumor shape, heterogeneous composition, ill-defined tumor boundaries, and a limited number of image types. To overcome these challenges, we propose a semi-automated segmentation method that relies only on T2-weighted (T2W) and, optionally, post-contrast T1-weighted (T1W) images.
METHODS: First, the user draws a region-of-interest (ROI) that completely encloses the tumor and some normal tissue. Second, a normal brain atlas and post-contrast T1W images are registered to the T2W images. Third, the posterior probability of each pixel/voxel belonging to normal or abnormal tissue is calculated based on information derived from the atlas and the ROI. Finally, geodesic active contours use the probability map of the tumor to shrink the ROI until optimal tumor boundaries are found. This method was validated against the true segmentation (TS) of 30 LGG patients in both 2D (one slice) and 3D. The TS was obtained from manual segmentations of three experts using the Simultaneous Truth and Performance Level Estimation (STAPLE) software. Dice and Jaccard indices and other descriptive statistics were computed for the proposed method, as well as for the experts' segmentations versus the TS. We also tested the method on the BraTS datasets, which supply expert segmentations.
RESULTS AND DISCUSSION: For 2D segmentation vs. TS, the mean Dice index was 0.90 ± 0.06 (standard deviation), sensitivity was 0.92, and specificity was 0.99. For 3D segmentation vs. TS, the mean Dice index was 0.89 ± 0.06, sensitivity was 0.91, and specificity was 0.99. The automated results are comparable with the experts' manual segmentation results.
CONCLUSIONS: We present an accurate, robust, efficient, and reproducible segmentation method for pre-operative LGGs.
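STAPLE estimates a consensus "true segmentation" probabilistically, jointly inferring per-rater sensitivity and specificity via an EM algorithm. A much cruder majority-vote fusion of several expert masks, shown here only to illustrate the idea of deriving a consensus truth from multiple raters (it is not STAPLE itself), looks like:

```python
import numpy as np

def majority_vote(masks):
    """Fuse several binary expert segmentations into a consensus mask:
    a voxel is foreground if more than half the raters agree.
    (A crude stand-in for STAPLE, which additionally estimates
    per-rater performance with an EM algorithm.)"""
    stack = np.stack([m.astype(bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

def jaccard(a, b):
    """Jaccard (intersection-over-union) index between binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# three hypothetical expert masks for the same slice
e1 = np.array([[1, 1, 0], [1, 1, 0]])
e2 = np.array([[1, 1, 1], [1, 0, 0]])
e3 = np.array([[1, 1, 0], [0, 1, 0]])
consensus = majority_vote([e1, e2, e3])
print(consensus.astype(int))
print(round(jaccard(e1, consensus), 3))
```

Each proposed or expert segmentation can then be scored against the consensus with Dice or Jaccard, as done in the study.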
- MeSH
- Algorithms MeSH
- Glioma pathology surgery MeSH
- Humans MeSH
- Magnetic Resonance Imaging * methods MeSH
- Brain Neoplasms pathology surgery MeSH
- Image Processing, Computer-Assisted * MeSH
- Sensitivity and Specificity MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
- Research Support, N.I.H., Extramural MeSH
This study aims to develop a fully automated, imaging-protocol-independent system for pituitary adenoma segmentation from magnetic resonance imaging (MRI) scans that can work without user interaction, and to evaluate its accuracy and utility for clinical applications. We trained two independent artificial neural networks on MRI scans of 394 patients. The scans were acquired according to various imaging protocols over the course of 11 years on 1.5T and 3T MRI systems. The segmentation model assigned a class label to each input pixel (pituitary adenoma, internal carotid artery, normal pituitary gland, background). The slice segmentation model classified slices as clinically relevant (structures of interest in the slice) or irrelevant (anterior or posterior to the sella turcica). We used MRI data of another 99 patients to evaluate the performance of the model during training. We validated the model on a prospective cohort of 28 patients, achieving Dice coefficients of 0.910, 0.719, and 0.240 for the tumour, internal carotid artery, and normal gland labels, respectively. The slice selection model achieved 82.5% accuracy, 88.7% sensitivity, 76.7% specificity, and an AUC of 0.904. A human expert rated 71.4% of the segmentation results as accurate, 21.4% as slightly inaccurate, and 7.1% as coarsely inaccurate. Our model achieved good results, comparable with recent works of other authors, on the largest dataset to date and generalized well across various imaging protocols. We discuss future clinical applications and their practical considerations; models and frameworks for clinical use have yet to be developed and evaluated.
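The slice-selection metrics reported above (accuracy, sensitivity, specificity, AUC) are standard binary-classification quantities; a sketch of how they are derived from per-slice labels and model scores (the labels and scores below are made up for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# hypothetical per-slice ground truth (1 = clinically relevant) and model scores
y_true  = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
y_score = np.array([0.9, 0.8, 0.75, 0.3, 0.2, 0.4, 0.1, 0.6, 0.85, 0.05])
y_pred  = (y_score >= 0.5).astype(int)           # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                     # recall on relevant slices
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)             # threshold-independent
print(accuracy, sensitivity, specificity, round(auc, 3))
```

Unlike accuracy, sensitivity, and specificity, the AUC is computed from the raw scores, so it summarizes performance across all possible thresholds.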
- Keywords
- Image segmentation, Machine learning, Magnetic resonance imaging, Pituitary adenoma,
- MeSH
- Adenoma * diagnostic imaging surgery MeSH
- Humans MeSH
- Magnetic Resonance Imaging MeSH
- Pituitary Neoplasms * diagnostic imaging surgery MeSH
- Neural Networks, Computer MeSH
- Image Processing, Computer-Assisted methods MeSH
- Prospective Studies MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
Microscopic image analysis plays a significant role in initial leukemia screening and its efficient diagnostics. Since present conventional methodologies partly rely on manual examination, which is time-consuming and depends greatly on the experience of domain experts, automated leukemia detection opens up new possibilities to minimize human intervention and provide more accurate clinical information. This paper proposes a novel approach based on conventional digital image processing techniques and machine learning algorithms to automatically identify acute lymphoblastic leukemia from peripheral blood smear images. To overcome the greatest challenges in the segmentation phase, we implemented extensive pre-processing and introduced a three-phase filtration algorithm to achieve the best segmentation results. Moreover, sixteen robust features were extracted from the images in the way that hematological experts do, which significantly increased the capability of the classifiers to recognize leukemic cells in microscopic images. To perform the classification, we applied two traditional machine learning classifiers, the artificial neural network and the support vector machine. Both methods reached a specificity of 95.31%, while the sensitivity of the support vector machine and the artificial neural network reached 98.25% and 100%, respectively.
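The two-classifier setup described above (SVM and a small neural network trained on a fixed set of hand-crafted features) can be sketched with scikit-learn; the synthetic 16-feature dataset below merely stands in for the paper's extracted cell features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# synthetic stand-in for sixteen hand-crafted cell features
X, y = make_classification(n_samples=400, n_features=16,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# scaling matters for both SVMs and neural networks
svm = make_pipeline(StandardScaler(), SVC())
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,),
                                  max_iter=1000, random_state=0))
for name, clf in [("SVM", svm), ("ANN", ann)]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```

With fixed-length feature vectors, such classical classifiers remain competitive alternatives to end-to-end deep models, especially on small datasets.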
- Keywords
- acute leukemia, automated leukemia detection, blood smear image analysis, cell segmentation, image processing, leukemic cell identification, machine learning,
- Publication type
- Journal Article MeSH
The number of publications describing chemical structures has increased steadily over the last decades. However, the majority of published chemical information is currently not available in machine-readable form in public databases. It remains a challenge to automate the process of information extraction in a way that requires less manual intervention, especially the mining of chemical structure depictions. As an open-source platform that leverages recent advancements in deep learning, computer vision, and natural language processing, DECIMER.ai (Deep lEarning for Chemical IMagE Recognition) strives to automatically segment, classify, and translate chemical structure depictions from the printed literature. The segmentation and classification tools are the only openly available packages of their kind, and the optical chemical structure recognition (OCSR) core application yields outstanding performance on all benchmark datasets. The source code, the trained models and the datasets developed in this work have been published under permissive licences. An instance of the DECIMER web application is available at https://decimer.ai .
- Publication type
- Journal Article MeSH
Time-lapse imaging is a rich data source offering potential kinetic information of cellular activity and behavior. Tracking and extracting measurements of objects from time-lapse datasets are challenges that result from the complexity and dynamics of each object's motion and intensity or the appearance of new objects in the field of view. A wide range of strategies for proper data sampling, object detection, image analysis, and post-analysis interpretation are available. Theory and methods for single-particle tracking, spot detection, and object linking are discussed in this unit, as well as examples with step-by-step procedures for utilizing semi-automated software and visualization tools for achieving tracking results and interpreting this output.
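Frame-to-frame object linking, one of the strategies discussed in the unit, can be sketched as a greedy nearest-neighbour assignment. This is a deliberate simplification: production trackers typically use global assignment (e.g. the Hungarian algorithm) and explicit models for appearing and disappearing objects. All names and coordinates below are illustrative:

```python
import numpy as np

def link_nearest(prev_pts, next_pts, max_dist=5.0):
    """Greedily link detections between two consecutive frames.
    Returns a list of (prev_index, next_index) pairs; detections
    farther than max_dist from any free candidate are left unlinked
    (treated as disappeared or newly appeared objects)."""
    links, taken = [], set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        d[list(taken)] = np.inf          # each detection may be linked once
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            links.append((i, j))
            taken.add(j)
    return links

prev_pts = np.array([[0.0, 0.0], [10.0, 10.0]])
next_pts = np.array([[1.0, 0.5], [10.5, 9.0], [30.0, 30.0]])  # third object is new
print(link_nearest(prev_pts, next_pts))  # [(0, 0), (1, 1)]
```

The `max_dist` gate encodes the sampling-frequency consideration from the text: if frames are acquired too far apart, objects move farther than the gate and links are lost.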
- Keywords
- digital imaging, image analysis, image segmentation methods, object linking, object tracking, sampling frequency, single-particle tracking,
- MeSH
- Time-Lapse Imaging MeSH
- Chlamydomonas cytology MeSH
- Zebrafish MeSH
- Fluorescence MeSH
- Blood Cells cytology MeSH
- RNA, Small Interfering metabolism MeSH
- Regional Blood Flow MeSH
- Pattern Recognition, Automated methods MeSH
- Imaging, Three-Dimensional * MeSH
- Animals MeSH
- Check Tag
- Animals MeSH
- Publication type
- Journal Article MeSH
- Names of Substances
- RNA, Small Interfering MeSH
To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
- Keywords
- connectomics, electron microscopy, image segmentation, machine learning, reconstruction,
- Publication type
- Journal Article MeSH
Automatic detection and segmentation of biological objects in 2D and 3D image data is central for countless biomedical research questions to be answered. While many existing computational methods are used to reduce manual labeling time, there is still a huge demand for further quality improvements of automated solutions. In the natural image domain, spatial embedding-based instance segmentation methods are known to yield high-quality results, but their utility for biomedical data is largely unexplored. Here we introduce EmbedSeg, an embedding-based instance segmentation method designed to segment instances of desired objects visible in 2D or 3D biomedical image data. We apply our method to four 2D and seven 3D benchmark datasets, showing that we either match or outperform existing state-of-the-art methods. While the 2D datasets and three of the 3D datasets are well known, we have created the required training data for four new 3D datasets, which we make publicly available online. Besides performance, usability is also important for a method to be useful. Hence, EmbedSeg is fully open source (https://github.com/juglab/EmbedSeg), offering (i) tutorial notebooks to train EmbedSeg models and use them to segment object instances in new data, and (ii) a napari plugin that can also be used for training and segmentation without requiring any programming experience. We believe that this renders EmbedSeg accessible to virtually everyone who requires high-quality instance segmentations in 2D or 3D biomedical image data.
- Keywords
- Embeddings, Instance Segmentation, Microscopy,
- MeSH
- Algorithms * MeSH
- Humans MeSH
- Microscopy * methods MeSH
- Image Processing, Computer-Assisted methods MeSH
- Imaging, Three-Dimensional methods MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH