BACKGROUND: Segmentation of pre-operative low-grade gliomas (LGGs) from magnetic resonance imaging is a crucial step for studying imaging biomarkers. However, segmentation of LGGs is particularly challenging because they rarely enhance after gadolinium administration. Like other gliomas, they have irregular shapes, heterogeneous composition, and ill-defined tumor boundaries, and only a limited number of image types is available. To overcome these challenges, we propose a semi-automated segmentation method that relies only on T2-weighted (T2W) and, optionally, post-contrast T1-weighted (T1W) images. METHODS: First, the user draws a region of interest (ROI) that completely encloses the tumor and some normal tissue. Second, a normal brain atlas and the post-contrast T1W images are registered to the T2W images. Third, the posterior probability of each pixel/voxel belonging to normal and abnormal tissues is calculated based on information derived from the atlas and the ROI. Finally, geodesic active contours use the probability map of the tumor to shrink the ROI until optimal tumor boundaries are found. The method was validated against the true segmentation (TS) of 30 LGG patients in both 2D (single slice) and 3D. The TS was obtained from the manual segmentations of three experts using the Simultaneous Truth and Performance Level Estimation (STAPLE) software. Dice and Jaccard indices and other descriptive statistics were computed for the proposed method, as well as for each expert's segmentation versus the TS. We also tested the method on the BraTS datasets, which provide expert segmentations. RESULTS AND DISCUSSION: For 2D segmentation vs. TS, the mean Dice index was 0.90 ± 0.06 (standard deviation), sensitivity was 0.92, and specificity was 0.99. For 3D segmentation vs. TS, the mean Dice index was 0.89 ± 0.06, sensitivity was 0.91, and specificity was 0.99. The automated results are comparable with the experts' manual segmentation results.
CONCLUSIONS: We present an accurate, robust, efficient, and reproducible segmentation method for pre-operative LGGs.
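The Dice and Jaccard indices used for validation above are simple overlap ratios between binary masks; a minimal sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def dice_jaccard(seg, truth):
    """Overlap indices between two binary segmentation masks."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    inter = np.logical_and(seg, truth).sum()
    union = np.logical_or(seg, truth).sum()
    dice = 2.0 * inter / (seg.sum() + truth.sum())
    jaccard = inter / union
    return dice, jaccard

# Two overlapping square masks, 36 pixels each, 16 pixels in common
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[4:10, 4:10] = True
d, j = dice_jaccard(a, b)   # d = 32/72 ≈ 0.444, j = 16/56 ≈ 0.286
```

The same computation extends unchanged to 3D by passing volumetric masks.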
- MeSH
- Algorithms MeSH
- Glioma pathology surgery MeSH
- Humans MeSH
- Magnetic Resonance Imaging * methods MeSH
- Brain Neoplasms pathology surgery MeSH
- Image Processing, Computer-Assisted * MeSH
- Sensitivity and Specificity MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
- Research Support, N.I.H., Extramural MeSH
This study aims to develop a fully automated, imaging-protocol-independent system for pituitary adenoma segmentation from magnetic resonance imaging (MRI) scans that works without user interaction, and to evaluate its accuracy and utility for clinical applications. We trained two independent artificial neural networks on MRI scans of 394 patients. The scans were acquired according to various imaging protocols over the course of 11 years on 1.5T and 3T MRI systems. The segmentation model assigned a class label to each input pixel (pituitary adenoma, internal carotid artery, normal pituitary gland, background). The slice segmentation model classified slices as clinically relevant (structures of interest in the slice) or irrelevant (anterior or posterior to the sella turcica). We used MRI data of another 99 patients to evaluate the performance of the model during training. We validated the model on a prospective cohort of 28 patients; Dice coefficients of 0.910, 0.719, and 0.240 were achieved for the tumour, internal carotid artery, and normal gland labels, respectively. The slice selection model achieved 82.5% accuracy, 88.7% sensitivity, 76.7% specificity, and an AUC of 0.904. A human expert rated 71.4% of the segmentation results as accurate, 21.4% as slightly inaccurate, and 7.1% as coarsely inaccurate. Our model achieved good results, comparable with recent works of other authors, on the largest dataset to date and generalized well across imaging protocols. We discussed future clinical applications and their considerations. Models and frameworks for clinical use have yet to be developed and evaluated.
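The slice-selection metrics reported above (accuracy, sensitivity, specificity, AUC) can be reproduced from per-slice labels and scores; a minimal sketch with illustrative names and toy data, not the study's code:

```python
import numpy as np

def slice_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, sensitivity, specificity at a threshold, plus ROC AUC.

    y_true: 1 = clinically relevant slice, 0 = irrelevant.
    y_score: model's relevance probability per slice.
    """
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    y_pred = (y_score >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    acc = (tp + tn) / len(y_true)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # AUC via the Mann-Whitney statistic: probability that a random
    # positive slice scores higher than a random negative one.
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    auc = (greater + 0.5 * ties) / (len(pos) * len(neg))
    return acc, sens, spec, auc

# Toy example: four slices, two truly relevant
acc, sens, spec, auc = slice_metrics([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1])
```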
- MeSH
- Adenoma * diagnostic imaging surgery MeSH
- Humans MeSH
- Magnetic Resonance Imaging MeSH
- Pituitary Neoplasms * diagnostic imaging surgery MeSH
- Neural Networks, Computer MeSH
- Image Processing, Computer-Assisted methods MeSH
- Prospective Studies MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
Diabetic retinopathy is a diabetes complication that affects the eyes, caused by damage to the blood vessels of the light-sensitive tissue of the retina. At its onset, diabetic retinopathy may cause no symptoms or only mild vision problems, but eventually it can cause blindness. Fully automated segmentation of Eye Fundus Images (EFI) is a necessary step for accurate and early quantification of lesions, useful in the future for better automated diagnosis of the degree of diabetic retinopathy and of the damage caused by the disease. Deep learning segmentation networks are the state of the art, but their quality and limitations need to be assessed and their architectures compared. We build off-the-shelf deep learning architectures and evaluate them on a publicly available dataset to assess the strengths and limitations of the approaches and to compare architectures. Results show that the segmentation networks score high on important metrics, such as 87.5% weighted IoU for FCN. We also show that network architecture is very important, with DeepLabV3 and FCN outperforming the other networks tested by more than 30 percentage points. We also show that DeepLabV3 outperforms prior related work using deep learning to detect lesions. Finally, we identify and investigate the problem of very low IoU and precision scores, such as the 17% IoU for microaneurysms with DeepLabV3, concluding that it is due to a large number of false positives. This leads us to discuss the challenges that lie ahead to overcome the limitations we identified.
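The weighted IoU metric cited above averages per-class IoU weighted by each class's frequency in the ground truth; a minimal sketch (names and toy label maps are illustrative):

```python
import numpy as np

def weighted_iou(pred, truth, n_classes):
    """Per-class IoU and frequency-weighted mean IoU for integer label maps."""
    ious, weights = [], []
    total = truth.size
    for c in range(n_classes):
        p, t = pred == c, truth == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append(inter / union if union else float('nan'))
        weights.append(t.sum() / total)           # class frequency in ground truth
    wiou = sum(w * i for w, i in zip(weights, ious) if not np.isnan(i))
    return ious, wiou

# Toy 2x2 label maps with one mislabeled pixel
truth = np.array([[0, 0], [1, 1]])
pred  = np.array([[0, 1], [1, 1]])
ious, wiou = weighted_iou(pred, truth, n_classes=2)  # ious ≈ [0.5, 0.667], wiou ≈ 0.583
```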
An automatic method for segmenting the retinal vessel tree and estimating the status of the retinal nerve fibre layer (NFL) from high-resolution fundus camera images is presented. First, reliable blood vessel segmentation, using 2D directional matched filtering, enables removal of the areas occluded by blood vessels, leaving the remaining retinal area available for the subsequent NFL detection. The local presence of the rather faint and hardly visible NFL is detected by combining several newly designed local textural features, sensitive to subtle NFL characteristics, into feature vectors submitted to a trained neural-network classifier. The obtained binary retinal maps of NFL distribution show good agreement with both medical expert evaluations and quantitative results obtained by optical coherence tomography.
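The 2D directional matched filtering mentioned above cross-correlates the image with a bank of oriented kernels whose profile matches a vessel cross-section, keeping the maximum response over orientations. A minimal sketch under assumed parameters (sigma, kernel size, and angle count are illustrative, not those of the paper):

```python
import numpy as np
from scipy import ndimage

def matched_filter_kernel(sigma=2.0, length=9, angle_deg=0.0):
    """Oriented matched-filter kernel: an inverted Gaussian profile across
    the vessel, constant along its length, rotated to angle_deg."""
    half = length // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k = -np.exp(-(x**2) / (2 * sigma**2))   # dark vessel on bright background
    k -= k.mean()                           # zero-mean kernel: flat regions give ~0
    return ndimage.rotate(k, angle_deg, reshape=False, order=1)

def vessel_response(image, n_angles=12):
    """Maximum matched-filter response over a bank of orientations."""
    responses = [
        ndimage.correlate(image.astype(float), matched_filter_kernel(angle_deg=a))
        for a in np.linspace(0, 180, n_angles, endpoint=False)
    ]
    return np.max(responses, axis=0)   # threshold this map to get the vessel mask

# Synthetic demo: a dark vertical "vessel" on a bright background
img = np.ones((21, 21)); img[:, 10] = 0.0
r = vessel_response(img)   # strongest response lies along the vessel
```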
- MeSH
- Fluorescein Angiography methods MeSH
- Image Interpretation, Computer-Assisted methods MeSH
- Humans MeSH
- Optic Nerve Diseases pathology MeSH
- Nerve Net pathology MeSH
- Reproducibility of Results MeSH
- Retinal Vessels pathology MeSH
- Retinoscopy methods MeSH
- Pattern Recognition, Automated methods MeSH
- Sensitivity and Specificity MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
Although the field of sleep study has greatly developed over recent years, the most common and efficient way to detect sleep issues remains a sleep examination performed in a sleep laboratory. This examination measures several vital signals with a polysomnograph during a full night's sleep, using multiple sensors connected to the patient's body. Nevertheless, despite being the gold standard, the attached sensors and the unfamiliar environment inevitably impact the quality of the patient's sleep and the examination itself. Therefore, with the development of accurate and affordable 3D sensing devices, new approaches for non-contact sleep study have emerged. These approaches use different techniques to extract the same breathing parameters, but without contact. However, to enable reliable remote extraction, these methods require accurate identification of the basic region of interest (ROI), i.e., the patient's chest area. The lack of automated ROI segmentation of 3D time series is currently holding back the development process. We propose an automatic chest area segmentation algorithm that, given a time series of 3D frames containing a sleeping patient, outputs a segmentation image whose pixels correspond to the chest area. Beyond significantly speeding up the development of non-contact methods, accurate automatic segmentation can enable more precise feature extraction. In addition, further tests of the algorithm on existing data demonstrate its ability to improve the sensitivity of a prior solution that uses manual ROI selection: the approach is on average 46.9% more sensitive, with a maximal improvement of 220%, compared to a manual ROI. All of the above can pave the way for non-contact algorithms to become leading candidates to replace the traditional methods used today.
- MeSH
- Algorithms * MeSH
- Respiration MeSH
- Humans MeSH
- Image Processing, Computer-Assisted methods MeSH
- Polysomnography MeSH
- Sleep MeSH
- Imaging, Three-Dimensional * methods MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
Biocompatibility testing of new materials is often performed in vitro by measuring the growth rate of mammalian cancer cells in time-lapse images acquired by phase contrast microscopes. The growth rate is measured by tracking cell coverage, which requires an accurate automatic segmentation method. However, cancer cells have irregular shapes that change over time, the mottled background pattern is partially visible through the cells, and the images contain artifacts such as halos. We developed a novel algorithm for cell segmentation that copes with these challenges. It is based on temporal differences of consecutive images and a combination of thresholding, blurring, and morphological operations. We tested the algorithm on images of four cell types acquired by two different microscopes, evaluated the precision of segmentation against manual segmentation performed by a human operator, and finally provided a comparison with other freely available methods. We propose a new, fully automated method for measuring the cell growth rate based on fitting the coverage curve with the Verhulst population model. The algorithm is fast and shows accuracy comparable with manual segmentation. Most notably, it can correctly separate live from dead cells.
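Estimating the growth rate via the Verhulst (logistic) population model amounts to fitting coverage P(t) = K / (1 + ((K - P0)/P0) e^(-rt)) to the measured coverage curve; a minimal sketch using scipy (function names and starting values are illustrative, not the paper's code):

```python
import numpy as np
from scipy.optimize import curve_fit

def verhulst(t, K, P0, r):
    """Logistic (Verhulst) model: coverage grows at rate r toward capacity K."""
    return K / (1 + (K - P0) / P0 * np.exp(-r * t))

def growth_rate(times, coverage):
    """Fit the coverage curve; the fitted r is the cell growth rate."""
    guess = [np.max(coverage), coverage[0], 0.1]   # rough starting point
    (K, P0, r), _ = curve_fit(verhulst, times, coverage, p0=guess, maxfev=10000)
    return K, P0, r

# Synthetic noiseless coverage curve; the fit recovers the parameters
times = np.linspace(0, 30, 31)
coverage = verhulst(times, 0.9, 0.05, 0.5)
K, P0, r = growth_rate(times, coverage)
```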
- MeSH
- Algorithms MeSH
- Artifacts MeSH
- Time-Lapse Imaging * MeSH
- Cytological Techniques instrumentation methods MeSH
- Humans MeSH
- Microscopy * MeSH
- Pattern Recognition, Automated MeSH
- Animals MeSH
- Check Tag
- Humans MeSH
- Animals MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging, predominantly due to considerable variations in anatomy and acquisition protocols and to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at the voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, at the scan level, and across different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.
To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
- Publication type
- Journal Article MeSH
Introduction: Arterial brain vessel assessment is crucial for the diagnostic process in patients with cerebrovascular disease. Non-invasive neuroimaging techniques, such as time-of-flight (TOF) magnetic resonance angiography (MRA), are applied in the clinical routine to depict arteries; they are, however, only visually assessed. Fully automated vessel segmentation integrated into the clinical routine could facilitate the time-critical diagnosis of vessel abnormalities and might facilitate the identification of valuable biomarkers for cerebrovascular events. In the present work, we developed and validated a new deep learning model for vessel segmentation, coined BRAVE-NET, on a large aggregated dataset of patients with cerebrovascular diseases. Methods: BRAVE-NET is a multiscale 3-D convolutional neural network (CNN) model developed on a dataset of 264 patients from three different studies enrolling patients with cerebrovascular diseases. A context path, dually capturing high- and low-resolution volumes, and deep supervision were implemented. The BRAVE-NET model was compared to a baseline Unet model and to variants with only the context path and only deep supervision, respectively. The models were developed and validated using high-quality manual labels as ground truth. In addition to precision and recall, performance was assessed quantitatively by the Dice coefficient (DSC), the average Hausdorff distance (AVD), the 95th-percentile Hausdorff distance (95HD), and a visual qualitative rating. Results: The BRAVE-NET performance surpassed the other models for arterial brain vessel segmentation, with DSC = 0.931, AVD = 0.165, and 95HD = 29.153. The BRAVE-NET model was also the most resistant to false labelings, as revealed by the visual analysis. The performance improvement is primarily attributed to the integration of the multiscale context path into the 3-D Unet and, to a lesser extent, to the deep supervision architectural component.
Discussion: We present a new state of the art in arterial brain vessel segmentation tailored to cerebrovascular pathology. We provide an extensive experimental validation of the model using a large aggregated dataset encompassing a wide variability of cerebrovascular disease, as well as an external set of healthy volunteers. The framework provides the technological foundation for improving the clinical workflow and can serve as a biomarker extraction tool in cerebrovascular diseases.
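The Hausdorff-based metrics reported above (AVD, 95HD) summarize distances between segmentation boundaries; a minimal sketch over the nonzero voxels of two binary masks (real implementations usually restrict to surface voxels and scale by voxel spacing; names are illustrative):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff_metrics(mask_a, mask_b):
    """Average and 95th-percentile symmetric Hausdorff distance (voxel units)."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    d = cdist(pts_a, pts_b)                    # all pairwise Euclidean distances
    nearest = np.concatenate([d.min(axis=1),   # each A-point to nearest B-point
                              d.min(axis=0)])  # each B-point to nearest A-point
    return nearest.mean(), np.percentile(nearest, 95)

# Two single-voxel masks three voxels apart
a = np.zeros((5, 5)); a[2, 1] = 1
b = np.zeros((5, 5)); b[2, 4] = 1
avd, hd95 = hausdorff_metrics(a, b)   # both equal 3.0
```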
- Publication type
- Journal Article MeSH
BACKGROUND: Optical coherence tomography (OCT)-based studies of cardiac allograft vasculopathy (CAV) published thus far have focused mainly on frame-based qualitative analysis of the vascular wall. The full capabilities of this inherently 3-dimensional (3D) imaging modality to quantify CAV have not been exploited. METHODS: Coronary OCT imaging was performed at 1 month and 12 months after heart transplant (HTx) during routine surveillance cardiac catheterization. Both baseline and follow-up OCT examinations were analyzed using proprietary, highly automated 3D graph-based optimal segmentation software. Automatically identified borders were efficiently adjudicated using our "just-enough-interaction" graph-based segmentation approach, which allows local and regional segmentation errors to be corrected efficiently without slice-by-slice retracing of borders. RESULTS: A total of 50 patients with paired baseline and follow-up OCT studies were included. After registration of the baseline and follow-up pullbacks, a total of 356 ± 89 frames were analyzed per patient. During the first post-transplant year, a significant reduction in the mean luminal area (p = 0.028) and progression in mean intimal thickness (p = 0.001) were observed. Proximal parts of the imaged coronary arteries were affected more than distal parts (p < 0.001). High levels of LDL cholesterol (p = 0.02) and total cholesterol (p = 0.031) in the first month after HTx were the main factors associated with early CAV development. CONCLUSIONS: Our novel, highly automated 3D OCT image analysis method for analyzing intimal and medial thickness in HTx recipients provides fast, accurate, and highly detailed quantitative data on early CAV changes, which are characterized by significant luminal reduction and intimal thickness progression as early as within the first 12 months after HTx.
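The baseline-versus-follow-up comparisons reported above (e.g. the reduction in mean luminal area, p = 0.028) are within-patient comparisons; a paired t-test is one way such a comparison can be sketched. The abstract does not state which test was used, and the numbers below are invented for illustration only, not study data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient mean luminal areas (mm^2) at 1 and 12 months
# post-transplant -- illustrative values, not the study's measurements.
baseline = np.array([8.1, 7.4, 9.0, 6.8, 7.9])
followup = np.array([7.6, 7.0, 8.7, 6.5, 7.2])

# Paired test: each patient serves as their own control
t_stat, p_value = stats.ttest_rel(baseline, followup)
# A consistent within-patient decrease yields a small p-value
```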
- MeSH
- Early Diagnosis MeSH
- Adult MeSH
- Image Interpretation, Computer-Assisted methods MeSH
- Coronary Angiography methods MeSH
- Middle Aged MeSH
- Humans MeSH
- Follow-Up Studies MeSH
- Coronary Artery Disease diagnostic imaging MeSH
- Tomography, Optical Coherence * MeSH
- Postoperative Complications diagnostic imaging MeSH
- Disease Progression MeSH
- Aged MeSH
- Heart Transplantation * MeSH
- Imaging, Three-Dimensional * MeSH
- Check Tag
- Adult MeSH
- Middle Aged MeSH
- Humans MeSH
- Male MeSH
- Aged MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Clinical Trial MeSH
- Research Support, Non-U.S. Gov't MeSH
- Research Support, N.I.H., Extramural MeSH