Segmentation
Wall segmentation is a special case of semantic segmentation in which each pixel is classified into one of two classes: wall and no-wall. The segmentation model returns a mask that distinguishes walls from other objects in the scene, such as windows and furniture. This article proposes a module structure for semantic segmentation of walls in 2D images that can effectively address the wall-segmentation problem. The proposed model achieved higher accuracy and faster execution than other solutions. The segmentation module uses an encoder-decoder architecture. A dilated ResNet50/101 network was used as the encoder, i.e., a ResNet50/101 network in which the last convolutional layers are replaced by dilated convolutional layers. A subset of the ADE20K dataset containing only interior images was used for model training, while a smaller subset of it was used for model evaluation. Three different approaches to model training were analyzed in the research. On the validation dataset, the best approach, based on the proposed structure with the ResNet101 network, achieved an average pixel-level accuracy of 92.13% and an intersection over union (IoU) of 72.58%. Moreover, all proposed approaches can be applied to recognizing other objects in images to solve specific tasks.
- Keywords
- ADE20K, Encoder-decoder, PSPNet, Semantic segmentation, Wall segmentation,
- Publication type
- Journal Article MeSH
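A minimal sketch of the encoder-decoder setup described in the wall-segmentation abstract above, assuming PyTorch and torchvision's dilated ResNet50 (`replace_stride_with_dilation`); the lightweight decoder here is an illustrative stand-in, not the authors' released PSPNet-style module.

```python
# Minimal sketch: dilated-ResNet encoder + light decoder for binary
# (wall / no-wall) segmentation. Assumes PyTorch + torchvision; the decoder
# is a simplified stand-in, not the paper's exact module.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class WallSegmenter(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Dilate the last two stages so the output stride drops from 32 to 8.
        backbone = resnet50(weights=None,
                            replace_stride_with_dilation=[False, True, True])
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.decoder = nn.Sequential(
            nn.Conv2d(2048, 256, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        size = x.shape[-2:]
        features = self.encoder(x)          # (N, 2048, H/8, W/8)
        logits = self.decoder(features)     # (N, 2, H/8, W/8)
        return F.interpolate(logits, size=size, mode="bilinear",
                             align_corners=False)

# Usage: logits = WallSegmenter()(torch.randn(1, 3, 480, 480))
#        wall_mask = logits.argmax(dim=1)
```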
Computed tomography (CT) is an imaging procedure that combines many X-ray measurements taken from different angles. The segmentation of areas in CT images provides valuable aid to physicians and radiologists in reaching a better patient diagnosis. CT scans of a body torso usually include different neighboring internal body organs. Deep learning has become the state of the art in medical image segmentation. For such techniques to perform a successful segmentation, it is of great importance that the network learns to focus on the organ of interest and its surrounding structures, and that it can detect target regions of different sizes. In this paper, we propose extending a popular deep learning methodology, convolutional neural networks (CNNs), by including deep supervision and attention gates. Our experimental evaluation shows that the inclusion of attention and deep supervision results in consistent improvement of tumor prediction accuracy across the different datasets and training sizes while adding minimal computational overhead.
- Keywords
- CNN, UNet, VNet, attention gates, deep supervision, medical image segmentation, organ segmentation, tumor segmentation,
- Publication type
- Journal Article MeSH
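The attention-gate mechanism described in the abstract above can be illustrated with a small additive gating module in the Attention U-Net style; the wiring and channel sizes below are hypothetical and do not reproduce the paper's exact CNN extension.

```python
# Generic additive attention gate: skip-connection features are re-weighted
# by a gating signal from a coarser decoder level, so the network focuses on
# the organ of interest. Sketch only, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGate(nn.Module):
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)    # gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # attention map

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Bring the gating signal to the skip connection's spatial size.
        gate = F.interpolate(gate, size=skip.shape[-2:], mode="bilinear",
                             align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn  # suppress irrelevant regions in the skip features

# Deep supervision (sketch): auxiliary 1x1 prediction heads on intermediate
# decoder outputs, each upsampled to full resolution and added to the loss.
```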
To stimulate progress in automating the reconstruction of neural circuits, we organized the first international challenge on 2D segmentation of electron microscopic (EM) images of the brain. Participants submitted boundary maps predicted for a test set of images, and were scored based on their agreement with a consensus of human expert annotations. The winning team had no prior experience with EM images, and employed a convolutional network. This "deep learning" approach has since become accepted as a standard for segmentation of EM images. The challenge has continued to accept submissions, and the best so far has resulted from cooperation between two teams. The challenge has probably saturated, as algorithms cannot progress beyond limits set by ambiguities inherent in 2D scoring and the size of the test dataset. Retrospective evaluation of the challenge scoring system reveals that it was not sufficiently robust to variations in the widths of neurite borders. We propose a solution to this problem, which should be useful for a future 3D segmentation challenge.
- Keywords
- connectomics, electron microscopy, image segmentation, machine learning, reconstruction,
- Publication type
- Journal Article MeSH
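The scoring issue noted above (sensitivity to neurite border widths) can be illustrated with a border-tolerant comparison of a predicted boundary map against a consensus annotation; this sketch is not the challenge's actual metric, only a conceptual example of tolerance-based scoring.

```python
# Border-tolerant scoring of a predicted boundary map against a consensus
# annotation: ground-truth borders are dilated by a small tolerance before
# matching, so the score is less sensitive to border width. Conceptual
# sketch only, NOT the challenge's official evaluation.
import numpy as np
from scipy.ndimage import binary_dilation

def boundary_f1(pred: np.ndarray, gt: np.ndarray, tol: int = 2) -> float:
    """pred, gt: binary 2D arrays where 1 marks boundary pixels."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    gt_tol = binary_dilation(gt, iterations=tol)
    pred_tol = binary_dilation(pred, iterations=tol)
    precision = (pred & gt_tol).sum() / max(pred.sum(), 1)
    recall = (gt & pred_tol).sum() / max(gt.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-8)
```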
Segmentation of macromolecular structures is the primary bottleneck for studying biomolecules and their organization with electron microscopy in 2D/3D, requiring months of manual effort. Transformer-based Rapid Dimensionless Instance Segmentation (TARDIS) is a deep learning framework that automatically and accurately annotates membranes and filaments. Pre-trained TARDIS models can segment both 3D electron tomography (ET) reconstructions and 2D electron micrographs of cryo- and plastic-embedded samples. Furthermore, by implementing a novel geometric transformer architecture, TARDIS is the only method that provides accurate instance segmentations of these structures. Reducing the annotation time for ET data from months to minutes, we demonstrate segmentation of membranes and filaments in over 13,000 tomograms in the CZII Data Portal. TARDIS thus enables quantitative biophysical analysis at scale for the first time. We show this in applications to kinetochore-microtubule attachment and viral-membrane interactions. TARDIS can be extended to new biomolecules and applications and is open-source at https://github.com/SMLC-NYSBC/TARDIS.
- Keywords
- Actin, CNN, Cryo-EM, Cryo-ET, DIST, Filaments, Instance Segmentation, Membranes, Microtubules, Point Cloud, Segmentation, Semantic Segmentation, TARDIS, TEM EM/ET,
- Publication type
- Journal Article MeSH
- Preprint MeSH
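As a conceptual illustration of the semantic-to-instance step described above: a semantic mask can be converted to a point cloud and grouped into per-filament instances. TARDIS performs this step with its geometric transformer (DIST); the DBSCAN clustering below is only a simple stand-in to convey the idea, not the TARDIS implementation.

```python
# Conceptual semantic-to-instance segmentation for filaments: skeletonize the
# semantic mask into a point cloud, then group points into instances.
# DBSCAN is used here only as a simple stand-in for the geometric transformer.
import numpy as np
from skimage.morphology import skeletonize
from sklearn.cluster import DBSCAN

def filament_instances(semantic_mask: np.ndarray, eps: float = 3.0) -> np.ndarray:
    """semantic_mask: binary 2D array (1 = filament). Returns instance labels."""
    skeleton = skeletonize(semantic_mask.astype(bool))
    points = np.column_stack(np.nonzero(skeleton))   # (N, 2) point cloud
    if len(points) == 0:
        return np.zeros_like(semantic_mask, dtype=int)
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(points)
    out = np.zeros_like(semantic_mask, dtype=int)
    out[points[:, 0], points[:, 1]] = labels + 1      # 0 = background / noise
    return out
```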
Segmentation is one of the most important steps in microscopy image analysis. Unfortunately, most methods use fluorescence images for this task, which is not suitable for analyses that require knowledge of the area occupied by cells or for experimental designs that do not allow the necessary labeling. In this protocol, we present a simple method, based on edge detection and morphological operations, that separates the total area occupied by cells from the background using only the brightfield channel image. The resulting segmented image can be further used as a mask for fluorescence quantification and other analyses. The whole procedure is carried out in the open-source software Fiji.
- Keywords
- Fiji, ImageJ, brightfield segmentation, cells, image analysis, microscopy,
- Publication type
- Journal Article MeSH
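The edge-detection plus morphological-operations workflow described above is carried out in Fiji; a rough Python analogue with scikit-image is sketched below, with illustrative parameter values rather than those of the published protocol.

```python
# Python analogue (scikit-image) of the brightfield workflow described above:
# detect edges, binarize, close gaps, fill cell interiors, drop small debris.
# Parameters are illustrative, not those of the Fiji protocol.
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage import filters, morphology

def brightfield_cell_mask(brightfield: np.ndarray) -> np.ndarray:
    edges = filters.sobel(brightfield)                        # edge detection
    mask = edges > filters.threshold_otsu(edges)              # binarize edges
    mask = morphology.binary_closing(mask, morphology.disk(3))
    mask = binary_fill_holes(mask)                            # fill cell interiors
    mask = morphology.remove_small_objects(mask, min_size=100)
    return mask  # True = area occupied by cells; usable as a quantification mask
```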
Precise identification of spinal nerve rootlets is relevant to delineate spinal levels for the study of functional activity in the spinal cord. The goal of this study was to develop an automatic method for the semantic segmentation of spinal nerve rootlets from T2-weighted magnetic resonance imaging (MRI) scans. Images from two open-access 3T MRI datasets were used to train a 3D multi-class convolutional neural network using an active learning approach to segment C2-C8 dorsal nerve rootlets. Each output class corresponds to a spinal level. The method was tested on 3T T2-weighted images from three datasets unseen during training to assess inter-site, inter-session, and inter-resolution variability. The test Dice score was 0.67 ± 0.16 (mean ± standard deviation across testing images and rootlet levels), suggesting good performance. The method also demonstrated low inter-vendor and inter-site variability (coefficient of variation ≤ 1.41%), as well as low inter-session variability (coefficient of variation ≤ 1.30%), indicating stable predictions across different MRI vendors, sites, and sessions. The proposed methodology is open-source and readily available in the Spinal Cord Toolbox (SCT) v6.2 and higher.
- Keywords
- deep learning, magnetic resonance imaging, nerve rootlets, segmentation, spinal cord,
- Publication type
- Journal Article MeSH
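The summary metrics reported above, the Dice score and the coefficient of variation across sessions and sites, can be computed as follows; this is a generic illustration, not code from the Spinal Cord Toolbox.

```python
# Generic implementations of the two summary metrics reported above:
# Dice overlap between a prediction and the reference, and the coefficient
# of variation (in percent) of a derived measure across sessions or sites.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * (pred & gt).sum() / denom if denom else 1.0

def coefficient_of_variation(values) -> float:
    """CV in percent, e.g. of a rootlet-derived measure across sessions."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()
```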
In recent years, computed tomography (CT) has become a standard technique in cardiac imaging because it provides detailed information that may facilitate the diagnosis of conditions that interfere with correct heart function. However, CT-based cardiac diagnosis requires manual segmentation of heart cavities, which is a difficult and time-consuming task. Thus, in this paper, we propose a novel technique to segment endocardial and epicardial boundaries based on a 2D approach. The proposed method computes relevant information about the left ventricle and its adjacent structures using the Hermite transform. The novelty of the work is that this information is combined with active shape models and level sets to improve the segmentation. Our database consists of mid-third slices selected from 28 volumes manually segmented by expert physicians. The segmentation is assessed using the Dice coefficient and the Hausdorff distance. In addition, we introduce a novel metric, called the Ray Feature error, to evaluate our method. The results show that the proposed method accurately discriminates cardiac tissue. Thus, it may be a useful tool for supporting heart disease diagnosis and tailoring treatments.
- Keywords
- Active shape models, Left ventricle segmentation, Level sets, Local binary patterns, Ray Feature error, Steered Hermite transform,
- MeSH
- Models, Biological MeSH
- Humans MeSH
- Tomography, X-Ray Computed - methods MeSH
- Heart Ventricles - pathology MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support MeSH
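One of the evaluation metrics mentioned above, the Hausdorff distance between the automatic and manual contours, can be computed with SciPy as sketched below (contours as (N, 2) point arrays); the Ray Feature error introduced in the paper is not reproduced here.

```python
# Symmetric Hausdorff distance between two contours, e.g. an automatic and a
# manual endocardial boundary, each given as an (N, 2) array of points.
# Generic illustration only.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    d_ab = directed_hausdorff(contour_a, contour_b)[0]
    d_ba = directed_hausdorff(contour_b, contour_a)[0]
    return max(d_ab, d_ba)
```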
The segmentation of teeth in 3D dental scans is difficult due to variations in tooth shapes, misalignments, occlusions, or the presence of dental appliances. Existing methods consistently adhere to geometric representations, omitting the perceptual aspects of the inputs. In addition, current works often lack evaluation on anatomically complex cases because such datasets are unavailable. We present a projection-based approach to accurate teeth segmentation that operates locally on each tooth in a detect-and-segment manner and in a multi-view fashion. Information is spatially correlated via recurrent units. We show that a projection-based framework can precisely segment teeth in cases with anatomical anomalies with negligible information loss. It outperforms point-based, edge-based, and Graph Cut-based geometric approaches, achieving an average weighted IoU score of 0.97122 ± 0.038 and a Hausdorff distance at the 95th percentile of 0.49012 ± 0.571 mm. We also release Poseidon's Teeth 3D (Poseidon3D), a novel dataset of real orthodontic cases with various dental anomalies such as tooth crowding and missing teeth.
- Keywords
- 3D mesh segmentation, LMVSegRNN, Poseidon3D, Poseidon’s Teeth 3D, dental scans, orthodontic mesh segmentation dataset, tooth segmentation,
- Publication type
- Journal Article MeSH
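The Hausdorff distance at the 95th percentile (HD95) reported above can be computed over two surface point clouds as sketched below; this is a generic illustration, not the authors' evaluation code.

```python
# HD95: 95th percentile of symmetric nearest-neighbor surface distances
# between two meshes sampled as (N, 3) point clouds in mm. Generic sketch.
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    d_ab = cKDTree(points_b).query(points_a)[0]   # nearest distances A -> B
    d_ba = cKDTree(points_a).query(points_b)[0]   # nearest distances B -> A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))
```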
The ever-increasing availability of experimental volumetric data (e.g., in .ccp4, .mrc, .map, .rec, .zarr, .ome.tif formats) and advances in segmentation software (e.g., Amira, Segger, IMOD) and formats (e.g., .am, .seg, .mod) have led to a demand for efficient web-based visualization tools. Despite this, current solutions remain scarce, hindering data interpretation and dissemination. Previously, we introduced Mol* Volumes & Segmentations (Mol* VS), a web application for the visualization of volumetric, segmentation, and annotation data (e.g., semantically relevant information on biological entities corresponding to individual segmentations, such as Gene Ontology terms or PDB IDs). However, it lacked important features such as the ability to edit annotations (e.g., assigning user-defined descriptions of a segment) and to seamlessly share visualizations. Additionally, setting up Mol* VS required a substantial programming background. This article presents an updated version, Mol* VS 2.0, that addresses these limitations. As part of Mol* VS 2.0, we introduce the Annotation Editor, a user-friendly graphical interface for editing annotations, and the Volumes & Segmentations Toolkit (VSToolkit) for generating shareable files with visualization data. The outlined protocols illustrate the use of Mol* VS 2.0 for visualization of volumetric and segmentation data across various scales, showcasing the progress in the field of molecular complex visualization. © 2024 The Author(s). Current Protocols published by Wiley Periodicals LLC.
- Basic Protocol 1: VSToolkit - setting up and visualizing a user-constructed Mol* VS 2.0 database entry
- Basic Protocol 2: VSToolkit - visualizing multiple time frames and volume channels
- Support Protocol 1: Example: Adding database entry idr-13457537
- Alternate Protocol 1: Local-server-and-viewer - visualizing multiple time frames and volume channels
- Support Protocol 2: Addition of database entry custom-tubhiswt
- Basic Protocol 3: VSToolkit - visualizing a specific channel and time frame
- Basic Protocol 4: VSToolkit - visualizing geometric segmentation
- Basic Protocol 5: VSToolkit - visualizing lattice segmentations
- Alternate Protocol 2: "Local-server-and-viewer" - visualizing lattice segmentations
- Basic Protocol 6: "Local-server-and-viewer" - visualizing multiple volume channels
- Support Protocol 3: Deploying a server API
- Support Protocol 4: Hosting Mol* viewer with VS extension 2.0
- Support Protocol 5: Example: Addition of database entry empiar-11756
- Support Protocol 6: Example: Addition of database entry emd-1273
- Support Protocol 7: Editing annotations
- Support Protocol 8: Addition of database entry idr-5025553
- Keywords
- 3D visualization tools, annotation data, large-scale datasets, segmentation data, volumetric data,
- MeSH
- Internet MeSH
- Computer Graphics MeSH
- Software * MeSH
- User-Computer Interface MeSH
- Data Visualization MeSH
- Publication type
- Journal Article MeSH
Morphometric measures derived from spinal cord segmentations can serve as diagnostic and prognostic biomarkers in neurological diseases and injuries affecting the spinal cord. For instance, the spinal cord cross-sectional area can be used to monitor cord atrophy in multiple sclerosis and to characterize compression in degenerative cervical myelopathy. While automatic segmentation methods robust to a wide variety of contrasts and pathologies have been developed over the past few years, whether their predictions remain stable as the model is updated using new datasets has not been assessed. This is particularly important for deriving normative values from healthy participants. In this study, we present a spinal cord segmentation model trained on a multisite (n=75) dataset, including 9 different MRI contrasts and several spinal cord pathologies. We also introduce a lifelong learning framework to automatically monitor the morphometric drift as the model is updated using additional datasets. The framework is triggered by an automatic GitHub Actions workflow every time a new model is created, recording the morphometric values derived from the model's predictions over time. As a real-world application of the proposed framework, we employed the spinal cord segmentation model to update a recently introduced normative database of healthy participants containing commonly used measures of spinal cord morphometry. Results showed that: (i) our model performs well compared to its previous versions and existing pathology-specific models on the lumbar spinal cord, on images with severe compression, and in the presence of intramedullary lesions and/or atrophy, achieving an average Dice score of 0.95 ± 0.03; (ii) the automatic workflow for monitoring morphometric drift provides a quick feedback loop for developing future segmentation models; and (iii) the scaling factor required to update the database of morphometric measures is nearly constant among slices across the given vertebral levels, showing minimal drift between the current and previous versions of the model monitored by the framework. The model is freely available in Spinal Cord Toolbox v7.0.
- Keywords
- Lifelong Learning, MLOps, MRI, Morphometric Drift, Segmentation, Spinal Cord,
- Publication type
- Journal Article MeSH
- Preprint MeSH
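The morphometric-drift monitoring described above can be sketched as a simple per-level comparison between measures derived from the previous and the updated model; the helper below is hypothetical and stands in for the repository's GitHub Actions workflow, which the abstract says triggers this check automatically.

```python
# Hypothetical drift check: compare a morphometric measure (e.g., mean
# cross-sectional area per vertebral level) derived from the previous and
# updated model versions, report the per-level scaling factor, and flag
# levels whose drift exceeds a tolerance. Sketch only.
def morphometric_drift(csa_prev: dict, csa_new: dict, tolerance: float = 0.05):
    """csa_prev/csa_new: {vertebral_level: mean CSA in mm^2} per model version.
    Returns per-level scaling factors (new/prev) and a drift flag."""
    report = {}
    for level in sorted(csa_prev):
        scale = csa_new[level] / csa_prev[level]
        report[level] = {"scale": round(scale, 4),
                         "drifted": abs(scale - 1.0) > tolerance}
    return report

# Example: morphometric_drift({"C3": 72.1, "C4": 74.0}, {"C3": 72.5, "C4": 73.8})
```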