data segmentation
Ever-increasing availability of experimental volumetric data (e.g., in .ccp4, .mrc, .map, .rec, .zarr, .ome.tif formats) and advances in segmentation software (e.g., Amira, Segger, IMOD) and formats (e.g., .am, .seg, .mod, etc.) have led to a demand for efficient web-based visualization tools. Despite this, current solutions remain scarce, hindering data interpretation and dissemination. Previously, we introduced Mol* Volumes & Segmentations (Mol* VS), a web application for the visualization of volumetric, segmentation, and annotation data (e.g., semantically relevant information on biological entities corresponding to individual segmentations such as Gene Ontology terms or PDB IDs). However, this lacked important features such as the ability to edit annotations (e.g., assigning user-defined descriptions of a segment) and seamlessly share visualizations. Additionally, setting up Mol* VS required a substantial programming background. This article presents an updated version, Mol* VS 2.0, that addresses these limitations. As part of Mol* VS 2.0, we introduce the Annotation Editor, a user-friendly graphical interface for editing annotations, and the Volumes & Segmentations Toolkit (VSToolkit) for generating shareable files with visualization data. The outlined protocols illustrate the utilization of Mol* VS 2.0 for visualization of volumetric and segmentation data across various scales, showcasing the progress in the field of molecular complex visualization. © 2024 The Author(s). Current Protocols published by Wiley Periodicals LLC. 
- Basic Protocol 1: VSToolkit - setting up and visualizing a user-constructed Mol* VS 2.0 database entry
- Basic Protocol 2: VSToolkit - visualizing multiple time frames and volume channels
- Support Protocol 1: Example: Adding database entry idr-13457537
- Alternate Protocol 1: "Local-server-and-viewer" - visualizing multiple time frames and volume channels
- Support Protocol 2: Addition of database entry custom-tubhiswt
- Basic Protocol 3: VSToolkit - visualizing a specific channel and time frame
- Basic Protocol 4: VSToolkit - visualizing geometric segmentation
- Basic Protocol 5: VSToolkit - visualizing lattice segmentations
- Alternate Protocol 2: "Local-server-and-viewer" - visualizing lattice segmentations
- Basic Protocol 6: "Local-server-and-viewer" - visualizing multiple volume channels
- Support Protocol 3: Deploying a server API
- Support Protocol 4: Hosting Mol* viewer with VS extension 2.0
- Support Protocol 5: Example: Addition of database entry empiar-11756
- Support Protocol 6: Example: Addition of database entry emd-1273
- Support Protocol 7: Editing annotations
- Support Protocol 8: Addition of database entry idr-5025553
- Keywords
- 3D visualization tools, annotation data, large-scale datasets, segmentation data, volumetric data
- MeSH
- Internet MeSH
- computer graphics MeSH
- software * MeSH
- user-computer interface MeSH
- data visualization MeSH
- Publication type
- journal articles MeSH
Automatic detection and segmentation of biological objects in 2D and 3D image data is central to answering countless biomedical research questions. While many existing computational methods are used to reduce manual labeling time, there is still a huge demand for further quality improvements of automated solutions. In the natural image domain, spatial embedding-based instance segmentation methods are known to yield high-quality results, but their utility for biomedical data is largely unexplored. Here we introduce EmbedSeg, an embedding-based instance segmentation method designed to segment instances of desired objects visible in 2D or 3D biomedical image data. We apply our method to four 2D and seven 3D benchmark datasets, showing that we either match or outperform existing state-of-the-art methods. While the 2D datasets and three of the 3D datasets are well known, we have created the required training data for four new 3D datasets, which we make publicly available online. Besides performance, usability is also important for a method to be useful. Hence, EmbedSeg is fully open source (https://github.com/juglab/EmbedSeg), offering (i) tutorial notebooks to train EmbedSeg models and use them to segment object instances in new data, and (ii) a napari plugin that can also be used for training and segmentation without requiring any programming experience. We believe that this renders EmbedSeg accessible to virtually everyone who requires high-quality instance segmentations in 2D or 3D biomedical image data.
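The core idea behind embedding-based instance segmentation, as summarized above, is that a network predicts a per-pixel offset vector pointing toward the center of the instance the pixel belongs to; pixels whose shifted coordinates land near the same center are then grouped into one instance. A minimal sketch of that grouping step, assuming precomputed offsets (the function name, threshold, and greedy clustering are illustrative assumptions, not EmbedSeg's actual implementation):

```python
import numpy as np

def cluster_by_embedding(offsets, mask, center_threshold=1.0):
    """Group foreground pixels into instances by the spatial point their
    embedding (pixel coordinate + predicted offset) votes for."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([ys, xs], axis=1).astype(float)
    # Each pixel's embedding: its own location shifted by the predicted offset.
    votes = coords + offsets[ys, xs]
    labels = np.zeros(len(coords), dtype=int)
    centers = []
    for i, v in enumerate(votes):
        # Greedily assign the pixel to the first existing center it is close to.
        for k, c in enumerate(centers):
            if np.linalg.norm(v - c) < center_threshold:
                labels[i] = k + 1
                break
        else:
            centers.append(v)
            labels[i] = len(centers)
    out = np.zeros_like(mask, dtype=int)
    out[ys, xs] = labels
    return out
```

Pixels voting for nearby points receive the same instance label, which is what lets touching objects of the same class be separated.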
- Keywords
- Embeddings, Instance Segmentation, Microscopy
- MeSH
- algorithms * MeSH
- humans MeSH
- microscopy * methods MeSH
- image processing, computer-assisted methods MeSH
- imaging, three-dimensional methods MeSH
- Check Tag
- humans MeSH
- Publication type
- journal articles MeSH
- research supported by grant MeSH
In our paper, we present a performance evaluation of image segmentation algorithms on microscopic image data. Although many algorithms for image data partitioning exist, no universally best method is available yet. Moreover, images of microscopic samples vary in character and quality, which can negatively influence the performance of image segmentation algorithms. Thus, the issue of selecting a suitable method for a given set of image data is of great interest. We carried out a large number of experiments with a variety of segmentation methods to evaluate the behaviour of individual approaches on a testing set of microscopic images (cross-section images taken in three different modalities from the field of art restoration). The segmentation results were assessed by several indices used for measuring the output quality of image segmentation algorithms. Finally, the benefit of a segmentation combination approach is studied, and the applicability of the achieved results to another representative of the microscopic data category, biological samples, is shown.
- Keywords
- Image analysis, image segmentation, microscopic images, performance evaluation
- MeSH
- algorithms * MeSH
- microscopy * methods MeSH
- mice MeSH
- image processing, computer-assisted methods standards MeSH
- animals MeSH
- Check Tag
- mice MeSH
- animals MeSH
- Publication type
- journal articles MeSH
- evaluation study MeSH
- research supported by grant MeSH
OBJECTIVES: Class imbalance in datasets is one of the challenges of machine learning (ML) in medical image analysis. We employed synthetic data to overcome class imbalance when segmenting bitewing radiographs as an exemplary task for using ML. METHODS: After segmenting bitewings into classes, i.e., dental structures, restorations, and background, the pixel-level representation of implants in the training set (1543 bitewings) and testing set (177 bitewings) was 0.03 % and 0.07 %, respectively. A diffusion model and a generative adversarial network (pix2pix) were used to generate a dataset synthetically enriched in implants. A U-Net segmentation model was trained on (1) the original dataset, (2) the synthetic dataset, (3) the synthetic dataset with fine-tuning on the original dataset, or (4) a dataset naïvely oversampled with images containing implants. RESULTS: U-Net trained on the original dataset was unable to segment implants in the testing set. Model performance was significantly improved by naïve over-sampling, achieving the highest precision. The model trained only on synthetic data performed worse than naïve over-sampling in all metrics, but with fine-tuning on original data it achieved the highest Dice score, recall, F1 score and ROC AUC. The performance on classes other than implants was similar for all strategies except training only on synthetic data, which tended to perform worse. CONCLUSIONS: The use of synthetic data alone may deteriorate the performance of segmentation models. However, fine-tuning on original data could significantly enhance model performance, especially for heavily underrepresented classes. CLINICAL SIGNIFICANCE: This study explored the use of synthetic data to enhance segmentation of bitewing radiographs, focusing on underrepresented classes like implants.
Pre-training on synthetic data followed by fine-tuning on original data yielded the best results, highlighting the potential of synthetic data to advance AI-driven dental imaging and ultimately support clinical decision-making.
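The imbalance figures quoted above (implants at 0.03 % and 0.07 % of pixels) are per-class pixel shares over the labelled masks. A small sketch of how such a representation statistic can be computed (the function name and the integer-labelled mask encoding are assumptions; the study's actual tooling is not specified):

```python
import numpy as np

def class_pixel_share(masks, n_classes):
    """Fraction of all pixels that each integer class label occupies
    across a collection of segmentation masks."""
    counts = np.zeros(n_classes, dtype=np.int64)
    total = 0
    for m in masks:
        # Count label occurrences per mask and accumulate.
        counts += np.bincount(m.ravel(), minlength=n_classes)[:n_classes]
        total += m.size
    return counts / total
```

A class whose share falls orders of magnitude below the others is a candidate for the oversampling or synthetic-enrichment strategies compared in the abstract.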
- Keywords
- Artificial intelligence, Dataset imbalance, Dentistry, Diffusion model, Generative adversarial network, Synthetic medical data
- MeSH
- humans MeSH
- image processing, computer-assisted * methods MeSH
- machine learning * MeSH
- dental implants MeSH
- Check Tag
- humans MeSH
- Publication type
- journal articles MeSH
- Substance Names
- dental implants MeSH
Accurate segmentation of biomedical time-series, such as intracardiac electrograms, is vital for understanding physiological states and supporting clinical interventions. Traditional rule-based and feature engineering approaches often struggle with complex clinical patterns and noise. Recent deep learning advancements offer solutions, showing various benefits and drawbacks in segmentation tasks. This study evaluates five segmentation algorithms, from traditional rule-based methods to advanced deep learning models, using a unique clinical dataset of intracardiac signals from 100 patients. We compared a rule-based method, a support vector machine (SVM), a fully convolutional semantic neural network (UNet), a region proposal network (Faster R-CNN), and a recurrent neural network for electrocardiographic signals (DENS-ECG). Notably, Faster R-CNN has never been applied to 1-D signal segmentation before. Each model underwent Bayesian optimization to minimize hyperparameter bias. Results indicated that deep learning models outperformed traditional methods, with UNet achieving the highest segmentation score of 88.9 % (root mean square errors for onset and offset of 8.43 ms and 7.49 ms), closely followed by DENS-ECG at 87.8 %. Faster R-CNN and SVM showed moderate performance, while the rule-based method had the lowest accuracy (77.7 %). UNet and DENS-ECG excelled in capturing detailed features and handling noise, highlighting their potential for clinical application. Despite greater computational demands, their superior performance and diagnostic potential support further exploration in biomedical time-series analysis.
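The onset/offset errors reported above (8.43 ms and 7.49 ms for UNet) are root-mean-square errors between predicted and reference boundary times. A minimal sketch of that metric (the function name and the one-to-one pairing of boundaries are assumptions; the study's exact matching procedure is not described here):

```python
import numpy as np

def boundary_rmse(pred, ref):
    """Root-mean-square error between paired predicted and reference
    boundary times (e.g., activation onsets or offsets, in ms)."""
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))
```

Computing the metric separately for onsets and offsets, as in the abstract, shows whether a model is biased toward late detections on one side of each segment.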
- Keywords
- DENS-ECG, Electrophysiology Study, Faster R-CNN, Rule-based Delineation, Support Vector Machines, Time-series Segmentation, U-Net
- MeSH
- algorithms MeSH
- Bayes theorem MeSH
- deep learning MeSH
- electrocardiography * methods MeSH
- humans MeSH
- neural networks MeSH
- signal processing, computer-assisted * MeSH
- support vector machine MeSH
- Check Tag
- humans MeSH
- Publication type
- journal articles MeSH
Although the field of sleep study has greatly developed over recent years, the most common and efficient way to detect sleep issues remains a sleep examination performed in a sleep laboratory. This examination measures several vital signals by polysomnograph during a full night's sleep using multiple sensors connected to the patient's body. Nevertheless, despite being the gold standard, the attached sensors and the unfamiliar environment inevitably impact the quality of the patient's sleep and the examination itself. Therefore, with the novel development of accurate and affordable 3D sensing devices, new approaches for non-contact sleep study have emerged. These methods use various techniques to extract the same breathing parameters, but without contact. However, to enable reliable remote extraction, these methods require accurate identification of the basic region of interest (ROI), i.e., the patient's chest area. The lack of automated ROI segmentation of 3D time series is currently holding back the development process. We propose an automatic chest area segmentation algorithm that, given a time series of 3D frames containing a sleeping patient, outputs a segmentation image with the pixels that correspond to the chest area. Beyond significantly speeding up the development process of the non-contact methods, accurate automatic segmentation can enable more precise feature extraction. In addition, further tests of the algorithm on existing data demonstrate its ability to improve the sensitivity of a prior solution that uses manual ROI selection. The approach is on average 46.9% more sensitive, with a maximal improvement of 220%, when compared to manual ROI. All of the above can pave the way for non-contact algorithms to become leading candidates to replace the traditional methods used today.
- Keywords
- 3D data processing, Breathing analysis, Depth sensors, Human-machine interaction, MS Kinect data acquisition, Segmentation
- MeSH
- algorithms * MeSH
- respiration MeSH
- humans MeSH
- image processing, computer-assisted methods MeSH
- polysomnography MeSH
- sleep MeSH
- imaging, three-dimensional * methods MeSH
- Check Tag
- humans MeSH
- Publication type
- journal articles MeSH
This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. This framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated. We describe the available database, comprising multi-center, multi-vendor, and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard, and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved.
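Quantitative comparisons like the one above rely on agreement measures between an algorithm's mask and the reference standard. The workshop's three specific measures are not reproduced here; as an illustrative stand-in, the widely used Dice and Jaccard overlap indices for binary masks can be computed as follows (a sketch, not the framework's actual code):

```python
import numpy as np

def dice_jaccard(a, b):
    """Dice coefficient and Jaccard index between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())
    jacc = inter / union
    return float(dice), float(jacc)
```

Both indices range from 0 (no overlap) to 1 (perfect agreement); Dice weights the intersection more heavily, so it is always at least as large as Jaccard for non-trivial masks.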
- Keywords
- Algorithm comparison, Evaluation framework, IVUS (intravascular ultrasound), Image segmentation
- MeSH
- databases, factual standards MeSH
- internationality MeSH
- image interpretation, computer-assisted methods standards MeSH
- ultrasonography, interventional methods standards MeSH
- humans MeSH
- coronary artery disease diagnostic imaging MeSH
- reference values MeSH
- reproducibility of results MeSH
- sensitivity and specificity MeSH
- practice guidelines as topic * MeSH
- Check Tag
- humans MeSH
- Publication type
- journal articles MeSH
- research supported by grant MeSH
- Research Support, N.I.H., Extramural MeSH
- Research Support, U.S. Gov't, Non-P.H.S. MeSH
It is now possible to generate large volumes of high-quality images of biomolecules at near-atomic resolution and in near-native states using cryogenic electron microscopy/electron tomography (Cryo-EM/ET). However, the precise annotation of structures like filaments and membranes remains a major barrier to applying these methods at high throughput. To address this, we present TARDIS (Transformer-based Rapid Dimensionless Instance Segmentation), a machine-learning framework for fast and accurate annotation of micrographs and tomograms. TARDIS combines deep learning for semantic segmentation with a novel geometric model for precise instance segmentation of various macromolecules. We develop pre-trained models within TARDIS for segmenting microtubules and membranes, demonstrating high accuracy across multiple modalities and resolutions and enabling segmentation of over 13,000 tomograms from the CZI Cryo-Electron Tomography data portal. As a modular framework, TARDIS can be extended to new structures and imaging modalities with minimal modification. TARDIS is open-source and freely available at https://github.com/SMLC-NYSBC/TARDIS, and accelerates analysis of high-resolution biomolecular structural imaging data.
- Keywords
- CNN, Cryo-EM/ET, DIST, Filaments, Instance Segmentation, Membranes, Microtubules, Point Cloud, Segmentation, Semantic Segmentation, TARDIS, TEM EM/ET
- Publication type
- journal articles MeSH
- preprints MeSH
BACKGROUND: Manual analysis of (mini-)rhizotron (MR) images is tedious. Several methods have been proposed for semantic root segmentation based on homogeneous, single-source MR datasets. Recent advances in deep learning (DL) have enabled automated feature extraction, but comparisons of segmentation accuracy, false positives and transferability are virtually lacking. Here we compare six state-of-the-art methods and propose two improved DL models for semantic root segmentation using a large MR dataset with and without augmented data. We determine the performance of the methods on a homogeneous maize dataset, and on a mixed dataset of > 8 species (mixtures), 6 soil types and 4 imaging systems. The generalisation potential of the derived DL models is determined on a distinct, unseen dataset. RESULTS: The best performance was achieved by the U-Net models; the more complex the encoder, the better the accuracy and generalisation of the model. The heterogeneous mixed MR dataset was particularly challenging for the non-U-Net techniques. Data augmentation enhanced model performance. We demonstrated the improved performance of deep meta-architectures and feature extractors, and a reduction in the number of false positives. CONCLUSIONS: Although correction factors are still required to match human-labelled root lengths, neural network architectures greatly reduce the time required to compute the root length. The more complex architectures illustrate how future improvements in root segmentation within MR images can be achieved, particularly reaching higher segmentation accuracies and model generalisation when analysing real-world datasets with artefacts, limiting the need for model retraining.
- Keywords
- Automatic image segmentation, Data augmentation, Deep learning, False positives, Fine roots, Image processing, Minirhizotron, Neural networks, Root segmentation, U-Net
- Publication type
- journal articles MeSH
OBJECTIVE: The most important part of signal processing for classification is feature extraction, a mapping from the original input electroencephalographic (EEG) data space to a new feature space with the greatest class separability. Feature extraction is not only the most important but also the most difficult part of the classification process, as it defines the input data and the classification quality. An ideal set of features would make the classification problem trivial. This article presents novel methods of feature extraction processing and automatic epilepsy seizure classification combining machine learning methods with genetic evolution algorithms. METHODS: Classification is performed on EEG data that represent electric brain activity. First, the signal is preprocessed with digital filtration and adaptive segmentation using fractal dimension as the only segmentation measure. In the next step, a novel method using genetic programming (GP) combined with a support vector machine (SVM) confusion matrix as the fitness function weight is used to extract feature vectors compressed into a lower-dimensional space and classify the final result into ictal or interictal epochs. RESULTS: The final application of the GP-SVM method improves the discriminatory performance of a classifier while reducing feature dimensionality at the same time. Members of the GP tree structure represent the features themselves, and their number is automatically decided by the compression function introduced in this paper. This novel method improves the overall performance of the SVM classification by dramatically reducing the size of the input feature vector. CONCLUSION: According to the results, the accuracy of this algorithm is very high and comparable, or even superior, to other automatic detection algorithms. In combination with its great efficiency, this algorithm can be used in real-time epilepsy detection applications.
The classification results show high sensitivity and specificity, except for the Generalized Tonic Clonic Seizure (GTCS). As the next step, the compression stage and the final SVM evaluation stage will be optimized. More data need to be obtained on GTCS to improve the overall classification score for GTCS.
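The adaptive segmentation step above uses the fractal dimension of the EEG signal as its only segmentation measure. One common estimator of the fractal dimension of a 1-D signal is Higuchi's method, sketched below (an illustrative implementation; the abstract does not specify which estimator the authors use):

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the Higuchi fractal dimension of a 1-D signal.

    For each lag k, the mean normalized curve length L(k) is computed over
    k down-sampled subsequences; the FD is the slope of log L(k) vs. log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks = np.arange(1, kmax + 1)
    lk = []
    for k in ks:
        lm = []
        for m in range(k):
            idx = np.arange(m, n, k)       # subsequence x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            length = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lm.append(length * norm / k)
        lk.append(np.mean(lm))
    slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lk), 1)
    return float(slope)
```

For a straight line the estimate is close to 1, while for white noise it approaches 2; sliding such an estimate along the recording and cutting where it changes abruptly is one way adaptive segment boundaries can be placed.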
- Keywords
- EEG, SVM, adaptive segmentation, fractal dimensions, genetic programming
- Publication type
- journal articles MeSH