Most cited article - PubMed ID 29083403
An objective comparison of cell-tracking algorithms
Artificial intelligence (AI) methods are powerful tools for biological image analysis and processing. High-quality annotated images are key to training and developing new algorithms, but access to such data is often hindered by the lack of standards for sharing datasets. We discuss the barriers to sharing annotated image datasets and suggest specific guidelines to improve the reuse of bioimages and annotations for AI applications. These include standards on data formats, metadata, data presentation and sharing, and incentives to generate new datasets. We anticipate that the Metadata, Incentives, Formats and Accessibility (MIFA) recommendations will accelerate the development of AI tools for bioimage analysis by facilitating access to high-quality training and benchmarking data.
- Publication type
- Journal Article MeSH
- Review MeSH
The preservation of morphological features, such as protrusions and concavities, and of the topology of input shapes is important when establishing reference data for benchmarking segmentation algorithms or when constructing a mean or median shape. We present a contourwise topology-preserving fusion method, called shape-aware topology-preserving means (SATM), for merging complex, simply connected shapes. The method is based on key point matching and piecewise contour averaging. Unlike existing pixelwise and contourwise fusion methods, SATM preserves topology and does not smooth morphological features. We also present a detailed comparison of SATM with state-of-the-art fusion techniques for the purpose of benchmarking and median shape construction. Our experiments show that SATM outperforms these techniques in terms of shape-related measures that reflect shape complexity, making it a reliable method both for establishing a consensus of segmentation annotations and for computing mean shapes.
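The published SATM algorithm is not reproduced here; as a rough illustration of what contourwise (pointwise) averaging means, the sketch below resamples two closed contours by arc length and averages them point by point, using a brute-force circular shift as a crude stand-in for SATM's key point matching (all function names and parameters are illustrative assumptions).

```python
import numpy as np

def resample_closed_contour(points, n=200):
    """Resample a closed 2-D contour (array of shape (M, 2)) to n points
    equally spaced along its arc length."""
    pts = np.vstack([points, points[:1]])             # close the loop
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    ti = np.linspace(0.0, t[-1], n, endpoint=False)
    return np.column_stack([np.interp(ti, t, pts[:, 0]),
                            np.interp(ti, t, pts[:, 1])])

def naive_contour_mean(c1, c2, n=200):
    """Pointwise mean of two closed contours after arc-length resampling.
    Start points are matched by brute-force circular shift (a crude stand-in
    for key point matching); the result is again a single closed contour,
    so simple connectivity is preserved by construction."""
    a, b = resample_closed_contour(c1, n), resample_closed_contour(c2, n)
    best = min((np.roll(b, k, axis=0) for k in range(n)),
               key=lambda s: np.sum((a - s) ** 2))
    return (a + best) / 2.0

if __name__ == "__main__":
    ang = np.linspace(0, 2 * np.pi, 50, endpoint=False)
    circle = np.column_stack([np.cos(ang), np.sin(ang)])
    ellipse = circle * np.array([1.5, 0.8])
    print(naive_contour_mean(circle, ellipse).shape)   # (200, 2)
```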
- Keywords
- Average shape, Mean shape, Median shape, Segmentation mask fusion, Shape analysis
- Publication type
- Journal Article MeSH
Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint: a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
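Metrics Reloaded is a selection framework rather than a code library; purely to illustrate why problem-aware metric choice matters, the sketch below (plain NumPy, toy masks, hypothetical helper names) computes two widely used overlap metrics that score the same prediction quite differently.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def iou(pred, gt):
    """Intersection over union (Jaccard index) of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

# Toy masks: the same prediction receives noticeably different scores,
# one small reason why problem-aware metric selection matters.
gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True
print(f"Dice = {dice(pred, gt):.3f}, IoU = {iou(pred, gt):.3f}")
```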
- MeSH
- Algorithms * MeSH
- Image Processing, Computer-Assisted * MeSH
- Semantics MeSH
- Machine Learning MeSH
- Publication type
- Journal Article MeSH
- Review MeSH
The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.
DNA double-strand breaks (DSBs), marked by ionizing radiation-induced (repair) foci (IRIFs), are the most serious DNA lesions and are dangerous to human health. IRIF quantification based on confocal microscopy represents the most sensitive and gold-standard method in radiation biodosimetry and allows research on DSB induction and repair at the molecular and single-cell levels. In this study, we introduce DeepFoci, a deep learning-based, fully automatic method for IRIF counting and morphometric analysis. DeepFoci is designed to work with 3D multichannel data (trained for 53BP1 and γH2AX) and uses a U-Net for nucleus segmentation and IRIF detection, together with maximally stable extremal region-based IRIF segmentation. The proposed method was trained and tested on challenging datasets consisting of mixtures of nonirradiated and irradiated cells of different types and IRIF characteristics: permanent cell lines (NHDFs, U-87) and primary cell cultures prepared from tumors and adjacent normal tissues of head and neck cancer patients. The cells were dosed with 0.5-8 Gy γ-rays and fixed at multiple (0-24 h) postirradiation times. Under all circumstances, DeepFoci quantified the number of IRIFs with the highest accuracy among current advanced algorithms. Moreover, while the detection error of DeepFoci remained comparable to the variability between two experienced experts, the software maintained its sensitivity and fidelity across dramatically different IRIF counts per nucleus. In addition, information was extracted on IRIF 3D morphometric features and repair protein colocalization within IRIFs. This approach allowed multiparameter IRIF categorization of single- or multichannel data, thereby refining the analysis of DSB repair processes and classification of patient tumors, with the potential to identify specific cell subclones. The developed software improves IRIF quantification for various practical applications (radiotherapy monitoring, biodosimetry, etc.) and opens the door to advanced DSB focus analysis and, in turn, a better understanding of (radiation-induced) DNA damage and repair.
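DeepFoci itself combines a 3D multichannel U-Net with MSER-based focus segmentation; the sketch below is only a 2-D OpenCV illustration of the maximally stable extremal region (MSER) step on a synthetic toy image and is not the DeepFoci pipeline (image contents and parameters are assumptions).

```python
import numpy as np
import cv2  # opencv-python

# Synthetic single-channel image with two bright blob-like "foci".
img = np.zeros((128, 128), np.uint8)
cv2.circle(img, (40, 40), 6, 180, -1)
cv2.circle(img, (90, 80), 8, 220, -1)
img = cv2.GaussianBlur(img, (9, 9), 2)

# MSER keeps connected regions that stay stable over a range of intensity
# thresholds, which is why it suits blob-like foci of varying brightness.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)
print(f"{len(regions)} stable regions detected")
```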
- Keywords
- 53BP1, P53-binding protein 1, Biodosimetry, CNN, convolutional neural network, Confocal Microscopy, Convolutional Neural Network, DNA Damage and Repair, DSB, DNA double-strand break, Deep Learning, FOV, field of view, GUI, graphical user interface, IRIF, ionizing radiation-induced (repair) foci, Image Analysis, Ionizing Radiation-Induced Foci (IRIFs), MSER, maximally stable extremal region (algorithm), Morphometry, NHDFs, normal human dermal fibroblasts, RAD51, DNA repair protein RAD51 homolog 1, U-87, U-87 glioblastoma cell line, γH2AX, histone H2AX phosphorylated at serine 139
- Publication type
- Journal Article MeSH
In this paper, a novel U-Net-based method for robust adherent cell segmentation in quantitative phase microscopy images is designed and optimised. We designed and evaluated four specific post-processing pipelines. To increase the transferability to different cell types, non-deep learning transfer with adjustable parameters is used in the post-processing step. Additionally, we proposed a self-supervised pretraining technique using unlabelled data, in which the network is trained to reconstruct multiple image distortions; this improved the segmentation performance from 0.67 to 0.70 in terms of object-wise intersection over union. Moreover, we publish a new dataset of manually labelled images suitable for this task, together with the unlabelled data for self-supervised pretraining.
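The paper's pretraining uses a U-Net and its own set of image distortions; the sketch below is a minimal, hypothetical stand-in (tiny autoencoder, noise-plus-dropout distortions) that only illustrates the distortion-reconstruction idea behind self-supervised pretraining on unlabelled images.

```python
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    """Hypothetical stand-in for the paper's U-Net."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 8, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(8, 1, 3, padding=1)

    def forward(self, x):
        return self.dec(self.enc(x))

def distort(x):
    """Assumed distortions: additive Gaussian noise plus randomly dropped pixels."""
    noisy = x + 0.1 * torch.randn_like(x)
    keep = (torch.rand_like(x) > 0.05).float()
    return noisy * keep

model = TinyAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

clean = torch.rand(4, 1, 64, 64)                  # unlabelled QPI-like batch
loss = nn.functional.mse_loss(model(distort(clean)), clean)
opt.zero_grad(); loss.backward(); opt.step()      # one pretraining step
print(f"reconstruction loss: {loss.item():.4f}")
```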
- Publication type
- Journal Article MeSH
Image analysis is key to extracting quantitative information from scientific microscopy images, but the methods involved are now often so refined that they can no longer be unambiguously described by written protocols. We introduce BIAFLOWS, an open-source web tool for reproducibly deploying and benchmarking bioimage analysis workflows from any software ecosystem. A curated instance of BIAFLOWS populated with 34 image analysis workflows and 15 microscopy image datasets recapitulating common bioimage analysis problems is available online. The workflows can be launched and assessed remotely by comparing their performance visually and according to standard benchmark metrics. We illustrated these features by comparing seven nuclei segmentation workflows, including deep-learning methods. BIAFLOWS makes it possible to benchmark and share bioimage analysis workflows, hence safeguarding research results and promoting high-quality standards in image analysis. The platform is thoroughly documented and ready to gather annotated microscopy datasets and workflows contributed by the bioimaging community.
- Keywords
- benchmarking, bioimaging, community, deep learning, deployment, image analysis, reproducibility, software, web application
- Publication type
- Journal Article MeSH
Cell viability and cytotoxicity assays are highly important for drug screening and cytotoxicity tests of antineoplastic or other therapeutic drugs. Even though biochemistry-based tests are helpful for obtaining a preliminary overview, their results should be confirmed by methods based on direct cell death assessment. In this study, time-dependent changes in quantitative phase-based parameters during cell death were determined, and a methodology usable for rapid, label-free assessment of direct cell death was introduced. The goal of our study was to distinguish between apoptosis and primary lytic cell death based on morphological features. We distinguished the lytic and non-lytic types of cell death according to their end-point features (the Dance of Death typical of apoptosis versus the swelling and membrane rupture typical of all kinds of necrosis, common to necroptosis, pyroptosis, ferroptosis and accidental cell death). Our method utilizes Quantitative Phase Imaging (QPI), which enables time-lapse observation of subtle changes in cell mass distribution. According to our results, morphological and dynamical features extracted from QPI micrographs are suitable for cell death detection (76% accuracy in comparison with manual annotation). Furthermore, based on QPI data alone and machine learning, we were able to classify typical dynamical changes of cell morphology during both caspase 3,7-dependent and -independent cell death subroutines. The main parameters used for label-free detection of these cell death modalities were cell density (pg/pixel) and the average intensity change of cell pixels, further designated as the Cell Dynamic Score (CDS). To the best of our knowledge, this is the first study introducing CDS and cell density as parameters typical for individual cell death subroutines, with a prediction accuracy of 75.4% for caspase 3,7-dependent and -independent cell death.
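The exact definition of the Cell Dynamic Score is given in the paper; the sketch below assumes one plausible reading of "average intensity change of cell pixels" (mean absolute frame-to-frame difference over a cell mask) and should be taken as an illustrative approximation, not the published formula.

```python
import numpy as np

def cell_dynamic_score(frames, cell_mask):
    """Assumed reading of the Cell Dynamic Score: mean absolute frame-to-frame
    change of the quantitative-phase signal over the pixels of one cell.
    frames    -- (T, H, W) array of phase values (e.g. pg/pixel)
    cell_mask -- (H, W) boolean mask of the cell"""
    diffs = np.abs(np.diff(frames, axis=0))       # (T-1, H, W)
    return float(diffs[:, cell_mask].mean())

def mean_cell_density(frame, cell_mask):
    """Average dry-mass density (pg/pixel) over the cell's pixels."""
    return float(frame[cell_mask].mean())

rng = np.random.default_rng(0)
frames = rng.random((10, 64, 64))                 # toy time-lapse stack
mask = np.zeros((64, 64), bool); mask[20:40, 20:40] = True
print(cell_dynamic_score(frames, mask), mean_cell_density(frames[0], mask))
```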
- MeSH
- Algorithms MeSH
- Apoptosis * drug effects MeSH
- Cell Death * drug effects MeSH
- Cells ultrastructure MeSH
- Time-Lapse Imaging methods MeSH
- Time Factors MeSH
- Cells, Cultured MeSH
- Humans MeSH
- Cell Line, Tumor MeSH
- Optical Imaging methods MeSH
- Cell Count MeSH
- Models, Statistical MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
MOTIVATION: Objective assessment of bioimage analysis methods is an essential step towards understanding their robustness and parameter sensitivity, calling for the availability of heterogeneous bioimage datasets accompanied by their reference annotations. Because manual annotations are known to be arduous, highly subjective and barely reproducible, numerous simulators have emerged over past decades, generating synthetic bioimage datasets complemented with inherent reference annotations. However, the installation and configuration of these tools generally constitutes a barrier to their widespread use. RESULTS: We present a modern, modular web interface, CytoPacq, to facilitate the generation of synthetic benchmark datasets relevant for multi-dimensional cell imaging. CytoPacq provides a user-friendly graphical interface with contextual tooltips and currently allows comfortable access, in a straightforward and self-contained form, to various fluorescence microscopy cell simulation systems that have already been recognized and used by the scientific community. AVAILABILITY AND IMPLEMENTATION: CytoPacq is a publicly available online service running at https://cbia.fi.muni.cz/simulator. More information about it as well as examples of generated bioimage datasets are available directly through the web interface. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
- MeSH
- Computer Simulation MeSH
- Software * MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
Small extracellular vesicles (sEVs) are cell-derived vesicles of nanoscale size (~30-200 nm) that function as conveyors of information between cells, reflecting the cell of their origin and its physiological condition in their content. Valuable information on the shape and even on the composition of individual sEVs can be recorded using transmission electron microscopy (TEM). Unfortunately, sample preparation for TEM image acquisition is a complex procedure, which often leads to noisy images and renders automatic quantification of sEVs an extremely difficult task. We present a completely deep-learning-based pipeline for the segmentation of sEVs in TEM images. Our method applies a residual convolutional neural network to obtain fine masks and uses the Radon transform for splitting clustered sEVs. Using three manually annotated datasets that cover the natural variability typical of sEV studies, we show that the proposed method outperforms two different state-of-the-art approaches in terms of detection and segmentation performance. Furthermore, the diameter and roundness of the segmented vesicles are estimated with an error of less than 10%, which supports the high potential of our method in biological applications.
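The Radon-transform splitting of clustered vesicles is not reproduced here; the sketch below only shows how diameter and roundness, the two morphometric outputs mentioned above, could be read from a predicted binary mask with scikit-image (the toy mask and the circularity formula 4πA/P² are assumptions).

```python
import numpy as np
from skimage import draw, measure

# Toy binary mask with one roughly circular "vesicle" standing in for a
# network-predicted sEV mask.
mask = np.zeros((128, 128), bool)
rr, cc = draw.disk((64, 64), 20)
mask[rr, cc] = True

for region in measure.regionprops(measure.label(mask)):
    diameter = region.equivalent_diameter               # equal-area circle, px
    roundness = 4 * np.pi * region.area / region.perimeter ** 2
    print(f"diameter ~ {diameter:.1f} px, roundness ~ {roundness:.2f}")
```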
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
