processing pipeline
Query
INTRODUCTION: Recent advances in machine learning provide new possibilities to process and analyse observational patient data to predict patient outcomes. In this paper, we introduce a data processing pipeline for cardiogenic shock (CS) prediction from the MIMIC III database of intensive cardiac care unit patients with acute coronary syndrome. The ability to identify high-risk patients could allow pre-emptive measures to be taken and thus prevent the development of CS. METHODS: We focus mainly on techniques for the imputation of missing data: we build an imputation pipeline and compare the performance of several multivariate imputation algorithms, including k-nearest neighbours, two singular value decomposition (SVD)-based methods, and Multiple Imputation by Chained Equations. After imputation, we select the final subjects and variables from the imputed dataset and showcase the performance of a gradient-boosting framework with a tree-based classifier for cardiogenic shock prediction. RESULTS: Thanks to data cleaning and imputation, we achieved good classification performance (cross-validated mean area under the curve 0.805) without hyperparameter optimization. CONCLUSION: We believe our pre-processing pipeline will also prove helpful for other classification and regression experiments.
- Publication Type
- Journal Article MeSH
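As a hedged illustration of the kind of comparison described in the abstract above, the following Python sketch scores two generic multivariate imputers with a gradient-boosted tree classifier and cross-validated AUC; the file name, label column, and model choices are placeholder assumptions, not the authors' actual pipeline or data.

```python
# Hypothetical sketch: compare imputation strategies, then score a
# gradient-boosted tree classifier with cross-validated AUC.
# File path and column names are placeholders, not the authors' data.
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import KNNImputer, IterativeImputer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

df = pd.read_csv("icu_features.csv")           # placeholder path
y = df.pop("cardiogenic_shock").to_numpy()     # placeholder label column
X = df.to_numpy()

imputers = {
    "knn": KNNImputer(n_neighbors=5),
    "mice_like": IterativeImputer(max_iter=10, random_state=0),
}

for name, imputer in imputers.items():
    clf = make_pipeline(imputer, HistGradientBoostingClassifier(random_state=0))
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f}")
```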
BACKGROUND: High-throughput bioinformatics analyses of next-generation sequencing (NGS) data often require challenging pipeline optimization. The key problem is choosing appropriate tools and selecting the best parameters for optimal precision and recall. RESULTS: Here we introduce ToTem, a tool for automated pipeline optimization. ToTem is a stand-alone web application with a comprehensive graphical user interface (GUI). ToTem is written in Java and PHP with an underlying connection to a MySQL database. Its primary role is to automatically generate, execute and benchmark different variant calling pipeline settings. Our tool allows an analysis to be started from any level of the process and makes it possible to plug in almost any tool or custom code. To prevent over-fitting of pipeline parameters, ToTem ensures their reproducibility by using cross-validation techniques that penalize the final precision, recall and F-measure. The results are presented as interactive graphs and tables, allowing an optimal pipeline to be selected based on the user's priorities. Using ToTem, we were able to optimize somatic variant calling from ultra-deep targeted gene sequencing (TGS) data and germline variant detection in whole genome sequencing (WGS) data. CONCLUSIONS: ToTem is a tool for automated pipeline optimization which is freely available as a web application at https://totem.software.
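The core idea of cross-validated parameter benchmarking can be sketched as follows; this is an illustrative Python analogue, not ToTem's Java/PHP implementation, and the filtering step, parameter names and toy data structures are assumptions.

```python
# Illustrative sketch (not ToTem's code): pick variant-filter parameters by
# cross-validating F1 over genomic regions, penalising settings that only
# fit a single region. Candidate calls and truth sets are toy placeholders.
from itertools import product
from statistics import mean
from sklearn.model_selection import KFold

def f1(calls, truth):
    tp = len(calls & truth)
    prec = tp / len(calls) if calls else 0.0
    rec = tp / len(truth) if truth else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def apply_setting(candidates, min_qual, min_depth):
    """Stand-in for re-running a caller: filter pre-computed candidate calls."""
    return {v for (v, qual, depth) in candidates
            if qual >= min_qual and depth >= min_depth}

def select_setting(regions, candidates, truth, grid, n_splits=3):
    """regions: list of region ids; candidates/truth: dicts keyed by region."""
    scores = {}
    for setting in product(grid["min_qual"], grid["min_depth"]):
        fold_scores = []
        for _, test_idx in KFold(n_splits=n_splits).split(regions):
            held_out = [regions[i] for i in test_idx]
            fold_scores.append(mean(
                f1(apply_setting(candidates[r], *setting), truth[r])
                for r in held_out))
        scores[setting] = mean(fold_scores)
    best = max(scores, key=scores.get)
    return best, scores
```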
BACKGROUND: Next-generation sequencing (NGS) technology allows laboratories to investigate virome composition in clinical and environmental samples in a culture-independent way. There is a need for bioinformatic tools capable of parallel processing of virome sequencing data by exactly identical methods: this is especially important in studies of multifactorial diseases, or in parallel comparison of laboratory protocols. RESULTS: We have developed a web-based application allowing direct upload of sequences from multiple virome samples using custom parameters. The samples are then processed in parallel using an identical protocol, and can easily be reanalyzed. The pipeline performs de novo assembly, taxonomic classification of viruses, and sample analyses based on user-defined grouping categories. Tables of virus abundance are produced from cross-validation by remapping the sequencing reads to a union of all observed reference viruses. In addition, read sets and reports are created after processing unmapped reads against known human and bacterial ribosome references. Secured interactive results are dynamically plotted with population and diversity charts, clustered heatmaps and a sortable, searchable abundance table. CONCLUSIONS: The Vipie web application is a unique tool for multi-sample metagenomic analysis of viral data, producing searchable hit tables, interactive population maps, alpha diversity measures and clustered heatmaps grouped by applicable custom sample categories. Known references such as the human genome and bacterial ribosomal genes are optionally removed from unmapped ('dark matter') reads. Secured results are accessible and shareable in modern browsers. Vipie is a freely available web-based tool whose code is open source.
- MeSH
- Genetic Variation MeSH
- Genomics methods MeSH
- Internet * MeSH
- Humans MeSH
- Microbiota genetics MeSH
- Software * MeSH
- Viruses genetics MeSH
- High-Throughput Nucleotide Sequencing * MeSH
- Check Tag
- Humans MeSH
- Publication Type
- Journal Article MeSH
- Grant-Supported Research MeSH
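A minimal sketch of the abundance-table step described above, assuming read-to-reference mappings have already been produced by an aligner; the sample, read and virus names below are placeholders:

```python
# Toy sketch of one Vipie-style output: an abundance table built by counting
# reads remapped to the union of observed reference viruses.
import pandas as pd

# (sample, read_id, reference) tuples, e.g. parsed from aligner output
mappings = [
    ("sampleA", "read1", "Norovirus_GII"),
    ("sampleA", "read2", "Norovirus_GII"),
    ("sampleA", "read3", "Adenovirus_C"),
    ("sampleB", "read1", "Adenovirus_C"),
]

df = pd.DataFrame(mappings, columns=["sample", "read", "reference"])
abundance = (df.groupby(["sample", "reference"])["read"]
               .nunique()
               .unstack(fill_value=0))
print(abundance)
```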
Cryo-electron microscopy has established itself as a mature structural biology technique for elucidating the three-dimensional structure of biological macromolecules. The Coulomb potential of the sample is imaged by an electron beam, and fast semiconductor detectors produce movies of the sample under study. These movies have to be further processed by a whole pipeline of image-processing algorithms that produce the final structure of the macromolecule. In this chapter, we illustrate this whole processing pipeline, highlighting the strength of "meta algorithms": combinations of several algorithms, each with a different mathematical rationale, used to distinguish correctly estimated parameters from incorrectly estimated ones. We show how this strategy leads to superior performance of the whole pipeline as well as more confident assessments of the reconstructed structures. The "meta algorithm" strategy is common to many fields and, in particular, has provided excellent results in bioinformatics. We illustrate this combination using the Scipion workflow engine.
- MeSH
- Algorithms * MeSH
- Cryoelectron Microscopy methods MeSH
- Macromolecular Substances ultrastructure MeSH
- Molecular Biology methods MeSH
- Image Processing, Computer-Assisted methods MeSH
- Workflow MeSH
- Computational Biology MeSH
- Single Molecule Imaging methods MeSH
- Imaging, Three-Dimensional methods MeSH
- Publication Type
- Journal Article MeSH
- Grant-Supported Research MeSH
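As a hedged, simplified illustration of the "meta algorithm" idea, the sketch below accepts a per-particle parameter only when several independent estimates agree within a tolerance; it uses a toy one-dimensional angle, whereas real Scipion consensus protocols compare full 3D orientations and other parameters.

```python
# Minimal sketch of the "meta algorithm" idea: accept a per-particle parameter
# (here, a toy 1-D orientation angle) only when independent estimators agree
# within a tolerance.
import numpy as np

def consensus_mask(estimates, tol_deg=5.0):
    """estimates: array of shape (n_algorithms, n_particles), in degrees.
    Returns a boolean mask of particles whose estimates all agree."""
    rad = np.deg2rad(np.asarray(estimates))
    # circular mean per particle, then wrapped deviation of each estimate
    mean_angle = np.arctan2(np.sin(rad).mean(axis=0), np.cos(rad).mean(axis=0))
    diff = np.rad2deg(np.angle(np.exp(1j * (rad - mean_angle))))
    return np.all(np.abs(diff) <= tol_deg, axis=0)

angles = np.array([[10.0, 182.0,  47.0],
                   [12.0, 185.0, 133.0],
                   [ 9.0, 184.0,  45.0]])
print(consensus_mask(angles))   # -> [ True  True False]
```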
This work presents a novel, fully automated method for retinal analysis in images acquired with a flood-illuminated adaptive optics retinal camera (AO-FIO). The proposed processing pipeline consists of several steps. First, we register single AO-FIO images into a montage image capturing a larger retinal area. The registration is performed by a combination of phase correlation and the scale-invariant feature transform (SIFT) method. A set of 200 AO-FIO images from 10 healthy subjects (10 images from the left eye and 10 from the right eye) is processed into 20 montage images and mutually aligned according to the automatically detected fovea center. Second, the photoreceptors in the montage images are detected using a method based on regional maxima localization, with detector parameters determined by Bayesian optimization against photoreceptors manually labeled by three evaluators. The detection performance, assessed by the Dice coefficient, ranges from 0.72 to 0.8. Next, the corresponding density maps are generated for each of the montage images. Finally, representative averaged photoreceptor density maps are created for the left and right eye, enabling comprehensive analysis across the montage images and a straightforward comparison with available histological data and other published studies. The proposed method and software thus generate AO-based photoreceptor density maps for all measured locations fully automatically, making the approach suitable for large studies, which are in pressing need of automated approaches. In addition, the application MATADOR (MATlab ADaptive Optics Retinal Image Analysis), which implements the described pipeline, and the dataset with photoreceptor labels are made publicly available.
- Publication Type
- Journal Article MeSH
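A minimal sketch of the photoreceptor-detection step, assuming a regional-maxima detector on a smoothed image plus a simple mask-overlap Dice score; the smoothing and threshold values are illustrative, not the optimized parameters found by the authors' Bayesian optimization, and the paper's Dice is computed on matched detections rather than mask overlap.

```python
# Hedged sketch: regional maxima on a smoothed image, and a Dice score
# between two binary masks. Parameter values are illustrative only.
import numpy as np
from scipy import ndimage

def detect_photoreceptors(image, sigma=1.0, window=5, min_intensity=0.1):
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    maxima = smoothed == ndimage.maximum_filter(smoothed, size=window)
    return maxima & (smoothed > min_intensity)

def dice(pred_mask, true_mask):
    inter = np.logical_and(pred_mask, true_mask).sum()
    denom = pred_mask.sum() + true_mask.sum()
    return 2.0 * inter / denom if denom else 1.0
```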
BACKGROUND: Environmental DNA and metabarcoding allow the identification of a mixture of species and launch a new era in bio- and eco-assessment. Many steps are required to obtain taxonomically assigned matrices from raw data. For most of these, a plethora of tools are available; each tool's execution parameters need to be tailored to reflect each experiment's idiosyncrasy. Adding to this complexity, the computation capacity of high-performance computing (HPC) systems is frequently required for such analyses. To address these difficulties, bioinformatic pipelines need to combine state-of-the-art technologies and algorithms with a framework that is easy to obtain, set up and use, allowing researchers to tune each study. Software containerization technologies ease the sharing and running of software packages across operating systems and thus strongly facilitate pipeline development and usage. Likewise, programming languages specialized for big-data pipelines incorporate features such as roll-back checkpoints and on-demand partial pipeline execution. FINDINGS: PEMA is a containerized assembly of key metabarcoding analysis tools that requires low effort in setting up, running, and customizing to researchers' needs. Based on third-party tools, PEMA performs read pre-processing, (molecular) operational taxonomic unit clustering, amplicon sequence variant inference, and taxonomy assignment for 16S and 18S ribosomal RNA, as well as ITS and COI marker gene data. Owing to its simplified parameterization and checkpoint support, PEMA allows users to explore alternative algorithms for specific steps of the pipeline without the need for a complete re-execution. PEMA was evaluated against both mock communities and previously published datasets and achieved results of comparable quality. CONCLUSIONS: A high-performance computing-based approach was used to develop PEMA; however, it can be used on personal computers as well. PEMA's time-efficient performance and good results will allow it to be used for accurate environmental DNA metabarcoding analysis, thus enhancing the applicability of next-generation biodiversity assessment studies.
- MeSH
- Archaea MeSH
- Bacteria MeSH
- DNA, Environmental chemistry genetics MeSH
- Fungi MeSH
- Metagenomics methods standards MeSH
- Reference Standards MeSH
- Electron Transport Complex IV genetics MeSH
- RNA, Ribosomal, 16S genetics MeSH
- RNA, Ribosomal, 18S genetics MeSH
- Plants MeSH
- Sensitivity and Specificity MeSH
- Software MeSH
- DNA Barcoding, Taxonomic methods standards MeSH
- Animals MeSH
- Check Tag
- Animals MeSH
- Publication Type
- Journal Article MeSH
- Grant-Supported Research MeSH
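The checkpoint idea mentioned above (partial re-execution without re-running completed steps) can be sketched conceptually as follows; this is not PEMA's implementation, and the step name and output path are hypothetical.

```python
# Conceptual sketch of checkpointing: a step is skipped when its output file
# already exists, so a pipeline can be partially re-executed after changing
# the parameters of a later step.
import os
import functools

def checkpoint(output_path):
    def decorator(step):
        @functools.wraps(step)
        def wrapper(*args, **kwargs):
            if os.path.exists(output_path):
                print(f"[skip] {step.__name__}: {output_path} already exists")
                return output_path
            return step(*args, **kwargs)
        return wrapper
    return decorator

@checkpoint("results/otu_table.tsv")          # placeholder path
def cluster_otus(reads_fasta):
    # placeholder body: a real step would call an OTU clustering tool here
    os.makedirs("results", exist_ok=True)
    open("results/otu_table.tsv", "w").close()
    return "results/otu_table.tsv"
```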
SUMMARY: Here we introduce a Fiji plugin utilizing the HPC-as-a-Service concept, significantly mitigating the challenges life scientists face when delegating complex data-intensive processing workflows to HPC clusters. We demonstrate on a common Selective Plane Illumination Microscopy image processing task that execution of a Fiji workflow on a remote supercomputer leads to improved turnaround time despite the data transfer overhead. The plugin allows the end users to conveniently transfer image data to remote HPC resources, manage pipeline jobs and visualize processed results directly from the Fiji graphical user interface. AVAILABILITY AND IMPLEMENTATION: The code is distributed free and open source under the MIT license. Source code: https://github.com/fiji-hpc/hpc-workflow-manager/, documentation: https://imagej.net/SPIM_Workflow_Manager_For_HPC. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
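The HPC-as-a-Service pattern the plugin builds on (upload data, submit a batch job, retrieve results) can be sketched in Python as a conceptual analogue; the actual plugin is a Java/Fiji component, and the host, paths and the Slurm `sbatch` command below are assumptions about the target cluster.

```python
# Conceptual analogue only: transfer data over SFTP and submit a batch job.
import paramiko

def submit_remote_job(host, user, key_path, local_data, remote_dir, job_script):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, key_filename=key_path)
    try:
        sftp = ssh.open_sftp()
        sftp.put(local_data, f"{remote_dir}/{local_data.split('/')[-1]}")
        sftp.close()
        # assumes a Slurm-like scheduler on the cluster
        _, stdout, _ = ssh.exec_command(f"cd {remote_dir} && sbatch {job_script}")
        return stdout.read().decode().strip()
    finally:
        ssh.close()
```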
Functional connectivity analysis of resting-state fMRI data has recently become one of the most common approaches to characterizing individual brain function. It has been widely suggested that the functional connectivity matrix is a useful approximate representation of the brain's connectivity, potentially providing behaviorally or clinically relevant markers. However, functional connectivity estimates are known to be detrimentally affected by various artifacts, including those due to in-scanner head motion. Moreover, as individual functional connections generally covary only very weakly with head motion estimates, motion influence is difficult to quantify robustly and prone to being neglected in practice. Although the use of individual estimates of head motion, or group-level correlation of motion and functional connectivity, has been suggested, a sufficiently sensitive measure of individual functional connectivity quality has not yet been established. We propose a new, intuitive summary index, Typicality of Functional Connectivity, to capture deviations from standard brain functional connectivity patterns. In a resting-state fMRI dataset of 245 healthy subjects, this measure was significantly correlated with individual head motion metrics. The results were further robustly reproduced across atlas granularity, preprocessing options, and other datasets, including 1,081 subjects from the Human Connectome Project. In principle, Typicality of Functional Connectivity should also be sensitive to other types of artifacts, processing errors, and possibly brain pathology, allowing extensive use in data quality screening and quantification in functional connectivity studies as well as in methodological investigations.
- MeSH
- Artifacts MeSH
- Atlases as Topic * MeSH
- Datasets as Topic * MeSH
- Adult MeSH
- Head Movements MeSH
- Connectome * methods standards MeSH
- Humans MeSH
- Magnetic Resonance Imaging * methods standards MeSH
- Young Adult MeSH
- Brain diagnostic imaging physiology MeSH
- Image Processing, Computer-Assisted * methods standards MeSH
- Check Tag
- Adult MeSH
- Humans MeSH
- Young Adult MeSH
- Male MeSH
- Female MeSH
- Publication Type
- Journal Article MeSH
- Grant-Supported Research MeSH
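One plausible formulation of such a typicality index, stated here only as an illustrative assumption rather than the paper's exact definition, is the Pearson correlation between an individual's functional connectivity matrix and the group-average matrix over the unique (upper-triangle) connections:

```python
# Hedged sketch: typicality as correlation between an individual FC matrix
# and the group-mean FC matrix, over unique connections only.
import numpy as np

def typicality(fc_subject, fc_group_mean):
    """fc_*: symmetric region-by-region connectivity matrices."""
    iu = np.triu_indices_from(fc_subject, k=1)
    return np.corrcoef(fc_subject[iu], fc_group_mean[iu])[0, 1]

# usage, assuming fc has shape (n_subjects, n_regions, n_regions):
# group_mean = fc.mean(axis=0)
# scores = [typicality(fc[i], group_mean) for i in range(len(fc))]
```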
The human brain represents a complex computational system, the function and structure of which may be measured using various neuroimaging techniques focusing on separate properties of the brain tissue and activity. We capture the organization of white matter fibers acquired by diffusion-weighted imaging using probabilistic diffusion tractography. By segmenting the results of tractography into larger anatomical units, it is possible to draw inferences about the structural relationships between these parts of the system. This pipeline results in a structural connectivity matrix, which contains an estimate of connection strength among all regions. However, raw data processing is complex, computationally intensive, and requires expert quality control, which may be discouraging for researchers with less experience in the field. We thus provide brain structural connectivity matrices in a form ready for modelling and analysis and thus usable by a wide community of scientists. The presented dataset contains brain structural connectivity matrices together with the underlying raw diffusion and structural data, as well as basic demographic data of 88 healthy subjects.
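A simplified sketch of the final assembly step, counting streamlines whose endpoints fall in a given pair of anatomical regions, is shown below; real probabilistic-tractography pipelines additionally weight and normalize these counts, and the toy endpoint labels are placeholders.

```python
# Simplified sketch: build a region-by-region structural connectivity matrix
# from per-streamline endpoint labels.
import numpy as np

def connectivity_matrix(endpoint_labels, n_regions):
    """endpoint_labels: iterable of (region_a, region_b) per streamline,
    with region indices in 0..n_regions-1."""
    mat = np.zeros((n_regions, n_regions))
    for a, b in endpoint_labels:
        mat[a, b] += 1
        if a != b:
            mat[b, a] += 1
    return mat

streamlines = [(0, 2), (0, 2), (1, 3), (2, 3)]   # toy endpoint pairs
print(connectivity_matrix(streamlines, n_regions=4))
```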
Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging, predominantly due to considerable variations in anatomy and acquisition protocols and to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at the voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, the scan level, and across different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in the data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe is that the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse.
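A minimal sketch of the kind of per-vertebra evaluation such a benchmark relies on, a Dice score per labelled vertebra between predicted and reference multi-label masks, is given below; the full VerSe evaluation also includes labelling (identification) metrics.

```python
# Hedged sketch: Dice score per vertebra label between two integer label
# volumes of identical shape (label 0 = background).
import numpy as np

def dice_per_label(pred, ref):
    scores = {}
    for label in np.unique(ref):
        if label == 0:
            continue
        p, r = pred == label, ref == label
        denom = p.sum() + r.sum()
        scores[int(label)] = 2.0 * np.logical_and(p, r).sum() / denom
    return scores
```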