workflow automation
OBJECTIVES: Minimal residual disease (MRD) status in multiple myeloma (MM) is an important prognostic biomarker. Personalized blood-based targeted mass spectrometry detecting M-proteins (MS-MRD) was shown to provide a sensitive and minimally invasive alternative to MRD assessment in bone marrow. However, MS-MRD still involves manual steps that hamper the upscaling of MS-MRD testing. Here, we introduce a proof-of-concept for a novel workflow using data-independent acquisition parallel accumulation and serial fragmentation (dia-PASEF) and automated data processing. METHODS: Using automated data processing of dia-PASEF measurements, we developed a workflow that identified unique targets from MM patient sera and personalized protein sequence databases. We generated patient-specific libraries linked to dia-PASEF methods and subsequently quantified and reported M-protein concentrations in MM patient follow-up samples. Assay performance of the parallel reaction monitoring (prm)-PASEF and dia-PASEF workflows was compared, and we tested mixing patient intake sera for multiplexed target selection. RESULTS: No significant differences were observed in the lowest detectable concentration, linearity, or slope coefficient when comparing prm-PASEF and dia-PASEF measurements of serial dilutions of patient sera. To improve assay development times, we tested multiplexing patient intake sera for target selection, which resulted in the selection of identical clonotypic peptides for both simplex and multiplex dia-PASEF. Furthermore, assay development times improved up to 25× when measuring multiplexed samples for peptide selection compared to simplex. CONCLUSIONS: Dia-PASEF technology combined with automated data processing and multiplexed target selection facilitated the development of a faster MS-MRD workflow, which benefits upscaling and is an important step towards the clinical implementation of MS-MRD.
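The linearity and slope comparison described in the RESULTS can be illustrated with a short sketch. The dilution values below are invented, not data from the study; the helper fits an ordinary least-squares line to a serial dilution series and reports the slope and r² used to judge assay linearity.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b.

    Returns (slope, intercept, r2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2

# Hypothetical serial dilution of a patient serum:
# expected vs. measured M-protein concentration (g/L).
expected = [10.0, 5.0, 2.5, 1.25, 0.625]
measured = [9.8, 5.1, 2.4, 1.30, 0.610]
slope, intercept, r2 = fit_line(expected, measured)
# A slope near 1 and r2 near 1 indicate good linearity.
```

Comparing the slope and r² obtained for prm-PASEF and dia-PASEF dilution series of the same serum is one simple way to test whether the two acquisition modes perform equivalently.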
- MeSH
- automation MeSH
- precision medicine methods MeSH
- humans MeSH
- multiple myeloma * diagnosis blood MeSH
- workflow * MeSH
- neoplasm, residual * diagnosis MeSH
- high-throughput screening assays methods MeSH
- Check Tag
- humans MeSH
- Publication Type
- journal article MeSH
OBJECTIVES: The analysis of organic acids in urine is an important part of the diagnosis of inherited metabolic disorders (IMDs), for which gas chromatography coupled with mass spectrometry is still predominantly used. METHODS: An ultra-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) assay for urinary organic acids, acylcarnitines and acylglycines was developed and validated. Sample preparation consists only of dilution and the addition of internal standards. Raw data processing is quick and easy using a selective scheduled multiple reaction monitoring mode. A robust standardised value calculation as a data transformation, together with advanced automatic visualisation tools, is applied for easy evaluation of complex data. RESULTS: The developed method covers 146 biomarkers consisting of organic acids (n=99), acylglycines (n=15) and acylcarnitines (n=32), including all clinically important isomeric compounds present. Linearity with r²>0.98 for 118 analytes, inter-day accuracy between 80 and 120 % and imprecision under 15 % for 120 analytes were achieved. Over 2 years, more than 800 urine samples from children tested for IMDs were analysed. The workflow was evaluated on 93 patient samples and ERNDIM External Quality Assurance samples involving a total of 34 different IMDs. CONCLUSIONS: The established LC-MS/MS workflow offers a comprehensive analysis of a wide range of organic acids, acylcarnitines and acylglycines in urine to perform effective, rapid and sensitive semi-automated diagnosis of more than 80 IMDs.
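The "robust standardised value calculation" mentioned in the METHODS is not specified in detail here; a common robust transformation is a median/MAD-based z-score, sketched below under that assumption. The reference values are invented for illustration.

```python
import statistics

def robust_z(reference, x):
    """Median/MAD-based standardised value of x relative to a
    reference distribution; the factor 1.4826 makes the MAD
    consistent with the standard deviation for normal data."""
    med = statistics.median(reference)
    mad = statistics.median(abs(v - med) for v in reference)
    return (x - med) / (1.4826 * mad)

# Invented reference values for one analyte (e.g. mmol/mol creatinine).
reference = [4.0, 4.8, 5.0, 5.2, 5.5, 6.0, 7.0]
z = robust_z(reference, 12.0)  # a markedly elevated result
```

Using the median and MAD instead of the mean and standard deviation keeps the transformation stable even when the reference set contains a few pathological outliers.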
- MeSH
- chromatography, liquid methods MeSH
- child MeSH
- humans MeSH
- metabolic diseases * MeSH
- organic chemicals MeSH
- workflow MeSH
- tandem mass spectrometry * methods MeSH
- Check Tag
- child MeSH
- humans MeSH
- Publication Type
- journal article MeSH
A poor lifestyle can lead to chronic diseases and low physical and mental fitness. However, many aspects of physical and mental health can be measured and analyzed in advance, such as body parameters, health risk factors, degree of motivation, and overall willingness to change one's current lifestyle. In conjunction with data representing human brain activity, health problems resulting from a long-term lifestyle can be identified more precisely and, where appropriate, the quality and length of human life improved. Currently, brain and physical health-related data are not commonly collected and evaluated together. Doing so, however, is a promising and viable concept, especially when accompanied by a detailed definition and description of the whole data-processing lifecycle. Moreover, when best practices are used to store, annotate, analyze, and evaluate such data collections, the development of the necessary infrastructure and closer cooperation among scientific teams and laboratories are facilitated. This approach also improves the reproducibility of experimental work. As a result, large collections of physical and brain health-related data could provide a robust basis for better interpretation of a person's overall health. This work overviews and reflects on best practices used within global communities to ensure the reproducibility of experiments, collected datasets, and related workflows. These best practices concern, e.g., data lifecycle models, FAIR principles, and the definition and implementation of terminologies and ontologies. An example is then shown of how an automated workflow system could be created to support the collection, annotation, storage, analysis, and publication of findings. The Body in Numbers pilot system, which also applies software engineering best practices, was developed to implement this concept.
It is unique in combining the processing and evaluation of physical and brain (electrophysiological) data. Its implementation is explored in greater detail, and opportunities to apply the findings and results in various application domains are discussed.
- Publication Type
- journal article MeSH
- review MeSH
Capillary electrophoresis-frontal analysis (CE-FA), together with mobility-shift affinity CE, is the most frequently used mode of affinity CE for the study of plasma protein-drug interactions, which is a substantial part of the early stage of drug discovery. Whereas in the classic CE-FA setup the sample is prepared by off-line mixing of the interaction partners in a sample vial outside the CE instrument and, after a short incubation period, loaded into the capillary and analysed, in this work a new methodological approach has been developed that combines CE-FA with mixing of the interaction partners directly inside the capillary. This combination gives rise to a fully automated and versatile methodology for the characterization of these binding interactions, besides a substantial reduction in the amounts of sample compounds used. Another fundamental benefit is the minimization of possible experimental errors, since the injection, mixing and separation are handled entirely by the CE instrument rather than by manual manipulation. The in-capillary mixing is based on the transverse diffusion of laminar flow profiles methodology introduced by Krylov et al., using its multi-zone injection modification presented by Řemínek et al. After method optimization, the alternating introduction of six plugs of drug and six plugs of bovine serum albumin in BGE, each injected for 3 s at a pressure of -10 mbar (-1 kPa) into a capillary filled with BGE, was found to be the best injection procedure. The method repeatability, calculated as RSDs of the plateau heights of bovine serum albumin and propranolol as model sample compounds, was better than 3.44 %. The applicability of the method was finally demonstrated by determining apparent binding parameters of bovine serum albumin for the basic drugs propranolol and lidocaine and the acidic drug phenylbutazone.
The values obtained by the new on-line CE-FA methodology are in agreement with values estimated by classic off-line CE-FA, as well as with literature data obtained using different techniques.
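As an illustration of how apparent binding parameters can be derived from frontal-analysis plateau heights, the sketch below assumes a simple 1:1 binding model and invented concentrations; it is not the exact calculation used in the study, where the free-drug concentration would first be read from the plateau height via a calibration.

```python
def binding_constant(total_drug, free_drug, total_protein):
    """Apparent 1:1 binding constant K = [PD] / ([P][D]) in M**-1,
    given the free-drug concentration read from the FA plateau.
    Assumes a single class of equivalent, independent sites."""
    bound = total_drug - free_drug            # [PD]
    free_protein = total_protein - bound      # [P]
    return bound / (free_drug * free_protein)

# Hypothetical concentrations (mol/L): 50 uM drug, 500 uM BSA,
# with 30 uM of the drug measured free at the plateau.
K = binding_constant(50e-6, 30e-6, 500e-6)
```

In practice, repeating this measurement over a range of protein concentrations and fitting the binding isotherm gives a more reliable estimate than a single point.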
Background: The Human Cell Differentiation Molecules (HCDM) organization holds Human Leukocyte Differentiation Antigen (HLDA) workshops to test and name clusters of antibodies that react with a specific antigen. These cluster of differentiation (CD) markers have provided the scientific community with validated antibody clones, consistent naming of targets and reproducible identification of leukocyte subsets. Still, quantitative CD marker expression profiles and benchmarking of reagents at the single-cell level are currently lacking. Objective: To develop a flow cytometric procedure for quantitative expression profiling of surface antigens on blood leukocyte subsets that is standardized across multiple research laboratories. Methods: A high-content framework to evaluate the titration and reactivity of phycoerythrin (PE)-conjugated monoclonal antibodies (mAbs) was created. Two flow cytometry panels were designed: an innate cell tube for granulocytes, dendritic cells, monocytes, NK cells and innate lymphoid cells (12-color) and an adaptive lymphocyte tube for naive and memory B and T cells, including TCRγδ+, regulatory T and follicular helper T cells (11-color). The potential of these two panels was demonstrated via expression profiling of selected CD markers detected by PE-conjugated antibodies and evaluated using 561 nm excitation. Results: Using automated data annotation and dried backbone reagents, we achieved a robust workflow amenable to processing hundreds of measurements per experiment in a 96-well plate format. The immunophenotyping panels enabled discrimination of 27 leukocyte subsets and quantitative detection of the PE-conjugated CD markers of interest at expression levels above 400 units of antibody binding capacity. Expression profiling of 4 selected CD markers (CD11b, CD31, CD38, CD40) showed high reproducibility across centers, as well as the capacity to benchmark unique clones directed toward the same CD3 antigen.
Conclusion: We optimized a procedure for quantitative expression profiling of surface antigens on blood leukocyte subsets. The workflow, bioinformatics pipeline and optimized flow panels enable the following: 1) mapping the expression patterns of HLDA-approved mAb clones to CD markers; 2) benchmarking new antibody clones against established CD markers; 3) defining new clusters of differentiation in future HLDA workshops.
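Converting measured fluorescence to units of antibody binding capacity (ABC) is typically done with PE calibration beads and a log-log regression. The sketch below illustrates that general approach with invented bead values; it is not the study's actual calibration procedure.

```python
import math

def fit_loglog(bead_mfi, bead_abc):
    """Fit log10(ABC) = a*log10(MFI) + b from calibration beads
    with known antibody binding capacity (ABC)."""
    xs = [math.log10(m) for m in bead_mfi]
    ys = [math.log10(v) for v in bead_abc]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mfi_to_abc(mfi, a, b):
    """Convert a population's PE median fluorescence to ABC units."""
    return 10 ** (a * math.log10(mfi) + b)

# Invented four-level bead lot (MFI -> known ABC per bead).
a, b = fit_loglog([50, 500, 5000, 50000], [474, 4740, 47400, 474000])
abc = mfi_to_abc(100, a, b)
```

With such a per-experiment calibration, an expression threshold like the 400-ABC detection limit mentioned above becomes comparable across instruments and centers.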
- MeSH
- antigens, surface * metabolism MeSH
- killer cells, natural metabolism MeSH
- antigens, CD metabolism MeSH
- leukocytes MeSH
- humans MeSH
- antibodies, monoclonal MeSH
- immunity, innate * MeSH
- workflow MeSH
- flow cytometry methods MeSH
- reference standards MeSH
- reproducibility of results MeSH
- Check Tag
- humans MeSH
- Publication Type
- journal article MeSH
- research supported by grant MeSH
The inherent diversity of approaches in proteomics research has led to a wide range of software solutions for data analysis. These software solutions encompass multiple tools, each employing different algorithms for various tasks such as peptide-spectrum matching, protein inference, quantification, statistical analysis, and visualization. To enable an unbiased comparison of commonly used bottom-up label-free proteomics workflows, we introduce WOMBAT-P, a versatile platform designed for automated benchmarking and comparison. WOMBAT-P simplifies the processing of public data by utilizing the sample and data relationship format for proteomics (SDRF-Proteomics) as input. This feature streamlines the analysis of annotated local or public ProteomeXchange data sets, promoting efficient comparisons among diverse outputs. Through an evaluation using experimental ground truth data and a realistic biological data set, we uncover significant disparities and a limited overlap in the quantified proteins. WOMBAT-P not only enables rapid execution and seamless comparison of workflows but also provides valuable insights into the capabilities of different software solutions. These benchmarking metrics are a valuable resource for researchers in selecting the most suitable workflow for their specific data sets. The modular architecture of WOMBAT-P promotes extensibility and customization. The software is available at https://github.com/wombat-p/WOMBAT-Pipelines.
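The "limited overlap in the quantified proteins" reported between workflows can be quantified, for example, as a Jaccard index over the sets of protein identifiers each workflow reports. A minimal sketch with made-up accessions:

```python
def protein_overlap(proteins_a, proteins_b):
    """Jaccard index of the protein sets quantified by two workflows:
    |A intersect B| / |A union B|."""
    a, b = set(proteins_a), set(proteins_b)
    return len(a & b) / len(a | b)

# Made-up accessions from two hypothetical workflow outputs.
overlap = protein_overlap(["P02768", "P01834", "P00738"],
                          ["P01834", "P00738", "P02787"])
```

Computing this pairwise across all benchmarked workflows gives a simple summary of how much their quantified proteomes agree.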
- MeSH
- data analysis MeSH
- benchmarking * MeSH
- proteins MeSH
- proteomics * MeSH
- workflow MeSH
- software MeSH
- Publication Type
- journal article MeSH
- research supported by grant MeSH
TransCelerate reports the results of 2019, 2020, and 2021 member company (MC) surveys on the use of intelligent automation in pharmacovigilance processes. MCs increased the number and extent of implementation of intelligent automation solutions throughout Individual Case Safety Report (ICSR) processing, especially rule-based automations such as robotic process automation, lookups, and workflows, moving from planning to piloting to implementation over the 3 survey years. Companies remain highly interested in other technologies such as machine learning (ML) and artificial intelligence, which can deliver a human-like interpretation of data and decision making rather than just automating tasks. Intelligent automation solutions are usually used in combination, with more than one technology applied simultaneously to the same ICSR process step. Challenges to implementing intelligent automation solutions include finding or having appropriate training data for ML models and the need for harmonized regulatory guidance.
- MeSH
- automation MeSH
- pharmacovigilance * MeSH
- humans MeSH
- machine learning MeSH
- technology MeSH
- artificial intelligence * MeSH
- Check Tag
- humans MeSH
- Publication Type
- journal article MeSH
- research supported by grant MeSH
Non-target analysis (NTA) employing high-resolution mass spectrometry is a commonly applied approach for the detection of novel chemicals of emerging concern in complex environmental samples. NTA typically results in large and information-rich datasets that require computer-aided (ideally automated) strategies for their processing and interpretation. Such strategies, however, raise the challenge of reproducibility between and within different processing workflows. An effective strategy to mitigate such problems is the implementation of inter-laboratory studies (ILS) with the aim of evaluating different workflows and agreeing on harmonized/standardized quality control procedures. Here we present the data generated during such an ILS. This study was organized through the NORMAN Network and included 21 participants from 11 countries. A set of samples based on the passive sampling of drinking water pre- and post-treatment was shipped to all the participating laboratories for analysis, using one pre-defined method and one locally (i.e. in-house) developed method. The data generated represent a valuable resource (i.e. benchmark) for future developments of algorithms and workflows for NTA experiments.
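Comparing the feature lists reported by different laboratories in such an ILS typically relies on matching features within m/z and retention-time tolerances. A minimal sketch with invented feature lists and tolerance values:

```python
def match_features(features_a, features_b, mz_tol=0.005, rt_tol=0.2):
    """Count (m/z, RT) features in features_a that have at least one
    counterpart in features_b within the given absolute tolerances
    (m/z in Da, retention time in minutes)."""
    return sum(
        1
        for mz_a, rt_a in features_a
        if any(abs(mz_a - mz_b) <= mz_tol and abs(rt_a - rt_b) <= rt_tol
               for mz_b, rt_b in features_b)
    )

# Invented feature lists (m/z, retention time) from two laboratories.
lab1 = [(285.0794, 5.21), (301.1410, 7.80), (412.2250, 9.10)]
lab2 = [(285.0791, 5.30), (412.2280, 9.12)]
n_matched = match_features(lab1, lab2)
```

The fraction of matched features across laboratory pairs is one straightforward reproducibility metric that harmonized quality control procedures can target.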