MINING
With the growing volume of available clinical data, data-mining methods are being applied ever more frequently in clinical research and practice. The whole data-mining process can be divided into a series of separate and relatively easy-to-grasp steps, from data storage and preparation, through understanding the structure of the dataset, up to modelling and extraction of usable knowledge. Besides a theoretical description of the methods, the e-learning course presents a number of worked case studies, for example on mapping gene expression or on modelling structured data from clinical practice.
Data mining has become a standard approach in many fields of clinical research. The whole data-mining process can be divided into a set of simple logical steps, from data preparation and validation, through definition of the data structure and statistical description, to data modelling and mining. The newly developed e-learning course addresses all the main steps of data mining, together with case studies of microarray data analysis.
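As a rough illustration of such a step-wise workflow (not part of the course itself), the following Python sketch walks through preparation, statistical description and modelling of a hypothetical tabular clinical dataset; the file name "clinical_data.csv" and the binary "outcome" column are assumptions made only for this example.

```python
# A minimal sketch of the step-wise data-mining workflow described above,
# assuming a hypothetical dataset "clinical_data.csv" with numeric feature
# columns and a binary "outcome" column. Illustration only, not the course code.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Step 1: data preparation and validation
df = pd.read_csv("clinical_data.csv")
df = df.dropna()                      # naive validation: drop incomplete records

# Step 2: understanding the data structure / statistical description
print(df.describe())                  # basic descriptive statistics per variable

# Step 3: modelling and extraction of usable knowledge
X = df.drop(columns=["outcome"])      # assume remaining columns are numeric features
y = df["outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
model = LogisticRegression(max_iter=1000).fit(scaler.transform(X_train), y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(scaler.transform(X_test))[:, 1]))
```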
- Keywords
- CRISP-DM, microarrays
- MeSH
- data mining *
- multimedia / utilization
- computer-assisted instruction *
- education, professional / methods
- Geographic names
- Czech Republic
electronic journal
- MeSH
- data mining
- medicine
- Conspectus
- Medical sciences. Medicine
- NLK fields
- medicine
- medical informatics
- NLK publication type
- electronic journals
Data mining (DM) is a widely adopted methodology for the analysis of large datasets which is, on the other hand, often overestimated or incorrectly regarded as a universal solution. This statement is also valid for clinical research, in which large and heterogeneous datasets are often processed. DM in general uses standard methods available in common statistical software and combines them into a complex workflow methodology covering all the steps of data analysis, from data acquisition through pre-processing and analysis to interpretation of the results. The whole workflow is aimed at one final goal: to find interesting, non-trivially hidden and potentially useful information. This innovative concept of data mining was adopted in our educational course at the Faculty of Medicine of Masaryk University, accessible from its e-learning portal at http://portal.med.muni.cz/clanek-318-zavedeni-technologie-data-miningu-a-analyzy-dat--genovych-expresnich-map-do-vyuky.html.
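The course also covers the analysis of gene expression maps; the following minimal Python sketch illustrates one such downstream analysis step (hierarchical clustering of a gene-expression matrix). The random matrix merely stands in for real microarray data and is not taken from the course material.

```python
# A minimal sketch of one analysis step in such a workflow: hierarchical
# clustering of a gene-expression matrix (genes x samples). The random
# matrix is a stand-in for real microarray data, for illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 12))          # 200 genes, 12 samples (hypothetical)

# Pre-processing: per-gene standardisation
expr = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)

# Analysis: average-linkage clustering on correlation distance
dist = pdist(expr, metric="correlation")
tree = linkage(dist, method="average")
clusters = fcluster(tree, t=5, criterion="maxclust")   # cut the tree into 5 gene clusters
print(np.bincount(clusters))                           # cluster sizes
```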
- MeSH
- biostatistics / methods
- data mining * / methods / trends
- humans
- multifactor dimensionality reduction / methods
- computer-assisted instruction * / methods / trends
- Check Tag
- humans
- Publication type
- work supported by a grant
Background: Many previous studies on mining prescription sequences are based only on frequency information, such as the number of prescriptions and the total number of patients issued the prescription. However, when a very small number of doctors issue a prescription representative of a certain medication pattern to many patients many times, the prescribing intention of those few doctors has a great influence on pattern extraction, which biases the final extracted frequent prescription sequence pattern. Objectives: We attempt to extract frequent prescription sequences from more diverse perspectives by considering factors other than frequency information, in order to obtain highly reliable medication patterns. Methods: We propose the concept of unbiased frequent use by doctors as an additional factor, based on the hypothesis that a prescription used by many doctors without bias is a highly reliable prescription, and we propose a medication pattern mining method that takes this factor into account. We conducted an evaluation experiment using indicators based on clinical laboratory test results, comparing the existing method, which relies only on frequency, with the proposed method, which also considers unbiased frequent use by doctors. Results: The weighted average value of the top k was obtained for two different evaluation methods. Conclusions: The study suggests that our medication pattern mining method considering unbiased frequent use by doctors is useful in certain situations, such as when the clinical laboratory test value is outside the normal range.
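Since the abstract does not give the paper's exact scoring formula, the following toy Python sketch only illustrates the general idea: ranking prescription sequences by raw frequency weighted by how evenly their use is spread across doctors, with normalised doctor entropy used here as a hypothetical proxy for "unbiased frequent use by doctors". All data and the scoring rule are invented for illustration.

```python
# Toy sketch: rank prescription sequences not only by raw frequency but also
# by how evenly their use is spread across doctors. The normalised-entropy
# weight is a hypothetical stand-in for "unbiased frequent use by doctors",
# not the paper's actual definition.
import math
from collections import Counter, defaultdict

# (sequence of drug codes, prescribing doctor) -- fabricated records
records = [
    (("A", "B"), "dr1"), (("A", "B"), "dr2"), (("A", "B"), "dr3"),
    (("C", "D"), "dr1"), (("C", "D"), "dr1"), (("C", "D"), "dr1"),
]

counts = Counter(seq for seq, _ in records)          # raw frequency of each sequence
doctors = defaultdict(Counter)
for seq, doc in records:
    doctors[seq][doc] += 1

def doctor_evenness(seq):
    """Normalised entropy of the doctor distribution (1.0 = perfectly even)."""
    per_doc = list(doctors[seq].values())
    total = sum(per_doc)
    ent = -sum(c / total * math.log(c / total) for c in per_doc)
    return ent / math.log(len(per_doc)) if len(per_doc) > 1 else 0.0

for seq in counts:
    score = counts[seq] * doctor_evenness(seq)       # frequency x doctor evenness
    print(seq, counts[seq], round(doctor_evenness(seq), 2), round(score, 2))
```

In this fabricated example, ("A", "B") is prescribed by three different doctors and keeps a high score, while ("C", "D"), used only by one doctor, is down-weighted to zero despite the same raw frequency.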
As the amount of genome information increases rapidly, there is a correspondingly greater need for methods that provide accurate and automated annotation of gene function. For example, many high-throughput technologies, such as next-generation sequencing, are being used today to generate lists of genes associated with specific conditions. However, their functional interpretation remains a challenge, and many tools exist that try to characterize the function of gene lists. Such systems typically rely on enrichment analysis and aim to give a quick insight into the underlying biology by presenting it in the form of a summary report. While the load of annotation may be alleviated by such computational approaches, the main challenge in modern annotation remains to develop a systems-level form of analysis in which a pipeline can quickly and effectively analyze gene lists and identify aggregated annotations through computerized resources. In this article we survey some of the many tools and methods that have been developed to automatically interpret the biological functions underlying gene lists. We review current functional annotation aspects from the perspective of their epistemology (i.e., the underlying theories used to organize information about gene function into a body of verified and documented knowledge) and find that most of the currently used functional annotation methods fall broadly into one of two categories: they are based either on 'known', formally structured ontology annotations created by 'experts' (e.g., the GO terms used to describe the function of Entrez Gene entries), or, perhaps more adventurously, on annotations inferred from literature (e.g., many text-mining methods use computer-aided reasoning to acquire knowledge represented in natural languages). Overall, however, deriving detailed and accurate insight from such gene lists remains a challenging task, and improved methods are called for. In particular, future methods need to (1) provide more holistic insight into the underlying molecular systems; (2) provide better follow-up experimental testing and treatment options; and (3) better manage gene lists derived from organisms that are not well studied. We discuss some promising approaches that may help achieve these advances, especially the use of extended dictionaries of biomedical concepts and molecular mechanisms, as well as greater use of annotation benchmarks.
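The enrichment analysis that most of these tools rely on usually reduces to an over-representation test. The following minimal Python sketch shows a one-sided hypergeometric test for a single annotation term against a gene list; all counts are made up for illustration and do not come from the surveyed tools.

```python
# A minimal sketch of the enrichment analysis such tools typically rely on:
# a one-sided hypergeometric test for over-representation of one annotation
# term in a gene list. All counts below are fabricated for illustration.
from scipy.stats import hypergeom

N = 20000   # genes in the background (e.g., the annotated genome)
K = 300     # background genes annotated with the term of interest
n = 150     # genes in the submitted list
k = 12      # list genes annotated with the term

# P(X >= k) under the hypergeometric null of random sampling without replacement
p_value = hypergeom.sf(k - 1, N, K, n)
fold_enrichment = (k / n) / (K / N)
print(f"fold enrichment = {fold_enrichment:.1f}, p = {p_value:.2e}")
```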
A major challenge in cancer treatment is predicting the clinical response to anti-cancer drugs on a personalized basis. The success of such a task largely depends on the ability to develop computational resources that integrate big "omic" data into effective drug-response models. Machine learning is both an expanding and an evolving computational field that holds promise to cover such needs. Here we provide a focused overview of: 1) the various supervised and unsupervised algorithms used specifically in drug response prediction applications, 2) the strategies employed to develop these algorithms into applicable models, 3) the data resources that are fed into these frameworks, and 4) the pitfalls and challenges of maximizing model performance. In this context we also describe a novel in silico screening process, based on Association Rule Mining, for identifying genes as candidate drivers of drug response, and compare it with relevant data mining frameworks, for which we generated a web application freely available at: https://compbio.nyumc.org/drugs/. This pipeline efficiently explores large sample spaces, is able to detect low-frequency events, and evaluates statistical significance even in the multidimensional space, presenting the results in the form of easily interpretable rules. We conclude with future prospects and challenges of applying machine-learning-based drug response prediction in precision medicine.
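As a rough illustration of the rule metrics involved (the internals of the cited web application are not described here), the following toy Python sketch computes support, confidence and lift for a hypothetical rule "gene altered → sample responds to drug" over fabricated binary per-sample indicators.

```python
# Toy sketch of the association-rule metrics (support, confidence, lift)
# behind such a screen, for a hypothetical rule
# "gene X altered -> sample responds to drug Y". Data are fabricated.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 500
gene_altered = rng.random(n_samples) < 0.3                 # hypothetical alteration calls
drug_response = np.where(gene_altered,
                         rng.random(n_samples) < 0.6,      # responders if altered
                         rng.random(n_samples) < 0.2)      # responders otherwise

support = np.mean(gene_altered & drug_response)            # P(altered and responds)
confidence = support / np.mean(gene_altered)               # P(responds | altered)
lift = confidence / np.mean(drug_response)                 # enrichment over baseline
print(f"support={support:.2f} confidence={confidence:.2f} lift={lift:.2f}")
```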
Objectives: The goals of this study were to examine the feasibility of using ontology-based text mining with CaringBridge social media journal entries in order to understand journal content from a whole-person perspective. Specific aims were to describe Omaha System problem concept frequencies in the journal entries over a four-step process, both overall and relative to Omaha System Domains, and to examine the four-step method, including the use of standardized terms and related words. Design: Ontology-based retrospective observational feasibility study using text mining methods. Sample: A corpus of social media text consisting of 13,757,900 CaringBridge journal entries from June 2006 to June 2016. Measures: The Omaha System terms, including problems and signs/symptoms, were used as the foundational lexicon for this study. Development of an extended lexicon with related words for each problem concept expanded the semantics-powered data analytics approach to reflect consumer word choices. Results: All Omaha System problem concepts were identified in the journal entries, with consistent representation across domains. The approach was most successful when common words were used to represent clinical terms. Preliminary validation of journal examples showed appropriate representation of the problem concepts. Conclusions: This is the first study to evaluate the feasibility of using an interface terminology and ontology (the Omaha System) as a text mining information model. Further research is needed to systematically validate these findings, refine the process as needed to advance the study of CaringBridge content, and extend the use of this method to other consumer-generated journal entries and terminologies.
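As a rough, hypothetical illustration of the lexicon-matching step (not the study's actual pipeline or the Omaha System vocabulary), the following Python sketch counts mentions of problem concepts, each expanded with related consumer words, in a single journal entry.

```python
# A minimal sketch of lexicon matching against journal text: count mentions of
# problem concepts, each expanded with related consumer words. The concepts,
# related words and sample text are hypothetical and only illustrate the
# approach, not the Omaha System terms themselves.
import re
from collections import Counter

lexicon = {                      # problem concept -> related consumer words
    "Pain":      ["pain", "ache", "aching", "sore"],
    "Sleep":     ["sleep", "insomnia", "sleepless"],
    "Nutrition": ["appetite", "eating", "nausea"],
}

journal_entry = "She could not sleep last night and the ache in her back made eating hard."

hits = Counter()
for concept, words in lexicon.items():
    pattern = r"\b(" + "|".join(map(re.escape, words)) + r")\w*\b"
    hits[concept] = len(re.findall(pattern, journal_entry, flags=re.IGNORECASE))

print(hits)   # e.g. Counter({'Pain': 1, 'Sleep': 1, 'Nutrition': 1})
```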
- Keywords
- Omaha System
- MeSH
- biological ontologies
- data mining * / methods
- humans
- controlled vocabulary
- Check Tag
- humans