Artificial Intelligence and Computational Methods in the Modeling of Complex Systems
Uncertain knowledge processing is one of the most important applications of artificial intelligence. Bayesian network technology, which builds on results of probability theory developed over several centuries, makes it possible to work with multidimensional probability distributions whose dimensionality may reach hundreds or even thousands. This technology can therefore be applied to real-life problems whose complexity exceeds the capability of most other approaches to modeling uncertain knowledge. Because it is a relatively young discipline, it cannot be said that all of its theoretical problems and the problems connected with designing applications have already been solved; most of the open problems concern the construction of Bayesian networks. Nevertheless, published applications already suggest that Bayesian networks will become one of the most powerful tools of artificial intelligence for solving complex problems. We can therefore expect to encounter Bayesian networks in the near future in medicine as well, a field in which deterministic knowledge is the exception rather than the rule.
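To make the factorization idea concrete, a minimal Python sketch follows; the three-variable network, its probability tables, and the enumeration-based query are all invented for illustration and are not taken from the cited work.

```python
# Minimal sketch (not the cited system): a three-node Bayesian network
# Disease -> Test, Disease -> Symptom, with the joint distribution factored as
# P(D, T, S) = P(D) * P(T | D) * P(S | D). All numbers are invented.

p_d = {True: 0.01, False: 0.99}                     # prior P(Disease)
p_t_given_d = {True: {True: 0.95, False: 0.05},     # P(Test positive | Disease)
               False: {True: 0.10, False: 0.90}}
p_s_given_d = {True: {True: 0.80, False: 0.20},     # P(Symptom | Disease)
               False: {True: 0.30, False: 0.70}}

def joint(d, t, s):
    """Joint probability obtained from the network factorization."""
    return p_d[d] * p_t_given_d[d][t] * p_s_given_d[d][s]

def posterior_disease(test, symptom):
    """P(Disease | Test, Symptom) by enumerating the tiny joint distribution."""
    num = joint(True, test, symptom)
    den = sum(joint(d, test, symptom) for d in (True, False))
    return num / den

print(posterior_disease(test=True, symptom=True))   # approximately 0.20
```

The same factorization principle is what keeps distributions over hundreds of variables tractable: each node stores only a table conditioned on its parents rather than the full joint distribution.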
Pathological pain subtypes can be classified as either neuropathic pain, caused by a somatosensory nervous system lesion or disease, or nociplastic pain, which develops without evidence of somatosensory system damage. Since there is no gold standard for the diagnosis of pathological pain subtypes, the proper classification of individual patients is currently an unmet challenge for clinicians. While the determination of specific biomarkers for each condition by current biochemical techniques is a complex task, the use of multimolecular techniques, such as matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS), combined with artificial intelligence allows specific fingerprints for pathological pain subtypes to be obtained, which may be useful for diagnosis. We analyzed whether the information provided by the mass spectra of serum samples of four experimental models of neuropathic and nociplastic pain, combined with their functional pain outcomes, could enable pathological pain subtype classification by artificial neural networks. As a result, a simple and innovative clinical decision support method has been developed that combines MALDI-TOF MS serum spectra and pain evaluation with their subsequent analysis by artificial neural networks, allowing the identification and classification of pathological pain subtypes in experimental models with a high level of specificity.
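As a toy illustration of the general workflow only (random numbers stand in for real spectra, pain scores, and labels, and the layer sizes are arbitrary), a spectral fingerprint and functional pain outcomes can be concatenated and fed to a small neural-network classifier:

```python
# Illustrative sketch with synthetic data, not the published pipeline:
# concatenate MALDI-TOF peak intensities with functional pain scores and
# cross-validate a small neural-network classifier on the combined vector.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_peaks = 80, 200
spectra = rng.random((n_samples, n_peaks))          # mock m/z peak intensities
pain_scores = rng.random((n_samples, 3))            # mock behavioural pain outcomes
X = np.hstack([spectra, pain_scores])               # combined feature vector
y = rng.integers(0, 2, n_samples)                   # mock labels: 0 = neuropathic, 1 = nociplastic

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())      # chance-level accuracy on random data
```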
Artificial intelligence (AI) is an integral part of clinical decision support systems (CDSS), offering methods to approximate human reasoning and computationally infer decisions. Such methods are generally based on medical knowledge, either directly encoded with rules or automatically extracted from medical data using machine learning (ML). ML techniques, such as Artificial Neural Networks (ANNs) and support vector machines (SVMs), are based on mathematical models with parameters that can be optimally tuned using appropriate algorithms. The ever-increasing computational capacity of today's computer systems enables more complex ML systems with millions of parameters, bringing AI closer to human intelligence. With this objective, the term deep learning (DL) has been introduced to characterize ML based on deep ANN (DNN) architectures with multiple layers of artificial neurons. Despite these promises, the impact of AI in current clinical practice is still limited. However, this could change shortly, as the sharp increase in papers on AI, machine learning, and deep learning in cardiology shows. We highlight the significant achievements of recent years in nearly all areas of cardiology and underscore the mounting evidence suggesting how AI will take center stage in the field.
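The phrase "multiple layers of artificial neurons" can be illustrated with a toy NumPy forward pass; the layer sizes and random weights below are arbitrary and unrelated to any model discussed in the review.

```python
# Toy sketch of the "multiple layers" idea behind deep learning
# (a forward pass through randomly initialized layers; not a trained model).

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)
layer_sizes = [10, 64, 64, 32, 2]          # input, three hidden layers, output
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Propagate one input vector through every layer of artificial neurons."""
    for w in weights[:-1]:
        x = relu(x @ w)                    # hidden layers apply a nonlinearity
    return x @ weights[-1]                 # raw output scores (e.g. two classes)

print(forward(rng.normal(size=10)))
```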
- Publication type
- Journal Article MeSH
- Review MeSH
BACKGROUND AND OBJECTIVES: Cardiovascular diseases are critical conditions that need to be diagnosed as early as possible, yet there is a lack of medical professionals in remote areas to diagnose them. Artificial intelligence-based automatic diagnostic tools can help to diagnose cardiac diseases. This work presents an automatic classification method using machine learning to diagnose multiple cardiac diseases from phonocardiogram signals. METHODS: The proposed system uses a convolutional neural network (CNN) model, chosen for its high accuracy and robustness, to automatically diagnose cardiac disorders from heart sounds. To improve accuracy in a noisy environment and make the method robust, the proposed method uses data augmentation techniques for training and multi-class classification of multiple cardiac diseases. RESULTS: The model has been validated on both heart sound data and augmented data using n-fold cross-validation, and the results of all folds are reported in this work. The model achieved an accuracy of up to 98.60% on the test set for diagnosing multiple cardiac diseases. CONCLUSIONS: The proposed model can be ported to computing devices such as computers, single-board computers, and Android handheld devices to create a stand-alone diagnostic tool that may be of help in remote primary health care centres. The proposed method is non-invasive, efficient, robust, and has low time complexity, making it suitable for real-time applications.
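The following sketch shows, with synthetic signals and invented layer sizes (so it is not the published architecture), how a small 1-D CNN with simple additive-noise augmentation might be set up for multi-class heart-sound classification in Keras:

```python
# Illustrative sketch only: synthetic phonocardiogram segments, invented layer
# sizes, and Gaussian-noise augmentation of the training recordings.

import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
n, length, n_classes = 64, 2000, 5                    # mock segments and disease classes
x = rng.normal(size=(n, length, 1)).astype("float32")
y = rng.integers(0, n_classes, n)

# Data augmentation: add noise so the model also sees degraded recordings.
x_aug = x + rng.normal(scale=0.05, size=x.shape).astype("float32")
x_train = np.concatenate([x, x_aug])
y_train = np.concatenate([y, y])

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 64, activation="relu", input_shape=(length, 1)),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 32, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=16, verbose=0)
```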
- MeSH
- Humans MeSH
- Heart Diseases * diagnostic imaging MeSH
- Neural Networks, Computer MeSH
- Heart Sounds * MeSH
- Machine Learning MeSH
- Artificial Intelligence MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
The simulations of cells and microscope images thereof have been used to facilitate the development, selection, and validation of image analysis algorithms employed in cytometry, as well as for modeling and understanding cell structure and dynamics beyond what is visible in the eyepiece. The simulation approaches vary from simple parametric models of specific cell components (especially the shapes of cells and cell nuclei) to learning-based synthesis and multi-stage simulation models for complex scenes that simultaneously visualize multiple object types and incorporate various properties of the imaged objects and laws of image formation. This review covers advances in artificial digital cell generation at scales ranging from particles up to tissue synthesis and microscope image simulation methods, provides examples of the use of simulated images for various purposes ranging from subcellular object detection to cell tracking, and discusses how such simulators have been validated. Finally, the future possibilities and limitations of simulation-based validation are considered. © 2016 International Society for Advancement of Cytometry.
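As a hypothetical example of the "simple parametric model" end of that spectrum, the sketch below generates a binary mask of one randomly deformed, roughly circular nucleus; all parameters are invented and it does not reproduce any specific simulator from the review.

```python
# Toy parametric shape model: a circle whose radius is perturbed by a few
# low-frequency random harmonics, rendered as a binary nucleus mask.

import numpy as np

def synthetic_nucleus_mask(size=128, base_radius=30, n_harmonics=4, seed=0):
    """Return a binary image containing one randomly deformed nucleus."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:size, :size]
    cy = cx = size / 2
    angle = np.arctan2(yy - cy, xx - cx)
    radius = np.full_like(angle, float(base_radius))
    for k in range(1, n_harmonics + 1):       # low-frequency boundary perturbation
        amp, phase = rng.uniform(0, 3), rng.uniform(0, 2 * np.pi)
        radius += amp * np.cos(k * angle + phase)
    dist = np.hypot(yy - cy, xx - cx)
    return (dist <= radius).astype(np.uint8)

mask = synthetic_nucleus_mask()
print(mask.shape, mask.sum())                  # image size and nucleus area in pixels
```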
- MeSH
- Algorithms MeSH
- Image Interpretation, Computer-Assisted methods MeSH
- Humans MeSH
- Image Cytometry methods MeSH
- Pattern Recognition, Automated methods MeSH
- Artificial Intelligence MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Review MeSH
The important task of generating the minimum number of sequential triangle strips (tristrips) for a given triangulated surface model is motivated by applications in computer graphics. This hard combinatorial optimization problem is reduced to the minimum energy problem in Hopfield nets by a linear-size construction. In particular, the classes of equivalent optimal stripifications are mapped one to one to the minimum energy states reached by a Hopfield network during sequential computation starting at the zero initial state. Thus, the underlying Hopfield network powered by simulated annealing (i.e., a Boltzmann machine), which is implemented in the program HTGEN, can be used for computing semioptimal stripifications. Practical experiments confirm that one can obtain much better results with HTGEN than with FTSG, a leading conventional stripification program (a reference stripification method not based on neural nets), although the running time of simulated annealing grows rapidly near the global optimum. Nevertheless, HTGEN exhibits empirical linear time complexity when the parameters of simulated annealing (i.e., the initial temperature and the stopping criterion) are fixed, and thus provides semioptimal offline solutions, even for huge models of hundreds of thousands of triangles, within a reasonable time.
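The optimization step can be sketched generically: the code below runs simulated annealing on a Hopfield-style energy function with random symmetric weights, starting from the zero state as the paper describes; it does not include the paper's linear-size construction that encodes stripification into the weights.

```python
# Generic sketch of simulated annealing on a Hopfield-style energy
# E(s) = -1/2 * s^T W s - b^T s over states s in {0,1}^n; the weights here
# are random placeholders, not the stripification encoding from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 20
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)

def energy(s):
    return -0.5 * s @ W @ s - b @ s

s = np.zeros(n)                                     # zero initial state
T, cooling = 2.0, 0.995
for step in range(20000):
    i = rng.integers(n)
    flipped = s.copy(); flipped[i] = 1.0 - flipped[i]
    dE = energy(flipped) - energy(s)
    if dE <= 0 or rng.random() < np.exp(-dE / T):   # accept downhill, sometimes uphill
        s = flipped
    T = max(T * cooling, 1e-3)                      # cool towards deterministic updates

print(energy(s))                                    # low-energy (semioptimal) state found
```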
Contemporary molecular biology deals with wide and heterogeneous sets of measurements to model and understand underlying biological processes, including complex diseases. Machine learning provides a frequent approach to building such models. However, models built solely from measured data often suffer from overfitting, as the sample size is typically much smaller than the number of measured features. In this paper, we propose a random forest-based classifier that reduces this overfitting with the aid of prior knowledge in the form of a feature interaction network. We illustrate the proposed method on the task of disease classification based on measured mRNA and miRNA profiles complemented by an interaction network composed of the miRNA-mRNA target relations and mRNA-mRNA interactions corresponding to the interactions between their encoded proteins. We demonstrate that the proposed network-constrained forest employs prior knowledge to increase learning bias and consequently to improve the classification accuracy, stability, and comprehensibility of the resulting model. The experiments are carried out in the domain of myelodysplastic syndrome, which is our long-term concern, and the approach is further validated in the public domain of ovarian carcinoma with data of the same form. We believe that the idea of a network-constrained forest can be straightforwardly generalized towards arbitrary omics data with an available and non-trivial feature interaction network. The proposed method is publicly available as part of the miXGENE system (http://mixgene.felk.cvut.cz); the workflow that implements the myelodysplastic syndrome experiments is presented as a dedicated case study.
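One simple way such a prior could bias a forest (not necessarily the authors' algorithm) is to let each tree sample its feature subset as a connected neighbourhood of the interaction network rather than uniformly at random; the sketch below uses synthetic data and a toy ring-shaped network.

```python
# Heavily simplified sketch of a "network-constrained" forest: each tree sees
# only a connected neighbourhood of the feature-interaction network. This is
# one possible way to inject the prior, not the algorithm from the paper.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 100, 50
X = rng.normal(size=(n_samples, n_features))                   # mock expression profiles
y = rng.integers(0, 2, n_samples)                              # mock diagnoses
adjacency = {i: [(i - 1) % n_features, (i + 1) % n_features]   # toy ring-shaped network
             for i in range(n_features)}

def sample_connected_features(k):
    """Random walk on the interaction network to pick k related features."""
    current = rng.integers(n_features)
    chosen = {int(current)}
    while len(chosen) < k:
        current = rng.choice(adjacency[int(current)])
        chosen.add(int(current))
    return sorted(chosen)

trees = []
for _ in range(25):                                    # grow a small forest
    feats = sample_connected_features(8)
    idx = rng.integers(0, n_samples, n_samples)        # bootstrap sample
    tree = DecisionTreeClassifier(random_state=0).fit(X[np.ix_(idx, feats)], y[idx])
    trees.append((feats, tree))

votes = np.mean([t.predict(X[:, f]) for f, t in trees], axis=0)
print((votes.round() == y).mean())                     # majority-vote accuracy on the training set
```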
- MeSH
- Gene Regulatory Networks MeSH
- Humans MeSH
- RNA, Messenger genetics MeSH
- MicroRNAs genetics MeSH
- Artificial Intelligence MeSH
- Computational Biology methods MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support MeSH
In this paper, we present a novel algorithm for measuring protein similarity based on their 3-D structure (protein tertiary structure). The algorithm uses a suffix tree for discovering common parts of the main chains of all proteins appearing in the current Research Collaboratory for Structural Bioinformatics Protein Data Bank (PDB). By identifying these common parts, we build a vector model and use classical information retrieval (IR) algorithms based on the vector model to measure the similarity between proteins, i.e., all-to-all protein similarity. For the calculation of protein similarity, we use the term frequency × inverse document frequency (tf × idf) term weighting scheme and the cosine similarity measure. The goal of this paper is to introduce a new protein similarity metric based on suffix trees and IR methods. The whole current PDB database was used to demonstrate the very good time complexity of the algorithm as well as its high precision. We have chosen the Structural Classification of Proteins (SCOP) database for verification of the precision of our algorithm because it is maintained primarily by humans. A further contribution of this paper is the ability to determine SCOP categories of proteins not included in the latest version of the SCOP database (v. 1.75) with nearly 100% precision.
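The vector-space step alone is easy to illustrate; in the sketch below the "documents" are proteins represented as bags of invented fragment identifiers standing in for the common main-chain parts extracted with the suffix tree, weighted by tf-idf and compared with cosine similarity.

```python
# Illustrative sketch of the vector-model step only: tf-idf weighting plus
# cosine similarity. The fragment identifiers are invented placeholders for
# the shared main-chain parts found by the suffix tree in the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each protein is represented by the bag of shared structural fragments it contains.
proteins = {
    "1abc": "frag7 frag12 frag12 frag30",
    "2xyz": "frag7 frag12 frag44",
    "3foo": "frag90 frag91",
}

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(proteins.values())   # tf-idf weighted vector model
similarity = cosine_similarity(vectors)                 # all-to-all protein similarity

print(dict(zip(proteins, similarity[0])))               # similarities of protein "1abc"
```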
- MeSH
- Algorithms MeSH
- Data Mining methods MeSH
- Databases, Protein MeSH
- Humans MeSH
- Proteins chemistry MeSH
- Reproducibility of Results MeSH
- Structural Homology, Protein MeSH
- Protein Structure, Tertiary MeSH
- Artificial Intelligence MeSH
- Computational Biology methods MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
Background/Objectives: Health and social care systems around the globe are currently undergoing a transformation towards personalized, preventive, predictive, participative precision medicine (5PM), considering the individual health status, conditions, genetic and genomic dispositions, etc., in personal, social, occupational, environmental, and behavioral contexts. This transformation is strongly supported by technologies such as micro- and nanotechnologies, advanced computing, artificial intelligence, edge computing, etc. Methods: To enable communication and cooperation between actors from different domains using different methodologies, languages, and ontologies based on different education, experiences, etc., we have to understand the transformed health ecosystem and all its components in terms of structure, function, and relationships in the necessary detail, ranging from elementary particles up to the universe. In this way, we advance the design and management of the complex and highly dynamic ecosystem from the data level to the knowledge level. The challenge is the consistent, correct, and formalized representation of the transformed health ecosystem from the perspectives of all domains involved, representing and managing them based on related ontologies. The resulting business viewpoint of the real-world ecosystem must be interrelated using the ISO/IEC 21838 Top Level Ontologies standard. Thereafter, the outcome can be transformed into implementable solutions using the ISO/IEC 10746 Open Distributed Processing Reference Model. Results: The model and framework for this system-oriented, architecture-centric, ontology-based, policy-driven approach have been developed by the first author and have meanwhile been standardized as ISO 23903 Interoperability and Integration Reference Architecture. The formal representation of any ecosystem and its development process, including examples of practical deployment of the approach, is presented in detail. This includes correct systems and standards integration and interoperability solutions. Conclusions: A special issue newly addressed in the paper is the correct and consistent formal representation of all components in the development process, enabling interoperability between, and integration of, any existing representational artifacts such as models and work products, as well as the terminologies and ontologies used. The provided solution is meanwhile mandatory at ISO/TC 215, CEN/TC 251, and many other standards developing organizations in health informatics for all projects covering more than just one domain.
- Publication type
- Journal Article MeSH
Background: We present our current approaches to improving personal data protection in (i) large-scale (regional/national/international) health information exchanges (HIEs) and (ii) UK NHS IG toolkit and ISO 27001-compliant trustworthy research environments (TREs) for discovery science communities. In particular, we examine the impacts of the General Data Protection Regulation (GDPR) on these technology designs and developments and the responses we have made to control complexity. Methods: The paper discusses multiple requirements to implement the key GDPR principles of “data protection by design” and “data protection by default”, each requiring new capabilities to embed multiple security tests and data protection tools in common deployable infrastructures. Methods are presented for the consistent implementation of diverse data processing use cases. Results: We describe how modular compositions of GDPR-compliant data processing software have been used to implement use cases and deliver information governance (IG) requirements transparently. Security surveillance analysis is embedded throughout the application lifecycle, namely at the design, implementation, and operation (runtime) phases. A solution is described to the challenge of integrating coherent research (analytic) environments for authorized researchers to access data and analytic tools without compromising security or privacy. Conclusion: We recognise the need for wider implementation of rigorous interoperability standards concerning privacy and security management. Standards can be disseminated within low-cost commodity infrastructures that are shared across consortium partners. Comprehensive model-based approaches to information management will be fundamental to guaranteeing security and privacy in challenging areas such as the ethical use of artificial intelligence in medicine. The target architecture is still in evolution but needs a number of community-collaborative API developments to couple advanced specifications fulfilling all IG requirements.