With the rapid growth of sensor networks and the enormous, fast-growing volumes of data they collect, the question arises of how these data will be used, not merely collected and analyzed. Sensor data are traditionally used to control and influence states and processes, and standard controllers are available and successfully implemented. However, in today's data-driven era there is an opportunity to use controllers that can incorporate information elusive to common controllers. Our goal is to propose the design of an intelligent controller: a conventional controller whose parameters are designed by a non-conventional method, using artificial intelligence approaches that combine fuzzy and genetic methods. Intelligent adaptation of the control system's parameters is performed using sensor data measured in the controlled process. All designed parts are based on non-conventional methods and are verified by simulations. The identification of the system's parameters is based on optimizing the parameters of its difference equation using genetic algorithms. Continuous monitoring of the control process quality and the design of the controller parameters are conducted using a fuzzy expert system of the Mamdani or Takagi-Sugeno type. The concept of the intelligent control system is open and easily extensible.
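The identification step the abstract describes, fitting the parameters of a difference equation with a genetic algorithm, can be sketched generically. This is an illustration, not the paper's actual implementation: the first-order model, the GA operators, and all numeric settings are assumptions.

```python
import random

def simulate(a, b, u, y0=0.0):
    """Simulate the assumed first-order model y[k] = a*y[k-1] + b*u[k-1]."""
    y = [y0]
    for k in range(1, len(u)):
        y.append(a * y[-1] + b * u[k - 1])
    return y

def fitness(params, u, y_meas):
    """Negative sum of squared errors between simulated and measured output."""
    y_sim = simulate(params[0], params[1], u)
    return -sum((ys - ym) ** 2 for ys, ym in zip(y_sim, y_meas))

def ga_identify(u, y_meas, pop_size=40, generations=150, seed=0):
    """Real-coded GA: elitism, arithmetic crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, u, y_meas), reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            w = rng.random()                   # arithmetic crossover weight
            children.append(tuple(w * g1 + (1 - w) * g2 + rng.gauss(0, 0.03)
                                  for g1, g2 in zip(p1, p2)))
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, u, y_meas))

# Usage: recover a = 0.8, b = 0.5 from a noisy step response.
noise = random.Random(1)
u = [1.0] * 50
y_meas = [y + noise.gauss(0, 0.01) for y in simulate(0.8, 0.5, u)]
a_est, b_est = ga_identify(u, y_meas)
```

The same pattern extends to higher-order difference equations by widening the parameter tuple; the fuzzy expert system in the abstract would then act on these identified parameters.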
- Keywords
- PID controller, artificial intelligence, expert systems, fuzzy methods, genetic algorithms, intelligent controller, optimization, soft computing
- Publication type
- journal articles MeSH
The concept of a Data Management Plan (DMP) has emerged as a fundamental tool to help researchers manage data systematically. The Research Data Alliance DMP Common Standard (DCS) working group developed a set of universal concepts characterising a DMP so that it can be represented as a machine-actionable artefact, i.e., a machine-actionable Data Management Plan (maDMP). The technology-agnostic approach of the current maDMP specification: (i) does not explicitly link to related data models or ontologies, (ii) has no standardised way to describe controlled vocabularies, and (iii) is extensible but has no clear mechanism to distinguish between the core specification and its extensions. This paper reports on a community effort to create the DMP Common Standard Ontology (DCSO) as a serialisation of the DCS core concepts, with a particular focus on a detailed description of the components of the ontology. Our initial results show that the proposed DCSO can become a suitable candidate for a reference serialisation of the DMP Common Standard.
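The maDMP idea can be illustrated with a tiny JSON instance. The field subset below only echoes the JSON style of the RDA DMP Common Standard; the exact structure and required fields are defined by the specification itself, so treat this as a sketch, not a validated instance.

```python
import json

# A deliberately minimal, illustrative machine-actionable DMP (not a
# complete or validated DCS instance).
madmp = {
    "dmp": {
        "title": "Example project DMP",
        "language": "eng",
        "dataset": [
            {
                "title": "Sensor measurements",
                "distribution": [
                    {
                        "format": "text/csv",
                        "license": [
                            {"license_ref": "https://creativecommons.org/licenses/by/4.0/"}
                        ],
                    }
                ],
            }
        ],
    }
}

# Machine-actionability in its simplest form: serialise, exchange, parse.
serialized = json.dumps(madmp, indent=2)
roundtrip = json.loads(serialized)
```

DCSO goes a step further by expressing the same concepts as an ontology, so that controlled vocabularies and extensions can be linked using semantic web tooling rather than left implicit in a JSON document.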
- Keywords
- Data management plan, Machine-actionable data management plan, Ontology, Semantic web technologies
- MeSH
- biological ontologies * MeSH
- data management * MeSH
- controlled vocabulary MeSH
- Publication type
- journal articles MeSH
- research supported by grant MeSH
- Keywords
- EQUIPMENT AND SUPPLIES *, TEMPERATURE *, TISSUE CULTURE *
- MeSH
- tissue culture techniques * MeSH
- temperature * MeSH
- research design * MeSH
- equipment and supplies * MeSH
- Publication type
- journal articles MeSH
The paper shows the importance of e-health applications for the development of electronic healthcare. It describes several e-health applications for health data collection and sharing that are running in the Czech Republic: the IZIP system, the MUDR electronic health record, and applications of the K4CARE project. The e3-health concept is considered as a tool for judging e-health applications in different healthcare settings.
- MeSH
- medical record linkage methods MeSH
- electronic health records organization & administration MeSH
- information dissemination methods MeSH
- case-control studies MeSH
- information storage and retrieval methods MeSH
- health records, personal * MeSH
- Publication type
- journal articles MeSH
- research supported by grant MeSH
- Geographical names
- Czech Republic MeSH
The EuroMISE Center focuses on new approaches in the field of the electronic health record (EHR). Among others, structured health documentation in dentistry in the form of an EHR is being systematically studied. This paper describes the evolution of the EHR developed at the EuroMISE Center, named MUDRLite, and its graphical component for dentists called DentCross. A summary of the features of the DentCross component is followed by a brief description of automatic speech recognition (ASR) and an ASR module. Problems with data insertion into the EHR during the examination of a dental patient led to further research in the area of automatic speech recognition in medical practice. Cooperation of engineers, informaticians and dental physicians resulted in an application called DentVoice, a successful application of the ASR module and the DentCross component of the MUDRLite EHR. The junction of voice control and a graphical representation of the dental arch makes hand-busy activities in dental practice easier, quicker and more comfortable. This results in better quality of the data stored in a structured form in the dental EHR, thus enabling better decision making and the use of decision support systems.
- MeSH
- medical records systems, computerized * MeSH
- software design MeSH
- computer graphics MeSH
- speech recognition software * MeSH
- information storage and retrieval * MeSH
- user-computer interface * MeSH
- medical informatics applications MeSH
- knowledge bases MeSH
- dental records * MeSH
- Publication type
- journal articles MeSH
- research supported by grant MeSH
The Minimum Redundancy Maximum Relevance (MRMR) approach to supervised variable selection represents a successful methodology for dimensionality reduction, suitable for high-dimensional data observed in two or more different groups. The various available versions of the MRMR approach search for the variables with the largest relevance for a classification task while controlling the redundancy of the selected set of variables. However, the usual relevance and redundancy criteria have the disadvantage of being too sensitive to the presence of outlying measurements and/or being inefficient. We propose a novel approach called Minimum Regularized Redundancy Maximum Robust Relevance (MRRMRR), suitable for noisy high-dimensional data observed in two groups. It combines principles of regularization and robust statistics. In particular, redundancy is measured by a new regularized version of the coefficient of multiple correlation, and relevance is measured by a highly robust correlation coefficient based on least weighted squares regression with data-adaptive weights. We compare various dimensionality reduction methods on three real data sets. To investigate the influence of noise and outliers, we also perform the computations for data artificially contaminated by severe noise of various forms. The experimental results confirm the robustness of the method with respect to outliers.
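The greedy selection loop behind MRMR-style methods can be sketched in a few lines. The sketch below uses the plain Pearson correlation for both relevance and redundancy, i.e. it is the standard non-robust variant, not the paper's regularized and robust MRRMRR criteria.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def mrmr_select(X, y, k):
    """X: list of variables (each a list of observations); y: class labels.
    Greedily pick k variables maximizing relevance minus mean redundancy."""
    relevance = [abs(pearson(col, y)) for col in X]
    selected, remaining = [], list(range(len(X)))
    while len(selected) < k and remaining:
        def score(j):
            if not selected:
                return relevance[j]
            redundancy = sum(abs(pearson(X[j], X[i]))
                             for i in selected) / len(selected)
            return relevance[j] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Usage: variable 0 tracks the labels, variable 1 is a near-duplicate of it
# (redundant), variable 2 is noise; MRMR picks 0 first, then prefers the
# non-redundant noise variable 2 over the duplicate 1.
y = [0, 0, 0, 0, 1, 1, 1, 1]
x0 = [0.1, 0.2, 0.0, 0.1, 0.9, 1.0, 0.8, 0.9]   # informative
x1 = [0.1, 0.2, 0.0, 0.3, 0.9, 1.0, 0.8, 0.7]   # near-duplicate of x0
x2 = [0.5, 0.1, 0.9, 0.3, 0.4, 0.8, 0.2, 0.6]   # noise
picked = mrmr_select([x0, x1, x2], y, k=2)
```

The paper's contribution replaces `pearson` in the relevance term with a highly robust correlation (least weighted squares based) and the redundancy term with a regularized coefficient of multiple correlation; the greedy skeleton stays the same.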
The integration and synthesis of data across different areas of science is drastically slowed and hindered by a lack of standards and networking programmes. Long-term studies of individually marked animals are no exception. These studies are especially important, being instrumental for understanding evolutionary and ecological processes in the wild. Furthermore, their number and global distribution provide a unique opportunity to assess the generality of patterns and to address broad-scale global issues (e.g. climate change). To solve data integration issues and enable a new scale of ecological and evolutionary research based on long-term studies of birds, we have created the SPI-Birds Network and Database (www.spibirds.org): a large-scale initiative that connects data from, and researchers working on, studies of wild populations of individually recognizable (usually ringed) birds. Within a year and a half of its establishment, SPI-Birds has recruited over 120 members and currently hosts data on almost 1.5 million individual birds collected in 80 populations over 2,000 cumulative years, and counting. SPI-Birds acts as a data hub and a catalogue of studied populations. It prevents data loss, makes data easy to find, use and integrate, and thus facilitates collaboration and synthesis. We provide community-derived data and metadata standards and improve data integrity guided by the FAIR principles (Findable, Accessible, Interoperable and Reusable), aligned with existing metadata languages (e.g. Ecological Metadata Language). The encouraging community involvement stems from SPI-Birds' decentralized approach: research groups retain full control over data use and their way of data management, while SPI-Birds creates tailored pipelines to convert each unique data format into a standard format. We outline the lessons learned so that other communities (e.g. those working on other taxa) can adapt our successful model. Creating community-specific hubs (such as ours, or COMADRE for animal demography) will aid much-needed large-scale ecological data integration.
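The "tailored pipeline" idea can be illustrated with a toy converter. All column and field names below are invented for illustration; the actual SPI-Birds standard format is defined by the community's own documentation.

```python
import csv
import io

# Hypothetical shared standard schema (invented field names).
STANDARD_FIELDS = ["individual_id", "species", "capture_date", "population_id"]

def convert_population_a(raw_csv, population_id):
    """Hypothetical source format: a population storing 'ring', 'sp' and
    'date' columns. The tailored converter maps it onto the shared schema."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        rows.append({
            "individual_id": rec["ring"],
            "species": rec["sp"],
            "capture_date": rec["date"],
            "population_id": population_id,
        })
    return rows

# Usage: one record from the (invented) local format of "population A".
raw = "ring,sp,date\nAB123,Parus major,2020-05-01\n"
standard = convert_population_a(raw, "POP_A")
```

Each population keeps its native format and workflow; only the converter is population-specific, which is what lets the hub stay decentralized while still serving standardized, FAIR-aligned data.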
- Keywords
- FAIR data, birds, data standards, database, long-term studies, meta-data standards, research network
- MeSH
- databases, factual MeSH
- metadata * MeSH
- birds * MeSH
- animals MeSH
- Check Tag
- animals MeSH
- Publication type
- journal articles MeSH
- research supported by grant MeSH
Cardiovascular dynamic and variability data are commonly used in experimental protocols involving cognitive challenge. The analysis is usually based on a single specific time resolution, sometimes more and sometimes less well motivated, ranging from a few seconds to several minutes. The present paper aimed to investigate in detail the impact of different time resolutions of the cardiovascular data on the interpretation of effects. We compared three template tasks involving varying types of challenge in order to provide a case study of specific effects, and combinations of effects, over different time frames and at different time resolutions. Averaged values of hemodynamic variables across an entire protocol confirmed typical findings regarding the effects of mental challenge and social observation. However, the hemodynamic response also incorporates transient variations reflecting important features of the control system's response. The fine-grained analysis of the transient behavior of hemodynamic variables demonstrates that information important for interpreting effects may be lost when only average values over the entire protocol are used to represent the system response. The study provides useful indications of how cardiovascular measures may be fruitfully used in experiments involving cognitive demands, allowing inferences about the physiological processes underlying the responses.
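The point about time resolution can be made concrete with a toy example: the same synthetic heart-rate trace summarized at the whole-protocol level and at a fine resolution. The signal, values and window size here are invented for illustration.

```python
def window_means(series, width):
    """Mean of each consecutive window of the given width (last may be shorter)."""
    return [sum(series[i:i + width]) / len(series[i:i + width])
            for i in range(0, len(series), width)]

# Synthetic trace (beats/min at 1 Hz): 70 bpm baseline with a brief
# 10-second transient rise to 90 bpm, as might follow a task onset.
hr = [70.0] * 50 + [90.0] * 10 + [70.0] * 60

overall_mean = sum(hr) / len(hr)   # whole-protocol average: ~71.7, transient hidden
fine = window_means(hr, 5)         # 5-s resolution: the 90 bpm peak survives
```

The whole-protocol mean barely moves (about 71.7 bpm), while the 5-second windows retain the full 90 bpm transient; exactly this kind of transient information is what a coarse resolution discards.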
- MeSH
- adults MeSH
- mental processes physiology MeSH
- data interpretation, statistical * MeSH
- cardiovascular physiological phenomena * MeSH
- blood pressure physiology MeSH
- middle aged MeSH
- humans MeSH
- mathematics MeSH
- young adult MeSH
- stress, psychological pathophysiology MeSH
- reaction time physiology MeSH
- heart rate physiology MeSH
- Check Tag
- adults MeSH
- middle aged MeSH
- humans MeSH
- young adult MeSH
- male MeSH
- female MeSH
- Publication type
- journal articles MeSH
- research supported by grant MeSH
BACKGROUND: Recent advances in data-driven computational approaches have been helpful in devising tools to objectively diagnose psychiatric disorders. However, current machine learning studies are limited to small homogeneous samples, and differences in methodologies and imaging collection protocols limit the ability to directly compare and generalize their results. Here we aimed to classify individuals with PTSD versus controls and to assess generalizability using large heterogeneous brain datasets from the ENIGMA-PGC PTSD Working Group. METHODS: We analyzed brain MRI data from 3,477 structural MRI, 2,495 resting-state fMRI, and 1,952 diffusion MRI scans. First, we identified the brain features that best distinguish individuals with PTSD from controls using traditional machine learning methods. Second, we assessed the utility of a denoising variational autoencoder (DVAE) and evaluated its classification performance. Third, we assessed the generalizability and reproducibility of both models using a leave-one-site-out cross-validation procedure for each modality. RESULTS: We found lower performance in classifying PTSD vs. controls with data from over 20 sites (60% test AUC for s-MRI, 59% for rs-fMRI and 56% for d-MRI) compared with other studies run on single-site data. Performance increased when classifying PTSD versus healthy controls without a trauma history in each modality (75% AUC). Classification performance remained intact when applying the DVAE framework, which reduced the number of features. Finally, we found that the DVAE framework achieved better generalization to unseen datasets than the traditional machine learning frameworks, albeit with performance only slightly above chance. CONCLUSION: These results have the potential to provide a baseline classification performance for PTSD when using large-scale neuroimaging datasets. Our findings show that the choice of control group can heavily affect classification performance. The DVAE framework provided better generalizability for the multi-site data. This may be significant for clinical practice, since neuroimaging-based diagnostic DVAE classification models are much less site-specific and therefore more generalizable.
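The leave-one-site-out scheme described in METHODS can be sketched generically: train on all sites but one, test on the held-out site, repeat. The nearest-centroid classifier below is a stand-in for illustration only; the paper's actual models are traditional ML classifiers and a DVAE, and the toy data are invented.

```python
def nearest_centroid_fit(X, y):
    """Fit a trivial two-class nearest-centroid model (stand-in classifier)."""
    c0 = [sum(v) / len(v) for v in zip(*[x for x, t in zip(X, y) if t == 0])]
    c1 = [sum(v) / len(v) for v in zip(*[x for x, t in zip(X, y) if t == 1])]
    return c0, c1

def predict(model, x):
    c0, c1 = model
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 <= d1 else 1

def leave_one_site_out(X, y, sites):
    """Hold out each site in turn; report per-site test accuracy."""
    accuracies = {}
    for held_out in sorted(set(sites)):
        train = [i for i, s in enumerate(sites) if s != held_out]
        test = [i for i, s in enumerate(sites) if s == held_out]
        model = nearest_centroid_fit([X[i] for i in train],
                                     [y[i] for i in train])
        correct = sum(predict(model, X[i]) == y[i] for i in test)
        accuracies[held_out] = correct / len(test)
    return accuracies

# Toy data: two features, two sites ("A", "B"), well-separated classes.
X = [[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0],
     [0.2, 0.0], [0.0, 0.2], [0.8, 1.1], [1.1, 0.8]]
y = [0, 0, 1, 1, 0, 0, 1, 1]
sites = ["A", "A", "A", "A", "B", "B", "B", "B"]
acc = leave_one_site_out(X, y, sites)
```

Because the held-out site never contributes to training, per-site accuracies directly measure how site-specific a model is, which is the comparison the paper uses to show the DVAE's better generalization.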
- Keywords
- Classification, Deep learning, Machine learning, Multimodal MRI, Posttraumatic stress disorder
- MeSH
- big data MeSH
- humans MeSH
- magnetic resonance imaging methods MeSH
- brain diagnostic imaging MeSH
- neuroimaging MeSH
- stress disorders, post-traumatic * diagnostic imaging MeSH
- reproducibility of results MeSH
- Check Tag
- humans MeSH
- Publication type
- journal articles MeSH
In this paper, we introduce a unique dataset covering thousands of network flow measurements performed over TCP in a data center environment. The TCP protocol is widely used for reliable data transfers and exists in many versions, which differ in how they deal with link congestion through the congestion control algorithm (CCA). Our dataset represents a unique, comprehensive comparison of 17 currently used TCP versions with different CCAs. Each TCP flow was measured exactly 50 times to mitigate measurement instability. The comparison of the TCP versions is based on 18 quantitative attributes representing the parameters of a TCP transmission. The dataset is suitable for testing and comparing different versions of TCP, creating new CCAs based on machine learning models, or creating and testing machine learning models that identify and optimize the currently existing versions of TCP.
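As a hypothetical illustration of how the 50 repeats per flow might be condensed before comparing CCAs: aggregate each quantitative attribute across repeats into summary statistics. The attribute names below ("throughput", "rtt") are invented; the dataset itself defines 18 such attributes.

```python
from statistics import mean, stdev

def aggregate_runs(runs):
    """runs: list of dicts, one per repeated measurement of the same flow.
    Returns per-attribute mean and standard deviation across the repeats."""
    keys = runs[0].keys()
    return {k: {"mean": mean(r[k] for r in runs),
                "stdev": stdev(r[k] for r in runs)} for k in keys}

# Toy example: three (of the real 50) repeats for a flow using TCP CUBIC.
cubic_runs = [
    {"throughput": 940.0, "rtt": 1.2},
    {"throughput": 955.0, "rtt": 1.1},
    {"throughput": 948.0, "rtt": 1.3},
]
summary = aggregate_runs(cubic_runs)
```

For collecting such measurements in the first place, note that on Linux the CCA used by an individual socket can be selected at runtime with `setsockopt` and the `TCP_CONGESTION` option, which is the usual way to run the same transfer under different TCP congestion control variants.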
- Keywords
- Congestion control, Data center, Identification, Machine learning, TCP
- Publication type
- journal articles MeSH