Multi-layer perceptron
- MeSH
- biomedical engineering MeSH
- electroencephalography MeSH
- epilepsy MeSH
- humans MeSH
- neural networks MeSH
- programming language MeSH
- software MeSH
- seizures MeSH
- Check Tag
- humans MeSH
- Publication type
- comparative study MeSH
Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial descriptions of elementary data relations. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships between trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks that construct and solve an unknown general partial differential equation of a function of interest, with selected substitution relative terms, using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial-derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a non-linear activation of artificial neurons, may transform some polynomial items together with their parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully reproduce complete cycles. The similarity analysis facilitates substitutions for differential equations, or can form dimensional units from data samples, to describe real-world problems.
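To make the idea concrete, below is a minimal toy sketch of a single sum-fraction term: a ratio of low-order polynomials of two inputs passed through a sigmoid, with parameters fitted by crude numerical gradient descent. The target function, weight layout, and fitting loop are all illustrative assumptions; this is not the authors' D-PNN implementation, which composes many such terms across network layers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fraction_term(x1, x2, w):
    # Numerator: full second-order polynomial of the two inputs.
    num = w[0] + w[1]*x1 + w[2]*x2 + w[3]*x1*x2 + w[4]*x1**2 + w[5]*x2**2
    # Denominator: reduced polynomial playing the "derivative" role.
    # A real implementation would guard against zero denominators.
    den = 1.0 + w[6]*x1 + w[7]*x2
    return sigmoid(num / den)

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1.0, 1.0, size=(2, 200))
y = 0.5 + 0.5 * np.sin(x1 * x2)     # illustrative target kept inside (0, 1)

w = rng.normal(scale=0.1, size=8)
lr, eps = 0.05, 1e-5
for _ in range(2000):               # crude forward-difference gradient descent
    base = np.mean((fraction_term(x1, x2, w) - y) ** 2)
    grad = np.empty_like(w)
    for i in range(w.size):
        wp = w.copy()
        wp[i] += eps
        grad[i] = (np.mean((fraction_term(x1, x2, wp) - y) ** 2) - base) / eps
    w -= lr * grad

print("final MSE:", np.mean((fraction_term(x1, x2, w) - y) ** 2))
```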
BACKGROUND: The recent big data revolution in Genomics, coupled with the emergence of Deep Learning (DL) as a set of powerful machine learning methods, has shifted the standard practices of machine learning for Genomics. Even though DL methods such as Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are becoming widespread in Genomics, developing and training such models is beyond the ability of most researchers in the field. RESULTS: Here we present ENNGene, an Easy Neural Network model building tool for Genomics. This tool simplifies the training of custom CNN or hybrid CNN-RNN models on genomic data via an easy-to-use Graphical User Interface. ENNGene allows multiple input branches, including sequence, evolutionary conservation, and secondary structure, and performs all the necessary preprocessing steps, allowing simple input such as genomic coordinates. The network architecture is selected and fully customized by the user, from the number and types of layers to each layer's precise set-up. ENNGene then handles all steps of training and evaluating the model, exporting valuable metrics such as multi-class ROC and precision-recall curve plots or TensorBoard log files. To facilitate interpretation of the predicted results, we deploy Integrated Gradients, providing the user with a graphical representation of the attribution level of each input position. To showcase the usage of ENNGene, we train multiple models on the RBP24 dataset, quickly reaching the state of the art while improving the performance on more than half of the proteins by including the evolutionary conservation score and tuning the network per protein. CONCLUSIONS: As the role of DL in big data analysis in the near future is indisputable, it is important to make it available to a broader range of researchers. We believe that an easy-to-use tool such as ENNGene can allow Genomics researchers without a background in Computational Sciences to harness the power of DL to gain better insights into, and extract important information from, the large amounts of data available in the field.
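As an illustration of the kind of architecture ENNGene can assemble, here is a minimal hybrid CNN-RNN sketch in Keras, assuming a single one-hot encoded sequence branch of length 101. The layer types mirror those named in the abstract, but all sizes and hyperparameters are assumptions rather than ENNGene defaults.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(seq_len=101, n_classes=2):
    # Single input branch: one-hot encoded A/C/G/T sequence window.
    inp = layers.Input(shape=(seq_len, 4))
    x = layers.Conv1D(32, kernel_size=8, activation="relu")(inp)  # CNN part
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.Bidirectional(layers.LSTM(16))(x)                  # RNN part
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model

model = build_model()
model.summary()
```

Additional branches such as conservation scores or secondary structure would simply be further `Input` layers concatenated before the dense head.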
- MeSH
- genomics MeSH
- neural networks * MeSH
- protein secondary structure MeSH
- machine learning * MeSH
- Publication type
- journal articles MeSH
Background: The guidelines recommend intensive blood pressure control. Randomized trials have focused on the relevance of lowering systolic blood pressure (SBP), leaving the safety of reducing diastolic blood pressure (DBP) unresolved. Available data show that low DBP should not stop clinicians from achieving SBP targets; however, registries and analyses of randomized trials present conflicting results. The purpose of the study was to apply machine learning (ML) algorithms to determine whether DBP is an important risk factor for predicting stroke, heart failure (HF), myocardial infarction (MI), and the primary outcome in the SPRINT trial database. Methods: ML experiments were performed using decision tree, random forest, k-nearest neighbor, naive Bayes, multi-layer perceptron, and logistic regression algorithms, including and excluding DBP as a risk factor, in an unselected and a selected (DBP < 70 mmHg) study population. Results: Including DBP as a risk factor did not change the performance of the ML models evaluated using accuracy, AUC, and mean and weighted F-measure, and was not required to make proper predictions of stroke, MI, HF, and the primary outcome. Conclusions: Analyses of the SPRINT trial data using ML algorithms imply that DBP should not be treated as an independent risk factor when intensifying blood pressure control.
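A sketch of how such an ablation could be run with scikit-learn follows: each model is trained with and without DBP as a feature and compared by cross-validated AUC. The file name and column names are hypothetical placeholders, since the SPRINT dataset is available only through controlled access, and the model set is abbreviated to three of the six algorithms named above.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("sprint_subset.csv")          # placeholder file name
y = df["primary_outcome"]                      # assumed binary 0/1 column
features = ["sbp", "dbp", "age", "smoker", "egfr", "statin"]  # assumed columns

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=300, random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}
for name, clf in models.items():
    # Run once with the full feature set, once with DBP dropped.
    for cols in (features, [c for c in features if c != "dbp"]):
        auc = cross_val_score(clf, df[cols], y, cv=10, scoring="roc_auc").mean()
        tag = "with DBP   " if "dbp" in cols else "without DBP"
        print(f"{name:6s} {tag}  AUC = {auc:.3f}")
```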
- Publication type
- journal articles MeSH
The electrocardiogram (ECG) is one of the most common ways to record a patient's cardiac activity in a non-invasive manner. Once recorded, the information can be pre-processed and subsequently analyzed to assess whether the patient is suffering from any form of cardiac abnormality that may require clinical intervention. In the current study we investigate ways in which more can be obtained from the ECG through analysis of the diagnostic properties of body surface potential maps (BSPMs). A set of 192-lead BSPMs recorded from a mixture of 116 normal and abnormal subjects (59 normal vs. 57 old myocardial infarction) was analyzed. For each patient, diagnostic features were obtained by calculating isointegral measurements from the QRS, STT, and entire QRST segments. These isointegrals provide a measure of the mean distribution of potential during ventricular depolarization, repolarization, and a combination of both, respectively. For each isointegral type, 192 discrete measurements, and hence 192 features, were obtained; these correspond to the 192 leads recorded. Subsequently, a signal-to-noise-ratio-based feature ranking methodology was applied to select subsets of the best three, six, and ten measurements (features) from the 192 available for each isointegral. These subsets of features were then applied to four different classifiers, Naive Bayes (NB), support vector machine (SVM), multi-layer perceptron (MLP), and random forest (RF), and in each case ten-fold cross-validation was employed. It was found that when using the subsets of features obtained from the STT or QRST isointegrals, classification accuracies in excess of 80% were attainable. This was in contrast to the results obtained using the QRS isointegral features, where poorer performance (between 62.9% and 74.1%) was observed. The results from this study illustrate that, for the studied dataset, the mean distribution of potentials during ventricular depolarization, and during ventricular repolarization and depolarization combined, possessed greater diagnostic information. Overall, it was concluded that this approach to BSPM analysis provides a useful means of assessing the value of various features in diagnostic classification.
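The ranking-plus-classification pipeline described above could be sketched as follows, assuming a feature matrix of shape (116, 192) for one isointegral type and using the common (mu1 - mu0) / (sd1 + sd0) two-class signal-to-noise definition; the study may use a variant. The random stand-in data and all classifier hyperparameters are assumptions, not the study's actual set-up.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(116, 192))       # stand-in for one isointegral's 192 features
y = np.array([0] * 59 + [1] * 57)     # 0 = normal, 1 = old myocardial infarction

# Rank features by a simple two-class signal-to-noise ratio.
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
sd0, sd1 = X[y == 0].std(axis=0), X[y == 1].std(axis=0)
snr = np.abs(mu1 - mu0) / (sd1 + sd0 + 1e-12)
ranking = np.argsort(snr)[::-1]       # best-separating leads first

classifiers = {
    "NB": GaussianNB(),
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
for k in (3, 6, 10):                  # subset sizes used in the study
    Xk = X[:, ranking[:k]]
    for name, clf in classifiers.items():
        acc = cross_val_score(clf, Xk, y, cv=10).mean()  # ten-fold CV
        print(f"top-{k:2d}  {name:3s}  accuracy = {acc:.3f}")
```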