LSTM, Long Short-Term Memory
In clinical diagnostics, early diagnosis and monitoring of heart disease depend on fast processing of time-series MRI data, and robust encryption techniques are necessary to guarantee patient confidentiality. While deep learning (DL) algorithms have improved medical imaging, privacy and performance remain hard to balance. In this study, a novel privacy-preserving approach for analyzing homomorphically encrypted (HE) time-series MRI data is introduced: the Multi-Faceted Long Short-Term Memory (MF-LSTM). The MF-LSTM architecture protects patients' privacy while accurately categorizing and forecasting cardiac disease, with accuracy of 97.5%, precision of 96.5%, recall of 98.3%, and F1-score of 97.4%. Segmentation methods improve interpretability by identifying important regions in encrypted MRI images, while Generalized Histogram Equalization (GHE) improves image quality. Extensive testing on a selected dataset of encrypted time-series MRI images demonstrates the method's stability and efficacy, outperforming previous approaches. The findings show that the suggested technique can decode medical images to expose visual representations as well as sequential motion while protecting privacy and providing accurate medical image evaluation.
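The MF-LSTM above builds on the standard LSTM recurrence. As a plaintext illustration only (the paper's homomorphic-encryption layer and multi-faceted extensions are omitted, and all sizes and weights here are toy values), one forward step of a vanilla LSTM cell can be sketched in NumPy:

```python
import numpy as np

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One forward step of a standard LSTM cell (toy sketch, not the
    paper's MF-LSTM). Gate order in the stacked weights: input, forget,
    candidate cell, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b              # (4H,) pre-activations
    i = 1 / (1 + np.exp(-z[0:H]))           # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))         # forget gate
    g = np.tanh(z[2*H:3*H])                 # candidate cell state
    o = 1 / (1 + np.exp(-z[3*H:4*H]))       # output gate
    c = f * c_prev + i * g                  # new cell state
    h = o * np.tanh(c)                      # new hidden state
    return h, c

rng = np.random.default_rng(0)
D, H = 3, 4                                  # toy input / hidden sizes
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):                           # unroll over a short toy sequence
    h, c = lstm_cell(rng.normal(size=D), h, c, W, U, b)
```

Because the hidden state is the product of a sigmoid gate and a tanh, each of its components always stays strictly inside (-1, 1).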
- Keywords
- Encryption, Heart Disease, MRI Images, Multi-faceted long short-term memory (MF-LSTM)
- MeSH
- algorithms MeSH
- deep learning MeSH
- confidentiality MeSH
- middle aged MeSH
- humans MeSH
- magnetic resonance imaging * methods MeSH
- heart diseases * diagnostic imaging MeSH
- neural networks, computer MeSH
- image processing, computer-assisted methods MeSH
- privacy * MeSH
- computer security MeSH
- Check Tag
- middle aged MeSH
- humans MeSH
- male MeSH
- female MeSH
- Publication Type
- journal articles MeSH
Currently, the Internet of Things (IoT) generates a huge amount of traffic data in communication and information technology. The diversification and integration of IoT applications and terminals make IoT vulnerable to intrusion attacks. Therefore, it is necessary to develop an efficient Intrusion Detection System (IDS) that guarantees the reliability, integrity, and security of IoT systems. Intrusion detection is a challenging task because of inappropriate features in the input data and a slow training process. To address these issues, an effective metaheuristic-based feature selection method and deep learning techniques are developed to enhance the IDS. Osprey Optimization Algorithm (OOA)-based feature selection is proposed to select the most informative features from the input, enabling effective differentiation between normal and attack network traffic. Moreover, the traditional sigmoid and tangent activation functions are replaced with the Exponential Linear Unit (ELU) activation function to form a modified Bi-directional Long Short-Term Memory (Bi-LSTM), which is used to classify the types of intrusion attacks. The ELU activation function mitigates vanishing gradients during back-propagation and leads to faster learning. The framework is evaluated on three datasets: N-BaIoT, the Canadian Institute for Cybersecurity Intrusion Detection Dataset 2017 (CICIDS-2017), and ToN-IoT. The empirical investigation shows that the proposed framework obtains impressive detection accuracies of 99.98 %, 99.97 %, and 99.88 % on the N-BaIoT, CICIDS-2017, and ToN-IoT datasets, respectively. Compared to peer frameworks, it achieves high detection accuracy with better interpretability and reduced processing time.
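The activation swap described above is easy to state concretely. A generic ELU sketch (not the paper's code): positive inputs pass through unchanged, so the gradient there is exactly 1, while negative inputs saturate smoothly at -alpha. That is why ELU alleviates vanishing gradients where sigmoid/tanh tails shrink them.

```python
import numpy as np

def elu(x, alpha=1.0):
    """Exponential Linear Unit: x for x > 0, alpha*(exp(x)-1) otherwise."""
    return np.where(x > 0, x, alpha * np.expm1(x))

def elu_grad(x, alpha=1.0):
    """Derivative of ELU: 1 for x > 0, alpha*exp(x) otherwise
    (never exactly zero, unlike a saturated sigmoid tail)."""
    return np.where(x > 0, 1.0, alpha * np.exp(x))
```

For comparison, the sigmoid gradient at input 10 is about 4.5e-5, while the ELU gradient there is exactly 1.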
The electroencephalogram (EEG) is a cornerstone of neurophysiological research and clinical neurology. Historically, the classification of EEG as showing normal physiological or abnormal pathological activity has been performed by expert visual review. The potential value of unbiased, automated EEG classification has long been recognized, and in recent years the application of machine learning methods has received significant attention. A variety of solutions using convolutional neural networks (CNN) for EEG classification have emerged with impressive results. However, interpretation of CNN results and their connection with underlying basic electrophysiology has been unclear. This paper proposes a CNN architecture that enables interpretation of the intracranial EEG (iEEG) transients driving classification of brain activity as normal, pathological, or artifactual. The goal is accomplished using a CNN with long short-term memory (LSTM). We show that the method allows visualization of the iEEG graphoelements with the highest contribution to the final classification result via a classification heatmap, and thus enables review of the raw iEEG data and interpretation of the model's decision in electrophysiological terms.
- MeSH
- artifacts MeSH
- datasets as topic MeSH
- deep learning * MeSH
- electroencephalography classification instrumentation methods MeSH
- humans MeSH
- ROC curve MeSH
- Check Tag
- humans MeSH
- Publication Type
- journal articles MeSH
- observational study MeSH
- Research Support, Non-U.S. Gov't MeSH
- Research Support, N.I.H., Extramural MeSH
- validation study MeSH
Deepfake (DF) refers to forged images or videos developed to spread misinformation and to enable privacy attacks and truth masking, using advanced technologies including deep learning and artificial intelligence with trained algorithms. This kind of multimedia manipulation, such as changing facial expressions or speech, can be used for a variety of purposes, from misinformation to exploitation. With the recent advancement of generative adversarial networks (GANs) in deep learning models, DF has become pervasive on social media. Numerous methods have been developed to detect forged videos and images, but they focus on particular domains and become obsolete when new attacks or threats appear. Hence, a novel method is needed to tackle new attacks. The method introduced in this article can detect various types of spoofed images and videos that are computationally generated using deep learning models, such as variants of long short-term memory and convolutional neural networks. The first phase of the proposed work extracts feature frames from the forged video/image using a sparse autoencoder with a graph long short-term memory (SAE-GLSTM) method at training time. The proposed DF detection model is tested using the FFHQ database, 100K-Faces, Celeb-DF (V2), and WildDeepfake. The evaluated results show the effectiveness of the proposed method.
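The sparse-autoencoder half of the SAE-GLSTM feature extractor can be sketched on its own. A minimal NumPy forward pass under common conventions (the graph-LSTM part is omitted, and the sparsity term is the usual KL divergence between a target activation rho and the mean hidden activation, weighted by beta; the paper's exact formulation may differ):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sparse_ae_forward(X, W1, b1, W2, b2, rho=0.05, beta=3.0):
    """One-hidden-layer sparse autoencoder forward pass (toy sketch).
    Loss = reconstruction MSE + beta * KL(rho || mean hidden activation)."""
    H = sigmoid(X @ W1 + b1)              # hidden codes, values in (0, 1)
    X_hat = sigmoid(H @ W2 + b2)          # reconstruction
    rho_hat = H.mean(axis=0)              # mean activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    loss = 0.5 * np.mean((X_hat - X) ** 2) + beta * kl
    return X_hat, loss

rng = np.random.default_rng(1)
X = rng.uniform(size=(8, 6))                  # 8 toy samples, 6 features
W1 = rng.normal(size=(6, 4)) * 0.1
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 6)) * 0.1
b2 = np.zeros(6)
X_hat, loss = sparse_ae_forward(X, W1, b1, W2, b2)
```

The KL penalty pushes hidden units toward being mostly inactive, which is what makes the learned frame features sparse.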
- Keywords
- Capsule convolution neural network, Deep learning, DeepFake, Generative adversarial networks, Graph LSTM, Long short term memory (LSTM)
- Publication Type
- journal articles MeSH
Bearing degradation is the primary cause of electrical machine failures, making reliable condition monitoring essential to prevent breakdowns. This paper presents a novel hybrid model for the detection of multiple faults in bearings, combining Long Short-Term Memory (LSTM) networks with random forest (RF) classifiers, further enhanced by the Grey Wolf Optimization (GWO) algorithm. The proposed approach is structured in three stages: first, time and frequency domain features are manually extracted from vibration signals; second, these features are processed by a dual-layer LSTM network, which is specifically designed to capture complex temporal relationships within the data; finally, the GWO algorithm is employed to optimize feature selection from the LSTM outputs, feeding the most relevant features into the RF classifier for fault classification. The model was rigorously evaluated using a dataset comprising six distinct bearing health conditions: healthy, outer race fault, ball fault, inner race fault, compounded fault, and generalized degradation. The hybrid LSTM-RF-GWO model achieved a remarkable classification accuracy of 98.97%, significantly outperforming standalone models such as LSTM (93.56%) and RF (98.44%). Furthermore, the inclusion of GWO led to an additional accuracy improvement of 0.39% compared to the hybrid LSTM-RF model without optimization. Other performance metrics, including precision, kappa coefficient, false negative rate (FNR), and false positive rate (FPR), were also improved, with precision reaching 99.28% and the kappa coefficient achieving 99.13%. The FNR and FPR were reduced to 0.0071 and 0.0015, respectively, underscoring the model's effectiveness in minimizing misclassifications. 
The experimental results demonstrate that the proposed hybrid LSTM-RF-GWO framework not only enhances fault detection accuracy but also provides a robust solution for distinguishing between closely related fault conditions, making it a valuable tool for predictive maintenance in industrial applications.
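The GWO stage above is the standard grey-wolf position update driven by the three best wolves (alpha, beta, delta). A continuous toy version minimizing a sphere function is sketched below; the paper's binary feature-selection variant and its coupling to the LSTM outputs are not reproduced:

```python
import numpy as np

def gwo_minimize(f, dim, n_wolves=8, iters=50, lb=-5.0, ub=5.0, seed=0):
    """Minimal continuous Grey Wolf Optimizer (toy sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.array([f(x) for x in X])
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2 - 2 * t / iters                  # coefficient decays 2 -> 0
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])
                X_new += leader - A * D        # pull toward each leader
            X[i] = np.clip(X_new / 3.0, lb, ub)  # average of the three pulls
    fitness = np.array([f(x) for x in X])
    return X[np.argmin(fitness)]

best = gwo_minimize(lambda x: np.sum(x ** 2), dim=3)
```

Early on, large |A| values let wolves overshoot the leaders (exploration); as `a` decays, the pack collapses onto the best solutions found (exploitation). For feature selection, the continuous positions would be thresholded into a 0/1 feature mask.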
- Keywords
- Bearing fault detection, Feature selection, Grey wolf optimization, Hybrid model, LSTM, Machine learning, Random forest, Vibration signals
- Publication Type
- journal articles MeSH
The first known case of Coronavirus disease 2019 (COVID-19) was identified in December 2019. It spread worldwide, leading to an ongoing pandemic that imposed restrictions and costs on many countries. Predicting the number of new cases and deaths during this period can be a useful step in anticipating the costs and facilities required in the future. The purpose of this study is to predict new case and death rates one, three, and seven days ahead over the next 100 days. The motivation for predicting every n days (instead of just every day) is to investigate whether computational cost can be reduced while still achieving reasonable performance, a scenario often encountered in real-time forecasting of time series. Six deep learning methods are examined on data adopted from the WHO website: LSTM, Convolutional LSTM, and GRU, together with a bidirectional extension of each, are used to forecast the rates of new cases and new deaths in Australia and Iran. This study is novel in that it carries out a comprehensive evaluation of the three deep learning methods and their bidirectional extensions for prediction on COVID-19 new-case and new-death time series. To the best of our knowledge, this is the first time that Bi-GRU and Bi-Conv-LSTM models have been used for prediction on COVID-19 new-case and new-death time series. The methods are evaluated using graphs and the Friedman statistical test. The results show that the bidirectional models have lower errors than the other models. Several error evaluation metrics are presented to compare all models, and the superiority of the bidirectional methods is confirmed. This research could be useful for organisations working against COVID-19 in determining their long-term plans.
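The n-day-ahead setup above reduces to how the series is framed as supervised pairs: each sample takes a window of past values and targets the value `horizon` steps beyond it. A minimal sketch of that framing (function name and toy series are illustrative, not from the paper):

```python
import numpy as np

def make_windows(series, window, horizon):
    """Frame a univariate series for horizon-step-ahead forecasting.
    Each X row holds `window` past values; y is the value `horizon`
    steps after the window ends (horizon=1, 3, or 7 in the abstract)."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

series = np.arange(20, dtype=float)          # toy series 0, 1, ..., 19
X, y = make_windows(series, window=5, horizon=3)
```

Larger horizons mean fewer training pairs but fewer forecasting passes per day, which is the computational trade-off the study examines.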
- Keywords
- ANFIS, Adaptive Network-based Fuzzy Inference System, ANN, Artificial Neural Network, AU, Australia, Bi-Conv-LSTM, Bidirectional Convolutional Long Short-Term Memory, Bi-GRU, Bidirectional Gated Recurrent Unit, Bi-LSTM, Bidirectional Long Short-Term Memory, Bidirectional, COVID-19 Prediction, COVID-19, Coronavirus Disease 2019, Conv-LSTM, Convolutional Long Short-Term Memory, Convolutional Long Short Term Memory (Conv-LSTM), DL, Deep Learning, DLSTM, Delayed Long Short-Term Memory, Deep learning, EMRO, Eastern Mediterranean Regional Office, ES, Exponential Smoothing, EV, Explained Variance, GRU, Gated Recurrent Unit, Gated Recurrent Unit (GRU), IR, Iran, LR, Linear Regression, LSTM, Long Short-Term Memory, Lasso, Least Absolute Shrinkage and Selection Operator, Long Short Term Memory (LSTM), MAE, Mean Absolute Error, MAPE, Mean Absolute Percentage Error, MERS, Middle East Respiratory Syndrome, ML, Machine Learning, MLP-ICA, Multi-Layer Perceptron-Imperialist Competitive Algorithm, MSE, Mean Square Error, MSLE, Mean Squared Log Error, Machine learning, New Cases of COVID-19, New Deaths of COVID-19, PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses, RMSE, Root Mean Square Error, RMSLE, Root Mean Squared Log Error, RNN, Recurrent Neural Network, ReLU, Rectified Linear Unit, SARS, Severe Acute Respiratory Syndrome, SARS-COV, SARS coronavirus, SARS-COV-2, Severe Acute Respiratory Syndrome Coronavirus 2, SVM, Support Vector Machine, VAE, Variational Auto Encoder, WHO, World Health Organization, WPRO, Western Pacific Regional Office
- Publication Type
- journal articles MeSH
The outbreak of COVID-19, a little more than two years ago, drastically affected all segments of society throughout the world. While microbiologists, virologists, and medical practitioners were trying to find a cure for the infection, governments were emphasising precautionary measures such as lockdowns to slow the spread of the virus. This pandemic is perhaps also the first of its kind in history to generate research articles across virtually every field: medicine, sociology, psychology, supply chain management, mathematical modeling, and more. A lot of work is still continuing in this area, which is also important for better preparedness should such a situation arise in the future. The objective of the present study is to build a research support tool that helps researchers swiftly identify the relevant literature on a specific field or topic regarding COVID-19 through a hierarchical classification system. The three main tasks in this study are data preparation, data annotation, and text classification using a bi-directional long short-term memory (bi-LSTM) network.
- Keywords
- Artificial Intelligence, COVID-19, bi-directional LSTM, classification, long short-term memory
- MeSH
- COVID-19 * MeSH
- disease outbreaks MeSH
- communicable disease control MeSH
- humans MeSH
- artificial intelligence MeSH
- government MeSH
- Check Tag
- humans MeSH
- Publication Type
- journal articles MeSH
- Research Support, Non-U.S. Gov't MeSH
Organic photovoltaic (OPV) cells are at the forefront of sustainable energy generation due to their lightness, flexibility, and low production costs. These characteristics make OPVs a promising solution for achieving sustainable development goals. However, predicting their lifetime remains a challenging task due to complex interactions between internal factors, such as material degradation, interface stability, and morphological changes, and external factors, such as environmental conditions, mechanical stress, and encapsulation quality. In this study, we propose a machine learning-based technique to predict the degradation of OPVs over time. Specifically, we employ multi-layer perceptron (MLP) and long short-term memory (LSTM) neural networks to predict the power conversion efficiency (PCE) of inverted organic solar cells (iOSCs) made from the blend PTB7-Th:PC70BM, with PFN as the electron transport layer (ETL), fabricated under an N2 environment. We evaluate the performance of the proposed technique using several statistical metrics, including mean squared error (MSE), root mean squared error (rMSE), relative squared error (RSE), relative absolute error (RAE), and the correlation coefficient (R). The results demonstrate the high accuracy of the proposed technique, evidenced by the minimal error between predicted and experimentally measured PCE values: 0.0325 for RSE, 0.0729 for RAE, 0.2223 for rMSE, and 0.0541 for MSE using the LSTM model. These findings highlight the potential of the proposed models to accurately predict the performance of OPVs, thus contributing to the advancement of sustainable energy technologies.
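The evaluation metrics quoted above are all computable in a few lines. A sketch under the common definitions (RSE and RAE normalised by the deviation of the true values from their mean; the paper may define them slightly differently):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, rMSE, RSE, RAE, and Pearson R for a regression fit."""
    err = y_true - y_pred
    dev = y_true - y_true.mean()                    # deviation from mean
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    rse = np.sum(err ** 2) / np.sum(dev ** 2)       # relative squared error
    rae = np.sum(np.abs(err)) / np.sum(np.abs(dev)) # relative absolute error
    r = np.corrcoef(y_true, y_pred)[0, 1]           # correlation coefficient
    return {"MSE": mse, "rMSE": rmse, "RSE": rse, "RAE": rae, "R": r}

# toy check: a perfect prediction drives every error metric to zero
y_true = np.array([1.0, 2.0, 3.0, 4.0])
m = regression_metrics(y_true, y_true.copy())
```

RSE and RAE compare the model against the trivial predict-the-mean baseline: values below 1 mean the model beats that baseline, which is why the reported 0.0325 RSE indicates a very tight fit.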
- Keywords
- Degradation, Inverted organic solar cells, Long short-term memory, Machine learning, Multi-layer perceptron, Power conversion efficiency, Prediction
- Publication Type
- journal articles MeSH
This study proposes an ensemble deep learning approach that integrates Bagging Ridge (BR) regression with Bi-directional Long Short-Term Memory (Bi-LSTM) neural networks as base regressors, yielding the Bi-LSTM BR approach. Bi-LSTM BR was used to predict the exchange rates of 21 currencies against the USD during the pre-COVID-19 and COVID-19 periods. To demonstrate the effectiveness of our proposed model, we compared its prediction performance with several traditional machine learning algorithms, such as regression tree, support vector regression, and random forest regression, and with deep learning-based algorithms such as LSTM and Bi-LSTM. Our proposed ensemble deep learning approach outperformed the compared models in forecasting exchange rates in terms of prediction error. However, the performance of the model varied significantly between the pre-COVID-19 and COVID-19 periods across currencies, indicating the essential role of prediction models in periods of highly volatile foreign currency markets. By providing improved prediction performance and identifying the most seriously affected currencies, this study is beneficial for foreign exchange traders and other stakeholders in that it offers opportunities for potential trading profitability and for reducing the impact of increased currency risk during the pandemic.
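The bagging-ridge stage of the approach above is the classic recipe: fit each ridge regressor on a bootstrap resample, then average the predictions. A minimal NumPy sketch in which plain closed-form ridge stands in for the paper's Bi-LSTM base regressors (all data and sizes are toy values):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: solve (X^T X + lam I) w = X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def bagging_ridge_predict(X_train, y_train, X_test,
                          n_estimators=25, lam=1.0, seed=0):
    """Bagging: average predictions of ridge models fit on bootstrap
    resamples (plain ridge replaces the paper's Bi-LSTM regressors)."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)       # bootstrap resample
        w = ridge_fit(X_train[idx], y_train[idx], lam)
        preds.append(X_test @ w)
    return np.mean(preds, axis=0)              # ensemble average

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                  # toy regression data
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true + 0.01 * rng.normal(size=200)
y_hat = bagging_ridge_predict(X, y, X)
```

Averaging over resamples reduces the variance of the base regressors, which is particularly useful when the base models are high-variance learners like neural networks on volatile exchange-rate series.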
- Keywords
- Bagging ridge, Bi-LSTM, COVID-19, Deep learning, Exchange rate forecasting, Machine learning
- Publication Type
- journal articles MeSH
This paper proposes a model called X-LSTM-EO, which integrates explainable artificial intelligence (XAI), long short-term memory (LSTM), and the equilibrium optimizer (EO) to reliably forecast solar power generation. The LSTM component forecasts power generation rates based on environmental conditions, while the EO component optimizes the LSTM model's hyper-parameters during training. The XAI-based Local Interpretable Model-agnostic Explanations (LIME) method is adapted to identify the critical factors that influence the accuracy of the power generation forecasts in smart solar systems. The effectiveness of the proposed X-LSTM-EO model is evaluated using five metrics: R-squared (R2), root mean square error (RMSE), coefficient of variation (COV), mean absolute error (MAE), and efficiency coefficient (EC). The proposed model achieves values of 0.99, 0.46, 0.35, 0.229, and 0.95 for R2, RMSE, COV, MAE, and EC, respectively. Relative to a conventional LSTM, the improvement rates are 148%, 21%, 27%, 20%, and 134% for R2, RMSE, COV, MAE, and EC, respectively. The performance of the LSTM is also compared with other machine learning algorithms such as decision tree (DT), linear regression (LR), and gradient boosting; the LSTM model outperformed DT and LR. Additionally, the PSO optimizer was employed instead of the EO optimizer to validate the outcomes, which further demonstrated the efficacy of the EO optimizer. The experimental results and simulations demonstrate that the proposed model can accurately estimate PV power generation in response to abrupt changes in power generation patterns. Moreover, the proposed model might assist in optimizing the operations of photovoltaic power units. The proposed model is implemented using TensorFlow and Keras within the Google Colab environment.
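The LIME step above works by perturbing the input around the point being explained, weighting the perturbations by proximity, and fitting a weighted linear surrogate whose coefficients rank feature influence. A generic sketch of that idea (not the paper's code or the `lime` library; the toy `predict_fn` stands in for the trained forecasting model):

```python
import numpy as np

def lime_weights(predict_fn, x0, n_samples=500, scale=0.3,
                 kernel_width=0.75, seed=0):
    """LIME-style local explanation sketch: returns per-feature
    coefficients of a proximity-weighted linear surrogate around x0."""
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    Z = x0 + scale * rng.normal(size=(n_samples, d))  # local perturbations
    y = np.array([predict_fn(z) for z in Z])
    dist = np.linalg.norm(Z - x0, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)      # proximity kernel
    A = np.hstack([np.ones((n_samples, 1)), Z])       # add intercept column
    W = np.diag(w)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted least squares
    return coef[1:]                                   # drop the intercept

# toy model where feature 1 matters three times more than feature 0
f = lambda z: 1.0 * z[0] + 3.0 * z[1]
x0 = np.array([0.5, 0.5])
phi = lime_weights(f, x0)
```

Because the toy model is exactly linear, the surrogate recovers its coefficients; for a nonlinear LSTM forecaster the coefficients instead describe the model's local behaviour around the explained input.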