hybrid-based algorithm
The sine-cosine algorithm (SCA) is a new population-based meta-heuristic algorithm. In addition to exploiting sine and cosine functions to perform local and global searches (hence the name sine-cosine), the SCA introduces several random and adaptive parameters to facilitate the search process. Although it shows promising results, the search process of the SCA is vulnerable to local minima/maxima due to the adoption of a fixed switch probability and the bounded magnitude of the sine and cosine functions (from -1 to 1). In this paper, we propose a new hybrid Q-learning sine-cosine-based strategy, called the Q-learning sine-cosine algorithm (QLSCA). Within the QLSCA, we eliminate the switch probability. Instead, we rely on the Q-learning algorithm (based on the penalty and reward mechanism) to dynamically identify the best operation during runtime. Additionally, we integrate two new operations (Lévy flight motion and crossover) into the QLSCA to facilitate jumping out of local minima/maxima and enhance solution diversity. To assess its performance, we adopt the QLSCA for the combinatorial test suite minimization problem. Experimental results reveal that the QLSCA is statistically superior with regard to test suite size reduction compared to recent state-of-the-art strategies, including the original SCA, the particle swarm test generator (PSTG), adaptive particle swarm optimization (APSO), and the cuckoo search strategy (CS) at the 95% confidence level. However, in the comparison with discrete particle swarm optimization (DPSO), there is no significant difference in performance at the 95% confidence level. On a positive note, the QLSCA statistically outperforms the DPSO in certain configurations at the 90% confidence level.
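As a rough illustration of the core mechanism, the sketch below replaces the SCA's fixed switch probability with a small Q-table that picks one of the four operations (sine update, cosine update, Lévy flight, crossover) and is updated by a simple reward/penalty signal. The sphere objective, the ±1 rewards, the single shared Q-state, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # illustrative test objective, not from the paper
    return float(np.sum(x**2))

def levy_step(dim, beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v)**(1 / beta)

dim, pop_size, iters = 10, 20, 200
pop = rng.uniform(-5, 5, (pop_size, dim))
fit = np.array([sphere(p) for p in pop])
best = pop[fit.argmin()].copy()

n_ops = 4                      # sine, cosine, Levy flight, crossover
Q = np.zeros(n_ops)            # one shared state, for simplicity
alpha_q, gamma_q, eps = 0.1, 0.9, 0.1

for t in range(iters):
    r1 = 2 - 2 * t / iters     # SCA's shrinking amplitude parameter
    for i in range(pop_size):
        # epsilon-greedy choice of operation replaces the switch probability
        op = rng.integers(n_ops) if rng.random() < eps else int(Q.argmax())
        r2, r3 = rng.uniform(0, 2 * np.pi), rng.uniform(0, 2)
        if op == 0:       # sine update (SCA)
            cand = pop[i] + r1 * np.sin(r2) * np.abs(r3 * best - pop[i])
        elif op == 1:     # cosine update (SCA)
            cand = pop[i] + r1 * np.cos(r2) * np.abs(r3 * best - pop[i])
        elif op == 2:     # Levy flight: long jumps out of local optima
            cand = pop[i] + 0.01 * levy_step(dim) * (pop[i] - best)
        else:             # crossover with a random partner, for diversity
            mate = pop[rng.integers(pop_size)]
            mask = rng.random(dim) < 0.5
            cand = np.where(mask, pop[i], mate)
        cand = np.clip(cand, -5, 5)
        f_new = sphere(cand)
        reward = 1.0 if f_new < fit[i] else -1.0   # reward/penalty mechanism
        Q[op] += alpha_q * (reward + gamma_q * Q.max() - Q[op])
        if f_new < fit[i]:
            pop[i], fit[i] = cand, f_new
            if f_new < sphere(best):
                best = cand.copy()

print("best fitness:", sphere(best))
```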
- MeSH
- algorithms * MeSH
- heuristics * MeSH
- computer simulation MeSH
- Publication Type
- journal articles MeSH
- grant-supported research MeSH
Telemedicine is an emerging development in the healthcare domain, in which Internet of Things (IoT) fiber-optic technology supports telemedicine applications and improves overall digital healthcare performance for society. Such applications include bowel disease monitoring based on fiber-optic laser endoscopy, fiber-optic lighting for gastrointestinal disease, remote doctor-patient communication, and remote surgery. However, many existing systems are not effective, and their approaches based on deep reinforcement learning have not obtained optimal results. This paper presents a fiber-optic IoT healthcare system based on deep reinforcement learning with combinatorial constraint scheduling for hybrid telemedicine applications. In the proposed system, we propose the adaptive security deep Q-learning network (ASDQN) algorithm to execute all telemedicine applications under their given quality-of-service (QoS) constraints (deadline, latency, security, and resources). For the problem solution, we exploit different fiber-optic endoscopy datasets with image, video, and numeric data for telemedicine applications. The objective is to minimize the overall latency of telemedicine applications (e.g., on local, communication, and edge nodes) and maximize the overall reward during offloading and scheduling on different nodes. The simulation results show that ASDQN outperforms the existing state-action-reward-state-action (SARSA) and deep Q-learning network (DQN) policies for all telemedicine applications, meeting their QoS constraints and objectives during execution and scheduling on different nodes.
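For intuition, here is a minimal tabular Q-learning sketch of the offloading decision the abstract describes: each task is assigned to a node so that its latency stays under its deadline. The node latencies, deadlines, and reward shaping are invented for the demo; the actual ASDQN uses a deep Q-network and also enforces security and resource constraints.

```python
import random

random.seed(0)

# hypothetical base latencies (ms) per node type and task deadlines
NODES = {"local": 8.0, "communication": 5.0, "edge": 2.0}
TASKS = [{"id": t, "deadline": random.choice([4.0, 6.0, 10.0])}
         for t in range(6)]

Q = {(t["id"], n): 0.0 for t in TASKS for n in NODES}
alpha, gamma, eps = 0.2, 0.9, 0.2

def latency(node):
    # stochastic load on the node
    return NODES[node] * random.uniform(0.8, 1.2)

for episode in range(500):
    for task in TASKS:
        state = task["id"]
        if random.random() < eps:                       # explore
            node = random.choice(list(NODES))
        else:                                           # exploit
            node = max(NODES, key=lambda n: Q[(state, n)])
        lat = latency(node)
        # reward: meet the deadline, and prefer lower latency
        reward = (1.0 - lat / task["deadline"]) if lat <= task["deadline"] else -1.0
        best_next = max(Q[(state, n)] for n in NODES)
        Q[(state, node)] += alpha * (reward + gamma * best_next - Q[(state, node)])

for task in TASKS:
    node = max(NODES, key=lambda n: Q[(task["id"], n)])
    print(f"task {task['id']} (deadline {task['deadline']} ms) -> {node}")
```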
- MeSH
- algorithms MeSH
- deep learning * MeSH
- Internet of Things * MeSH
- humans MeSH
- fiber optic technology MeSH
- telemedicine * MeSH
- Check Tag
- humans MeSH
- Publication Type
- journal articles MeSH
Currently, methods of combined classification are the focus of intense research. A properly designed ensemble of combined classifiers exploiting knowledge gathered in a pool of elementary classifiers can successfully outperform a single classifier. There are two essential issues to consider when creating combined classifiers: how to establish the most comprehensive pool, and how to design a fusion model that takes full advantage of the collected knowledge. In this work, we address these issues and propose AdaSS+, a training algorithm dedicated to compound classifier systems that effectively exploits the local specialization of the elementary classifiers. The training procedure consists of two phases. The first phase detects the classifiers' competencies and adjusts the respective fusion parameters. The second phase boosts classification accuracy by elevating the degree of local specialization. The quality of the proposed algorithm is evaluated on the basis of a wide range of computer experiments, which show that AdaSS+ can outperform the original method and several reference classifiers.
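A simplified sketch of the first phase (detecting local competence and deriving region-wise fusion weights) might look as follows with scikit-learn. The clustering-based partitioning, the classifier pool, and the accuracy-based weights are illustrative choices, not the authors' AdaSS+ code, and the second, specialization-boosting phase is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# pool of elementary classifiers
pool = [LogisticRegression(max_iter=500).fit(X_tr, y_tr),
        DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr),
        KNeighborsClassifier().fit(X_tr, y_tr)]

# Phase 1: partition the feature space and measure local competence.
regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_tr)
labels = regions.labels_
weights = np.zeros((4, len(pool)))
for r in range(4):
    mask = labels == r
    for j, clf in enumerate(pool):
        weights[r, j] = (clf.predict(X_tr[mask]) == y_tr[mask]).mean()
weights /= weights.sum(axis=1, keepdims=True)   # fusion weights per region

# Fusion: weight each classifier's class supports by its local competence.
r_te = regions.predict(X_te)
support = np.zeros((len(X_te), 2))
for j, clf in enumerate(pool):
    support += weights[r_te, j][:, None] * clf.predict_proba(X_te)
acc = (support.argmax(axis=1) == y_te).mean()
print(f"region-weighted fusion accuracy: {acc:.3f}")
```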
- MeSH
- algorithms MeSH
- humans MeSH
- computer simulation MeSH
- automated pattern recognition * MeSH
- theoretical models * MeSH
- artificial intelligence * MeSH
- Check Tag
- humans MeSH
- Publication Type
- journal articles MeSH
- grant-supported research MeSH
Deep learning has recently been utilized with great success in a large number of diverse application domains, such as visual and face recognition, natural language processing, speech recognition, and handwriting identification. Convolutional neural networks, which belong to the family of deep learning models, are a subtype of artificial neural networks inspired by the complex structure of the human brain and are often used for image classification tasks. One of the biggest challenges in all deep neural networks is overfitting, which happens when the model performs well on the training data but fails to make accurate predictions for new data fed into it. Several regularization methods have been introduced to prevent the overfitting problem. In the research presented in this manuscript, the overfitting challenge was tackled by selecting a proper value for the dropout regularization parameter using a swarm intelligence approach. Notwithstanding that swarm algorithms have already been successfully applied to this domain, according to the available literature, their potential is still not fully investigated. Finding the optimal value of dropout is a challenging and time-consuming task if performed manually. Therefore, this research proposes an automated framework based on a hybridized sine cosine algorithm for tackling this major deep learning issue. The first experiment was conducted over four benchmark datasets: MNIST, CIFAR10, Semeion, and USPS, while the second experiment was performed on a brain tumor magnetic resonance imaging classification task. The obtained experimental results are compared to those generated by several similar approaches. The overall experimental results indicate that the proposed method outperforms other state-of-the-art methods included in the comparative analysis in terms of classification error and accuracy.
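The sketch below shows a plain SCA search loop applied to a single dropout rate. The validation_error function is a hypothetical stand-in for training the network with a given dropout value and returning its validation error; the paper's hybridization details are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def validation_error(dropout):
    # Hypothetical surrogate: imagine this trains the CNN with the given
    # dropout rate and returns validation error (minimized near 0.35).
    return (dropout - 0.35) ** 2 + 0.02 * rng.random()

pop_size, iters, lo, hi = 10, 50, 0.0, 0.9
pop = rng.uniform(lo, hi, pop_size)
fit = np.array([validation_error(p) for p in pop])
best_i = int(fit.argmin())
best, best_fit = pop[best_i], fit[best_i]

for t in range(iters):
    r1 = 2 - 2 * t / iters                      # shrinking step amplitude
    for i in range(pop_size):
        r2 = rng.uniform(0, 2 * np.pi)
        r3, r4 = rng.uniform(0, 2), rng.random()
        # r4 is the SCA switch probability between sine and cosine moves
        move = r1 * (np.sin(r2) if r4 < 0.5 else np.cos(r2))
        cand = float(np.clip(pop[i] + move * abs(r3 * best - pop[i]), lo, hi))
        f = validation_error(cand)
        if f < fit[i]:
            pop[i], fit[i] = cand, f
            if f < best_fit:
                best, best_fit = cand, f

print(f"selected dropout rate: {best:.3f} (val. error {best_fit:.4f})")
```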
- MeSH
- algorithms MeSH
- humans MeSH
- magnetic resonance imaging MeSH
- brain neoplasms * MeSH
- neural networks * MeSH
- handwriting MeSH
- Check Tag
- humans MeSH
- Publication Type
- journal articles MeSH
- grant-supported research MeSH
OBJECTIVE: The aim of this study was to compare three different reconstruction algorithms for the volumetry of visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT) on ultra-low-dose computed tomography (CT) images. METHODS: Thirty-seven male patients underwent ultra-low-dose CT at the level of the fourth lumbar vertebra (22.5 mm in the z-axis). The acquisitions were reconstructed in 5-mm slices with 50% overlap using filtered back projection (FBP), hybrid iterative reconstruction (HIR), and iterative model-based reconstruction (IMR) techniques. The volumes of VAT and SAT were measured using an interactive seed-growing segmentation and by thresholding (-190 to -30 HU). RESULTS: The volume of SAT measured by the interactive method was smaller in FBP compared with both HIR (P = 0.0011) and IMR (P = 0.0034), and the volume of VAT was greater in IMR compared with HIR (P = 0.0253) or FBP (P = 0.0065). Using the thresholding method, IMR volumes of VAT were greater compared with HIR (P < 0.0001), and volumes of SAT were greater compared with both HIR and FBP (both P ≤ 0.0001). The VAT to SAT ratio was greater in IMR compared with HIR or FBP (both P < 0.0001). CONCLUSIONS: There are significant differences among FBP, HIR, and IMR in the volumetry of SAT and VAT, their ratios, and attenuation measured on ultra-low-dose images.
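For reference, the thresholding step amounts to counting voxels in the adipose attenuation window and scaling by voxel volume. The synthetic volume and voxel spacing in this sketch are illustrative assumptions, and the interactive seed-growing separation of VAT from SAT is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# fake CT slab in Hounsfield units; a real study would load DICOM data
hu = rng.normal(loc=-60, scale=120, size=(5, 512, 512))
voxel_mm3 = 0.78 * 0.78 * 2.5          # assumed in-plane spacing x slice spacing

# adipose-tissue attenuation window (-190 to -30 HU)
fat_mask = (hu >= -190) & (hu <= -30)
volume_ml = fat_mask.sum() * voxel_mm3 / 1000.0
print(f"adipose voxels: {fat_mask.sum()}, volume: {volume_ml:.1f} mL")
```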
Contents: Preface xv -- 1 Introduction 1 -- 2 Algorithms and Complexity 7 -- 2.1 What Is an Algorithm 7 -- 2.2 Biological Algorithms versus Computer Algorithms 14 -- 2.3 The Change Problem 17 -- 2.4 Correct versus Incorrect Algorithms 20 -- 2.5 Recursive Algorithms 24 -- 2.6 Iterative versus Recursive Algorithms 28 -- 2.7 Fast versus Slow Algorithms 33 -- 2.8 Big-O Notation 37 -- 2.9 Algorithm Design Techniques 40 -- 2.9.1 Exhaustive Search 41 -- 2.9.2 Branch-and-Bound Algorithms 42 -- 2.9.3 Greedy Algorithms ...
Computational molecular biology series
[1st ed.] xviii, 435 p. : ill.
- MeSH
- algorithms MeSH
- informatics MeSH
- Konspekt
- Medical sciences. Medicine
- NLK Fields
- medical informatics
Several local search algorithms for real-valued domains (axis-parallel line search, Nelder-Mead simplex search, Rosenbrock's algorithm, the quasi-Newton method, NEWUOA, and VXQR) are described and thoroughly compared in this article, embedded in a multi-start method. Their comparison aims (1) to help researchers from the evolutionary community choose the right opponent for their algorithm (an opponent that would constitute a hard-to-beat baseline), (2) to describe the individual features of these algorithms and show how they influence performance on different problems, and (3) to provide inspiration for hybridizing evolutionary algorithms with these local optimizers. The recently proposed Comparing Continuous Optimizers (COCO) methodology was adopted as the basis for the comparison. The results show that in low-dimensional spaces, the old method of Nelder and Mead is still the most successful among those compared, while in spaces of higher dimension it is better to choose an algorithm based on quadratic modeling, such as NEWUOA or a quasi-Newton method.
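A minimal multi-start harness of the kind used in such comparisons can be written in a few lines with SciPy. The Rosenbrock test function, the evaluation budget, and the choice of Nelder-Mead as the local method are illustrative assumptions (BFGS would similarly stand in for the quasi-Newton method).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def rosenbrock(x):
    # classic multimodal-in-high-dim test function, chosen for illustration
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2))

dim, budget = 5, 20_000
best, used = None, 0
while used < budget:
    x0 = rng.uniform(-5, 5, dim)               # random restart point
    res = minimize(rosenbrock, x0, method="Nelder-Mead",
                   options={"maxfev": budget - used,
                            "xatol": 1e-8, "fatol": 1e-8})
    used += res.nfev
    if best is None or res.fun < best.fun:
        best = res

print(f"best f = {best.fun:.3e} after {used} evaluations")
```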
- MeSH
- algorithms * MeSH
- benchmarking * MeSH
- Publication Type
- journal articles MeSH
- grant-supported research MeSH
- comparative study MeSH
Estimating causal interactions in complex dynamical systems is an important problem encountered in many fields of current science. While a theoretical solution for detecting causal interactions has been previously formulated in the framework of prediction improvement, it generally requires the computation of high-dimensional information functionals, a situation invoking the curse of dimensionality with increasing network size. Recently, several methods have been proposed to alleviate this problem, based on iterative procedures for the assessment of conditional (in)dependences. In the current work, we present a comparison of several such prominent approaches. This is done both by a theoretical comparison of the algorithms, using a formulation in a common framework, and by numerical simulations including realistic complex coupling patterns. The theoretical analysis highlights the key similarities and differences between the algorithms, hinting at their comparative strengths and weaknesses. The methods' assumptions and specific properties, such as false-positive control and order-dependence, are discussed. Numerical simulations suggest that while the accuracy of most of the algorithms is almost indistinguishable, there are substantial differences in their computational demands, ranging theoretically from polynomial to exponential complexity and leading to substantial differences in computation time in realistic scenarios, depending on the density and size of the networks. Based on the analysis of the algorithms and the numerical simulations, we propose a hybrid approach providing competitive accuracy with improved computational efficiency.
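To make the prediction-improvement framework concrete, the sketch below computes a linear (Granger-style) measure of X → Y, unconditioned and conditioned on Z, on a simulated chain X → Z → Y; conditioning removes the spurious direct link. The VAR order, coupling strengths, and log-variance-ratio statistic are assumptions for the demo, not the specific functionals compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged(series_list, lag):
    # stack lagged copies of each series into a regressor matrix
    n = len(series_list[0])
    cols = [s[lag - k - 1: n - k - 1] for s in series_list for k in range(lag)]
    return np.column_stack(cols)

def granger(y, x, z=(), lag=2):
    """Log-ratio of residual variances: > 0 suggests X -> Y given Z."""
    target = y[lag:]
    restricted = lagged([y, *z], lag)        # predict Y from its own (and Z's) past
    full = lagged([y, *z, x], lag)           # ... adding X's past
    def rss(A):
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ beta
        return float(r @ r)
    return np.log(rss(restricted) / rss(full))

# Simulate a chain X -> Z -> Y: X drives Y only through Z.
n = 5000
x = rng.normal(size=n)
z, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    z[t] = 0.8 * x[t - 1] + 0.3 * rng.normal()
    y[t] = 0.8 * z[t - 1] + 0.3 * rng.normal()

print(f"X -> Y (unconditioned): {granger(y, x):.3f}")         # spuriously large
print(f"X -> Y | Z (conditioned): {granger(y, x, (z,)):.3f}")  # near zero
```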
Four methods for global numerical black-box optimization with origins in the mathematical programming community are described and experimentally compared with the state-of-the-art evolutionary method BIPOP-CMA-ES. The methods chosen for the comparison exhibit various features that are potentially interesting for the evolutionary computation community: systematic sampling of the search space (DIRECT, MCS), possibly combined with a local search method (MCS), or a multi-start approach (NEWUOA, GLOBAL), possibly equipped with a careful selection of points from which to run a local optimizer (GLOBAL). The recently proposed Comparing Continuous Optimizers (COCO) methodology was adopted as the basis for the comparison. Based on the results, we draw suggestions about which algorithm should be used depending on the available budget of function evaluations, and we propose several possibilities for hybridizing evolutionary algorithms (EAs) with features of the other compared algorithms.
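As a small, hedged illustration of two of the contrasted strategies, the sketch below runs SciPy's DIRECT against a multi-start of a derivative-free local method on a multimodal test function. Since SciPy ships no NEWUOA, Powell's method stands in for it; the Rastrigin function and the budgets are arbitrary choices.

```python
import numpy as np
from scipy.optimize import Bounds, direct, minimize

def rastrigin(x):
    # standard multimodal benchmark, chosen for illustration
    x = np.asarray(x)
    return float(10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

bounds = Bounds(-5.12 * np.ones(4), 5.12 * np.ones(4))

# Strategy 1: systematic division of the search box (DIRECT).
res_direct = direct(rastrigin, bounds, maxfun=4000)

# Strategy 2: multi-start of a derivative-free local optimizer.
rng = np.random.default_rng(0)
best = None
for _ in range(10):
    x0 = rng.uniform(-5.12, 5.12, 4)
    res = minimize(rastrigin, x0, method="Powell", bounds=bounds)
    if best is None or res.fun < best.fun:
        best = res

print(f"DIRECT:      f = {res_direct.fun:.4f}")
print(f"multi-start: f = {best.fun:.4f}")
```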
MOTIVATION: Genome analysis has become one of the most important tools for understanding the complex process of carcinogenesis. With the increasing resolution of CGH arrays, the demand arises for computationally efficient algorithms that can detect aberrations even in very noisy data.