The clustering technique involves forming clusters and selecting a cluster head (CH) to which the sensor nodes, known as cluster members (CMs), connect. The CH receives data from its CMs, removes redundant data to conserve energy, compresses the result, and transmits it to the base station over multi-hop links to reduce network load. Because CMs communicate only with their CH over a limited range, redundant transmissions are avoided. However, the CH's routing, compression, and aggregation functions consume power quickly compared with other protocols such as TPGF, LQEAR, MPRM, and P-LQCLR. To balance energy consumption in wireless sensor networks (WSNs), heterogeneous high-power nodes (HPNs) are used. CHs close to the base station require effective algorithms for improvement. The cluster-based glow-worm optimization technique utilizes random clustering, distributed cluster-head selection, and link-based routing: each cluster head routes data to the next group leader, balancing energy utilization across the WSN. The algorithm reduces energy consumption through multi-hop communication, cluster construction, and cluster-head election, and the glow-worm optimization allows faster convergence and improved multi-parameter selection. Combining these methods, a new routing scheme is proposed to extend the network lifetime and balance energy in various environments. However, the proposed model consumes more energy than TPGF and other protocols for packets with a retransmission count of 0 or 1 in a 260-node network, mainly due to the short INFO packets exchanged during the neighbor-discovery period and the increased hop count of the derived pathways. Simulations are conducted to evaluate the technique's throughput and energy efficiency.
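The distributed glow-worm-style cluster-head election described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's algorithm: the sensor field, the luciferin constants, and the fitness weights (residual energy versus distance to the base station) are all invented for the example.

```python
import math
import random

random.seed(1)

# Hypothetical sensor field: each node has a position and residual energy.
nodes = [{"pos": (random.uniform(0, 100), random.uniform(0, 100)),
          "energy": random.uniform(0.2, 1.0)} for _ in range(30)]
base_station = (50.0, 120.0)

def fitness(node):
    # Assumed multi-parameter fitness: favour high residual energy and
    # proximity to the base station (both weights are illustrative).
    d = math.dist(node["pos"], base_station)
    return 0.7 * node["energy"] + 0.3 * (1.0 - d / 200.0)

# Glow-worm style luciferin update: each node's "glow" tracks its fitness.
RHO, GAMMA = 0.4, 0.6          # luciferin decay / enhancement constants
luciferin = [1.0] * len(nodes)
for _ in range(20):            # a few update rounds
    luciferin = [(1 - RHO) * l + GAMMA * fitness(n)
                 for l, n in zip(luciferin, nodes)]

# Distributed CH election: within each node's radio range, the brightest
# neighbour (highest luciferin) is chosen as that node's cluster head.
RADIUS = 30.0
cluster_head = []
for i, n in enumerate(nodes):
    neigh = [j for j, m in enumerate(nodes)
             if math.dist(n["pos"], m["pos"]) <= RADIUS]
    cluster_head.append(max(neigh, key=lambda j: luciferin[j]))

print(sorted(set(cluster_head)))   # indices of elected cluster heads
```

Because each node includes itself in its neighbourhood, an elected head always glows at least as brightly as the node that elected it, which is the local-decision property the distributed scheme relies on.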
- Keywords
- cluster head, glow-worm, multi-parameters, optimization, heterogeneous, retransmission ratio
- Publication type
- Journal Article MeSH
With the rapid growth of sensor networks and the enormous, fast-growing volumes of data collected from these sensors, the question arises of how these data will be used, not merely collected and analyzed. Traditionally, sensor data are used for controlling and influencing states and processes, and standard controllers are available and successfully implemented. In the data-driven era we face today, however, there is an opportunity to use controllers that can incorporate information elusive to common controllers. Our goal is to propose the design of an intelligent controller: a conventional controller whose parameters are designed by a non-conventional method using artificial-intelligence approaches combining fuzzy and genetic methods. Intelligent adaptation of the control-system parameters is performed using sensor data measured in the controlled process. All designed parts are based on non-conventional methods and are verified by simulations. The identification of the system's parameters is based on optimizing the parameters of its difference equation using genetic algorithms. Continuous monitoring of the control-process quality and the design of the controller parameters are conducted using a fuzzy expert system of the Mamdani or Takagi-Sugeno type. The concept of the intelligent control system is open and easily expandable.
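The genetic-algorithm identification step described above (optimizing the parameters of a system's difference equation against measured data) can be sketched as follows. The first-order plant, the population size, and the GA operators are illustrative assumptions, not the paper's actual setup.

```python
import random

random.seed(0)

# Hypothetical first-order plant: y[k] = a*y[k-1] + b*u[k-1].
TRUE_A, TRUE_B = 0.8, 0.5
u = [1.0] * 50                       # step input
y = [0.0]
for k in range(1, 50):
    y.append(TRUE_A * y[k - 1] + TRUE_B * u[k - 1])

def sse(params):
    # One-step-ahead squared prediction error of the difference equation.
    a, b = params
    return sum((y[k] - (a * y[k - 1] + b * u[k - 1])) ** 2
               for k in range(1, 50))

# Minimal real-coded GA: elitism, blend crossover, Gaussian mutation.
pop = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(40)]
for _ in range(60):
    pop.sort(key=sse)
    elite = pop[:4]                  # elitism keeps the best candidates
    children = []
    while len(children) < len(pop) - len(elite):
        p1, p2 = random.sample(pop[:20], 2)   # select among the fitter half
        w = random.random()
        child = [w * c1 + (1 - w) * c2 for c1, c2 in zip(p1, p2)]
        if random.random() < 0.2:             # occasional mutation
            child[random.randrange(2)] += random.gauss(0, 0.05)
        children.append(child)
    pop = elite + children

a_est, b_est = min(pop, key=sse)
print(round(a_est, 2), round(b_est, 2))
```

The estimated pair should land close to the true (0.8, 0.5), since the one-step prediction error is a smooth bowl in the two parameters; the GA is simply a derivative-free way to descend it, which is the role the abstract assigns to genetic algorithms.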
- Keywords
- PID controller, artificial intelligence, expert systems, fuzzy methods, genetic algorithms, intelligent controller, optimization, soft computing
- Publication type
- Journal Article MeSH
With the advancement of science and technology, new complex optimization problems have emerged, and obtaining optimal solutions has become increasingly important. Many of these problems exhibit features such as non-convexity, nonlinearity, discrete search spaces, and non-differentiable objective functions, which make the optimal solution a major challenge to achieve. To address this challenge and deal with the complexities of optimization applications, a new stochastic optimization algorithm is proposed in this study. Stochastic optimization algorithms address optimization problems by randomly scanning the search space to produce quasi-optimal solutions. The Selecting Some Variables to Update-Based Algorithm (SSVUBA) is a new optimization algorithm developed in this study to handle optimization problems in various fields. The key principles of the suggested algorithm are to make better use of the information provided by different members of the population and to adjust the number of variables used to update the population over the iterations of the algorithm. The theory of the proposed SSVUBA is described, and its mathematical model is then offered for solving optimization problems. Fifty-three objective functions, including unimodal, multimodal, and CEC 2017 test functions, are utilized to assess the ability and usefulness of the proposed SSVUBA in addressing optimization problems, and its performance on real-world applications is evaluated on four engineering design problems. Furthermore, the performance of SSVUBA was compared with that of eight well-known algorithms to further evaluate its quality.
The simulation results reveal that the proposed SSVUBA has a significant ability to handle various optimization issues and that it outperforms other competitor algorithms by giving appropriate quasi-optimal solutions that are closer to the global optima.
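The abstract's key principle, updating only a selected subset of variables per iteration using information from better population members, might be sketched as below. The update rule, the shrinking-subset schedule, and the test function are assumptions made for illustration; the actual SSVUBA model is given in the paper itself.

```python
import random

random.seed(3)

DIM, POP, ITERS = 10, 25, 200

def sphere(x):
    # Unimodal benchmark with its optimum at the origin.
    return sum(v * v for v in x)

pop = [[random.uniform(-10, 10) for _ in range(DIM)] for _ in range(POP)]
init_best = min(sphere(m) for m in pop)

for t in range(ITERS):
    pop.sort(key=sphere)
    # Assumed schedule: fewer variables are updated as iterations progress,
    # mirroring the idea of adjusting the number of selected variables.
    n_vars = max(1, round(DIM * (1 - t / ITERS)))
    for i in range(1, POP):
        cand = pop[i][:]
        for d in random.sample(range(DIM), n_vars):  # selected variables only
            guide = random.choice(pop[:i])           # a better member's info
            cand[d] += random.random() * (guide[d] - cand[d])
        if sphere(cand) < sphere(pop[i]):            # greedy acceptance
            pop[i] = cand

final_best = min(sphere(m) for m in pop)
print(round(final_best, 4), "from initial", round(init_best, 4))
```

The greedy acceptance makes every member's objective value monotonically non-increasing, so the population-best can only improve over the run; the per-iteration subset is what distinguishes this style of update from schemes that perturb every variable at once.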
- Keywords
- optimization, optimization problem, population updating, population-based algorithm, selected variables, stochastic methods
- Publication type
- Journal Article MeSH
Semiempirical quantum mechanical methods with corrections for noncovalent interactions, namely dispersion and hydrogen bonds, reach an accuracy comparable to much more expensive methods while being applicable to very large systems (up to 10 000 atoms). These corrections have been successfully applied in computer-assisted drug design, where they significantly improve the correlation with the experimental data. Despite these successes, there are still several unresolved issues that limit the applicability of these methods. We introduce a new generation of both hydrogen-bonding and dispersion corrections that address these problems, make the method more robust, and improve its accuracy. The hydrogen-bonding correction has been completely redesigned and for the first time can be used for geometry optimization and molecular-dynamics simulations without any limitations, as it and its derivatives have a smooth potential energy surface. The form of this correction is simpler than its predecessors, while the accuracy has been improved. For the dispersion correction, we adopt the latest developments in DFT-D, using the D3 formalism by Grimme. The new corrections have been parametrized on a large set of benchmark data including nonequilibrium geometries, the S66x8 data set. As a result, the newly developed D3H4 correction can accurately describe a wider range of interactions. We have parametrized this correction for the PM6, RM1, OM3, PM3, AM1, and SCC-DFTB methods.
- Publication type
- Journal Article MeSH
Accurate and efficient medical image segmentation is a critical yet challenging task due to issues such as intensity inhomogeneity, poor contrast, noise, and blur. In this paper, we introduce a novel framework that addresses these challenges by leveraging adaptive level set evolution enhanced with a unique edge indication function. Unlike prior edge-based algorithms, which frequently fail on noisy images and incur high computational cost, our method incorporates an improved edge indicator term into the level set architecture, considerably improving performance on degraded images. The efficiency of the proposed model depends on the optimization and implementation of the proximal alternating direction method of multipliers ([Formula: see text]). Our findings were validated qualitatively and quantitatively using the Dice coefficient, sensitivity, accuracy, and mean absolute distance (MAD). Experimental results show that the model successfully detects object boundaries in noisy and blurred visual data. The algorithm achieved exceptional precision, with an average Dice coefficient of 0.96 against the ground-truth data, and ran in only 0.90 seconds on average. The framework also achieved standout performance metrics of 0.9552 accuracy, 0.8854 sensitivity, and 0.0796 MAD. These robust capabilities in medical image evaluation make the framework a promising tool for advancing the field.
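For context, the classical edge indicator that edge-based level-set models build on is g = 1/(1 + |∇(G_σ ∗ I)|²), which approaches 0 on object boundaries and 1 in flat regions; the paper's improved indicator is not reproduced here. A minimal sketch on a synthetic image (the image, σ, and kernel size are assumptions for the example):

```python
import numpy as np

# Synthetic test image: a bright square on a dark background plus mild noise.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
img += 0.05 * np.random.default_rng(0).normal(size=img.shape)

def edge_indicator(I, sigma=1.0):
    """Classical edge indicator g = 1 / (1 + |grad(G_sigma * I)|^2).

    Gaussian smoothing is applied separably with a small 1-D kernel; the
    resulting g is small on edges and near 1 in flat regions, which is what
    slows level-set evolution at object boundaries.
    """
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    smooth = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 0, I)
    smooth = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, smooth)
    gy, gx = np.gradient(smooth)
    return 1.0 / (1.0 + gx**2 + gy**2)

g = edge_indicator(img)
print(g[32, 20] < g[32, 32])   # smaller g on the edge than inside the object
```

In a full level-set scheme this g multiplies the evolution speed, so the moving contour stalls where g drops, i.e. at the detected boundary.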
- Keywords
- [Formula: see text] optimization, Edge indication function, Intensity inhomogeneity, Level-set evolution, Medical image segmentation
- Publication type
- Journal Article MeSH
PURPOSE: There is an annual incidence of 50,000 glioma cases in Europe. The optimal treatment strategy is highly personalised, depending on tumour type, grade, spatial localization, and the degree of tissue infiltration. In research settings, advanced magnetic resonance imaging (MRI) has shown great promise as a tool to inform personalised treatment decisions. However, the use of advanced MRI in clinical practice remains scarce due to the downstream effects of siloed glioma imaging research, the limited representation of MRI specialists in established consortia, and the associated lack of available tools and expertise in clinical settings. These shortcomings delay the translation of scientific breakthroughs into novel treatment strategies. In response, we have developed the network "Glioma MR Imaging 2.0" (GliMR), which we present in this article. METHODS: GliMR aims to build a pan-European, multidisciplinary network of experts and to accelerate the use of advanced MRI in glioma beyond the current state of the art in glioma imaging. The Action Glioma MR Imaging 2.0 (GliMR) was granted funding by the European Cooperation in Science and Technology (COST) in June 2019. RESULTS: GliMR's first grant period ran from September 2019 to April 2020, during which several meetings were held and projects were initiated, such as reviewing the current knowledge on advanced MRI, developing a General Data Protection Regulation (GDPR)-compliant consent form, and setting up the website. CONCLUSION: The Action overcomes the pre-existing limitations of glioma research and is funded until September 2023. New members will be accepted throughout its entire duration.
- Keywords
- Advanced MRI, COST action, Glioma, Multi-disciplinary, Networking, Translational research
- Publication type
- Journal Article MeSH
- Review MeSH
Switched reluctance motors (SRMs) are favored in industrial applications for their durability, efficiency, and cost-effectiveness, yet face challenges such as torque ripple and nonlinear magnetic behavior that limit their precision in control tasks. To address these issues, this work introduces a novel hybrid adaptive ant lion optimization (HAALO) algorithm, combined with PI and FOPID controllers, to improve SRM performance. The HAALO algorithm enhances traditional ant lion optimization by integrating adaptive mutation and elite preservation techniques for dynamic real-time control, optimizing both torque ripple and speed regulation. Simulation results demonstrate the superiority of the HAALO-optimized controllers over conventional methods, showing faster convergence and enhanced control accuracy. This study provides a new hybrid optimization method that significantly advances SRM control, offering efficient solutions for high-performance applications.
- Keywords
- FOPID controller, HAALO algorithm, Hybrid adaptive optimization, PI controller, Switched reluctance motors, Torque ripple minimization
- Publication type
- Journal Article MeSH
BACKGROUND: Making decisions about health care issues in advanced illness is difficult, and the participation of patients and relatives is essential. Most studies of shared decision-making focus on the interaction between patient and physician (dyadic interaction), while the role of relatives in triadic decision-making remains less explored. The aim of this study was to investigate the perceived importance of the roles of the patient, the physician, and the relative in decision-making from their respective perspectives. METHODS: Patients (n=154) with advanced disease, their relatives (n=95), and physicians (n=108) were asked to rank the importance of their roles on a scale from 0 to 10. Differences between respondent groups were examined by ANOVA. A typology of answers was constructed for dyadic and triadic relations and analyzed by descriptive statistics and the chi-square test. RESULTS: Physicians rated the importance of the patients' role in decision-making significantly higher [mean 9.31; 95% confidence interval (CI): 9.07-9.55] than did patients themselves (mean 7.85; 95% CI: 7.37-8.32), while patients and relatives rated the importance of the physicians' role higher (mean 9.29; 95% CI: 8.98-9.59 and mean 9.20; 95% CI: 8.96-9.45, respectively) than did physicians themselves (mean 8.35; 95% CI: 0.06-8.65). In the analysis of the patient-physician dyadic interaction, patients ranked their role as equally important (44.1%) or more important (11.2%) than that of physicians, whereas 56.5% of physicians thought patients should play a more important role. When relatives were included in the analysis, patients either preferred an equal role for all three actors (30.2%) or prioritized the roles of the physician and the relatives (16.8%), while physicians and relatives prioritized the role of the patient (54.6% and 29.0%, respectively). All results were statistically significant (P<0.05).
CONCLUSIONS: Physicians and relatives tend to accentuate the active role of patients, while patients mostly prefer shared decision-making. Physicians seem to underestimate the importance of the role of relatives compared with patients and relatives, for whom the participation of relatives in decision-making is of greater importance. A triadic model that acknowledges the importance of all three actors should be implemented in the decision-making process in advanced illness.
- Keywords
- Decision making, advanced disease, autonomy, end of life, palliative care, participation
- MeSH
- Chronic Disease MeSH
- Physicians * MeSH
- Humans MeSH
- Decision Making MeSH
- Physician-Patient Relations MeSH
- Patient Participation * MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
Background and Objectives: Iron deficiency (ID) is a common comorbidity in patients with heart failure. It is associated with reduced physical performance, frequent hospitalisations for heart failure decompensation, and high cardiovascular and overall mortality. The aim was to determine the prevalence of ID in patients with advanced heart failure on the waiting list for heart transplantation. Methods and Materials: We included 52 patients placed on the waiting list for heart transplantation in 2021 at our centre. The cohort included seven patients with an LVAD (left ventricular assist device) implanted as a bridge to transplantation before the time of data collection. In addition to standard tests, the parameters of iron metabolism were monitored. ID was defined as a ferritin value <100 µg/L, or 100−299 µg/L if transferrin saturation (T-sat) was <20%. Results: ID was present in 79% of all subjects, but anaemia was expressed in only 35% of these patients. In the group without an LVAD, ID was present in 82%; the median (lower−upper quartile) ferritin level was 95.4 (62.2−152.1) µg/L and the mean T-sat was 0.18 ± 0.09. In the LVAD group, ID was present in 57%; the median ferritin level was 268 (106−368) µg/L and the mean T-sat was 0.14 ± 0.04. Haemoglobin concentration was the same in patients with and without ID (133 ± 16 vs. 133 ± 23). ID was not associated with anaemia defined with regard to the patient's gender. In 40.5% of cases, iron deficiency was accompanied by chronic renal insufficiency, compared to 12.5% of the patients without ID. Among the patients with an LVAD, ID was present in four out of seven, but this group was too small for reliable statistical testing. Conclusions: ID was present in the majority of patients with advanced heart failure and was not always accompanied by anaemia or renal insufficiency. Research on optimal markers for the diagnosis of iron deficiency, especially for specific groups of patients with heart failure, is still ongoing.
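The study's ID definition is a simple decision rule and can be encoded directly (T-sat is expressed as a fraction, matching the reported means; the function name and example values are illustrative, not patient data):

```python
def iron_deficiency(ferritin_ug_l, tsat):
    """ID rule as defined in the study: ferritin < 100 ug/L, or
    ferritin 100-299 ug/L with transferrin saturation < 20 %."""
    if ferritin_ug_l < 100:
        return True
    if 100 <= ferritin_ug_l <= 299 and tsat < 0.20:
        return True
    return False

# Group medians reported above, used here only as example inputs.
print(iron_deficiency(95.4, 0.18))   # ferritin below 100 ug/L -> True
print(iron_deficiency(268, 0.14))    # 100-299 ug/L with T-sat < 20 % -> True
print(iron_deficiency(350, 0.25))    # neither branch applies -> False
```

Note that both reported group medians satisfy the rule, consistent with the high ID prevalence the study reports.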
- Keywords
- advanced heart failure, anaemia, ferritin, iron deficiency, transferrin saturation
- MeSH
- Anemia, Iron-Deficiency * complications epidemiology MeSH
- Anemia * complications MeSH
- Iron Deficiencies * MeSH
- Ferritins MeSH
- Humans MeSH
- Heart Failure * complications epidemiology diagnosis MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Names of Substances
- Ferritins MeSH
Breast milk analysis provides useful information about acute newborn exposure to harmful substances, such as psychoactive drugs abused by a nursing mother. Since breast milk is a complex matrix with large amounts of interfering compounds, comprehensive sample pre-treatment is necessary. This work focuses on the determination of amphetamines and synthetic cathinones in human breast milk by microextraction techniques (liquid-phase microextraction and electromembrane extraction) and their comparison with more conventional treatment methods (protein precipitation, liquid-liquid extraction, and salting-out assisted liquid-liquid extraction). The aim of this work was to optimize and validate all the extraction procedures and to thoroughly assess their advantages and disadvantages with special regard to routine clinical use. The applicability of the extractions was further verified by the analysis of six real samples collected from breastfeeding mothers suspected of amphetamine abuse. The membrane microextraction techniques proved the most advantageous: they required low amounts of organic solvents yet still provided efficient sample clean-up, an excellent quantification limit (0.5 ng mL-1), and good recovery (81-91% and 40-89% for electromembrane extraction and liquid-phase microextraction, respectively). Traditional liquid-liquid extraction and salting-out assisted liquid-liquid extraction showed comparable recoveries (41-85% and 63-88%, respectively) but higher quantification limits (2.5 ng mL-1 and 5 ng mL-1, respectively); moreover, they required multiple operating steps and were time-consuming. Protein precipitation was fast and simple, but it demonstrated poor sample clean-up, low recovery (56-58%), and a high quantification limit (5 ng mL-1). Based on the overall results, the microextraction methods can be considered promising candidates even for routine laboratory use.
- Keywords
- Amphetamines, Breast milk, Electromembrane extraction, Liquid-phase microextraction, Sample treatment, Synthetic cathinones
- MeSH
- Amphetamines MeSH
- Liquid-Liquid Extraction MeSH
- Humans MeSH
- Limit of Detection MeSH
- Milk, Human * MeSH
- Liquid Phase Microextraction * MeSH
- Infant, Newborn MeSH
- Solvents MeSH
- Check Tag
- Humans MeSH
- Infant, Newborn MeSH
- Female MeSH
- Publication type
- Journal Article MeSH
- Names of Substances
- Amphetamines MeSH
- Solvents MeSH