Effective air quality monitoring network (AQMN) design plays a prominent role in environmental engineering. An optimal AQMN design should account for the stations' mutual information and for system uncertainties. This study develops a novel optimization model using the non-dominated sorting genetic algorithm II (NSGA-II). The Bayesian maximum entropy (BME) method generates potential stations as input to a framework based on the transinformation entropy (TE) method, which maximizes coverage while minimizing the probability of selecting redundant stations. In addition, the fuzzy degree of membership and nonlinear interval number programming (NINP) approaches are used to assess the uncertainty of the joint information. To obtain the best Pareto-optimal solution for the AQMN configuration, a robust ranking technique, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), is used to select the most appropriate AQMN properties. The methodology is applied to Los Angeles, Long Beach, and Anaheim in California, USA. Results suggest using 4, 4, and 5 stations to monitor CO, NO2, and ozone, respectively; however, implementing this recommendation reduces coverage by factors of 3.75, 3.75, and 3 for CO, NO2, and ozone, respectively. On the positive side, it substantially decreases TE for CO, NO2, and ozone concentrations, by factors of 8.25, 5.86, and 4.75, respectively.
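To make the redundancy criterion concrete, the sketch below estimates the transinformation (mutual information) between the concentration series of two candidate stations with a plain histogram estimator; the synthetic series, bin count, and printed pairs are illustrative assumptions, not the study's BME-generated data or its NSGA-II search.

```python
import numpy as np

def transinformation(x, y, bins=16):
    """Histogram estimate of the transinformation T(X;Y), in nats, between
    the concentration series of two candidate monitoring stations."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal of X
    py = pxy.sum(axis=0, keepdims=True)     # marginal of Y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# Hypothetical hourly CO series for three candidate stations: s1 and s2 share
# a common signal (redundant pair), s3 is independent.
rng = np.random.default_rng(0)
base = rng.normal(size=1000)
s1 = base + 0.2 * rng.normal(size=1000)
s2 = base + 0.8 * rng.normal(size=1000)
s3 = rng.normal(size=1000)

# A network design would penalize pairs with high shared information.
print("T(s1;s2) =", round(transinformation(s1, s2), 3))   # high: redundant
print("T(s1;s3) =", round(transinformation(s1, s3), 3))   # near zero
```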
- Keywords
- Air quality, Bayesian maximum entropy (BME), Fuzzy set theory, Multi-criteria decision-making (MCDM), Nonlinear interval number programming (NINP), Transinformation entropy (TE)
- MeSH
- Bayes Theorem MeSH
- Entropy MeSH
- Environmental Monitoring methods MeSH
- Nitrogen Dioxide analysis MeSH
- Ozone * analysis MeSH
- Models, Theoretical MeSH
- Air Pollution * analysis MeSH
- Publication type
- Journal Article MeSH
- Names of Substances
- Nitrogen Dioxide MeSH
- Ozone * MeSH
Ship acoustic signal classification is essential for vessel identification, underwater navigation, and maritime security. Traditional methods struggle with the non-stationary nature and noise of ship acoustic signals, reducing classification accuracy. To address these challenges, we propose an automated pipeline that integrates Empirical Mode Decomposition (EMD), adaptive wavelet filtering, feature selection, and a Bayesian-optimized Random Forest classifier. The framework begins with EMD-based decomposition, where the most informative Intrinsic Mode Functions (IMFs) are selected using Signal-to-Noise Ratio (SNR) analysis. Wavelet filtering is applied to reduce noise, with optimal wavelet parameters determined via SNR and Stein's Unbiased Risk Estimate (SURE) criteria. Features extracted from statistical, frequency domain (FFT), and time-frequency (wavelet) metrics are ranked, and the top 11 most important features are selected for classification. A Bayesian-optimized Random Forest classifier is trained using the extracted features, ensuring optimal hyperparameter selection and reducing computational complexity. The classification results are further enhanced using a majority voting strategy, improving the accuracy of the final object identification. The proposed approach demonstrates high accuracy, improved noise suppression, and robust classification performance. The methodology is scalable, computationally efficient, and suitable for real-time maritime applications.
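To make the pipeline stages concrete, the following condensed sketch strings together SNR-based IMF selection, wavelet denoising, and a Random Forest, assuming the PyEMD (pip package EMD-signal), PyWavelets, and scikit-learn libraries; the IMF count, wavelet, threshold rule, and feature set are illustrative choices, not the authors' exact settings, and the Bayesian hyperparameter search and majority voting are omitted for brevity.

```python
import numpy as np
import pywt
from PyEMD import EMD
from sklearn.ensemble import RandomForestClassifier

def snr_db(reference, residual):
    """SNR of a reconstruction, in dB, relative to its residual."""
    return 10 * np.log10(np.sum(reference**2) / np.sum(residual**2))

def wavelet_denoise(sig, wavelet="db4", level=4):
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale estimate
    t = sigma * np.sqrt(2 * np.log(len(sig)))               # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, t, "soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(sig)]

def extract_features(sig):
    spec = np.abs(np.fft.rfft(sig))
    return [sig.mean(), sig.std(),
            float(((sig[:-1] * sig[1:]) < 0).mean()),       # zero-crossing rate
            int(spec.argmax()), spec.max(), spec.mean()]

def process(raw):
    imfs = EMD()(raw)                                       # EMD decomposition
    # keep the three IMFs that capture most of the raw signal (highest SNR)
    keep = sorted(range(len(imfs)), key=lambda i: snr_db(raw, raw - imfs[i]))[-3:]
    clean = wavelet_denoise(imfs[keep].sum(axis=0))
    return extract_features(clean)

# With recordings X (list of 1-D arrays) and vessel labels y:
# clf = RandomForestClassifier(n_estimators=300).fit([process(s) for s in X], y)
```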
A new approach to 2-D blind deconvolution of ultrasonic images in a Bayesian framework is presented. The radio-frequency (RF) image data are modeled as a convolution of the point-spread function and the tissue function, with additive white noise. The deconvolution algorithm is derived from statistical assumptions about the tissue function, the point-spread function, and the noise, and is solved as an iterative optimization problem. In each iteration, additional constraints are applied through a projection operator to further stabilize the process. The proposed method is an extension of homomorphic deconvolution, which is used here only to compute the initial estimate of the point-spread function. Homomorphic deconvolution assumes that the point-spread function and the tissue function lie in different bands of the cepstrum domain, which is not entirely true. This limiting constraint is relaxed in the subsequent iterative deconvolution. The deconvolution is applied globally to the complete RF image data, so only the global part of the point-spread function is considered. This approach, together with the need for only a few iterations, makes the deconvolution potentially useful for real-time applications. Tests on phantom and clinical images have shown that the deconvolution gives stable results with clearly higher spatial resolution and better-defined tissue structures than both the input images and the results of homomorphic deconvolution alone.
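For readers unfamiliar with the homomorphic step that supplies the initial estimate, the sketch below lifters the low-quefrency band of the 2-D cepstrum to recover a smooth, zero-phase PSF magnitude; the lifter size is an illustrative assumption, and the paper's iterative Bayesian refinement is not reproduced.

```python
import numpy as np

def homomorphic_psf_estimate(rf_image, keep=8):
    """Initial PSF estimate via low-pass liftering of the 2-D cepstrum."""
    log_mag = np.log(np.abs(np.fft.fft2(rf_image)) + 1e-12)
    cepstrum = np.fft.ifft2(log_mag).real
    lifter = np.zeros_like(cepstrum)                # keep only low quefrencies,
    lifter[:keep, :keep] = 1                        # i.e. the smooth spectral
    lifter[:keep, -keep:] = 1                       # envelope attributed to the
    lifter[-keep:, :keep] = 1                       # point-spread function
    lifter[-keep:, -keep:] = 1
    psf_log_mag = np.fft.fft2(cepstrum * lifter).real
    psf = np.fft.ifft2(np.exp(psf_log_mag)).real    # zero-phase PSF assumption
    psf = np.fft.fftshift(psf)
    return psf / psf.sum()
```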
- MeSH
- Algorithms * MeSH
- Bayes Theorem MeSH
- Image Interpretation, Computer-Assisted methods MeSH
- Reproducibility of Results MeSH
- Pattern Recognition, Automated methods MeSH
- Sensitivity and Specificity MeSH
- Ultrasonography methods MeSH
- Artificial Intelligence * MeSH
- Image Enhancement methods MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
Divergence-time estimation based on molecular phylogenies and the fossil record has provided insights into fundamental questions of evolutionary biology. In Bayesian node dating, phylogenies are commonly time calibrated through the specification of calibration densities on nodes representing clades with known fossil occurrences. Unfortunately, the optimal shape of these calibration densities is usually unknown and they are therefore often chosen arbitrarily, which directly impacts the reliability of the resulting age estimates. As possible solutions to this problem, two nonexclusive alternative approaches have recently been developed, the “fossilized birth–death” (FBD) model and “total-evidence dating.” While these approaches have been shown to perform well under certain conditions, they require including all (or a random subset) of the fossils of each clade in the analysis, rather than just relying on the oldest fossils of clades. In addition, both approaches assume that fossil records of different clades in the phylogeny are all the product of the same underlying fossil sampling rate, even though this rate has been shown to differ strongly between higher level taxa. We here develop a flexible new approach to Bayesian age estimation that combines advantages of node dating and the FBD model. In our new approach, calibration densities are defined on the basis of first fossil occurrences and sampling rate estimates that can be specified separately for all clades. We verify our approach with a large number of simulated data sets, and compare its performance to that of the FBD model. We find that our approach produces reliable age estimates that are robust to model violation, on par with the FBD model. By applying our approach to a large data set including sequence data from over 1000 species of teleost fishes as well as 147 carefully selected fossil constraints, we recover a timeline of teleost diversification that is incompatible with previously assumed vicariant divergences of freshwater fishes. Our results instead provide strong evidence for transoceanic dispersal of cichlids and other groups of teleost fishes.
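As a rough intuition for such calibration densities: if fossil sampling along a single lineage is a Poisson process with rate psi, the gap between a clade's origin and its oldest sampled fossil is approximately exponential, which the sketch below turns into a density over candidate clade ages. This single-lineage simplification ignores diversification, which the full model accounts for; psi and the fossil age are illustrative values.

```python
import numpy as np

def calibration_density(clade_age, oldest_fossil_age, psi):
    """Density over clade age given its oldest fossil and sampling rate psi
    (per lineage per Myr): exponential waiting time before first sampling."""
    gap = clade_age - oldest_fossil_age
    return np.where(gap >= 0, psi * np.exp(-psi * gap), 0.0)

ages = np.linspace(55, 120, 6)                      # candidate clade ages (Ma)
print(calibration_density(ages, oldest_fossil_age=55.0, psi=0.05))
```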
- Keywords
- Bayesian inference *, calibration density *, Cichlidae *, fossil record *, marine dispersal *, phylogeny *, relaxed molecular clock *
- MeSH
- Bayes Theorem MeSH
- Biodiversity MeSH
- Models, Biological * MeSH
- Time MeSH
- Cichlids classification MeSH
- Phylogeny * MeSH
- Genetic Speciation MeSH
- Fossils MeSH
- Animals MeSH
- Check Tag
- Animals MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
- Geographicals
- Atlantic Ocean MeSH
To support the shift toward a circular economy in waste management, it is essential to monitor progress using measurable indicators. However, the growing volume of secondary waste from pre-treatment processes highlights the need to assess its composition, as it can be a diverse mixture that complicates the evaluation of individual waste streams. The proposed approach estimates the composition of secondary waste using a combination of machine learning and optimization techniques. The cornerstone of the evaluation is data from waste management monitoring. Machine learning based on linear or Bayesian linear regression allows for the efficient processing of large datasets and the identification of key relationships in the system. The optimization model, developed for a special form of data reconciliation, maintains insight into the results and ensures the preservation of mass balances. In a case study in the Czech Republic, the model identified a 3 % reduction in the material recovery of municipal waste, as this waste is used for energy recovery or landfilled after transformation into secondary waste. Mixed secondary waste consists of 46 % plastic waste, with only 20 % being truly recycled. A significant portion is landfilled, which represents a potential for at least energy recovery from the waste. With refined waste management indicators and recovery potential, the results can contribute to improvements in technology and regional focus.
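The data-reconciliation step has a compact closed form worth spelling out: measured or regression-estimated stream masses are adjusted as little as possible, in a weighted least-squares sense, subject to exact linear mass balances A x = 0. The sketch below implements that Lagrangian solution; the one-node balance, stream values, and weights are invented for illustration, not the case study's waste-flow network.

```python
import numpy as np

def reconcile(x_hat, W, A):
    """argmin (x - x_hat)' W (x - x_hat)  subject to  A x = 0."""
    Winv = np.linalg.inv(W)
    K = Winv @ A.T @ np.linalg.inv(A @ Winv @ A.T)
    return x_hat - K @ (A @ x_hat)

# Streams: [plant input, sorted plastics, mixed secondary waste, residue];
# one balance row says input equals the sum of the three outputs.
A = np.array([[1.0, -1.0, -1.0, -1.0]])
x_hat = np.array([100.0, 42.0, 38.0, 25.0])         # raw estimates (kt/yr), off by 5
W = np.diag(1.0 / np.array([1.0, 2.0, 4.0, 2.0]) ** 2)   # trust = 1 / variance
x_rec = reconcile(x_hat, W, A)
print(x_rec, "balance residual:", A @ x_rec)        # residual ~ 0
```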
- Keywords
- Data reconciliation, Machine learning, Material recovery, Quadratic optimization, Secondary waste composition, Waste management indicators
- MeSH
- Bayes Theorem MeSH
- Waste Management * methods MeSH
- Recycling MeSH
- Machine Learning * MeSH
- Models, Theoretical MeSH
- Solid Waste * analysis MeSH
- Publication type
- Journal Article MeSH
- Geographicals
- Czech Republic MeSH
- Names of Substances
- Solid Waste * MeSH
Dalbavancin is increasingly being used for long-term treatment of subacute and chronic staphylococcal infections. In this study, a new Bayesian model was implemented and validated using MwPharm software for accurately forecasting the duration of pharmacodynamic target attainment above the efficacy thresholds of 4.02 mg/L or 8.04 mg/L against staphylococci. Forecasting accuracy improved substantially with the a posteriori approach compared with the a priori approach, particularly when two measured concentrations were used. This strategy may help clinicians to estimate the duration of optimal exposure with dalbavancin in the context of long-term treatment.
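The quantity being forecast can be illustrated with a toy calculation: given an individualized concentration prediction, the time above an efficacy threshold follows directly from the predicted decline. The sketch below assumes a mono-exponential decline with placeholder parameters; it is not the MwPharm population model or a real patient case.

```python
import numpy as np

def days_above_threshold(c0, half_life_days, threshold):
    """Days a mono-exponentially declining concentration stays above threshold."""
    if c0 <= threshold:
        return 0.0
    k = np.log(2) / half_life_days                  # first-order elimination rate
    return float(np.log(c0 / threshold) / k)

# Placeholder post-dose concentration and half-life; thresholds from the study.
for thr in (4.02, 8.04):
    print(f"{thr} mg/L:", round(days_above_threshold(60.0, 8.5, thr), 1), "days")
```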
- Keywords
- Bayesian prediction, MwPharm, TDM, dalbavancin
- MeSH
- Anti-Bacterial Agents * therapeutic use pharmacology MeSH
- Bayes Theorem MeSH
- Humans MeSH
- Microbial Sensitivity Tests MeSH
- Staphylococcal Infections * drug therapy MeSH
- Staphylococcus MeSH
- Teicoplanin therapeutic use pharmacology MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Names of Substances
- Anti-Bacterial Agents * MeSH
- dalbavancin MeSH
- Teicoplanin MeSH
SYBA (SYnthetic Bayesian Accessibility) is a fragment-based method for the rapid classification of organic compounds as easy- (ES) or hard-to-synthesize (HS). It is based on a Bernoulli naïve Bayes classifier used to assign SYBA score contributions to individual fragments based on their frequencies in databases of ES and HS molecules. SYBA was trained on ES molecules available in the ZINC15 database and on HS molecules generated by the Nonpher methodology. SYBA was compared with a random forest, which was used as a baseline method, as well as with two other methods for synthetic accessibility assessment: SAScore and SCScore. When used with their suggested thresholds, SYBA improves over random forest classification, albeit marginally, and outperforms SAScore and SCScore. However, upon optimization of the SAScore threshold (which changes from 6.0 to -4.5), SAScore yields results similar to SYBA's. Because SYBA is based merely on fragment contributions, it can be used to analyze the contribution of individual molecular parts to compound synthetic accessibility. SYBA is publicly available at https://github.com/lich-uct/syba under the GNU General Public License.
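The scoring scheme can be illustrated in a few lines: each fragment contributes a Laplace-smoothed Bernoulli naive Bayes log-likelihood ratio of its frequencies in ES versus HS training molecules, and a molecule's score is the sum over its fragments. The fragment names and counts below are invented for illustration; the real contributions come from the ZINC15/Nonpher training data in the linked repository.

```python
import math

def fragment_contribution(n_es, n_hs, total_es, total_hs, alpha=1.0):
    """Laplace-smoothed log( P(fragment | ES) / P(fragment | HS) )."""
    p_es = (n_es + alpha) / (total_es + 2 * alpha)
    p_hs = (n_hs + alpha) / (total_hs + 2 * alpha)
    return math.log(p_es / p_hs)

# Toy fragment statistics: (occurrences in ES molecules, in HS molecules),
# out of 10,000 training molecules per class.
stats = {"benzene-like": (9000, 4000), "carboxyl": (6000, 2500), "spiro": (30, 2200)}
score = sum(fragment_contribution(es, hs, 10_000, 10_000) for es, hs in stats.values())
print("SYBA-style score:", round(score, 2))   # positive suggests ES, negative HS
```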
- Keywords
- Bayesian analysis, Bernoulli naïve Bayes, Synthetic accessibility
- Publication type
- Journal Article MeSH
Researchers are delving deeply into deep learning and machine learning (ML) to identify patterns in large medical datasets and reliably diagnose cardiovascular disease (CVD). Training on large datasets while producing highly accurate validation results is exceedingly difficult, and early, precise diagnosis is necessary given the increasing global prevalence of CVD. However, the growing complexity of healthcare datasets makes it challenging to detect feature connections and produce precise predictions. To address these issues, the Intelligent Cardiovascular Disease Diagnosis based on Ant Colony Optimisation with Enhanced Deep Learning (ICVD-ACOEDL) model was developed. This model employs feature selection (FS) and hyperparameter optimization to diagnose CVD. The medical data are first normalized with a min-max scaler. The key feature that sets ICVD-ACOEDL apart is the use of Ant Colony Optimisation (ACO) to select an optimal feature subset, which in turn improves the performance of the ensuing deep-learning-enhanced neural network (DLENN) classifier. The model tunes the hyperparameters of the DLENN for CVD classification using Bayesian optimization. Comprehensive evaluations on benchmark medical datasets show that ICVD-ACOEDL outperforms existing techniques, indicating that it could have a significant impact on CVD diagnosis. By incorporating ACO for feature selection, min-max scaling for data pre-processing, and Bayesian optimization for hyperparameter tuning, the model offers a workable way to increase the efficiency and accuracy of CVD classification in real-world medical settings.
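The ACO feature-selection idea can be sketched compactly: each ant samples a feature subset with probabilities driven by pheromone levels, subsets are scored by cross-validated accuracy, and pheromone evaporates and is reinforced on the best subset found. The sketch below uses a small scikit-learn MLP and a public dataset as stand-ins; colony size, evaporation rate, and the classifier are illustrative, and the paper's DLENN and Bayesian hyperparameter tuning are not reproduced.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

X, y = load_breast_cancer(return_X_y=True)          # public stand-in dataset
X = MinMaxScaler().fit_transform(X)                 # min-max scaling step
rng = np.random.default_rng(1)

n_feat = X.shape[1]
pher = np.ones(n_feat)                              # pheromone per feature
rho = 0.1                                           # evaporation rate
best_subset, best_score = None, -np.inf

for generation in range(4):
    for ant in range(6):
        subset = rng.choice(n_feat, size=rng.integers(5, 15),
                            replace=False, p=pher / pher.sum())
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
        score = cross_val_score(clf, X[:, subset], y, cv=3).mean()
        if score > best_score:
            best_subset, best_score = subset, score
    pher *= 1 - rho                                 # evaporation
    pher[best_subset] += best_score                 # reinforce the best subset

print(sorted(best_subset), round(best_score, 3))
```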
- Keywords
- Ant Colony Optimisation, Bayesian optimisation, Cardiovascular disease, Hyperparameter, Min–max scaler
- MeSH
- Bayes Theorem MeSH
- Deep Learning * MeSH
- Diagnosis, Computer-Assisted methods MeSH
- Ants MeSH
- Cardiovascular Diseases * diagnosis MeSH
- Humans MeSH
- Neural Networks, Computer * MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
Soil pollution is a major issue caused by anthropogenic activities. The spatial distribution of potentially toxic elements (PTEs) varies in most urban and peri-urban areas, which makes spatially predicting PTE content in such soils difficult. A total of 115 samples were obtained from Frydek Mistek in the Czech Republic. Calcium (Ca), magnesium (Mg), potassium (K), and nickel (Ni) concentrations were determined using inductively coupled plasma optical emission spectroscopy. The response variable was Ni, and the predictors were Ca, Mg, and K. The correlation matrix between the response variable and the predictors revealed a satisfactory correlation between the elements. The prediction results indicated that support vector machine regression (SVMR) performed well, although its root mean square error (RMSE; 235.974 mg/kg) and mean absolute error (MAE; 166.946 mg/kg) were higher than those of the other methods applied. The hybridized empirical Bayesian kriging-multiple linear regression (EBK-MLR) model performed poorly, as evidenced by a coefficient of determination below 0.1. The empirical Bayesian kriging-support vector machine regression (EBK-SVMR) model was the optimal model, with low RMSE (95.479 mg/kg) and MAE (77.368 mg/kg) values and a high coefficient of determination (R2 = 0.637). The EBK-SVMR output was visualized using a self-organizing map; the clustered neurons of the CaKMg-EBK-SVMR component planes showed a diverse colour pattern predicting the Ni concentration in urban and peri-urban soil. The results proved that combining EBK and SVMR is an effective technique for predicting Ni concentrations in urban and peri-urban soil.
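The hybrid idea can be sketched as regression plus residual kriging: an SVR predicts Ni from Ca, Mg, and K, and the spatial structure left in its residuals is kriged and added back. Ordinary kriging from PyKrige is used below as a simple stand-in for empirical Bayesian kriging, and all coordinates and concentrations are synthetic placeholders.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 115
coords = rng.uniform(0, 10, size=(n, 2))            # sample locations (km)
X = rng.lognormal(mean=1.0, sigma=0.4, size=(n, 3)) # Ca, Mg, K predictors
ni = 20 + 3 * X[:, 0] - 1.5 * X[:, 1] + np.sin(coords[:, 0]) + rng.normal(0, 1, n)

svr = SVR(kernel="rbf", C=10.0).fit(X, ni)          # regression on Ca, Mg, K
resid = ni - svr.predict(X)

# Krige the spatially structured residuals and add them back.
ok = OrdinaryKriging(coords[:, 0], coords[:, 1], resid, variogram_model="spherical")
resid_hat, _ = ok.execute("points", coords[:, 0], coords[:, 1])
ni_hybrid = svr.predict(X) + resid_hat

print("in-sample RMSE:", float(np.sqrt(np.mean((ni_hybrid - ni) ** 2))))
```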
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
In this paper, we study the design of an indoor visible light positioning (VLP) system that uses an artificial neural network (ANN) for position estimation while considering a multipath channel. Previous results usually rely on a simplistic line-of-sight model with limited validity. The study considers the influence of noise as a performance indicator for comparing different design approaches. Three ANN algorithms are considered, namely Levenberg-Marquardt, Bayesian regularization, and scaled conjugate gradient, to minimize the positioning error (εp) of the VLP system. The ANN design is optimized with respect to the number of neurons in the hidden layers, the number of training epochs, and the size of the training set. It is shown that the ANN with Bayesian regularization outperforms the traditional received signal strength (RSS) technique using non-linear least squares estimation for all values of signal-to-noise ratio (SNR). Furthermore, in the inner region, which covers the part of the receiving plane enclosed by the transmitters, the positioning accuracy improves by 43%, 55%, and 50% for SNRs of 10, 20, and 30 dB, respectively. In the outer region, the remaining area of the room, the positioning accuracy improves by 57%, 32%, and 6% for SNRs of 10, 20, and 30 dB, respectively. Moreover, we analyze the impact of the training dataset size and show that a minimum εp of 2 cm is achievable at 30 dB SNR using a random selection scheme. Finally, εp remains low even at lower SNRs: εp is 2, 11, and 44 cm for SNRs of 30, 20, and 10 dB, respectively.
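The RSS baseline mentioned above is straightforward to sketch: under a Lambertian line-of-sight model, received power depends on distance and angle to each LED, so the receiver position can be recovered from several received powers by non-linear least squares. The room geometry, LED layout, and Lambertian order below are illustrative assumptions; the multipath channel and the ANN estimators of the study are not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

leds = np.array([[1.0, 1.0], [1.0, 4.0], [4.0, 1.0], [4.0, 4.0]])  # LED x, y (m)
h, m = 2.15, 1.0          # LED height above receiver plane; Lambertian order

def rss(pos):
    """Received power from each LED, up to a constant gain."""
    d2 = np.sum((leds - pos) ** 2, axis=1) + h**2
    cos = h / np.sqrt(d2)                 # emission and incidence angles coincide
    return cos ** (m + 1) / d2

true_pos = np.array([2.3, 3.1])
noise = 1 + 0.01 * np.random.default_rng(2).normal(size=len(leds))
measured = rss(true_pos) * noise          # noisy RSS observations

fit = least_squares(lambda p: rss(p) - measured, x0=np.array([2.5, 2.5]))
print("estimate:", fit.x, "error (m):", np.linalg.norm(fit.x - true_pos))
```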
- Keywords
- Bayesian regularization, artificial neural network (ANN), multipath reflections, non-linear least square, visible light communication (VLC), visible light positioning
- MeSH
- Algorithms * MeSH
- Bayes Theorem MeSH
- Least-Squares Analysis MeSH
- Neural Networks, Computer * MeSH
- Light MeSH
- Publication type
- Journal Article MeSH