Modelling the flow properties of rubber blends makes it possible to predict their rheological behaviour during the processing and production of rubber-based products. Because the nonlinear nature of such complex processes complicates the creation of exact analytical models, artificial intelligence tools are well suited to this modelling task. The present study developed a highly efficient artificial neural network model, optimised using a novel training algorithm with fast parallel computing, to predict the results of rheological tests of rubber blends performed under different conditions. A series of 120 real dynamic viscosity-time curves, acquired by a rubber process analyser for styrene-butadiene rubber blends with varying carbon black contents vulcanised at different temperatures, was analysed using a Generalised Regression Neural Network. The model was optimised by limiting the fitting error on the training dataset to a pre-specified value of less than 1%. All repeated calculations were made via parallel computing on multiple computer cores, which significantly reduced the total computation time. Excellent agreement between the predicted and measured generalisation data was found, with an error of less than 4.7%, confirming the high generalisation performance of the newly developed model.
- Keywords
- curing process, generalised regression neural network, intelligent modelling, parallel computing, rubber blends,
- Publication type
- Journal Article MeSH
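A Generalised Regression Neural Network is essentially Nadaraya-Watson kernel regression: every training sample becomes a pattern-layer neuron, and a prediction is the Gaussian-kernel-weighted mean of the training targets. A minimal sketch follows; the decaying curve is an illustrative stand-in for the paper's viscosity-time data, and the smoothing parameter `sigma` stands in for the optimised spread:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """Generalised Regression Neural Network prediction (Nadaraya-Watson
    kernel regression): each training sample is a pattern-layer neuron,
    and the output is the Gaussian-kernel-weighted mean of the targets."""
    X_train = np.asarray(X_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))  # weighted mean
    return np.array(preds)

# Toy stand-in for a dynamic viscosity-time curve (not the paper's data)
t = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y = np.exp(-3.0 * t).ravel()
y_hat = grnn_predict(t, y, t, sigma=0.02)
```

Since the spread `sigma` is the only quantity tuned, each candidate model is cheap to fit, which is what makes it practical to repeat the training calculation many times on parallel cores.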
Fog computing is one of the major components of future 6G networks. It can provide fast computation of different application-related tasks and improve system reliability through better decision-making. Parallel offloading, in which a task is split into several sub-tasks that are transmitted to different fog nodes for parallel computation, is a promising concept in task offloading. Parallel offloading faces challenges such as sub-task splitting and the mapping of sub-tasks to fog nodes. In this paper, we propose a novel many-to-one matching-based algorithm for the allocation of sub-tasks to fog nodes. We develop preference profiles for IoT nodes and fog nodes to reduce the task computation delay. We also propose a technique to address the externalities problem in the matching algorithm, which is caused by the dynamic preference profiles. Furthermore, a detailed evaluation of the proposed technique is presented to show the benefits of each feature of the algorithm. Simulation results show that the proposed matching-based offloading technique outperforms other techniques available in the literature and reduces task latency by 52% at high task loads.
- Keywords
- Internet of Things, externalities problem, fog computing, matching theory, partial task offloading, task offloading,
- MeSH
- Algorithms * MeSH
- Computer Simulation MeSH
- Reproducibility of Results MeSH
- Publication type
- Journal Article MeSH
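Many-to-one matching of sub-tasks to capacity-limited fog nodes belongs to the deferred-acceptance (hospitals/residents) family. The sketch below shows only the generic mechanism, with made-up sub-task and node names; the paper's delay-based preference profiles and its externality handling are not reproduced here:

```python
from collections import defaultdict

def match_subtasks(task_prefs, node_prefs, capacity):
    """Many-to-one deferred acceptance: sub-tasks propose to fog nodes in
    preference order; each node keeps its best-ranked sub-tasks up to its
    capacity and rejects the rest, which then try their next choice."""
    rank = {n: {t: i for i, t in enumerate(p)} for n, p in node_prefs.items()}
    nxt = {t: 0 for t in task_prefs}        # next node index each sub-task tries
    free = list(task_prefs)
    assigned = defaultdict(list)
    while free:
        t = free.pop()
        if nxt[t] >= len(task_prefs[t]):
            continue                        # sub-task exhausted its list
        n = task_prefs[t][nxt[t]]
        nxt[t] += 1
        assigned[n].append(t)
        assigned[n].sort(key=lambda s: rank[n][s])
        if len(assigned[n]) > capacity[n]:
            free.append(assigned[n].pop())  # bump the worst-ranked sub-task
    return dict(assigned)

# Illustrative instance: 3 sub-tasks, 2 fog nodes (names are hypothetical)
tasks = {"a": ["X", "Y"], "b": ["X", "Y"], "c": ["X", "Y"]}
nodes = {"X": ["b", "a", "c"], "Y": ["a", "b", "c"]}
result = match_subtasks(tasks, nodes, {"X": 1, "Y": 2})
```

Deferred acceptance yields a stable assignment regardless of proposal order, which is why it is a natural starting point before the dynamic-preference (externality) complications the paper addresses.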
GPU (graphics processing unit)-based parallel processing is the approach of using many processing units to overcome the computational complexity of the medical imaging methods that make up an overall job. It is extremely important for several medical imaging techniques, such as image classification, object detection, image segmentation, registration, and content-based image retrieval, since it allows software to complete multiple computations at once and thus compute time-efficiently. Magnetic resonance imaging (MRI), in turn, is a non-invasive imaging technology that can depict anatomical structures and biological processes of the human body. Implementing GPU-based parallel processing approaches in brain MRI analysis with the medical imaging techniques above might help achieve immediate and timely image capture. Therefore, this extended review (an extension of the IWBBIO2023 conference paper) offers a thorough overview of the literature, with an emphasis on the expanding use of GPU-based parallel processing methods for the medical analysis of brain MRIs with the imaging techniques mentioned above, given the need for quicker computation to obtain early and real-time feedback in medicine. We examined articles published between 2019 and 2023 in a literature matrix covering the tasks, techniques, MRI sequences, and processing results. The methods discussed in this review demonstrate the advances achieved so far in minimizing computing runtime, as well as the obstacles and problems still to be solved in the future.
- Keywords
- GPU, MRI, parallel processing, review,
- MeSH
- Algorithms * MeSH
- Humans MeSH
- Magnetic Resonance Imaging methods MeSH
- Brain MeSH
- Computer Graphics * MeSH
- Image Processing, Computer-Assisted methods MeSH
- Software MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Review MeSH
BACKGROUND: Next-generation sequencing (NGS) technology allows laboratories to investigate virome composition in clinical and environmental samples in a culture-independent way. There is a need for bioinformatic tools capable of parallel processing of virome sequencing data by exactly identical methods: this is especially important in studies of multifactorial diseases and in parallel comparisons of laboratory protocols. RESULTS: We have developed a web-based application allowing direct upload of sequences from multiple virome samples using custom parameters. The samples are then processed in parallel using an identical protocol and can easily be reanalyzed. The pipeline performs de novo assembly and taxonomic classification of viruses, as well as sample analyses based on user-defined grouping categories. Tables of virus abundance are produced by cross-validation, remapping the sequencing reads to a union of all observed reference viruses. In addition, read sets and reports are created after processing unmapped reads against known human and bacterial ribosome references. Secured interactive results are dynamically plotted with population and diversity charts, clustered heatmaps, and a sortable, searchable abundance table. CONCLUSIONS: The Vipie web application is a unique tool for multi-sample metagenomic analysis of viral data, producing searchable hit tables, interactive population maps, alpha diversity measures, and clustered heatmaps grouped in applicable custom sample categories. Known references such as the human genome and bacterial ribosomal genes are optionally removed from the unmapped ('dark matter') reads. Secured results are accessible and shareable in modern browsers. Vipie is a freely available web-based tool whose code is open source.
- Keywords
- Assembly, Metagenomics, NGS analysis, Parallel processing, Viral dark matter, Viromes, Virus, Visualization,
- MeSH
- Genetic Variation MeSH
- Genomics methods MeSH
- Internet * MeSH
- Humans MeSH
- Microbiota genetics MeSH
- Software * MeSH
- Viruses genetics MeSH
- High-Throughput Nucleotide Sequencing * MeSH
- Check Tag
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
The paper describes a novel control strategy for the simultaneous manipulation of several microscale particles over a planar microelectrode array using dielectrophoresis. The approach combines numerical nonlinear optimization, which gives a systematic computational procedure for finding the voltages applied to the individual electrodes, with exploitation of the intrinsic noise, which compensates for the loss of controllability when two identical particles are exposed to identical forces. Although interesting on its own, the proposed functionality can also be seen as a preliminary step in the quest for a technique for separating two particles. The approach is tested experimentally with polystyrene beads (50 microns in diameter) immersed in deionized water on a flat microelectrode array with parallel electrodes. A digital camera and a computer vision algorithm are used to measure the particle positions. Two distinguishing features of the proposed control strategy are that the range of motion is not limited to the interelectrode gaps and that independent simultaneous manipulation of several particles is feasible even on a simple microelectrode array.
- Keywords
- Dielectrophoresis, Feedback control, Micromanipulation, Parallel manipulation, Visual feedback,
- MeSH
- Algorithms MeSH
- Equipment Design MeSH
- Electrodes MeSH
- Electrophoresis methods MeSH
- Noise MeSH
- Micromanipulation instrumentation methods MeSH
- Microspheres MeSH
- Signal Processing, Computer-Assisted instrumentation MeSH
- Models, Theoretical MeSH
- Feedback * MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
Computational design of new proteins is often performed by optimizing the amino acid sequence. The sequence is characterized by an energy (lower energy means a better propensity to form the desired 3D structure) that is sampled and minimized. Here, we use the parallel tempering algorithm to accelerate this task. ESMfold was used to predict the structures of the sampled proteins and to calculate their energies. Starting from random amino acid sequences, each sequence was sampled using the Monte Carlo method at one of a series of temperatures, and the replicas were exchanged by the parallel tempering method. A series of 100- or 200-residue proteins was designed to maximize confidence in structure prediction and globularity and to minimize surface hydrophobic residues. We show that parallel tempering is a viable alternative to Monte Carlo sampling without replica exchange, to simulated annealing, and to related energy-based protein design methods, especially in situations where a continuous flow of designed sequences is desired.
- Keywords
- ESMfold, Monte Carlo, machine learning, parallel tempering, protein design, replica exchange,
- MeSH
- Algorithms * MeSH
- Hydrophobic and Hydrophilic Interactions MeSH
- Protein Conformation MeSH
- Monte Carlo Method MeSH
- Models, Molecular MeSH
- Protein Engineering * methods MeSH
- Proteins * chemistry genetics MeSH
- Amino Acid Sequence MeSH
- Thermodynamics MeSH
- Publication type
- Journal Article MeSH
- Names of Substances
- Proteins * MeSH
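The sampling scheme described above, Metropolis Monte Carlo over sequences at several temperatures with occasional replica exchanges, can be sketched generically. The energy below is a deliberately crude stand-in (a count of hydrophobic residues, echoing the surface-hydrophobics objective), not the ESMfold-based score; temperatures, sequence length, and sweep count are illustrative choices:

```python
import math
import random

AA = "ACDEFGHIKLMNPQRSTVWY"          # the 20 amino acids
HYDROPHOBIC = set("AVILMFWC")

def energy(seq):
    # Toy stand-in for the real structure-based score
    return sum(r in HYDROPHOBIC for r in seq)

def mutate(seq, rng):
    i = rng.randrange(len(seq))
    return seq[:i] + rng.choice(AA) + seq[i + 1:]

def parallel_tempering(length=30, temps=(0.2, 0.5, 1.0, 2.0),
                       sweeps=2000, seed=1):
    rng = random.Random(seed)
    reps = ["".join(rng.choice(AA) for _ in range(length)) for _ in temps]
    E = [energy(s) for s in reps]
    for _ in range(sweeps):
        for k, T in enumerate(temps):           # Metropolis step per replica
            cand = mutate(reps[k], rng)
            dE = energy(cand) - E[k]
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                reps[k], E[k] = cand, E[k] + dE
        k = rng.randrange(len(temps) - 1)       # attempt one replica exchange
        b_lo, b_hi = 1.0 / temps[k], 1.0 / temps[k + 1]
        if rng.random() < min(1.0, math.exp((b_lo - b_hi) * (E[k] - E[k + 1]))):
            reps[k], reps[k + 1] = reps[k + 1], reps[k]
            E[k], E[k + 1] = E[k + 1], E[k]
    return reps[0], E[0]                        # coldest replica

best_seq, best_E = parallel_tempering()
```

The hot replicas cross energy barriers that would trap a single cold chain, and the exchange moves feed their discoveries down the temperature ladder, which is why the cold replica keeps producing low-energy sequences in a continuous stream.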
An important problem in current computational systems biology is the analysis of models of biological systems dynamics under parameter uncertainty. This paper presents a novel algorithm for parameter synthesis based on parallel model checking. The algorithm is conceptually universal with respect to the modeling approach employed. We introduce the algorithm, show its scalability, and examine its applicability to several biological models.
A new reconstruction method for parallel MRI, called PROBER, is proposed. PROBER works in the image domain, similarly to methods based on Sensitivity Encoding (SENSE). However, unlike SENSE, which first estimates the spatial sensitivity maps, PROBER approximates the reconstruction coefficients directly by B-splines. The B-spline coefficients are estimated all at once so as to minimize the overall reconstruction error, instead of estimating the reconstruction in each pixel independently (as in SENSE). This makes the method robust to noise in the reference images; no presmoothing of the reference images is necessary. The number of estimated parameters is reduced, which speeds up the estimation process. PROBER was tested on simulated, phantom, and in vivo data. The results are compared with commercial implementations of the SENSE and GRAPPA (Generalized Autocalibrating Partially Parallel Acquisitions) algorithms in terms of elapsed time and reconstruction quality. The experiments showed that PROBER is faster than GRAPPA and SENSE for images wider than 150 × 150 pixels at comparable reconstruction quality. With more basis functions, PROBER outperforms both SENSE and GRAPPA in reconstruction quality at the cost of slightly increased computation time.
- MeSH
- Algorithms MeSH
- Artifacts MeSH
- Time Factors MeSH
- Gadolinium DTPA MeSH
- Adult MeSH
- Phantoms, Imaging MeSH
- Head anatomy & histology MeSH
- Thorax anatomy & histology MeSH
- Calibration MeSH
- Contrast Media MeSH
- Humans MeSH
- Magnetic Resonance Imaging methods MeSH
- Computer Simulation MeSH
- Image Processing, Computer-Assisted methods statistics & numerical data MeSH
- Image Enhancement methods MeSH
- Check Tag
- Adult MeSH
- Humans MeSH
- Publication type
- Journal Article MeSH
- Research Support, Non-U.S. Gov't MeSH
- Comparative Study MeSH
- Names of Substances
- Gadolinium DTPA MeSH
- Contrast Media MeSH
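For context, the per-pixel SENSE unfolding that PROBER is compared against (and avoids, by fitting B-spline coefficients jointly) can be sketched in a few lines. The sizes, the random sensitivity maps, and the random "image" below are purely illustrative:

```python
import numpy as np

def sense_unfold(aliased, sens, R=2):
    """Minimal SENSE unfolding for acceleration factor R along axis 0.
    aliased: (ncoils, ny//R, nx) folded coil images;
    sens:    (ncoils, ny, nx) coil sensitivity maps.
    For every folded pixel, solve the small least-squares system linking
    the R overlapped true pixels to the coil measurements."""
    nc, nyr, nx = aliased.shape
    out = np.zeros((nyr * R, nx), dtype=complex)
    for y in range(nyr):
        for x in range(nx):
            rows = [y + k * nyr for k in range(R)]  # overlapped row indices
            S = sens[:, rows, x]                    # (nc, R) encoding matrix
            v, *_ = np.linalg.lstsq(S, aliased[:, y, x], rcond=None)
            out[rows, x] = v
    return out

# Synthetic check: fold a random "image" with random sensitivities, unfold it
rng = np.random.default_rng(0)
nc, ny, nx, R = 4, 8, 4, 2
img = rng.normal(size=(ny, nx)) + 1j * rng.normal(size=(ny, nx))
sens = rng.normal(size=(nc, ny, nx)) + 1j * rng.normal(size=(nc, ny, nx))
nyr = ny // R
aliased = sum(sens[:, k * nyr:(k + 1) * nyr, :] * img[k * nyr:(k + 1) * nyr, :]
              for k in range(R))
rec = sense_unfold(aliased, sens, R=R)
```

Because each pixel is solved independently, noisy sensitivity maps corrupt each small system separately; estimating smooth B-spline reconstruction coefficients jointly, as PROBER does, is one way to counter exactly this.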
Detached off-grids, subject to the generated renewable energy (RE), need to balance and compensate for an unstable power supply that depends on the local source potential. Power quality (PQ) is a set of EU standards that state the acceptable deviations in the parameters of electrical power systems so as to guarantee their operability without dropout. Optimization of the estimated PQ parameters over a day horizon is essential in the operational planning of autonomous smart grids, which must accommodate the norms for the specific equipment and user demands to avoid malfunctions. PQ data are not available for all system states of the dozens of connected/switched-on household appliances, which are defined by their binary load series only, as the number of combinations grows exponentially. The load characteristics and the eventual contingent RE supply can result in system instability and unacceptable PQ events. Models evolved by Artificial Intelligence (AI) methods using self-optimization algorithms can estimate unknown cases and states in autonomous systems contingent on the self-supply of RE power, which is related to chaotic and intermittent local weather sources. A new multilevel extension procedure is designed to incrementally improve the applicability of and adaptability to the training data. The initial AI model starts with the binary load series only, which are insufficient to represent complex data patterns. The input vector is progressively extended with correlated PQ parameters at the next estimation level to better represent the active demand of the power consumer. Historical data sets comprise training samples for all PQ parameters, but only the load sequences of the switched-on appliances are available in the subsequent estimation states. The most valuable PQ parameters, selected and estimated in the previous algorithm stages, are used as supplementary series in the next, more precise computation.
More complex models, using the PQ-data approximations from the previous level, are formed at the secondary processing levels to estimate the target PQ output with better quality. The newly added input parameters allow a more convenient model form to evolve. The proposed multilevel refinement algorithm can be applied generally in modelling unknown sequence states of dynamical systems that are initially described by binary series or other insufficient, limited-data variables which are inadequate for representing the problem. Most AI computing techniques can adopt this strategy to improve their adaptive learning and model performance.
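The staged data flow described above can be sketched as follows. Linear least squares stands in for the evolved AI models, and the load series, auxiliary PQ parameter, and target are all synthetic; the point is only the structure: train the second level on historical inputs extended with a true PQ parameter, then substitute the first-level estimate of that parameter when only load sequences are available:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical data: binary on/off series of 6 appliances,
# one auxiliary PQ parameter, and the target PQ output (all stand-ins)
loads = rng.integers(0, 2, size=(200, 6)).astype(float)
aux_pq = loads @ rng.normal(size=6) + 0.1 * rng.normal(size=200)
target = 2.0 * aux_pq + loads @ rng.normal(size=6) + 0.1 * rng.normal(size=200)

def fit_linear(X, y):
    """Least-squares linear model with intercept (stand-in for the AI model)."""
    Xb = np.column_stack([X, np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Xn: np.column_stack([Xn, np.ones(len(Xn))]) @ coef

# Level 1: estimate the most valuable PQ parameter from the load series alone
aux_model = fit_linear(loads, aux_pq)

# Level 2: train on historical inputs extended with the true PQ parameter
target_model = fit_linear(np.column_stack([loads, aux_pq]), target)

# Estimation state: only binary loads are known, so the level-1 estimate
# of the PQ parameter extends the input vector in place of the true value
new_loads = rng.integers(0, 2, size=(10, 6)).astype(float)
aux_hat = aux_model(new_loads)
pred = target_model(np.column_stack([new_loads, aux_hat]))
```

With real data the two levels would be nonlinear models evolved by self-optimization, and further levels could chain additional estimated PQ parameters in the same way.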
OBJECTIVES: Direct genotyping of adenovirus or enterovirus from clinical material using polymerase chain reaction (PCR) followed by Sanger sequencing is often difficult due to the presence of multiple virus types in a sample, or due to the varying efficacy of PCR amplification of the capsid gene against a background of foreign nucleic acids. Here we present a simple protocol for virus genotyping using massively parallel amplicon sequencing. METHODS: The protocol used a set of 16 tailed degenerate primers flanking the seventh hypervariable region of the adenovirus hexon gene and 9 tailed degenerate primers targeting the proximal portion of the enterovirus VP1 gene. Subsequent addition of dual indices enabled simultaneous sequencing of 384 different samples on an Illumina MiSeq instrument. Downstream bioinformatic analysis was based on remapping to a set of references representative of the presently known repertoire of virus types. RESULTS: After validation with known virus types, the sequencing method was applied to 301 adenovirus-positive samples and 350 enterovirus-positive samples from a longitudinally collected series of stools from 83 children aged 3 to 36 months. We detected 7 different adenovirus types and 27 different enterovirus types. There were 37 (6.2%) samples containing more than one genotype of the same viral genus. At least one dual infection was experienced by 23 of the 83 (28%) children over the 3-year observation period. CONCLUSIONS: Amplicon sequencing with a multiplex set of degenerate primers seems to be a rapid and reliable technical solution for genotyping large collections of samples in which simultaneous infections with multiple strains can be expected.
- Keywords
- adenovirus, enterovirus, genotype, infants, massive parallel sequencing, virus type,
- MeSH
- Adenoviridae classification genetics isolation & purification MeSH
- Adenoviridae Infections virology MeSH
- DNA Primers genetics MeSH
- Enterovirus Infections virology MeSH
- Enterovirus classification genetics isolation & purification MeSH
- Genotype * MeSH
- Genotyping Techniques methods MeSH
- Infant MeSH
- Humans MeSH
- Longitudinal Studies MeSH
- Child, Preschool MeSH
- Sequence Analysis, DNA methods MeSH
- Computational Biology MeSH
- Animals MeSH
- Check Tag
- Infant MeSH
- Humans MeSH
- Male MeSH
- Child, Preschool MeSH
- Female MeSH
- Animals MeSH
- Publication type
- Journal Article MeSH
- Evaluation Study MeSH
- Research Support, Non-U.S. Gov't MeSH
- Geographicals
- Norway MeSH
- Names of Substances
- DNA Primers MeSH
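The downstream typing logic, remapping each read to a panel of reference types and calling every type with sufficient read support (so that co-infections surface as multiple calls), can be sketched generically. The type names, alignment scores, and the `min_reads` threshold below are illustrative assumptions, not values from the study:

```python
from collections import Counter

def genotype_sample(read_hits, min_reads=5):
    """Call virus types in one sample: assign each read to its best-scoring
    reference type, then report every type supported by at least `min_reads`
    reads. Two or more calls suggest a mixed infection."""
    best = [max(hits, key=hits.get) for hits in read_hits if hits]
    counts = Counter(best)
    calls = sorted(t for t, n in counts.items() if n >= min_reads)
    return calls, counts

# Illustrative per-read alignment scores against reference types
reads = ([{"AdV-2": 98, "AdV-5": 71}] * 10
         + [{"AdV-41": 95, "AdV-2": 60}] * 6
         + [{"AdV-5": 90}])
calls, counts = genotype_sample(reads)
```

A read-support threshold of this kind is what separates genuine dual infections from stray cross-mapped reads, though a production pipeline would also normalize for sequencing depth.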