Analyzing simulation algorithm performance is cumbersome: execute some runs, observe a performance metric, and analyze the results. Often, the results motivate follow-up experiments, which in turn may lead to additional experiments, and so on. This time-consuming and error-prone process can be automated with planning approaches from artificial intelligence, making simulator performance analysis more convenient and rigorous. This paper introduces ALeSiA, a prototypical system for automatic simulator performance analysis. It is independent of any specific simulation system and realizes a hypothesis-driven approach to evaluate performance.
A comparison of three chromatographic retention time prediction models (Andrew McEachran)
High resolution mass spectrometry (HRMS) data has revolutionized the identification of environmental contaminants through non-targeted analyses (NTA). However, data processing, chemical identification, and prioritization remain challenging due to the vast number of unknowns observed in NTA. The ideal NTA workflow requires harmonized data and tools from a variety of sources to allow the most probable and confirmed identifications. One such tool is chromatographic retention time (RT): comparing the predicted RT of candidate structures to the observed RT provides additional specificity towards ultimate identification. In this work, three RT prediction models were evaluated on the same set of chemicals: 1) a logP-based model, 2) a model generated in ACD/ChromGenius, and 3) a Quantitative Structure Retention Relationship model, OPERA-RT. Our results indicate that both ACD/ChromGenius and OPERA-RT outperform the logP-based model. Between the two, OPERA-RT produced a slightly better fit on the entire set of structures than ACD/ChromGenius (R2 = 0.85 vs. 0.83). Further, OPERA-RT, developed within the US EPA’s National Center for Computational Toxicology, predicted 96% of RTs within a ±15% chromatographic time window of experimental RTs. Finally, to test an NTA workflow, candidate structures were generated for formulae in the test set using the US EPA’s CompTox Chemistry Dashboard, and RTs for all candidates were predicted using both ACD/ChromGenius and OPERA-RT. RT screening windows were applied to screen out unlikely candidate chemicals and enhance potential identification. Compared to ACD/ChromGenius, OPERA-RT screened out a greater percentage of the candidate structures by RT, but retained fewer of the known chemicals. This research demonstrates the potential value of including RT prediction in NTA workflows and supports the use of OPERA-RT predictions in our NTA investigations.
This abstract does not necessarily represent the views or policies of the U.S. Environmental Protection Agency.
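The RT screening step described above can be sketched in a few lines. This is a minimal illustration of applying a ±15% retention-time window around an observed RT to filter candidate structures; the candidate names, RT values, and field names are hypothetical, not taken from the study.

```python
# Sketch: screening candidate structures with a +/-15% retention-time
# window around the experimental RT (names and data are hypothetical).

def rt_window(observed_rt, tolerance=0.15):
    """Return the (low, high) acceptance window around an observed RT."""
    return observed_rt * (1 - tolerance), observed_rt * (1 + tolerance)

def screen_candidates(candidates, observed_rt, tolerance=0.15):
    """Keep only candidates whose predicted RT falls inside the window."""
    low, high = rt_window(observed_rt, tolerance)
    return [c for c in candidates if low <= c["predicted_rt"] <= high]

candidates = [
    {"name": "A", "predicted_rt": 5.1},
    {"name": "B", "predicted_rt": 7.9},
    {"name": "C", "predicted_rt": 5.6},
]
kept = screen_candidates(candidates, observed_rt=5.5)
print([c["name"] for c in kept])  # A and C fall inside 4.675..6.325 min
```

A tighter tolerance screens out more candidates but risks discarding the true chemical, which is the trade-off the abstract reports between OPERA-RT and ACD/ChromGenius.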
Keynote of HOP-Rec @ RecSys 2018
Presenter: Jheng-Hong Yang
These slides are complementary material for the short paper HOP-Rec (RecSys 2018). They explain the intuition and some of the abstract ideas behind the paper's descriptions and mathematical symbols through illustrative plots and figures.
A Study of Parallel Approaches in MOACOs for solving the Bicriteria TSP (Antonio Mora)
In this work, several Multi-Objective Ant Colony Optimization (MOACO) algorithms have been parallelized. The aim is not only better running time (usually the main objective when a distributed approach is implemented), but also an improved spread of solutions over the Pareto front (the ideal set of solutions). To this end, colony-level (coarse-grained) implementations have been tested on the Bicriteria TSP problem, yielding better sets of solutions, in the sense explained above, than a sequential approach.
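The Pareto front mentioned above is simply the set of solutions not dominated by any other. A minimal sketch, assuming both objectives are minimized (as in the bicriteria TSP, e.g. two tour costs); the tour values are illustrative:

```python
# Sketch: extracting the non-dominated (Pareto) set from bicriteria
# solutions, assuming both objectives are minimized.

def dominates(a, b):
    """True if a is at least as good in both objectives and not equal to b."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(solutions):
    """Return the solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions)]

# each tuple is (cost under objective 1, cost under objective 2):
tours = [(100, 80), (90, 95), (110, 70), (95, 90), (120, 100)]
print(sorted(pareto_front(tours)))  # (120, 100) is dominated by (100, 80)
```

The "spread" the abstract refers to is how evenly such non-dominated points cover the front, which a coarse-grained multi-colony search can improve by exploring different regions in parallel.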
Feature Selection Method Based on Chaotic Maps and Butterfly Optimization Alg... (Tarek Gaber)
Feature selection (FS) is a challenging problem that has attracted the attention of many researchers. FS can be considered an NP-hard problem: a dataset with N features admits 2^N candidate feature subsets, so the search space doubles with each additional feature. To address this, we reduce the dimensionality of the feature space by selecting the most important features. In this paper, we integrate chaotic maps into the standard butterfly optimization algorithm to increase diversity and avoid trapping in local minima. The proposed algorithm is called the Chaotic Butterfly Optimization Algorithm (CBOA). The performance of CBOA is investigated by applying it to 16 benchmark datasets and comparing it against six meta-heuristic algorithms. The results show that invoking chaotic maps in the standard BOA can improve its performance, reaching an accuracy of more than 95%.
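The chaotic-map idea above can be illustrated with the logistic map, one of the maps commonly substituted for a uniform random number generator in such algorithms to diversify the search. This is only a sketch of the sequence itself, with illustrative parameter values, not the CBOA update rules:

```python
# Sketch: a logistic chaotic map as a drop-in source of numbers in (0, 1),
# the kind of sequence chaotic variants substitute for a uniform RNG.

def logistic_map(x0=0.7, r=4.0):
    """Generate the chaotic sequence x_{k+1} = r * x_k * (1 - x_k)."""
    x = x0
    while True:
        x = r * x * (1 - x)
        yield x

gen = logistic_map()
values = [next(gen) for _ in range(5)]
print(values)  # deterministic, but highly sensitive to x0
assert all(0.0 < v < 1.0 for v in values)
```

At r = 4 the sequence is chaotic: it is fully deterministic yet never settles, and tiny changes in x0 yield completely different trajectories, which is what helps the search escape local minima.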
This paper reviews Ant Colony Optimization (ACO) and Genetic Algorithms (GA), two powerful meta-heuristics. It first explains some major defects of these two algorithms and then proposes a new ACO model in which artificial ants use a quick genetic operator to accelerate the selection of the next state. Experimental results show that the proposed hybrid algorithm is effective and that its performance, in both speed and accuracy, beats the other versions.
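The next-state selection the abstract refers to is, in standard ACO, a roulette-wheel draw weighted by pheromone and heuristic information. A minimal sketch of that baseline step (the hybrid's genetic operator is not reproduced here; all parameter values and weights are illustrative):

```python
import random

# Sketch: probabilistic next-state selection in ACO, where candidate j is
# weighted by pheromone^alpha * heuristic^beta (illustrative values).

def select_next_state(pheromone, heuristic, alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel selection over candidate next states."""
    weights = [(p ** alpha) * (h ** beta)
               for p, h in zip(pheromone, heuristic)]
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1  # guard against floating-point rounding

random.seed(42)
pheromone = [0.5, 1.5, 1.0]   # trail strength per candidate state
heuristic = [0.2, 0.9, 0.4]   # e.g. 1/distance per candidate state
print(select_next_state(pheromone, heuristic))
```

Because this draw happens once per ant per step, it dominates the runtime of the construction phase, which is why the paper targets it for acceleration.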
Applying Genetic Algorithms to Information Retrieval Using Vector Space Model (IJCSEA Journal)
Genetic algorithms are often used in information retrieval (IR) systems to enhance the retrieval process and increase its efficiency, so as to meet users’ needs and help them find exactly what they want among the growing volume of available information. Adaptive genetic algorithms help retrieve the information the user needs accurately, increasing the retrieved relevant files and excluding irrelevant ones. In this study, the researcher explored the problems embedded in this process and attempted to find solutions, such as how to choose the mutation probability and the fitness function. The Cranfield English corpus, a test collection compiled by Cyril Cleverdon and used at the University of Cranfield in 1960, containing 1400 documents and 225 queries, was chosen for simulation purposes. The researcher also used cosine and Jaccard similarity to compute the similarity between the query and the documents, and used two proposed adaptive fitness functions, mutation operators, and adaptive crossover. The process aimed at evaluating the effectiveness of the results in terms of precision and recall. Finally, the study concluded that adaptive genetic algorithms may yield several improvements.
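The two similarity measures used in the study are standard and easy to state in code. A minimal sketch over toy term-frequency vectors (the query and document contents are invented for illustration):

```python
import math

# Sketch: cosine and Jaccard similarity between a query and a document,
# both represented as term-frequency dicts (toy data).

def cosine(q, d):
    """Cosine similarity between two term-frequency dicts."""
    common = set(q) & set(d)
    dot = sum(q[t] * d[t] for t in common)
    nq = math.sqrt(sum(v * v for v in q.values()))
    nd = math.sqrt(sum(v * v for v in d.values()))
    return dot / (nq * nd) if nq and nd else 0.0

def jaccard(q, d):
    """Jaccard similarity between the two term sets (frequencies ignored)."""
    qs, ds = set(q), set(d)
    return len(qs & ds) / len(qs | ds) if qs | ds else 0.0

query = {"genetic": 1, "algorithm": 1, "retrieval": 2}
doc = {"algorithm": 3, "retrieval": 1, "vector": 1}
print(round(cosine(query, doc), 3), round(jaccard(query, doc), 3))
```

Cosine weighs term frequencies, while Jaccard only compares term sets; a GA fitness function built on either rewards candidate queries whose top-ranked documents are relevant.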
USING CUCKOO ALGORITHM FOR ESTIMATING TWO GLSD PARAMETERS AND COMPARING IT WI... (ijcsit)
This study introduces and compares different methods for estimating the two parameters of the generalized logarithmic series distribution: cuckoo search optimization, maximum likelihood estimation, and the method of moments. All the required derivations and the basic steps of each algorithm are explained. The algorithms are applied in simulations using different sample sizes (n = 15, 25, 50, 100), and the results are compared using the mean squared error.
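The comparison protocol above (simulate many samples at each n, score each estimator by mean squared error) can be sketched generically. The GLSD estimators themselves are not reproduced here; as a stand-in, the toy target below is the mean of an exponential distribution, estimated by the sample mean, so only the MSE harness is illustrated:

```python
import random
import statistics

# Sketch of the comparison protocol only: repeatedly simulate samples at
# each sample size and score an estimator by its mean squared error.
# (Toy stand-in: estimating an exponential mean, not the GLSD parameters.)

def mse_of_estimator(estimator, true_value, sample_sizes, reps=2000, seed=1):
    """Monte Carlo MSE of `estimator` at each sample size in `sample_sizes`."""
    rng = random.Random(seed)
    results = {}
    for n in sample_sizes:
        errors = []
        for _ in range(reps):
            sample = [rng.expovariate(1.0 / true_value) for _ in range(n)]
            errors.append((estimator(sample) - true_value) ** 2)
        results[n] = statistics.fmean(errors)
    return results

mse = mse_of_estimator(statistics.fmean, true_value=2.0,
                       sample_sizes=[15, 25, 50, 100])
for n, e in mse.items():
    print(n, round(e, 4))  # MSE shrinks as n grows
```

For a consistent estimator the MSE should fall roughly as 1/n, which is the pattern such simulation studies check across their competing estimators.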
Model-based Regression Testing of Autonomous Robots (Zoltan Micskei)
Slides for the paper in 18th Int. Conf. on System Design Languages (SDL 2017). We present a method and a case study on how model-based regression testing can be achieved in the context of autonomous robots. The method uses information from several domain-specific languages for modeling the robot’s context and configuration. Our approach is implemented in a prototype tool, and its scalability is evaluated on models from the case study.
Weather Prediction Model using Random Forest Algorithm and Apache Spark (ijtsrd)
One of the greatest challenges meteorological departments face is predicting the weather accurately. These predictions are important because they influence daily life and affect the economy of a state or even a nation. Weather predictions are also necessary since they form the first level of preparation against natural disasters, which may make the difference between life and death; they also help to reduce the loss of resources and to minimize the mitigation steps expected after a natural disaster occurs. This research work focuses on analyzing big-data algorithms suitable for weather prediction and highlights a performance analysis of Random Forest algorithms in the Spark framework. Thin Thin Swe | Phyu Phyu | Sandar Pa Pa Thein, "Weather Prediction Model using Random Forest Algorithm and Apache Spark", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-6, October 2019. URL: https://www.ijtsrd.com/papers/ijtsrd29133.pdf Paper URL: https://www.ijtsrd.com/computer-science/database/29133/weather-prediction-model-using-random-forest-algorithm-and-apache-spark/thin-thin-swe
Static Memory Management for Efficient Mobile Sensing Applications (Farley Lai)
Memory management is a crucial aspect of mobile sensing applications that must process high-rate data streams in an energy-efficient manner. Our work is done in the context of synchronous data-flow models in which applications are implemented as a graph of components that exchange data at fixed and known rates over FIFO channels. In this paper, we show that it is feasible to leverage the restricted semantics of synchronous data-flow models to optimize memory management. Our memory optimization approach includes two components: (1) We use abstract interpretation to analyze the complete memory behavior of a mobile sensing application and identify data sharing opportunities across components according to the live ranges of exchanged samples. Experiments indicate that the static analysis is precise for a majority of considered stream applications whose control logic does not depend on input data. (2) We propose novel heuristics for memory allocation that leverage the graph structure of applications to optimize data exchanges between application components to achieve not only significantly lower memory footprints but also increased stream processing throughput.
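The buffer-sharing idea above (samples whose live ranges do not overlap can share memory) can be sketched as a greedy interval allocator. This is a simplified stand-in for the paper's analysis, not its actual heuristics, and the sample names and ranges are invented:

```python
# Sketch: greedy buffer sharing from statically known live ranges.
# Each sample is live over [start, end); samples whose ranges do not
# overlap can reuse the same buffer (data illustrative).

def assign_buffers(live_ranges):
    """Map each sample to a buffer id; non-overlapping ranges share buffers."""
    buffers = []      # end time of the last range placed in each buffer
    assignment = {}
    for name, (start, end) in sorted(live_ranges.items(),
                                     key=lambda kv: kv[1][0]):
        for b, last_end in enumerate(buffers):
            if last_end <= start:        # buffer is free again: reuse it
                buffers[b] = end
                assignment[name] = b
                break
        else:
            buffers.append(end)          # no free buffer: allocate a new one
            assignment[name] = len(buffers) - 1
    return assignment, len(buffers)

ranges = {"s0": (0, 3), "s1": (1, 4), "s2": (3, 6), "s3": (4, 7)}
assignment, n_buffers = assign_buffers(ranges)
print(assignment, n_buffers)  # four samples fit in two buffers
```

Because synchronous data-flow rates are fixed and known, such live ranges can be computed entirely at compile time, which is what makes the static footprint reduction possible.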
Short project presentation of JAMES II, given on May 23 during the project week (topic: "The first step towards open source") at the Institute of Computer Science, University of Rostock.
A Day in the Life of a Bug --- contributing meaningfully to open source projects (Roland Ewald)
Short talk on ways to get involved in open source projects, with a focus on bugs (reporting and fixing). Given on May 23 during the project week (topic: "The first step towards open source") at the Institute of Computer Science, University of Rostock.
More Related Content
Similar to Using AI Planning to Automate the Performance Analysis of Simulators
Evaluating Simulation Software Components with Player Rating Systems (SIMUToo... (Roland Ewald)
In component-based simulation systems, simulation runs are usually executed by combinations of distinct components, each solving a particular sub-task. If multiple components are available for a given sub-task (e.g., different event queue implementations), a simulation system may rely on an automatic selection mechanism, on a user decision, or --- if neither is available --- on a predefined default component. However, deciding upon a default component for each kind of sub-task is difficult: such a component should work well across various application domains and various combinations with other components. Furthermore, the performance of individual components cannot be evaluated easily, since performance is typically measured for component combinations as a whole (e.g., the execution time of a simulation run). Finally, the selection of default components should be dynamic, as new and potentially superior components may be deployed to the system over time. We illustrate how player rating systems for team-based games can solve the above problems and evaluate our approach with an implementation of the TrueSkill(tm) rating system (Herbrich et al., 2007), applied in the context of the open-source modeling and simulation framework JAMES II. We also show how such systems can be used to steer performance analysis experiments for component ranking.
The paper can be found here: https://docs.google.com/file/d/0BxPrl7QoBqmoUDVXNmZUc29Nbmc/edit?usp=sharing
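The team-rating idea above can be illustrated with a much simpler stand-in: TrueSkill is a Bayesian model, but an Elo-style update already shows how every member of the faster component combination gets credited from one whole-run comparison. The component names, initial ratings, and parameters below are all illustrative:

```python
# Sketch: rating components from the outcome of whole-combination runs.
# This simplified Elo-style update is only a stand-in for TrueSkill;
# it credits every member of the winning (faster) combination.

def update_ratings(ratings, winners, losers, k=16.0):
    """Shift ratings of two component teams after one pairwise comparison."""
    team_w = sum(ratings[c] for c in winners) / len(winners)
    team_l = sum(ratings[c] for c in losers) / len(losers)
    expected_w = 1.0 / (1.0 + 10 ** ((team_l - team_w) / 400.0))
    delta = k * (1.0 - expected_w)   # surprise factor of the observed win
    for c in winners:
        ratings[c] += delta
    for c in losers:
        ratings[c] -= delta

ratings = {"heap_queue": 1000.0, "list_queue": 1000.0,
           "rng_a": 1000.0, "rng_b": 1000.0}
# the (heap_queue, rng_a) combination finished a simulation run faster:
update_ratings(ratings, winners=["heap_queue", "rng_a"],
               losers=["list_queue", "rng_b"])
print(ratings["heap_queue"] > ratings["list_queue"])  # True
```

Over many runs with varying team compositions, the per-component ratings separate, letting the system pick a default component without ever measuring it in isolation.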
Developing and Using Domain-Specific Languages (DSLs) (Roland Ewald)
Talk for the Java User Group Rostock (https://sites.google.com/site/jughro) on January 16, 2013, about DSLs in general and the book "Domain-Specific Languages" by Martin Fowler (http://martinfowler.com/dsl.html).
Slides for the poster madness at the WSC '12 (http://wintersim.org), presenting a domain-specific language for setting up simulation experiments (see http://sessl.org for details).
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... (University of Maribor)
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Seminar on U.V. Spectroscopy by Samir Panda
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
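The quantitative basis of UV-Vis measurement is the Beer-Lambert law, A = ε·l·c: absorbance is computed from transmitted versus incident intensity, and concentration follows from it. A minimal sketch with illustrative numbers (the intensities and molar absorptivity below are invented):

```python
import math

# Sketch: the Beer-Lambert law behind UV-Vis quantitation, A = epsilon*l*c.
# Absorbance comes from transmitted vs incident light intensity.

def absorbance(incident, transmitted):
    """A = log10(I0 / I)."""
    return math.log10(incident / transmitted)

def concentration(A, epsilon, path_cm=1.0):
    """c = A / (epsilon * l), in mol/L for epsilon in L/(mol*cm)."""
    return A / (epsilon * path_cm)

# 75% of the light is absorbed, so 25 of 100 units are transmitted:
A = absorbance(incident=100.0, transmitted=25.0)
print(round(A, 3))                        # 0.602
print(concentration(A, epsilon=12000.0))  # about 5e-5 mol/L
```

The law holds for dilute solutions; at high concentrations absorbance deviates from linearity, which is why calibration curves are measured in practice.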
Professional air quality monitoring systems provide immediate, on-site data for analysis, compliance, and decision-making.
Monitor common gases, weather parameters, particulates.
Toxic effects of heavy metals : Lead and Arsenicsanjana502982
Heavy metals are naturally occuring metallic chemical elements that have relatively high density, and are toxic at even low concentrations. All toxic metals are termed as heavy metals irrespective of their atomic mass and density, eg. arsenic, lead, mercury, cadmium, thallium, chromium, etc.
DERIVATION OF MODIFIED BERNOULLI EQUATION WITH VISCOUS EFFECTS AND TERMINAL V...Wasswaderrick3
In this book, we use conservation of energy techniques on a fluid element to derive the Modified Bernoulli equation of flow with viscous or friction effects. We derive the general equation of flow/ velocity and then from this we derive the Pouiselle flow equation, the transition flow equation and the turbulent flow equation. In the situations where there are no viscous effects , the equation reduces to the Bernoulli equation. From experimental results, we are able to include other terms in the Bernoulli equation. We also look at cases where pressure gradients exist. We use the Modified Bernoulli equation to derive equations of flow rate for pipes of different cross sectional areas connected together. We also extend our techniques of energy conservation to a sphere falling in a viscous medium under the effect of gravity. We demonstrate Stokes equation of terminal velocity and turbulent flow equation. We look at a way of calculating the time taken for a body to fall in a viscous medium. We also look at the general equation of terminal velocity.
Salas, V. (2024) "John of St. Thomas (Poinsot) on the Science of Sacred Theol...Studia Poinsotiana
I Introduction
II Subalternation and Theology
III Theology and Dogmatic Declarations
IV The Mixed Principles of Theology
V Virtual Revelation: The Unity of Theology
VI Theology as a Natural Science
VII Theology’s Certitude
VIII Conclusion
Notes
Bibliography
All the contents are fully attributable to the author, Doctor Victor Salas. Should you wish to get this text republished, get in touch with the author or the editorial committee of the Studia Poinsotiana. Insofar as possible, we will be happy to broker your contact.
Nutraceutical market, scope and growth: Herbal drug technologyLokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market—which includes goods like functional meals, drinks, and dietary supplements that provide health advantages beyond basic nutrition—is growing significantly. As healthcare expenses rise, the population ages, and people want natural and preventative health solutions more and more, this industry is increasing quickly. Further driving market expansion are product formulation innovations and the use of cutting-edge technology for customized nutrition. With its worldwide reach, the nutraceutical industry is expected to keep growing and provide significant chances for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes
on Io’s surface have been monitored from both spacecraft and ground-based telescopes.
Here, we present the highest spatial resolution images of Io ever obtained from a groundbased telescope. These images, acquired by the SHARK-VIS instrument on the Large
Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images
show that a plume deposit from a powerful eruption at Pillan Patera has covered part
of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive
optics at visible wavelengths.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
Deep Software Variability and Frictionless Reproducibility
Using AI Planning to Automate the Performance Analysis of Simulators
1. Using AI Planning to Automate the Performance Analysis of Simulators
19. 3. 2014, SIMUTools 2014 — Lisbon, Portugal
Roland Ewald
(for own content)
Sponsored by:
19. 3. 2014 © 2014 UNIVERSITÄT ROSTOCK | MODELING & SIMULATION RESEARCH GROUP 1
2. [Slide image: page from King et al.'s Science article on the Robot Scientist Adam, covering Adam's laboratory automation components, its hypothesis-led experiment cycles on yeast lysine biosynthesis (hypotheses for genes YER152C, YJL060W, and YGL202W, confirmed by in vitro enzyme assays), and the LABORS ontology — a customization of EXPO in OWL-DL — used to formalize the experiments.]
Left: Victorian era student microscope, by Tom Blackwell (flickr.com, CC-BY-NC)
Right: King et al., The automation of science, Science 324 (5923), pp. 85-89, Apr. 2009.
3. What about Simulator Performance Analysis?
[Figure: bar charts of execution time for three simulators, shown side by side for 1989 and 2014.]
4. The Problem
• Manual analysis is cumbersome (less exploration, more errors)
• Constant reinvention of the wheel (scripts, statistics, design)
• Implicit, undetected bias (little reproducibility, credibility, efficiency)
5. The Problem
• Manual analysis is cumbersome (less exploration, more errors)
• Constant reinvention of the wheel (scripts, statistics, design)
• Implicit, undetected bias (little reproducibility, credibility, efficiency)
A performance study has context & is done for a reason.
Let the user express this, automate everything else.
6. Scenario: Comparing two Simulators
Hypothesis:
∀m ∈ M : moreEntities(m, xhyp) ⇒ faster(B, A, m)
• A, B: simulators
• M: models (problem space)
• moreEntities, faster: predicates
• xhyp: hypothetical threshold
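Read as an executable predicate, the hypothesis is simply an implication over measured quantities. A minimal Python sketch — the function names and the model representation are illustrative, not ALESIA's API:

```python
def more_entities(model, x_hyp):
    """Premise: does the model exceed the hypothesized entity threshold x_hyp?"""
    return model["entities"] > x_hyp

def faster_b_than_a(time_a, time_b):
    """Conclusion: simulator B beat simulator A on this model."""
    return time_b < time_a

def hypothesis_holds(model, x_hyp, time_a, time_b):
    """An implication p => q is equivalent to (not p) or q."""
    return (not more_entities(model, x_hyp)) or faster_b_than_a(time_a, time_b)

# Below the threshold, the hypothesis holds vacuously, whatever the timings:
assert hypothesis_holds({"entities": 10}, x_hyp=100, time_a=1.0, time_b=9.0)
# Above the threshold, B must actually be faster:
assert not hypothesis_holds({"entities": 500}, x_hyp=100, time_a=1.0, time_b=9.0)
```

A single model above the threshold on which B is slower falsifies the universally quantified hypothesis.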
7. Non-deterministic Outcomes
∀m ∈ M : moreEntities(m, xhyp) ⇒ faster(B, A, m)
1. B outperforms A: faster(B, A, m) ∧ ¬faster(A, B, m)
2. A outperforms B: faster(A, B, m) ∧ ¬faster(B, A, m)
3. A and B perform similarly: ¬faster(A, B, m) ∧ ¬faster(B, A, m)
4. A, B, or both crashed.
5. Another error occurred.
8. Falsification Approaches
∀m ∈ M : moreEntities(m, xhyp) ⇒ faster(B, A, m)
1. Randomly sample m ∈ M (computing budget allocation?)
2. Statistical tests (runtime distribution?)
3. Train performance predictors, try only promising m ∈ M
4. Analyze sensitivity of A and B performance, sample accordingly
9. Falsification Approaches
∀m ∈ M : moreEntities(m, xhyp) ⇒ faster(B, A, m)
1. Randomly sample m ∈ M (computing budget allocation?)
2. Statistical tests (runtime distribution?)
3. Train performance predictors, try only promising m ∈ M
4. Analyze sensitivity of A and B performance, sample accordingly
5. ... or other experiment sequences.
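Approach 1 — random sampling — can be sketched as a rejection loop that stops at the first counterexample. A hedged Python illustration (`run_a`/`run_b` stand in for actual simulation runs; budget allocation and statistics are omitted):

```python
import random

def falsify_by_sampling(sample_model, run_a, run_b, x_hyp, budget=100):
    """Draw up to `budget` models from M; return a counterexample or None.

    sample_model() draws a random model; run_a/run_b measure the
    execution times of simulators A and B on a given model.
    """
    for _ in range(budget):
        m = sample_model()
        if m["entities"] > x_hyp:          # premise holds...
            if not (run_b(m) < run_a(m)):  # ...so B must be faster
                return m                   # counterexample: hypothesis falsified
    return None                            # hypothesis survived the budget

# Toy instance: A scales linearly with entity count, B is constant-time,
# so B wins on every model above the threshold and no counterexample exists.
random.seed(1)
sample = lambda: {"entities": random.randint(1, 1000)}
assert falsify_by_sampling(sample,
                           run_a=lambda m: m["entities"] * 1e-3,
                           run_b=lambda m: 0.05,
                           x_hyp=100) is None
```

Failing to find a counterexample within the budget corroborates, but of course never proves, the hypothesis.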
10. ALESIA
A system for automatic simulator performance analysis.
• Hypothesis-driven experimentation
• Support for performance predictors (knowledge representation)
• Automatic result analysis ⇒ feedback loop
• Integration of methods from statistics, operations research, AI, etc.
11. ALESIA: Overall Structure
[Architecture diagram: the user's Problem is turned into a Formal Definition; the Planning component produces a Plan; Plan Execution translates it into SESSL experiment definitions and runs them on an exchangeable simulation system; Observed Data flow back into planning until a Result is reported.]
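At its heart, ALESIA's structure is a plan–execute–analyze feedback loop. A purely illustrative Python outline (the actual component interfaces differ):

```python
def analysis_loop(problem, plan, execute, analyze, done, max_rounds=10):
    """Hypothesis-driven feedback loop: plan -> execute -> analyze -> replan."""
    observations = []
    for _ in range(max_rounds):
        policy = plan(problem, observations)  # planner produces a policy
        data = execute(policy)                # run experiments (e.g., via SESSL)
        observations.append(data)
        result = analyze(observations)        # e.g., statistical tests
        if done(result):
            return result                     # hypothesis confirmed or falsified
    return None                               # budget exhausted, inconclusive

# Stubbed round-trip: stop once three batches of observations are in.
assert analysis_loop(problem=None,
                     plan=lambda p, obs: len(obs),
                     execute=lambda policy: policy,
                     analyze=lambda obs: len(obs),
                     done=lambda r: r >= 3) == 3
```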
12. Sample Input
1. User domain (context)
2. Preferences
3. Goal (¬hypothesis)
val result = submit {
SingleModel("java://examples.sr.LinearChainSystem")
} {
TerminateWhen(WallClockTimeMaximum(seconds = 30))
} {
exists >> model | hasProperty("qss")
}
13. Sample Actions
• Generated by ActionSpecification instances
• Depend on user domain
• Non-deterministic
Action loadSingleModel:
precondition: ¬depleted ∧ ¬loadedModel
effect: depleted ∨ loadedModel
Action checkQSSProperty:
precondition: loadedModel
effect: hasProperty(qss) ∨ (¬hasProperty(qss) ∧ ¬loadedModel)
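Such non-deterministic actions can be encoded as a precondition over boolean fluents plus a set of alternative outcomes. A small Python sketch using the two actions from this slide (the STRIPS-style encoding is illustrative, not ALESIA's internal representation):

```python
class Action:
    """Non-deterministic STRIPS-style action: one precondition, several outcomes."""
    def __init__(self, name, pos_pre, neg_pre, outcomes):
        self.name = name
        self.pos_pre = frozenset(pos_pre)  # fluents that must hold
        self.neg_pre = frozenset(neg_pre)  # fluents that must not hold
        self.outcomes = outcomes           # list of (add-set, delete-set) pairs

    def applicable(self, state):
        return self.pos_pre <= state and not (self.neg_pre & state)

    def successors(self, state):
        return [frozenset((state - dele) | add) for add, dele in self.outcomes]

load_single_model = Action("loadSingleModel",
                           pos_pre=[], neg_pre=["depleted", "loadedModel"],
                           outcomes=[({"loadedModel"}, set()),  # a model was loaded
                                     ({"depleted"}, set())])    # model source exhausted

check_qss_property = Action("checkQSSProperty",
                            pos_pre=["loadedModel"], neg_pre=[],
                            outcomes=[({"hasProperty(qss)"}, set()),
                                      (set(), {"loadedModel"})])  # no QSS, model dropped

state = frozenset()
assert load_single_model.applicable(state)
assert frozenset({"loadedModel"}) in load_single_model.successors(state)
```

Because each action has several outcomes, executing it yields one of several successor states — which is exactly why a plan must be a policy rather than a fixed action sequence.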
14. Non-Deterministic AI Planning
[Figure: a binary decision diagram over variables x1, x2, x3 (from Wikipedia, CC BY-SA).]
• Non-determinism: Plan ⇒ Policy
• Symbolic Model Checking
• Binary Decision Diagrams (BDDs)
• Algorithms by Cimatti et al., Rintanen
Planning algorithms:
Cimatti et al.: Weak, strong, and strong cyclic planning via symbolic model checking, Artificial Intelligence, 147(1-2), 2003
Rintanen: Complexity of conditional planning under partial observability and infinite executions, European Conference on AI, 2012
Ghallab, Nau, Traverso: Automated Planning: Theory & Practice, 2004
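The essence of strong planning is a backward fixed point: starting from the goal, keep adding states from which some action reaches the already-solved set under every non-deterministic outcome. An explicit-state Python sketch of that fixed point (planners in the style of Cimatti et al. represent these state sets symbolically as BDDs instead of enumerating them):

```python
def strong_plan(states, actions, goal):
    """Return a policy (state -> action name) that strongly achieves `goal`,
    or None if some state cannot be strongly solved.

    actions: dict mapping an action name to {state: set of possible successors}.
    """
    solved = set(goal)
    policy = {}
    changed = True
    while changed:
        changed = False
        for s in states - solved:
            for name, trans in actions.items():
                succs = trans.get(s)
                # strong plan: every possible outcome must land in `solved`
                if succs and succs <= solved:
                    policy[s] = name
                    solved.add(s)
                    changed = True
                    break
    return policy if solved == set(states) else None

# Tiny domain: s0 --a--> {s1, s2} (non-deterministic), then b reaches the goal g.
states = {"s0", "s1", "s2", "g"}
actions = {"a": {"s0": {"s1", "s2"}},
           "b": {"s1": {"g"}, "s2": {"g"}}}
assert strong_plan(states, actions, {"g"}) == {"s0": "a", "s1": "b", "s2": "b"}
```

A domain whose only action may loop back to its own state has no strong plan under this criterion, though it may still admit a strong-cyclic one.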
15. Sample Plan for Goal hasProperty(qss)
Plan types:
• Weak
• Strong-cyclic
• Strong
if(¬depleted ∧ ¬loadedModel)
use loadSingleModel
else if(loadedModel)
use checkQSSProperty
else error
16. Plan Execution
[Diagram: mapping between the planning domain and the execution domain — planning fluents such as loadedModel and hasProperty(qss) correspond to execution-level entities such as SingleModel("java://examples.sr.LinearChainSystem"), the problem definition, observed execution times, and hasProperty("qss") checks.]
17. Simulation System Independence via SESSL
import sessl._
import sessl.james._ //ALeSiA is used with the JAMES II binding
execute{
new Experiment {
model="java://examples.sr.LinearChainSystem"
replications=10
stopCondition = AfterWallClockTime(seconds = 3) or AfterSimSteps(10000000)
}
}
Ewald, Uhrmacher: SESSL: A Domain-Specific Language for Simulation Experiments, ACM TOMACS 24(2), 2014
18. Example: Simulators for Chemical Reaction Networks
val result = submit {
// Problem Domain (Context):
ModelSet("java://examples.sr.LinearChainSystem",
ModelParameter("numOfSpecies",1,10,1000),
ModelParameter("numOfInitialParticles",10,100,10000)),
SingleSimulator("nrm", NextReactionMethod()),
SingleSimulator("dm", DirectMethod())
} { // Preferences:
DesiredSingleExecutionWallClockTime(seconds = 1),
TerminateWhen(MaxOverallNumberOfActions(100))
} { // Hypothesis:
exists >> model | (hasProperty("qss") and isFasterWCT("dm", "nrm", model))
}
Stochastic Simulation Algorithms:
Gillespie: A general method for numerically simulating the stochastic time evolution of coupled chemical reactions, J Comp Phys 22, 1976
Gibson, Bruck: Efficient Exact Stochastic Simulation of Chemical Systems with Many Species and Many Channels, J Phys Chem A 104(9), 2000
Benchmark Model:
Cao, Li, Petzold: Efficient formulation of the stochastic simulation algorithm for chemically reacting systems, The J Chem Phys 121(9), 2004
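For context, Gillespie's Direct Method (the "dm" simulator above) repeatedly samples an exponential waiting time from the total propensity and then selects a reaction with probability proportional to its propensity. A bare-bones Python sketch for a toy network (not the JAMES II implementation):

```python
import random

def direct_method(x, reactions, propensity, t_end, rng=random.Random(42)):
    """Gillespie's Direct Method on a state vector x of species counts.

    reactions: list of state-change vectors; propensity(i, x) gives the
    rate of reaction i in state x.
    """
    t = 0.0
    while True:
        a = [propensity(i, x) for i in range(len(reactions))]
        a0 = sum(a)
        if a0 == 0.0:
            break                      # nothing can fire anymore
        tau = rng.expovariate(a0)      # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        r, acc = rng.random() * a0, 0.0
        for i, ai in enumerate(a):     # pick reaction i with probability a_i / a0
            acc += ai
            if r <= acc:
                x = [xj + vj for xj, vj in zip(x, reactions[i])]
                break
    return x

# Toy network: irreversible decay S -> 0 with propensity 0.5 * #S
final = direct_method(x=[100], reactions=[[-1]],
                      propensity=lambda i, x: 0.5 * x[0], t_end=100.0)
assert 0 <= final[0] <= 100
```

The Next Reaction Method ("nrm") computes the same trajectories but amortizes work with a priority queue of tentative firing times, which is why the relative performance of the two depends on the model — exactly the kind of hypothesis ALESIA is meant to test.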
20. Summary
• Goal: automate simulator performance analysis
• Method: AI planning and plan execution with feedback loop
• Results:
• Works in principle, but action specification requires more meta-data
• Ongoing: integration of sophisticated methods
• Future: DSLs for context & hypotheses, distributed execution
21. Thank you. Questions?
ALESIA is open source (Apache 2.0 license).
https://bitbucket.org/alesia