P. Jamshidi, M. Velez, C. Kästner, N. Siegmund, and P. Kawthekar. Transfer learning for improving model predictions in highly configurable software. Int’l Symp. Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2017.
Computer aided design of electrical machine - Asif Jamadar
This document discusses computer aided design of electrical machines. It introduces the topic and outlines some key advantages of CAD, such as performing millions of computations quickly, enabling the study of wide parameter variations to find optimal designs, and eliminating tedious calculations. It then describes two main methods of computer aided design - the analysis method and the synthesis method. The analysis method determines machine performance from initial parameters, while the synthesis method uses numerical techniques and iteration to modify variable values to meet desired performance characteristics and find an optimal design.
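The synthesis method's adjust-and-recompute loop can be sketched in a few lines; the performance model, design variable, step size, and tolerance below are invented for illustration, not taken from the document:

```python
def synthesize(target_perf, perf_model, x0, step=0.5, tol=1e-3, max_iter=10_000):
    """Iteratively adjust design variable x until perf_model(x) meets the target."""
    x = x0
    for _ in range(max_iter):
        err = target_perf - perf_model(x)
        if abs(err) < tol:
            return x  # converged to a design meeting the desired performance
        x += step * err  # nudge the variable in the direction that reduces error
    raise RuntimeError("no design found within iteration budget")

# Toy performance model: efficiency rises with conductor cross-section x.
efficiency = lambda x: 1.0 - 0.5 / x
x_opt = synthesize(0.9, efficiency, x0=1.0)
```

The analysis method corresponds to a single call of `perf_model`; the synthesis method wraps it in the numerical iteration shown here.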
This document discusses different methods for estimating software project costs, including their advantages and disadvantages. It identifies two main types of methods - algorithmic methods that use mathematical equations and non-algorithmic methods based on past data and experience. While algorithmic methods provide more accurate estimates, they require more effort. The document recommends using a combination of different estimation methods, comparing results, and regularly re-estimating costs at project milestones to improve accuracy. There is no single best method, as cost estimation depends on the specific project scenario.
Transfer Learning for Improving Model Predictions in Highly Configurable Software - Pooyan Jamshidi
Modern software systems are now being built to be used in dynamic environments utilizing configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact and, therefore, we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: Instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate performance of the real system at low cost.
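One minimal way to illustrate this idea (not the paper's actual model) is to fit a predictor on many cheap simulator samples and then learn a small additive correction from a handful of expensive real measurements:

```python
# Sketch: transfer from a cheap simulator to the real system via an additive
# correction. The simulator, the "real" response, and the linear correction
# model are illustrative assumptions, not the paper's method.
def fit_linear(xs, ys):
    """Least-squares fit y ~ a*x + b for 1-D data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

simulator = lambda c: 2.0 * c + 1.0            # cheap source of samples
real      = lambda c: 2.0 * c + 1.0 + 0.5 * c  # real system differs systematically

# Many cheap simulator samples train the base model...
sim_xs = [i / 10 for i in range(1, 51)]
a, b = fit_linear(sim_xs, [simulator(x) for x in sim_xs])

# ...and a few real measurements train a residual correction.
real_xs = [0.5, 2.0, 4.0]
resid = [real(x) - (a * x + b) for x in real_xs]
ra, rb = fit_linear(real_xs, resid)

predict = lambda c: (a * c + b) + (ra * c + rb)  # simulator model + correction
```

Only three real measurements are needed here because the simulator already captures the overall shape of the response.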
Autonomic Resource Provisioning for Cloud-Based Software - Pooyan Jamshidi
The Third National Conference on Cloud Computing and Commerce (NC4), for more information please refer to: http://computing.dcu.ie/~pjamshidi/PDF/SEAMS2014.pdf
Towards Quality-Aware Development of Big Data Applications with DICE - Pooyan Jamshidi
The document summarizes the DICE Horizon 2020 project, which aims to improve quality-aware development of big data applications. The 3-year project involves 9 partners across 7 EU countries. It seeks to shorten development times and reduce costs and quality incidents for big data projects through model-driven engineering and DevOps approaches. The project will demonstrate its techniques on three big data case studies and has milestones to define requirements, provide tools, and define its integrated architecture.
The document describes a configuration optimization tool that aims to automatically optimize the configuration of big data technologies. It does this by running experiments on data intensive applications, measuring performance under different configurations, and using this data to recommend optimal configurations. The tool implements two approaches for optimization - Bayesian optimization and transfer learning. It consists of several components, including an experimental suite to run tests, an optimization module, interfaces to various big data technologies, and a performance repository to store results. The goal is to help users like SMEs reduce the time and cost of testing and configuring big data applications between releases.
An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing Systems - Pooyan Jamshidi
https://arxiv.org/abs/1606.06543
Finding optimal configurations for Stream Processing Systems (SPS) is a challenging problem due to the large number of parameters that can influence their performance and the lack of analytical models to anticipate the effect of a change. To tackle this issue, we consider tuning methods where an experimenter is given a limited budget of experiments and needs to carefully allocate this budget to find optimal configurations. We propose in this setting Bayesian Optimization for Configuration Optimization (BO4CO), an auto-tuning algorithm that leverages Gaussian Processes (GPs) to iteratively capture posterior distributions of the configuration spaces and sequentially drive the experimentation. Validation based on Apache Storm demonstrates that our approach locates optimal configurations within a limited experimental budget, with an improvement of SPS performance typically of at least an order of magnitude compared to existing configuration algorithms.
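The GP machinery behind BO4CO-style tuning can be sketched as follows; the RBF kernel, its length scale, and the confidence-bound acquisition rule are illustrative assumptions rather than BO4CO's exact choices:

```python
import numpy as np

# Posterior over an unknown performance function, from which an acquisition
# rule (here a lower confidence bound, for minimization) picks the next
# configuration to measure.
def rbf(a, b, length_scale=1.0):
    d2 = np.subtract.outer(a, b) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-8):
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_grid, x_obs)
    K_inv = np.linalg.inv(K)
    mu = Ks @ K_inv @ y_obs
    var = 1.0 - np.sum(Ks @ K_inv * Ks, axis=1)  # diag of Kss - Ks K^-1 Ks^T
    return mu, np.sqrt(np.clip(var, 0.0, None))

# Three measured configurations and their (hypothetical) latencies.
x_obs = np.array([0.0, 1.0, 2.5])
y_obs = np.array([3.0, 1.2, 2.0])
grid = np.linspace(0.0, 3.0, 61)

mu, sigma = gp_posterior(x_obs, y_obs, grid)
next_x = grid[np.argmin(mu - 2.0 * sigma)]  # most promising next experiment
```

Each iteration of the budgeted loop would measure `next_x`, append it to the observations, and refit the posterior.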
Transfer Learning for Improving Model Predictions in Robotic Systems - Pooyan Jamshidi
Modern software systems are now being built to be used in dynamic environments utilizing configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact and, therefore, we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: Instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate performance of the real system at low cost.
Title of paper: Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System
Presented at - 9th Workshop on Service Oriented, Holonic and Multi-agent Manufacturing Systems for Industry of the Future
This document discusses software project management and cost estimation. It outlines five basic factors that influence software project costs: size, process, personnel, environment, and required quality. An equation is provided that estimates effort based on these five factors. The document also discusses the importance of cost estimation for feasibility analysis and return on investment calculations. It describes different techniques for software cost estimation including algorithmic modeling, expert judgment, top-down, bottom-up, and estimation by analogy.
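A parametric model of the kind described, with effort driven nonlinearly by size and scaled by the other four factors, might look like this; the coefficients and multiplier values are invented, not the document's actual equation:

```python
# Hedged sketch of a parametric (COCOMO-style) effort model. Nominal values
# of 1.0 leave effort unchanged; multipliers above 1.0 inflate it.
def effort_person_months(size_kloc, process=1.0, personnel=1.0,
                         environment=1.0, quality=1.0,
                         a=2.5, b=1.05):
    """effort = a * size^b * process * personnel * environment * quality"""
    return a * size_kloc ** b * process * personnel * environment * quality

baseline = effort_person_months(32)                 # nominal 32 KLOC project
harder = effort_person_months(32, personnel=1.3,    # weaker team,
                              quality=1.2)          # stricter quality target
```

Re-estimating at each milestone amounts to re-running the model with updated size and factor values.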
Saving resources with simulation webinar 092011 - Scott Althouse
IBM Rational Rhapsody provides solutions to help reduce costs and risks when developing complex products and systems. It allows for early validation and verification of designs through model-based simulation and testing. This helps find defects earlier in the development process when they are cheaper to fix. Rational Rhapsody also improves collaboration, requirements management, and automation of testing.
In the software industry, two software engineering development practices coexist: open-source and closed-source software. The former has shared code to which anyone can contribute, whereas the latter has proprietary code that only the owner can access. Software reliability is crucial in the industry when a new product or update is released. Applying meta-heuristic optimization algorithms to closed-source software reliability prediction has produced significant and accurate results. Open-source software now dominates the landscape of cloud-based systems. Therefore, providing results on open-source software reliability, as a quality indicator, would greatly help solve the open-source software reliability growth-modelling problem.
This document discusses software cost estimation. It covers fundamentals of cost estimation including cost components. It discusses cost estimation during the software lifecycle and the general process. It then describes several methods for cost estimation - algorithmic/parametric models, expert judgment, top-down, bottom-up, analogy, and price to win. It stresses the importance of accurate cost estimation and concludes by listing references.
implementing_ai_for_improved_performance_testing_the_key_to_success.pptx - sarah david
Experience a revolution in software testing with our AI-driven Performance Testing solutions at Cuneiform Consulting. In a world dominated by technological advancements, implementing AI is the key to unlocking unparalleled software performance. Boost your applications with speed, scalability, and responsiveness, ensuring a seamless user experience. Cuneiform Consulting leads the way in reshaping quality assurance, adhering to the predictions of the World Quality Report for AI's significant role in the next decade. Join us to stay ahead, save costs with constant AI-powered testing, and explore the boundless possibilities of AI/ML development services. Contact us now for a future-proof digital transformation!
This document contains the resume of Srikanth Rangaraju summarizing his professional experience in Quality Assurance and Testing for various IT projects over 12 years. He has extensive experience in designing and executing automated test cases as well as performance, load, and cloud testing. Srikanth also has skills in programming languages, frameworks, version control, and project management tools.
A Defect Prediction Model for Software Product based on ANFIS - IJSRD
Artificial intelligence techniques are increasingly used in classification and prediction tasks such as environmental monitoring, stock market analysis, biomedical diagnosis, and software engineering. However, selecting training criteria for the design of AI prediction models remains a challenge. This work focuses on developing a defect prediction mechanism using software metric data from the KC1 dataset. A subtractive clustering approach is used to generate a fuzzy inference system (FIS); the FIS rules are generated at different radii of influence of the input attribute vectors, and the resulting rules are further refined with the ANFIS technique to predict the number of defects in a software project using a fuzzy logic system.
Development of software defect prediction system using artificial neural network - IJAAS Team
Software testing is an activity meant to ensure a system is bug-free during execution. Software bug prediction is one of the most promising activities in the testing phase of the software development life cycle. In this paper, a framework was developed to predict defect-prone modules so that software quality assurance effort can be better prioritized. A genetic algorithm was used to extract relevant features from the acquired datasets to reduce the possibility of overfitting, and the selected features were classified into defective or non-defective modules using an artificial neural network. The system was implemented in the MATLAB (R2018a) runtime environment using a statistical toolkit, and its performance was assessed in terms of accuracy, precision, recall, and F-score. In the conducted experiments, the results showed accuracy, precision, recall, and F-score of 86.93%, 53.49%, 79.31%, and 63.89% for ECLIPSE JDT CORE; 83.28%, 31.91%, 45.45%, and 37.50% for ECLIPSE PDE UI; 83.43%, 57.69%, 45.45%, and 50.84% for EQUINOX FRAMEWORK; and 91.30%, 33.33%, 50.00%, and 40.00% for LUCENE. This paper presents an improved predictive system for software defect detection.
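The GA feature-selection step can be sketched as follows; to keep the example self-contained, a toy per-feature relevance score stands in for the fitness that would normally come from training and evaluating the neural network on the selected features:

```python
import random

# Evolve bit masks over features; each mask's fitness rewards selecting
# relevant features and penalizes mask size. RELEVANCE is hypothetical.
random.seed(0)
RELEVANCE = [0.9, 0.1, 0.8, 0.05, 0.7, 0.2]

def fitness(mask):
    return sum(r for r, m in zip(RELEVANCE, mask) if m) - 0.3 * sum(mask)

def evolve(n_feat=6, pop_size=20, generations=40, mut_rate=0.1):
    pop = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n_feat)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < mut_rate else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best_mask = evolve()
```

In the paper's setting, `fitness` would wrap training the ANN on the masked feature set and returning its validation score.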
IJERD (www.ijerd.com) International Journal of Engineering Research and Development - IJERD Editor
The document presents a hybrid neural network model using particle swarm optimization (PSO) to evaluate the quality of object-oriented software modules by predicting fault-prone components. The model trains a neural network using PSO on 80% of a dataset containing code attributes from NASA projects. It then tests the trained network on the remaining 20% and calculates accuracy, mean absolute error, and root mean squared error at different iterations, showing improved results as iterations increase. Compared to other methods, the PSO-trained neural network achieves higher accuracy and lower errors in fault prediction.
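The PSO training idea can be sketched with the standard velocity and position updates; a simple quadratic stands in for the network's error surface so the example stays self-contained:

```python
import random

# Standard PSO: each particle tracks its personal best, and the swarm tracks
# a global best; velocities blend inertia with pulls toward both bests.
random.seed(1)
loss = lambda w: sum(x * x for x in w)  # stand-in for network training error

def pso(dim=3, particles=15, iters=60, inertia=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=loss)[:]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if loss(pos[i]) < loss(pbest[i]):
                pbest[i] = pos[i][:]
                if loss(pbest[i]) < loss(gbest):
                    gbest = pbest[i][:]
    return gbest

weights = pso()
```

Replacing `loss` with a neural network's training error over its weight vector recovers the PSO-trained network described above.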
A Machine learning based framework for Verification and Validation of Massive Scale Image Data - IRJET Journal
This document presents a machine learning based framework for verification and validation of massive scale image data. It discusses the challenges of managing and analyzing large image datasets. The proposed framework uses techniques like data augmentation, feature extraction and selection, decision trees, cross-validation and test cases to systematically manage massive image data and validate machine learning algorithms and systems. It uses Cell Morphology Analysis (CMA) as a case study to demonstrate how the framework can verify and validate large datasets, software systems and algorithms. The effectiveness of the framework is shown through its application to CMA, which involves classifying cell images using machine learning.
Trends in Embedded Software Engineering - Aditya Kamble
This document summarizes trends in embedded software engineering, including moving from traditional coding to model-driven engineering and domain-specific development. It also discusses quality assurance techniques for safety-critical systems, such as static and formal verification as well as dynamic testing. Model-based approaches like fault tree analysis and measurement-based reliability growth models are presented for safety and reliability analysis. Overall, the document outlines challenges in developing complex embedded systems and the need for continued advances in systematic engineering technologies.
Towards a Macrobenchmark Framework for Performance Analysis of Java Applications - Gábor Szárnyas
This document discusses the need for macrobenchmarks to evaluate the performance and scalability of large model querying systems. It presents the Train Benchmark, which measures the performance of validation queries on randomly generated railway network models of increasing sizes. The benchmark includes loading models, running validation queries to detect errors, transforming models by injecting faults, and revalidating. It aims to provide a realistic and scalable way to assess model querying tools for domains like software engineering, where models can contain billions of elements.
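The benchmark's phase structure (load, validate, transform, revalidate) can be driven by a small timing harness; the phase bodies below are placeholders, not the Train Benchmark's code:

```python
import time

# Time each benchmark phase in sequence and collect the results.
def run_benchmark(phases):
    timings = {}
    for name, fn in phases:
        start = time.perf_counter()
        fn()
        timings[name] = time.perf_counter() - start
    return timings

model = []
phases = [
    ("load",       lambda: model.extend(range(10_000))),       # build the model
    ("validate",   lambda: [x for x in model if x % 97 == 0]), # query for errors
    ("transform",  lambda: model.append(-1)),                  # inject a fault
    ("revalidate", lambda: [x for x in model if x < 0]),       # detect it
]
timings = run_benchmark(phases)
```

Scaling studies repeat this harness over generated models of increasing size and compare per-phase timings across tools.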
Software performance simulation strategies for high-level embedded system design - Mr. Chanuwan
This document discusses software performance simulation strategies for high-level embedded system design. It introduces instruction set simulators (ISSs), which are accurate but slow. It also discusses binary-level simulation (BLS) and source-level simulation (SLS), which are faster approaches based on native execution. The document proposes a new approach called intermediate source code instrumentation based simulation (iSciSim) that aims to achieve high accuracy, speed, and low complexity for system-level design space exploration.
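The source-level simulation idea can be illustrated by instrumentation: each basic block charges its estimated cost to a counter while the code runs natively. The per-block cycle costs below are invented numbers, not measurements:

```python
# Minimal sketch of source-level timing instrumentation.
class CycleCounter:
    def __init__(self):
        self.cycles = 0

    def charge(self, n):
        self.cycles += n  # cost annotation inserted by the instrumenter

def filter_positive(values, cc):
    cc.charge(3)                 # function prologue (assumed cost)
    out = []
    for v in values:
        cc.charge(2)             # loop body: compare + branch
        if v > 0:
            cc.charge(4)         # append path
            out.append(v)
    cc.charge(1)                 # return
    return out

cc = CycleCounter()
result = filter_positive([3, -1, 2, -7], cc)
# cc.cycles now holds the estimated execution time in cycles
```

This captures why such approaches run at near-native speed: the program executes directly, and only the counter updates are extra work.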
A Runtime Evaluation Methodology and Framework for Autonomic Systems - IDES Editor
An autonomic system provides a self-adaptive ability that enables the system to dynamically adjust its behavior in response to environmental changes or system failures. The fundamental process of adaptive behavior in an autonomic system consists of monitoring system and/or environment information, analyzing the monitored information, planning an adaptation policy, and executing the selected policy. Evaluating system utility is a significant part of this process. We propose a novel approach to evaluating autonomic systems at runtime. Our method takes advantage of a goal model that has been widely used in the requirements elicitation phase to capture system requirements. We suggest a state-based goal model that is dynamically activated as the system state changes. In addition, we define types of constraints that can be used to evaluate goal satisfaction levels. We implemented a prototype of an autonomic computing software engine to verify our proposed method. We simulated the behavior of the autonomic computing engine with a home surveillance robot scenario and observed the validity of our proposed method.
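The state-based goal evaluation described above can be sketched as follows; the goals, states, and constraints are invented for illustration:

```python
# Goals carry constraints over the system state and are only active in
# certain states; utility is the satisfaction level of the active goals.
goals = [
    # (name, states in which the goal is active, constraint over the state)
    ("respond_fast", {"patrolling"},         lambda s: s["response_ms"] <= 200),
    ("save_power",   {"idle"},               lambda s: s["power_w"] <= 5),
    ("stay_online",  {"patrolling", "idle"}, lambda s: s["connected"]),
]

def satisfaction(state):
    active = [g for g in goals if state["mode"] in g[1]]
    met = sum(1 for _, _, check in active if check(state))
    return met / len(active) if active else 1.0

state = {"mode": "patrolling", "response_ms": 150,
         "power_w": 12, "connected": True}
level = satisfaction(state)
```

When the mode changes, a different subset of goals activates, so the same state values can yield a different satisfaction level.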
Learning LWF Chain Graphs: A Markov Blanket Discovery Approach - Pooyan Jamshidi
LWF chain graphs were introduced by Lauritzen, Wermuth, and Frydenberg as a generalization of graphical models based on undirected graphs and DAGs. From the causality point of view, in an LWF CG, directed edges represent direct causal effects, while undirected edges represent causal effects due to interference, which occurs when an individual's outcome is influenced by their social interaction with other population members, e.g., in situations that involve contagious agents, educational programs, or social networks. The construction of chain graph models is a challenging task that would be greatly facilitated by automation.
Markov blanket discovery plays an important role in the structure learning of Bayesian networks. It is surprising, however, how little attention it has attracted in the context of learning LWF chain graphs. In this work, we provide a graphical characterization of Markov blankets in chain graphs; the characterization differs from the well-known one for Bayesian networks and generalizes it. We provide a novel, scalable, and sound algorithm for Markov blanket discovery in LWF chain graphs, as well as a sound and scalable constraint-based framework for learning the structure of LWF CGs from faithful, causally sufficient data. With our algorithm, the problem of structure learning is reduced to finding an efficient algorithm for Markov blanket discovery in LWF chain graphs. This greatly simplifies the structure-learning task and makes a wide range of inference/learning problems computationally tractable, because our approach exploits locality.
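For intuition about the Markov blanket discovery this reduction relies on, here is the classic grow-shrink scheme for Bayesian networks (not the paper's chain-graph algorithm), written against an abstract conditional-independence oracle:

```python
# Grow-shrink Markov blanket discovery against a CI oracle ci(x, y, given),
# which returns True when x and y are independent given the set `given`.
def grow_shrink(target, variables, ci):
    mb = set()
    changed = True
    while changed:                      # grow: add variables dependent on target
        changed = False
        for v in sorted(variables - {target} - mb):
            if not ci(target, v, mb):
                mb.add(v)
                changed = True
    for v in sorted(mb):                # shrink: drop false positives
        if ci(target, v, mb - {v}):
            mb.discard(v)
    return mb

# Toy oracle: T is directly influenced by A and B; C is linked to T only
# through A; D is unrelated to T.
def ci(x, y, given):
    if {x, y} == {"T", "C"}:
        return "A" in given   # conditioning on A screens C off from T
    if {x, y} == {"T", "D"}:
        return True
    return False              # T-A and T-B are always dependent

mb = grow_shrink("T", {"A", "B", "C", "D"}, ci)
```

In practice the oracle is replaced by a statistical conditional-independence test on data, which is where soundness and scalability concerns arise.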
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Connectors - Pooyan Jamshidi
We enable reliable and dependable self-adaptations of component connectors in unreliable environments with imperfect monitoring facilities and conflicting user opinions about adaptation policies by developing a framework which comprises: (a) mechanisms for robust model evolution, (b) a method for adaptation reasoning, and (c) tool support that allows an end-to-end application of the developed techniques in real-world domains.
Similar to Sensitivity Analysis for Building Adaptive Robotic Software
Towards a Macrobenchmark Framework for Performance Analysis of Java ApplicationsGábor Szárnyas
This document discusses the need for macrobenchmarks to evaluate the performance and scalability of large model querying systems. It presents the Train Benchmark, which measures the performance of validation queries on randomly generated railway network models of increasing sizes. The benchmark includes loading models, running validation queries to detect errors, transforming models by injecting faults, and revalidating. It aims to provide a realistic and scalable way to assess model querying tools for domains like software engineering, where models can contain billions of elements.
Software performance simulation strategies for high-level embedded system designMr. Chanuwan
This document discusses software performance simulation strategies for high-level embedded system design. It introduces instruction set simulators (ISSs), which are accurate but slow. It also discusses binary-level simulation (BLS) and source-level simulation (SLS), which are faster approaches based on native execution. The document proposes a new approach called intermediate source code instrumentation based simulation (iSciSim) that aims to achieve high accuracy, speed, and low complexity for system-level design space exploration.
A Runtime Evaluation Methodology and Framework for Autonomic SystemsIDES Editor
An autonomic system provides a self-adaptive ability that enables the system to dynamically adjust its behavior in response to environmental changes or system failures. The fundamental process of adaptive behavior in an autonomic system consists of monitoring system and/or environment information, analyzing the monitored information, planning an adaptation policy, and executing the selected policy. Evaluating system utility is a significant part of this process. We propose a novel approach to evaluating an autonomic system at runtime. Our method takes advantage of a goal model, which has been widely used in the requirements elicitation phase to capture system requirements. We suggest a state-based goal model that is dynamically activated as the system state changes. In addition, we define types of constraints that can be used to evaluate the goal satisfaction level. We implemented a prototype of an autonomic computing software engine to verify our proposed method. We simulated the behavior of the autonomic computing engine with a home surveillance robot scenario and observed the validity of our proposed method.
Similar to Sensitivity Analysis for Building Adaptive Robotic Software (20)
Learning LWF Chain Graphs: A Markov Blanket Discovery ApproachPooyan Jamshidi
LWF Chain graphs were introduced by Lauritzen, Wermuth, and Frydenberg as a generalization of graphical models based on undirected graphs and DAGs. From the causality point of view, in an LWF CG: Directed edges represent direct causal effects. Undirected edges represent causal effects due to interference, which occurs when an individual’s outcome is influenced by their social interaction with other population members, e.g., in situations that involve contagious agents, educational programs, or social networks. The construction of chain graph models is a challenging task that would be greatly facilitated by automation.
Markov blanket discovery has an important role in structure learning of Bayesian network. It is surprising, however, how little attention it has attracted in the context of learning LWF chain graphs. In this work, we provide a graphical characterization of Markov blankets in chain graphs. The characterization is different from the well-known one for Bayesian networks and generalizes it. We provide a novel scalable and sound algorithm for Markov blanket discovery in LWF chain graphs. We also provide a sound and scalable constraint-based framework for learning the structure of LWF CGs from faithful causally sufficient data. With the use of our algorithm, the problem of structure learning is reduced to finding an efficient algorithm for Markov blanket discovery in LWF chain graphs. This greatly simplifies the structure-learning task and makes a wide range of inference/learning problems computationally tractable because our approach exploits locality.
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...Pooyan Jamshidi
We enable reliable and dependable self‐adaptations of component connectors in unreliable environments with imperfect monitoring facilities and conflicting user opinions about adaptation policies by developing a framework which comprises: (a) mechanisms for robust model evolution, (b) a method for adaptation reasoning, and (c) tool support that allows an end‐to‐end application of the developed techniques in real‐world domains.
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...Pooyan Jamshidi
Modern cyber-physical systems (e.g., robotics systems) are typically composed of physical and software components, the characteristics of which are likely to change over time. Assumptions about parts of the system made at design time may not hold at run time, especially when a system is deployed for long periods (e.g., over decades). Self-adaptation is designed to find reconfigurations of systems to handle such run-time inconsistencies. Planners can be used to find and enact optimal reconfigurations in such an evolving context. However, for systems that are highly configurable, such planning becomes intractable due to the size of the adaptation space. To overcome this challenge, in this paper we explore an approach that (a) uses machine learning to find Pareto-optimal configurations without needing to explore every configuration and (b) restricts the search space to such configurations to make planning tractable. We explore this in the context of robot missions that need to consider task timeliness and energy consumption. An independent evaluation shows that our approach results in high-quality adaptation plans in uncertain and adversarial environments.
Paper: https://arxiv.org/abs/1903.03920
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...Pooyan Jamshidi
Despite achieving state-of-the-art performance across many domains, machine learning systems are highly vulnerable to subtle adversarial perturbations. Although defense approaches have been proposed in recent years, many have been bypassed by even weak adversarial attacks. Previous studies showed that ensembles created by combining multiple weak defenses (i.e., input data transformations) are still weak. In this talk, I will show that it is indeed possible to construct effective ensembles using weak defenses to block adversarial attacks. However, to do so requires a diverse set of such weak defenses. Based on this motivation, I will present Athena, an extensible framework for building effective defenses to adversarial attacks against machine learning systems. I will talk about the effectiveness of ensemble strategies with a diverse set of many weak defenses that comprise transforming the inputs (e.g., rotation, shifting, noising, denoising, and many more) before feeding them to target deep neural network classifiers. I will also discuss the effectiveness of the ensembles with adversarial examples generated by various adversaries in different threat models. In the second half of the talk, I will explain why building defenses based on the idea of many diverse weak defenses works, when it is most effective, and what its inherent limitations and overhead are.
Transfer Learning for Performance Analysis of Machine Learning SystemsPooyan Jamshidi
This document discusses transfer learning approaches for analyzing the performance of machine learning systems. It begins with the presenter's background and credentials. It then notes that today's most popular systems are highly configurable, but understanding how configurations impact performance is challenging. The document uses a case study of a social media analytics system called SocialSensor to illustrate the opportunity of exploring different configurations to improve performance without extra resources. Testing various configurations of SocialSensor's data processing pipelines revealed that the default was suboptimal, and an optimal configuration found through experimentation significantly outperformed the default and an expert's recommendation. The document concludes that default configurations are often bad, but transfer learning approaches can help identify configurations that noticeably improve performance.
Transfer Learning for Performance Analysis of Configurable Systems:A Causal ...Pooyan Jamshidi
Modern systems (e.g., deep neural networks, big data analytics, and compilers) are highly configurable, which means they expose different performance behavior under different configurations. The fundamental challenge is that one cannot simply measure all configurations due to the sheer size of the configuration space. Transfer learning has been used to reduce the measurement efforts by transferring knowledge about performance behavior of systems across environments. Previously, research has shown that statistical models are indeed transferable across environments. In this work, we investigate identifiability and transportability of causal effects and statistical relations in highly-configurable systems. Our causal analysis agrees with previous exploratory analysis [Jamshidi et al. 2017] and confirms that the causal effects of configuration options can be carried over across environments with high confidence. We expect that the ability to carry over causal relations will enable effective performance analysis of highly-configurable systems.
1) Machine learning systems are increasingly configurable, making their performance behavior complex and difficult to understand. 2) The document discusses a social media monitoring system called SocialSensor, which uses a configurable data processing pipeline. 3) By exploring different system configurations, the performance of SocialSensor could potentially be improved without requiring more resources. This demonstrates an opportunity to optimize performance through configuration tuning.
Integrated Model Discovery and Self-Adaptation of RobotsPooyan Jamshidi
Learn models efficiently under budget constraints to adapt to perturbations such as environmental changes or changes in internal resources.
Modern software-intensive systems are composed of components that are likely to change their behaviour over time (e.g., adding/removing components).
For software to continue to operate under such changes, the assumptions about parts of the system made at design time may not hold at runtime due to uncertainty.
Mechanisms must be put in place that can dynamically learn new models of these assumptions and use them to make decisions about missions, configurations, etc.
Transfer Learning for Performance Analysis of Highly-Configurable SoftwarePooyan Jamshidi
This document discusses using transfer learning to analyze the performance of configurable software systems. It begins by noting that today's most popular software systems are highly configurable and that their increasing configurability makes understanding performance behavior difficult. The author then describes using transfer learning to enable learning performance models more efficiently by reusing data from related source domains. This allows developers and users to better understand performance tradeoffs and find optimal configurations.
Architectural Tradeoff in Learning-Based SoftwarePooyan Jamshidi
In classical software development, developers write explicit instructions in a programming language to hardcode the explicit behavior of software systems. By writing each line of code, the programmer instructs the software to have the desirable behavior by exploring a specific point in program space.
Recently, however, software systems are adding learning components that, instead of hardcoding an explicit behavior, learn a behavior through data. The learning-intensive software systems are written in terms of models and their parameters that need to be adjusted based on data. In learning-enabled systems, we specify some constraints on the behavior of a desirable program (e.g., a data set of input–output pairs of examples) and use the computational resources to search through the program space to find a program that satisfies the constraints. In neural networks, we restrict the search to a continuous subset of the program space.
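As a minimal illustration of this framing, the "program" below is a point (w, b) in a continuous program space, the constraints are a data set of input-output pairs, and gradient descent searches the space for a program that satisfies them. All numbers here are invented for the example; this is a sketch of the idea, not the talk's actual models.

```python
import numpy as np

# Constraints on the desired program: a data set of input-output pairs.
# The hidden target behavior is y = 3x + 1 (an illustrative assumption).
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = 3.0 * xs + 1.0

# The "program" is a point (w, b) in a continuous subset of program space.
w, b = 0.0, 0.0

# Search program space by gradient descent on the constraint violation
# (mean squared error over the example pairs).
lr = 0.05
for _ in range(2000):
    err = (w * xs + b) - ys
    w -= lr * 2.0 * np.mean(err * xs)
    b -= lr * 2.0 * np.mean(err)

print(round(w, 2), round(b, 2))  # converges to a program close to y = 3x + 1
```

The same search-under-constraints view scales up to neural networks, where (w, b) becomes millions of parameters and the search is restricted to a continuous subset of program space.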
This talk provides experimental evidence of making tradeoffs for deep neural network models, using the Deep Neural Network Architecture system as a case study. Concrete experimental results are presented; also featured are additional case studies in big data (Storm, Cassandra), data analytics (configurable boosting algorithms), and robotics applications.
Production-Ready Machine Learning for the Software ArchitectPooyan Jamshidi
This document summarizes a guest lecture on building production-ready machine learning systems. The lecturer discusses how a startup called Sniffable, which created an Instagram-like app for dogs, tried to build a machine learning system called Pooch Predictor to predict how popular photos would be. However, Pooch Predictor failed repeatedly in production due to issues like not retraining models, tight coupling between components, and treating ML like a transactional rather than dynamic system. The lecturer emphasizes that ML systems must be reactive, responsive, resilient, elastic, and message-driven to succeed at scale.
Transfer Learning for Software Performance Analysis: An Exploratory AnalysisPooyan Jamshidi
The document discusses transfer learning for building performance models of configurable software systems. Building accurate performance models through direct measurement is challenging due to the large configuration space and environmental factors. Transfer learning aims to address this by leveraging knowledge from performance models built for related systems or environments to improve the learning process for new systems and environments. The goal is to develop techniques that allow predicting and optimizing performance for configurable systems across changing environments.
Learning Software Performance Models for Dynamic and Uncertain EnvironmentsPooyan Jamshidi
This document provides background on Pooyan Jamshidi's research related to learning software performance models for dynamic and uncertain environments. It summarizes his past work developing techniques for modeling and optimizing performance across different systems and environments, including using transfer learning to reuse performance data from related sources to build more accurate models with fewer measurements. It also outlines opportunities for using transfer learning to adapt performance models to new environments and systems.
The document discusses using machine learning techniques like Gaussian processes (GPs) to optimize the configuration of software systems. It notes that software performance landscapes are often complex, with non-linear interactions between parameters and non-convex response surfaces. Measurements are also subject to noise. The document introduces an approach called TL4CO that uses multi-task Gaussian processes to model software performance across different versions/deployments, allowing it to leverage data from other versions to improve optimization. This helps address challenges in DevOps where new versions are continuously delivered.
Fuzzy Self-Learning Controllers for Elasticity Management in Dynamic Cloud Ar...Pooyan Jamshidi
(1) The document discusses challenges in managing elasticity in cloud architectures due to unpredictable demand and uncertainty in measurements. (2) It proposes a fuzzy self-learning controller called RobusT2Scale that uses type-2 fuzzy logic to qualitatively specify thresholds and make robust scaling decisions despite uncertainty. (3) Experimental results show that RobusT2Scale is able to guarantee service level agreements while avoiding over- and under-provisioning of resources compared to other approaches.
Configuration Optimization for Big Data SoftwarePooyan Jamshidi
The document discusses configuration optimization for big data software using an approach developed in the DICE project funded by the European Union's Horizon 2020 program. It describes optimizing configurations for Apache Storm and Cassandra to significantly reduce configuration time. Experiments showed large performance variations between configurations and that default settings often performed poorly compared to optimized settings. Tuning on one version did not guarantee good performance on other versions, but transferring more observations from other versions improved performance, though with diminishing returns due to increased optimization costs.
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...Pooyan Jamshidi
A look at the searches related to the term “microservices” on Google Trends revealed that the top searches are now technology driven. This implies that the time of general search terms such as “What is microservices?” has now long passed. Not only are software vendors (for example, IBM and Microsoft) using microservices and DevOps practices, but also content providers (for example, Netflix and the BBC) have adopted and are using them.
I report on experiences and lessons learned during incremental migration and architectural refactoring of a commercial mobile back end as a service to microservices architecture. I explain how we adopted DevOps and how this facilitated a smooth migration towards Microservices architecture.
This document discusses self-learning cloud controllers that can dynamically scale cloud resources. It notes that current auto-scaling approaches require deep application knowledge and expertise to determine scaling parameters and policies. The paper proposes a type-2 fuzzy logic approach called RobusT2Scale that uses fuzzy rules and monitoring data to determine scaling actions. It aims to handle uncertainty in elastic systems and accommodate different user preferences through fuzzy reasoning over workload and response time data. The approach pre-computes scaling decisions to enable efficient runtime elasticity control. It is evaluated based on its ability to meet an SLA target response time compared to over- and under-provisioning approaches.
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
inQuba Webinar Mastering Customer Journey Management with Dr Graham HillLizaNolte
HERE IS YOUR WEBINAR CONTENT! 'Mastering Customer Journey Management with Dr. Graham Hill'. We hope you find the webinar recording both insightful and enjoyable.
In this webinar, we explored essential aspects of Customer Journey Management and personalization. Here’s a summary of the key insights and topics discussed:
Key Takeaways:
Understanding the Customer Journey: Dr. Hill emphasized the importance of mapping and understanding the complete customer journey to identify touchpoints and opportunities for improvement.
Personalization Strategies: We discussed how to leverage data and insights to create personalized experiences that resonate with customers.
Technology Integration: Insights were shared on how inQuba’s advanced technology can streamline customer interactions and drive operational efficiency.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open-addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line-chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
What is an RPA CoE? Session 2 – CoE RolesDianaGray10
In this session, we will review the players involved in the CoE and how each role impacts opportunities.
Topics covered:
• What roles are essential?
• What place in the automation journey does each role play?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Must Know Postgres Extension for DBA and Developer during MigrationMydbops
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities.
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow up the below links.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge Capture & Transfer
Essentials of Automations: Exploring Attributes & Automation ParametersSafe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
AI in the Workplace Reskilling, Upskilling, and Future Work.pptxSunil Jagani
Discover how AI is transforming the workplace and learn strategies for reskilling and upskilling employees to stay ahead. This comprehensive guide covers the impact of AI on jobs, essential skills for the future, and successful case studies from industry leaders. Embrace AI-driven changes, foster continuous learning, and build a future-ready workforce.
Read More - https://bit.ly/3VKly70
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
GlobalLogic Java Community Webinar #18 “How to Improve Web Application Perfor...GlobalLogic Ukraine
In this talk, we will answer why application performance needs to be improved and what the most effective ways to do so are. We will also discuss what a cache is, what types of caches exist, and, most importantly, how to find a performance bottleneck.
Video and event details: https://bit.ly/45tILxj
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectorsDianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Introducing BoxLang : A new JVM language for productivity and modularity!Ortus Solutions, Corp
Just like life, our code must adapt to the ever changing world we live in. From one day coding for the web, to the next for our tablets or APIs or for running serverless applications. Multi-runtime development is the future of coding, the future is to be dynamic. Let us introduce you to BoxLang.
Dynamic. Modular. Productive.
BoxLang redefines development with its dynamic nature, empowering developers to craft expressive and functional code effortlessly. Its modular architecture prioritizes flexibility, allowing for seamless integration into existing ecosystems.
Interoperability at its Core
With 100% interoperability with Java, BoxLang seamlessly bridges the gap between traditional and modern development paradigms, unlocking new possibilities for innovation and collaboration.
Multi-Runtime
From the tiny 2m operating system binary to running on our pure Java web server, CommandBox, Jakarta EE, AWS Lambda, Microsoft Functions, Web Assembly, Android, and more. BoxLang has been designed to enhance and adapt according to its runtime.
The Fusion of Modernity and Tradition
Experience the fusion of modern features inspired by CFML, Node, Ruby, Kotlin, Java, and Clojure, combined with the familiarity of Java bytecode compilation, making BoxLang a language of choice for forward-thinking developers.
Empowering Transition with Transpiler Support
Transitioning from CFML to BoxLang is seamless with our JIT transpiler, facilitating smooth migration and preserving existing code investments.
Unlocking Creativity with IDE Tools
Unleash your creativity with powerful IDE tools tailored for BoxLang, providing an intuitive development experience and streamlining your workflow. Join us as we embark on a journey to redefine JVM development. Welcome to the era of BoxLang.
Sensitivity Analysis for Building Adaptive Robotic Software
1. SENSITIVITY ANALYSIS FOR BUILDING ADAPTIVE ROBOTIC SOFTWARE
Pooyan Jamshidi, Miguel Velez and Christian Kästner
INTENT DISCOVERY: SENSITIVITY ANALYSIS FOR CONFIGURATION OPTIMIZATION
REDUCING COSTS WITH TRANSFER LEARNING
USE CASES
Systematic System Evolution
To automate or guide intelligent design choices.
Runtime Adaptation
To enable runtime adaptation of software configurations
to maintain quality of performance under dynamic
conditions (changing environment, goals, and tasks).
Performance Debugging
To guide robot software developers to identify potential
bugs causing low quality of performance.
RESULTS
Motivation:
Robotic software exposes configurable parameters.
These tunable parameters affect the robot's performance.
They can be leveraged to optimize that performance.
[Figure: source (simulator) response surface vs. target (robot) response surface.]
Transfer learning combines lots of data gathered cheaply from the simulator with much less data gathered expensively from the target robot to make better predictions overall.
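This combination can be sketched in a few lines: learn the relationship between the cheap source response and a handful of expensive target measurements, then predict across the whole configuration space. The sketch below is a minimal illustration with made-up response functions (where the robot's response is, by construction, a linear transform of the simulator's), not the Gaussian-process method of the SEAMS '17 paper.

```python
import numpy as np

# Hypothetical response surfaces over one configuration parameter
# (e.g., number of particles). The simulator (source) is cheap to
# query; the robot (target) is expensive. By construction here, the
# target is a linear transform of the source -- the assumption this
# simple transfer model exploits.
def source_response(x):
    return 0.5 * x + np.sin(x)

def target_response(x):
    return 1.2 * source_response(x) + 2.0

xs = np.linspace(0, 10, 200)            # many cheap simulator samples
x_t = np.array([1.0, 5.0, 9.0])         # only 3 expensive robot runs
y_t = target_response(x_t)

# Fit target ~= a * source + b from the few paired observations.
A = np.vstack([source_response(x_t), np.ones_like(x_t)]).T
(a, b), *_ = np.linalg.lstsq(A, y_t, rcond=None)

# Predict the robot's response everywhere using only simulator data
# plus the learned correction.
y_pred = a * source_response(xs) + b
err = np.max(np.abs(y_pred - target_response(xs)))
print(f"a={a:.2f}, b={b:.2f}, max error={err:.3g}")
```

With three robot measurements the learned correction recovers the target response across the whole sweep; with real, noisy measurements one would use more target samples or a probabilistic model.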
PUBLICATIONS
P. Jamshidi, M. Velez, C. Kästner, N. Siegmund, and P. Kawthekar. Transfer learning for improving model predictions in highly configurable software. Int'l Symp. Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2017.
P. Kawthekar and C. Kästner. Sensitivity analysis for building evolving & adaptive robotic software. Workshop on Autonomous Mobile Service Robots (WSF), 2016.
[Figure: transfer-learning workflow — measurements from the source (Gazebo simulator) and the target (TurtleBot robot) produce the data used to learn a predictive model; the model's performance predictions are then used for adaptation analysis.]
[Figure: heatmaps (a)-(d) of CPU usage [%] over number of particles (5-25) × number of refinements (5-25), comparing prediction without transfer learning against prediction with transfer learning.]
Using only a few real data points to predict yields poor results across the configuration space. Using transfer learning to combine the few real data points with lots of approximate data yields a good model.
[Figure: analysis pipeline — configuration parameters feed a design of experiment over the configuration space; measurements of the resulting configurations produce data, machine learning builds a predictive model, and sensitivity analysis is applied to it.]
[Figure: results on accuracy, energy, and CPU — a histogram of mean CPU utilization (0-35%) over the number of configurations, and heatmaps of CPU usage and localisation error over number of particles (5-25) × number of refinements (5-25).]
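The sensitivity analysis over the two configuration options shown in the plots can be sketched with first-order (main-effect) indices on a grid. This is a minimal illustration with a made-up CPU model over the poster's two parameters, not the analysis from the papers.

```python
import numpy as np

# Hypothetical performance model over the two configuration options
# from the plots: number of particles and number of refinements.
def predicted_cpu(particles, refinements):
    return 0.8 * particles + 0.1 * refinements + 0.02 * particles * refinements

particles = np.arange(5, 26)      # 5..25, matching the plot axes
refinements = np.arange(5, 26)
P, R = np.meshgrid(particles, refinements)
cpu = predicted_cpu(P, R)

# First-order sensitivity: fraction of the output variance explained
# by each parameter alone (variance of the conditional mean).
total_var = cpu.var()
main_particles = cpu.mean(axis=0).var() / total_var    # average out refinements
main_refinements = cpu.mean(axis=1).var() / total_var  # average out particles

print(f"main effect, particles:   {main_particles:.2f}")
print(f"main effect, refinements: {main_refinements:.2f}")
```

In this made-up model the number of particles dominates CPU usage; on a real robot such indices would point developers at the few options worth tuning (or debugging) first.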