We enable reliable and dependable self-adaptation of component connectors in unreliable environments with imperfect monitoring facilities and conflicting user opinions about adaptation policies. To this end, we develop a framework comprising: (a) mechanisms for robust model evolution, (b) a method for adaptation reasoning, and (c) tool support that allows end-to-end application of the developed techniques in real-world domains.
Oracle trace data collection errors: the story about oceans, islands, and rivers (Cary Millsap)
When you execute a business task on a computer system, you create an experience. The duration of that experience is called response time. The richest and most easily obtained response-time information in the whole Oracle technology stack comes from the Oracle Database tier: Oracle's extended SQL trace data. Yet on almost every first attempt at using trace data, people make a data collection mistake that complicates their analysis. This is the story of that mistake.
With Atlassian Support as your guide, learn how to diagnose your JIRA and Confluence installation like a pro, where to look for solutions, and how to get a quick response from us when all else fails.
The most important "trick" of performance instrumentation (Cary Millsap)
This is the material from my 10-minute TED-style talk given on 2014-09-29 at OakTable World, held in conjunction with Oracle OpenWorld 2014 in San Francisco. It explains the importance of assigning a unique ID to the Oracle Database code path associated with each performance experience that users can have with your system.
In this Dagstuhl talk, I presented my current research on cloud auto-scaling and component connector self-adaptation and how I employed type-2 fuzzy control to tame the uncertainty regarding knowledge specification.
Autonomic Resource Provisioning for Cloud-Based Software (Pooyan Jamshidi)
Presented at the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS'14) @ ICSE 2014. For more information, see: http://computing.dcu.ie/~pjamshidi/PDF/SEAMS2014.pdf
New Clustering-based Forecasting Method for Disaggregated End-consumer Electr... (Peter Laurinec)
This paper presents a new method for forecasting the load of individual electricity consumers using smart grid data and clustering. The data from all consumers are used for clustering to create more suitable training sets for forecasting methods. Before clustering, the time series are preprocessed by normalisation and by computing time-series representations with a multiple linear regression model. The final centroid-based forecasts are rescaled with the saved normalisation parameters to create a forecast for every consumer. Our method is compared with the approach that creates forecasts for every consumer separately. Evaluation and experiments were conducted on two large smart meter datasets, from residences in Ireland and factories in Slovakia.
The results show that our clustering-based method improves forecasting accuracy and reduces the maximum error rates. It is also more scalable, since it is not necessary to train a model for every consumer.
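The pipeline the abstract describes (normalise each consumer's series, cluster the normalised profiles, forecast per cluster, then rescale with the saved parameters) can be sketched roughly as follows. The data, the cluster count, and the use of the centroid itself as a stand-in for the per-cluster forecasting model are all invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# toy data: 20 consumers x 48 half-hourly load readings (hypothetical)
loads = rng.random((20, 48)) + np.sin(np.linspace(0, 4 * np.pi, 48))

# 1) z-score normalise each consumer; keep the parameters for rescaling
mu = loads.mean(axis=1, keepdims=True)
sigma = loads.std(axis=1, keepdims=True)
norm = (loads - mu) / sigma

# 2) cluster the normalised load profiles
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(norm)

# 3) per-cluster "forecast": here the centroid stands in for whatever
#    forecasting model would be trained on each cluster's training set
centroid_forecast = km.cluster_centers_[km.labels_]

# 4) rescale with the saved normalisation parameters per consumer
forecast = centroid_forecast * sigma + mu
print(forecast.shape)
```

The scalability claim in the abstract follows from step 3: only one model per cluster is trained, rather than one per consumer.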
(SAC2020 SVT-2) Constrained Detecting Arrays for Fault Localization in Combin... (Hao Jin)
Authors:
Hao Jin, Osaka University
Ce Shi, Shanghai Lixin University of Accounting and Finance
Tatsuhiro Tsuchiya, Osaka University
Abstract:
Detecting Arrays (DAs) are mathematical objects that enable fault localization in combinatorial interaction testing. Each row of a DA serves as a test case, whereas a whole DA is treated as a test suite. In real-world testing problems, it is often the case that constraints exist among test parameters. In this paper, we show that it may be impossible to construct a DA using only constraint-satisfying test cases. The reason is that a set of faulty interactions may always mask the effect of other faulty interactions in the presence of constraints. Based on this observation, we propose the notion of Constrained Detecting Arrays (CDAs) to adapt DAs to practical situations. The definition of CDAs requires that all rows of a CDA satisfy the constraints and that the same fault localization capability as a DA holds, except for such inherently undetectable faults. We then propose a computational method for constructing CDAs. Experimental results obtained by using a program that implements the method show that it was able to produce CDAs within a reasonable time for practical problem instances.
The final project for my vibrations class was to perform modal analysis of an object of our choosing and to compare our results with FEA software simulations.
In this analysis, we establish an economic and mathematical (queueing theory) framework to calculate how much it would cost to build the dev/test lab of your dreams. One in which developers never wait for testing resources, and every commit is tested on replicas of the production environment. Then, we contrast several different options to get there and evaluate all of them from an economic perspective.
After the presentation, in order to illustrate that this is not only theory, I also gave a quick demonstration of how we at Ravello use our own technology to develop our own application. Each engineer can spin up as many instances of the production replica app as needed on demand for dev/test. We showed how we have integrated Ravello with Jenkins so that on every commit, we spin up the production replica application and run integration tests in parallel.
If you have any questions, feel free to reach out. We are more than happy to discuss how this may be relevant to your development process. www.ravellosystems.com
Wildcard13 - warmup slides for the "Roundtable discussion with Oracle Profess... (Maris Elsins)
These are the warmup slides for the Wildcard 13 conference (Riga, Latvia, September 13, 2013).
Join the discussion with Oracle professionals, get your problems solved, and help others! Bring your questions and problems with you to discuss them in a larger group of Oracle professionals. We'll discuss anything related to Oracle Databases: performance tuning, coding standards and instrumentation, configuration issues, database design, migration strategies, system architectures, upgrade issues, etc.
The chances are:
The question will be answered or the problem will be solved.
You'll have more ideas to explore and try to address the issue.
You'll spend fun time helping others by sharing your experience.
You'll get a free beer for your courage to join the discussion.
Case Quality Management—Toyota: Quality Control Analytics at Toyota (cowinhelen)
Case: Quality Management—Toyota
Quality Control Analytics at Toyota
As part of the process for improving the quality of their cars, Toyota engineers have identified a potential improvement in the process that makes a washer used in the accelerator assembly. If the washer thickness happens to get too large, it can cause the accelerator to bind and create a potential problem for the driver. (Note: This part of the case has been fabricated for teaching purposes, and none of these data were obtained from Toyota.)
Let’s assume that, as a first step to improving the process, a sample of 40 washers coming from the machine that produces the washers was taken and the thickness measured in millimeters. The following table has the measurements from the sample:
1.9 2.0 1.9 1.8 2.2 1.7 2.0 1.9 1.7 1.8
1.8 2.2 2.1 2.2 1.9 1.8 2.1 1.6 1.8 1.6
2.1 2.4 2.2 2.1 2.1 2.0 1.8 1.7 1.9 1.9
2.1 2.0 2.4 1.7 2.2 2.0 1.6 2.0 2.1 2.2
Questions
1 If the specification is such that no washer should be greater than 2.4 millimeters, assuming that the thicknesses are distributed normally, what fraction of the output is expected to be greater than this thickness?
The average thickness in the sample is 1.9625 and the standard deviation is .209624. For a thickness greater than 2.4, Z = (2.4 – 1.9625)/.209624 = 2.087068, and 1 − NORMSDIST(2.087068) = .018441 fraction defective, so 1.8441 percent of the washers are expected to have a thickness greater than 2.4.
2 If there are an upper and lower specification, where the upper thickness limit is 2.4 and the lower thickness limit is 1.4, what fraction of the output is expected to be out of tolerance?
The upper tail was computed in Question 1. The lower limit is 1.4, so Z = (1.4 – 1.9625)/.209624 = −2.68337, and NORMSDIST(−2.68337) = .003644 fraction defective, so .3644 percent of the washers are expected to have a thickness lower than 1.4. The total expected fraction defective is .018441 + .003644 = .022085, so about 2.2085 percent of the washers would be expected to be out of tolerance.
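The two tail fractions above can be checked with a few lines of Python, using the standard normal CDF (equivalent to Excel's NORMSDIST) built from the standard library's error function:

```python
from math import erf, sqrt

def normsdist(z):
    # standard normal CDF, equivalent to Excel's NORMSDIST
    return 0.5 * (1 + erf(z / sqrt(2)))

mean, sd = 1.9625, 0.209624
upper = 1 - normsdist((2.4 - mean) / sd)  # fraction above the upper limit
lower = normsdist((1.4 - mean) / sd)      # fraction below the lower limit
print(upper, lower, upper + lower)        # ~0.0184, ~0.0036, ~0.0221
```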
3 What is the Cpk for the process?
4 What would be the Cpk for the process if it were centered between the specification limits (assume the process standard deviation is the same)?
The center of the specification limits is 1.9, which is used for X-bar in the following:
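As a sketch of the Cpk computations for Questions 3 and 4 (the answer details were lost in this copy of the case), Cpk = min((USL − X̄)/3σ, (X̄ − LSL)/3σ) can be evaluated directly from the sample statistics given above:

```python
mean, sd = 1.9625, 0.209624
usl, lsl = 2.4, 1.4  # upper and lower specification limits

def cpk(mu):
    # Cpk = min of the two one-sided capability indices
    return min((usl - mu) / (3 * sd), (mu - lsl) / (3 * sd))

print(round(cpk(mean), 3))             # current process -> 0.696
print(round(cpk((usl + lsl) / 2), 3))  # centred at 1.9  -> 0.795
```

The upper specification limit dominates for the current process because the mean (1.9625) sits above the midpoint of the tolerance band.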
5 What percentage of output would be expected to be out of tolerance if the process were centered?
Z = (2.4 – 1.9)/.209624 = 2.385221
Fraction defective would be 2 x (1-NORMSDIST(2.385221)) = 2 x .008534 = .017069, about 1.7 percent.
6 Set up X-bar and range control charts for the current process. Assume the operators will take samples of 10 washers at a time.
Sample   Observations 1-10                          X-bar    R
1        1.9 2.0 1.9 1.8 2.2 1.7 2.0 1.9 1.7 1.8    1.89     0.5
2        1.8 2.2 2.1 2.2 1.9 1.8 2.1 1.6 1.8 1.6    1.91     0.6
3        2.1 2.4 2.2 2.1 2.1 2.0 1.8 1.7 1.9 1.9    2.02     0.7
4        2.1 2.0 2.4 1.7 2.2 2.0 1.6 2.0 2.1 2.2    2.03     0.8
Mean:                                               1.9625   0.65
From Exhibit 10.13, with sample size of 10, A2 = .31, D3 = .22 and D4 = 1.78
The upper control limit for the X-bar chart is X-double-bar + A2(R-bar) = 1.9625 + .31(.65) = 2.164, and the lower control limit is 1.9625 − .31(.65) = 1.761.
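Using the constants and sample statistics above, all four control limits (X-bar chart and R chart) can be checked with a short script; the formulas are the standard X-bar/R chart ones:

```python
sample_means = [1.89, 1.91, 2.02, 2.03]
sample_ranges = [0.5, 0.6, 0.7, 0.8]
A2, D3, D4 = 0.31, 0.22, 1.78  # chart constants for subgroup size n = 10

xbarbar = sum(sample_means) / len(sample_means)  # grand mean = 1.9625
rbar = sum(sample_ranges) / len(sample_ranges)   # average range = 0.65

ucl_x = xbarbar + A2 * rbar  # X-bar chart upper control limit
lcl_x = xbarbar - A2 * rbar  # X-bar chart lower control limit
ucl_r = D4 * rbar            # R chart upper control limit
lcl_r = D3 * rbar            # R chart lower control limit
print(ucl_x, lcl_x, ucl_r, lcl_r)
```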
Online Detection of Shutdown Periods in Chemical Plants: A Case Study (Manuel Martín)
In the process industry, chemical processes are controlled and monitored using readings from multiple physical sensors across the plants. Such physical sensors are supplemented by soft sensors, i.e. adaptive predictive models, which are often used to compute hard-to-measure variables of the process. For soft sensors to work well and adapt to changing operating conditions, they need to be provided with relevant data. As production plants are regularly stopped, data instances generated during shutdown periods have to be identified to avoid updating these predictive models with wrong data. We present a case study of a large chemical plant's operation over a two-year period. The task is to robustly and accurately identify the shutdown periods, even in the case of multiple sensor failures. State-of-the-art methods were evaluated using the first half of the dataset for calibration and the other half for measuring performance. Results show that shutdowns (i.e. sudden changes) can be quickly detected in any case, but the detection delay of startups (i.e. gradual changes) is directly related to the choice of window size.
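To illustrate the window-size trade-off the abstract mentions (this is not the paper's actual method; the signal shape and threshold are invented), a minimal moving-average detector over a hypothetical throughput signal:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical plant signal: ~100 while running, ~0 during a shutdown
signal = np.concatenate([
    rng.normal(100, 2, 200),  # normal operation
    rng.normal(0, 2, 100),    # shutdown period
    rng.normal(100, 2, 200),  # startup / normal operation again
])

window, threshold = 20, 50.0
# flag "shutdown" wherever the windowed mean drops below the threshold;
# a larger window smooths sensor noise but delays detection of changes
means = np.convolve(signal, np.ones(window) / window, mode="valid")
shutdown = means < threshold
print(shutdown[100], shutdown[210], shutdown[350])
```

The detection lag is roughly proportional to the window length, which is the relationship the case study quantifies for gradual startups.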
Computational tools for drug discovery (Eszter Szabó)
Discovery of a novel drug is an optimization challenge against an array of chemical and biological attributes to reach the desired efficacy and safety profile. The immense complexity of the human body, combined with the astronomically large druggable chemical space, hinders the selection of molecules with such a balanced profile. Therefore, the medicinal chemistry toolbox embraces all computational techniques with predictive power to focus the chemical space on the most promising candidates for synthesis and testing. The diversity includes data analysis tools, physics-based simulations, and biological target structure driven or ligand structure based approaches [1-3]. The size of the compound collections varies from a couple of close analogues up to billions of virtual compounds to process [4]. This presentation will highlight general concepts and techniques applied in computer-aided drug design, focusing on data and ligand based computational chemistry approaches, and showcase solutions developed by ChemAxon.
[1] Gisbert Schneider, David E Clark, Angew Chem Int Ed Engl, 2019, 58(32):10792-10803.
[2] John G Cumming, Andrew M Davis, Sorel Muresan, Markus Haeberlein, Hongming Chen, Nat Rev Drug Discov, 2013, 12(12):948-62.
[3] Yu-Chen Lo, Stefano E Rensi, Wen Torng, Russ B Altman, Drug Discov Today, 2018, 23(8):1538-1546.
[4] Torsten Hoffmann, Marcus Gastreich, Drug Discov Today, 2019, 24(5):1148-1156.
Learn how you can use the new workload management histograms feature in IBM® DB2® 9.5 for Linux®, UNIX®, and Windows® to better understand your workloads, determine the root cause of system slowdowns related to changes in workload, and easily track adherence to performance Service Level Agreements.
Sched-freq is the integration of the scheduler with cpufreq to yield scheduler-aware CPU frequency management. The purpose of the project will be briefly explained. The current design will be presented including things like the location of hooks in the scheduler and the implicit policy created by those hooks. Sched-freq will be contrasted with other cpufreq governors such as ondemand and interactive. The latest test results will be presented, along with next steps.
Learning LWF Chain Graphs: A Markov Blanket Discovery Approach (Pooyan Jamshidi)
LWF Chain graphs were introduced by Lauritzen, Wermuth, and Frydenberg as a generalization of graphical models based on undirected graphs and DAGs. From the causality point of view, in an LWF CG: Directed edges represent direct causal effects. Undirected edges represent causal effects due to interference, which occurs when an individual’s outcome is influenced by their social interaction with other population members, e.g., in situations that involve contagious agents, educational programs, or social networks. The construction of chain graph models is a challenging task that would be greatly facilitated by automation.
Markov blanket discovery plays an important role in the structure learning of Bayesian networks. It is surprising, however, how little attention it has attracted in the context of learning LWF chain graphs. In this work, we provide a graphical characterization of Markov blankets in chain graphs. The characterization differs from the well-known one for Bayesian networks and generalizes it. We provide a novel scalable and sound algorithm for Markov blanket discovery in LWF chain graphs. We also provide a sound and scalable constraint-based framework for learning the structure of LWF CGs from faithful, causally sufficient data. With our algorithm, the problem of structure learning is reduced to finding an efficient algorithm for Markov blanket discovery in LWF chain graphs. This greatly simplifies the structure-learning task and makes a wide range of inference/learning problems computationally tractable, because our approach exploits locality.
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut... (Pooyan Jamshidi)
Modern cyber-physical systems (e.g., robotics systems) are typically composed of physical and software components, the characteristics of which are likely to change over time. Assumptions about parts of the system made at design time may not hold at run time, especially when a system is deployed for long periods (e.g., over decades). Self-adaptation is designed to find reconfigurations of systems to handle such run-time inconsistencies. Planners can be used to find and enact optimal reconfigurations in such an evolving context. However, for systems that are highly configurable, such planning becomes intractable due to the size of the adaptation space. To overcome this challenge, in this paper we explore an approach that (a) uses machine learning to find Pareto-optimal configurations without needing to explore every configuration and (b) restricts the search space to such configurations to make planning tractable. We explore this in the context of robot missions that need to consider task timeliness and energy consumption. An independent evaluation shows that our approach results in high-quality adaptation plans in uncertain and adversarial environments.
Paper: https://arxiv.org/abs/1903.03920
Similar to A Framework for Robust Control of Uncertainty in Self-Adaptive Software Connectors
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ... (Pooyan Jamshidi)
Despite achieving state-of-the-art performance across many domains, machine learning systems are highly vulnerable to subtle adversarial perturbations. Although defense approaches have been proposed in recent years, many have been bypassed by even weak adversarial attacks. Previous studies showed that ensembles created by combining multiple weak defenses (i.e., input data transformations) are still weak. In this talk, I will show that it is indeed possible to construct effective ensembles using weak defenses to block adversarial attacks. However, to do so requires a diverse set of such weak defenses. Based on this motivation, I will present Athena, an extensible framework for building effective defenses to adversarial attacks against machine learning systems. I will talk about the effectiveness of ensemble strategies with a diverse set of many weak defenses that comprise transforming the inputs (e.g., rotation, shifting, noising, denoising, and many more) before feeding them to target deep neural network classifiers. I will also discuss the effectiveness of the ensembles with adversarial examples generated by various adversaries in different threat models. In the second half of the talk, I will explain why building defenses based on the idea of many diverse weak defenses works, when it is most effective, and what its inherent limitations and overhead are.
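The core idea — combining many diverse weak defenses by a vote — can be sketched abstractly. The transforms and classifier below are toy stand-ins for illustration only (Athena itself applies image transformations such as rotation and denoising in front of DNN classifiers):

```python
def ensemble_predict(x, transforms, classify):
    # Each weak defense transforms the input before classification;
    # the ensemble returns the majority label across all defenses.
    votes = [classify(t(x)) for t in transforms]
    return max(set(votes), key=votes.count)

# Toy stand-ins: a 1-D "input", three "transformations", a threshold "classifier".
transforms = [lambda v: v + 1.0, lambda v: v - 2.0, lambda v: abs(v)]
classify = lambda v: "cat" if v >= 0 else "dog"
label = ensemble_predict(0.5, transforms, classify)
```

The diversity requirement from the talk shows up directly here: if all transforms perturbed the input the same way, an adversary could fool every vote at once.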
Transfer Learning for Performance Analysis of Configurable Systems: A Causal ... – Pooyan Jamshidi
Modern systems (e.g., deep neural networks, big data analytics, and compilers) are highly configurable, which means they expose different performance behavior under different configurations. The fundamental challenge is that one cannot simply measure all configurations due to the sheer size of the configuration space. Transfer learning has been used to reduce the measurement effort by transferring knowledge about the performance behavior of systems across environments. Previous research has shown that statistical models are indeed transferable across environments. In this work, we investigate the identifiability and transportability of causal effects and statistical relations in highly-configurable systems. Our causal analysis agrees with previous exploratory analysis [Jamshidi17] and confirms that the causal effects of configuration options can be carried over across environments with high confidence. We expect that the ability to carry over causal relations will enable effective performance analysis of highly-configurable systems.
Integrated Model Discovery and Self-Adaptation of Robots – Pooyan Jamshidi
Learn models efficiently under budget constraints so that robots can adapt to perturbations such as environmental changes or changes in internal resources.
Modern software-intensive systems are composed of components that are likely to change their behaviour over time (e.g., adding/removing components).
Assumptions about parts of the system made at design time may therefore not hold at run time due to uncertainty.
For software to continue to operate under such changes, mechanisms must be put in place that can dynamically learn new models of these assumptions and use them to make decisions about missions, configurations, etc.
Transfer Learning for Performance Analysis of Highly-Configurable Software – Pooyan Jamshidi
A wide range of modern software-intensive systems (e.g., autonomous systems, big data analytics, robotics, deep neural architectures) are built to be configurable. These systems offer a rich space for adaptation to different domains and tasks. Developers and users often need to reason about the performance of such systems, making tradeoffs to change specific quality attributes or detecting performance anomalies. For instance, developers of image recognition mobile apps are interested not only in which deep neural architectures are accurate enough to classify their images correctly, but also in which architectures consume the least power on the mobile devices on which they are deployed. Recent research has focused on models built from performance measurements obtained by instrumenting the system. However, the fundamental problem is that the learning techniques for building a reliable performance model do not scale well, simply because the configuration space is exponentially large and impossible to explore exhaustively. For example, it would take over 60 years to explore the whole configuration space of a system with 25 binary options.
In this talk, I will start by motivating the configuration space explosion problem based on my previous experience with large-scale big data systems in industry. I will then present my transfer learning solution to tackle the scalability challenge: instead of taking the measurements from the real system, we learn the performance model using samples from cheap sources, such as simulators that approximate the performance of the real system with fair fidelity and at low cost. Results show that despite the high cost of measurement on the real system, learning performance models can become surprisingly cheap as long as certain properties are reused across environments. In the second half of the talk, I will present empirical evidence, which lays a foundation for a theory explaining why and when transfer learning works by showing the similarities of performance behavior across environments. I will present observations of the impact of environmental changes (such as changes to hardware, workload, and software versions) for a selected set of configurable systems from different domains to identify the key elements that can be exploited for transfer learning. These observations demonstrate a promising path for building efficient, reliable, and dependable software systems. Finally, I will share my research vision for the next five years and outline my immediate plans to further explore the opportunities of transfer learning.
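One simple instance of this idea: measure a handful of configurations on the real system, learn a cheap correction from simulator predictions to real measurements, and reuse the corrected simulator everywhere else. The linear-correction form below is an illustrative assumption for the sketch (the actual work uses richer models, e.g., Gaussian processes), and all values are made up:

```python
def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b (pure stdlib).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def transfer_predictions(sim_perf, real_samples):
    # sim_perf: {config: simulator estimate} for every configuration;
    # real_samples: real measurements for only a few cheap-to-measure configs.
    xs = [sim_perf[c] for c in real_samples]
    a, b = fit_linear(xs, list(real_samples.values()))
    return {c: a * s + b for c, s in sim_perf.items()}

# Two real measurements calibrate predictions for every simulated config.
pred = transfer_predictions({"c1": 10.0, "c2": 20.0, "c3": 30.0},
                            {"c1": 25.0, "c3": 65.0})
```

If the real system behaves roughly like real = 2*sim + 5, the never-measured config "c2" is predicted at 45 from just two real measurements.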
Related Papers:
https://arxiv.org/pdf/1709.02280
https://arxiv.org/pdf/1704.00234
https://arxiv.org/pdf/1606.06543
Architectural Tradeoff in Learning-Based Software – Pooyan Jamshidi
In classical software development, developers write explicit instructions in a programming language to hardcode the behavior of software systems. By writing each line of code, the programmer gives the software the desired behavior by selecting a specific point in program space.
Recently, however, software systems have begun adding learning components that, instead of hardcoding an explicit behavior, learn a behavior through data. Learning-intensive software systems are written in terms of models and their parameters, which need to be adjusted based on data. In learning-enabled systems, we specify some constraints on the behavior of a desirable program (e.g., a data set of input-output pairs of examples) and use computational resources to search through the program space for a program that satisfies the constraints. In neural networks, we restrict the search to a continuous subset of the program space.
This talk provides experimental evidence of making tradeoffs for deep neural network models, using the Deep Neural Network Architecture system as a case study. Concrete experimental results are presented; also featured are additional case studies in big data (Storm, Cassandra), data analytics (configurable boosting algorithms), and robotics applications.
Sensitivity Analysis for Building Adaptive Robotic Software – Pooyan Jamshidi
P. Jamshidi, M. Velez, C. Kästner, N. Siegmund, and P. Kawthekar. Transfer learning for improving model predictions in highly configurable software. Int’l Symp. Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2017.
Transfer Learning for Improving Model Predictions in Highly Configurable Soft... – Pooyan Jamshidi
Modern software systems are now being built to be used in dynamic environments utilizing configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact and, therefore, we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: Instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate performance of the real system at low cost.
Transfer Learning for Improving Model Predictions in Robotic Systems – Pooyan Jamshidi
An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing S... – Pooyan Jamshidi
https://arxiv.org/abs/1606.06543
Finding optimal configurations for Stream Processing Systems (SPS) is a challenging problem due to the large number of parameters that can influence their performance and the lack of analytical models for anticipating the effect of a change. To tackle this issue, we consider tuning methods in which an experimenter is given a limited budget of experiments and needs to allocate this budget carefully to find optimal configurations. In this setting, we propose BO4CO (Bayesian Optimization for Configuration Optimization), an auto-tuning algorithm that leverages Gaussian Processes (GPs) to iteratively capture posterior distributions of the configuration spaces and sequentially drive the experimentation. Validation based on Apache Storm demonstrates that our approach locates optimal configurations within a limited experimental budget, typically improving SPS performance by at least an order of magnitude compared to existing configuration algorithms.
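The GP-driven loop at the heart of this approach can be sketched as follows: fit a GP posterior to the configurations measured so far, then measure next the configuration that minimizes a lower-confidence-bound acquisition (low predicted latency, or high uncertainty). This is a 1-D, pure-Python illustration with made-up measurements, not the BO4CO code:

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential kernel.
    return math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(X, y, x_star, noise=1e-6):
    # Zero-mean GP regression: posterior mean and variance at x_star.
    K = [[rbf(a, b2) + (noise if i == j else 0.0)
          for j, b2 in enumerate(X)] for i, a in enumerate(X)]
    k_star = [rbf(a, x_star) for a in X]
    alpha = solve(K, y)
    mean = sum(k * a for k, a in zip(k_star, alpha))
    v = solve(K, k_star)
    var = rbf(x_star, x_star) - sum(k * vi for k, vi in zip(k_star, v))
    return mean, max(var, 0.0)

def next_config(candidates, X, y, kappa=2.0):
    # Lower confidence bound: exploit low predicted cost, explore high variance.
    def lcb(x):
        m, v = gp_posterior(X, y, x)
        return m - kappa * math.sqrt(v)
    return min(candidates, key=lcb)

# Two measured configurations; the acquisition picks the next one to try.
X, y = [1.0, 3.0], [5.0, 2.0]
nxt = next_config([0.0, 1.0, 2.0, 3.0, 4.0], X, y)
```

With these toy measurements the loop explores past the current best point (it picks 4.0, just beyond the low measurement at 3.0) rather than re-measuring known configurations; BO4CO itself, per the abstract, does this over multi-dimensional configuration spaces.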
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Connectors
1. A Framework for Robust Control of Uncertainty in Self-Adaptive Software Connectors
Pooyan Jamshidi
Lero – the Irish Software Engineering Research Centre
School of Computing, Dublin City University
Pooyan.jamshidi@computing.dcu.ie
Supervised by: Dr. Claus Pahl
As the environment changes (Environment = D to Environment = D'), the connector is adapted to satisfy requirements while it is running:
✓ Reliable (Robust)
✓ Run-time Efficient
8. CTMC (Continuous-Time Markov Chain): transition rates between states S, D, T, L.

      S   D   T   L
  S   0   0  10   0
  D   6   0   0   0
  T   0   6   0   4
  L   0   0   2   0

DTMC (Discrete-Time Markov Chain): transition probabilities between the same states.

      S   D   T    L
  S   0   0    1   0
  D   1   0    0   0
  T   0   0.9  0   0.1
  L   0   0    1   0
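The two model types are related in the standard way: normalizing each row of a CTMC's rate matrix yields the jump probabilities of its embedded DTMC, and a DTMC evolves a state distribution by one matrix-vector product per step. A small sketch over the matrices above (state order S, D, T, L):

```python
def embedded_dtmc(rates):
    # Jump chain of a CTMC: each row of the rate matrix, normalized.
    return [[r / sum(row) if sum(row) else 0.0 for r in row] for row in rates]

def step(dist, P):
    # One DTMC step: dist' = dist . P
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

ctmc = [[0, 0, 10, 0], [6, 0, 0, 0], [0, 6, 0, 4], [0, 0, 2, 0]]
dtmc = [[0, 0, 1, 0], [1, 0, 0, 0], [0, 0.9, 0, 0.1], [0, 0, 1, 0]]
jump = embedded_dtmc(ctmc)            # row T becomes [0, 0.6, 0, 0.4]
after_one = step([1, 0, 0, 0], dtmc)  # starting in S, after one step: in T
```

Note that the DTMC on the slide is a separate example, not the embedded chain of the CTMC shown (its T row is 0.9/0.1, while the CTMC's normalized T row is 0.6/0.4).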
19. [Figure: Type-1 vs. Type-2 membership functions (MFs) plotted over the performance index (x-axis: Performance Index, y-axis: Possibility), with a region of definite satisfaction, a region of definite dissatisfaction, and a region of uncertain satisfaction.]

Words can mean different things to different people; different users often recommend different adaptation policies.
22. Rule base elicited from 10 experts' responses. For each rule, the consequent columns Normal (-2), Effort (-1), Medium Effort (0), High Effort (+1), and Maximum Effort (+2) give the number of experts who chose that consequent; c_avg^l is the resulting consequent centroid average.

Rule (l) | Workload  | Response-time | Normal (-2) | Effort (-1) | Medium Effort (0) | High Effort (+1) | Maximum Effort (+2) | c_avg^l
 1 | Very low  | Instantaneous | 7 | 2 | 1 | 0 | 0 | -1.6
 2 | Very low  | Fast          | 5 | 4 | 1 | 0 | 0 | -1.4
 3 | Very low  | Medium        | 0 | 2 | 6 | 2 | 0 |  0
 4 | Very low  | Slow          | 0 | 0 | 4 | 6 | 0 |  0.6
 5 | Very low  | Very slow     | 0 | 0 | 0 | 6 | 4 |  1.4
 6 | Low       | Instantaneous | 5 | 3 | 2 | 0 | 0 | -1.3
 7 | Low       | Fast          | 2 | 7 | 1 | 0 | 0 | -1.1
 8 | Low       | Medium        | 0 | 1 | 5 | 3 | 1 |  0.4
 9 | Low       | Slow          | 0 | 0 | 1 | 8 | 1 |  1
10 | Low       | Very slow     | 0 | 0 | 0 | 4 | 6 |  1.6
11 | Medium    | Instantaneous | 6 | 4 | 0 | 0 | 0 | -1.6
12 | Medium    | Fast          | 2 | 5 | 3 | 0 | 0 | -0.9
13 | Medium    | Medium        | 0 | 0 | 5 | 4 | 1 |  0.6
14 | Medium    | Slow          | 0 | 0 | 1 | 7 | 2 |  1.1
15 | Medium    | Very slow     | 0 | 0 | 1 | 3 | 6 |  1.5
16 | High      | Instantaneous | 8 | 2 | 0 | 0 | 0 | -1.8
17 | High      | Fast          | 4 | 6 | 0 | 0 | 0 | -1.4
18 | High      | Medium        | 0 | 1 | 5 | 3 | 1 |  0.4
19 | High      | Slow          | 0 | 0 | 1 | 7 | 2 |  1.1
20 | High      | Very slow     | 0 | 0 | 0 | 6 | 4 |  1.4
21 | Very high | Instantaneous | 9 | 1 | 0 | 0 | 0 | -1.9
22 | Very high | Fast          | 3 | 6 | 1 | 0 | 0 | -1.2
23 | Very high | Medium        | 0 | 1 | 4 | 4 | 1 |  0.5
24 | Very high | Slow          | 0 | 0 | 1 | 8 | 1 |  1
25 | Very high | Very slow     | 0 | 0 | 0 | 4 | 6 |  1.6

Example (rule 12, Workload = Medium, Response-time = Fast): the 10 experts' responses over the five consequents M1-M5 are 2, 5, 3, 0, 0, giving c_avg^12 = -0.9.

Rule form: R^l: IF the workload (x1) is F̃_1^l AND the response-time (x2) is G̃_2^l, THEN change the connector mode to ... .

Consequent centroid average:
c_avg^l = (Σ_i w_i^l · C_i) / (Σ_i w_i^l)
where w_i^l is the number of experts selecting consequent i for rule l and C_i ∈ {-2, -1, 0, +1, +2} is the centroid of that consequent.

Goal: pre-compute costly calculations to make adaptation reasoning based on fuzzy inference runtime-efficient.
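Since each w_i^l is simply the number of experts voting for consequent i, the table's c_avg^l column can be reproduced directly. A small check against the rules above:

```python
def rule_centroid_average(votes, centroids=(-2, -1, 0, 1, 2)):
    # c_avg^l = sum_i(w_i * C_i) / sum_i(w_i), where w_i counts the experts
    # choosing consequent i and C_i is that consequent's centroid.
    return sum(w * c for w, c in zip(votes, centroids)) / sum(votes)

c1 = rule_centroid_average((7, 2, 1, 0, 0))   # rule 1  -> -1.6
c12 = rule_centroid_average((2, 5, 3, 0, 0))  # rule 12 -> -0.9
```

Pre-computing these per-rule centroids offline is exactly the kind of costly-calculation caching the slide's goal refers to.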
23. Liang, Q., Mendel, J. M. (2000). Interval type-2 fuzzy logic systems: theory and design. IEEE Transactions on Fuzzy Systems, 8(5), 535-550.

[Figure: the interval type-2 fuzzy logic controller in the feedback loop, consuming Monitoring Data and producing Adaptation Actions.]
38. [Figure: RMSE comparison of Type-1 FLS vs. Type-2 FLS, roughly in the 0.05-0.35 range.]
• The rule reduction reduced the number of rules quite considerably.
• IT2 FLCs are more robust: lower mean error and less variation in the estimation error.
• In some realizations, T1 FLCs drop more rules than IT2 FLCs.
• IT2 FLCs can originally be designed with fewer rules.
47. RQ0: "How to enable a robust and runtime efficient self-adaptation for software connectors and make them reliable to be used in open environments?"

[Figure: sources of uncertainty affecting the model R̃: knowledge specification, measurement inaccuracy.]

Naeem Esfahani and Sam Malek, "Uncertainty in Self-Adaptive Software Systems".
49. [Figure: connector model annotated with variable model parameters dA, dD, and A_Lost (the rates of message input, message output, and message loss; sample values dA: 4, dD: 2, A_Lost: 4) and fixed model parameters for the transitions A2B, B2F, F2C, C2F, F2D.]
50. [Figure: example parameter values 0.84, 0.031, 0.056.]
C. Ghezzi, V. Panzica La Manna, A. Motta, G. Tamburrelli, "QoS Driven Dynamic Binding in-the-many", QoSA 2010, Prague, June 23-25, 2010.
64. Evaluation of RobusT2Scale against over- and under-provisioning across six workload patterns. SLA: rt_95 ≤ 600ms, evaluated for every 10s control interval (rt_95%: 95th-percentile response time; vm: VMs used).

SUT               | Criteria | Big spike | Dual phase | Large variations | Quickly varying | Slowly varying | Steep tri phase
RobusT2Scale      | rt_95%   | 973ms   | 537ms   | 509ms   | 451ms   | 423ms   | 498ms
                  | vm       | 3.2     | 3.8     | 5.1     | 5.3     | 3.7     | 3.9
Overprovisioning  | rt_95%   | 354ms   | 411ms   | 395ms   | 446ms   | 371ms   | 491ms
                  | vm       | 6       | 6       | 6       | 6       | 6       | 6
Underprovisioning | rt_95%   | 1465ms  | 1832ms  | 1789ms  | 1594ms  | 1898ms  | 2194ms
                  | vm       | 2       | 2       | 2       | 2       | 2       | 2

• RobusT2 is superior to under-provisioning in terms of guaranteeing the SLA, and it does not require excessive resources.
• RobusT2 is superior to over-provisioning in terms of required resources while still guaranteeing the SLA.
69. Related-work comparison. Columns: Framework | sources of uncertainty addressed (noisy data, simplification, change enactment, users in the loop, dynamic environment) | feedback control loop coverage (MAPE-K: M, A, P, E, K) | adaptation policy | evaluation.

• RELAX: fuzzy; goal model; case study
• AutoRELAX: fuzzy; goal model; experimental study
• FLAGS: fuzzy; goal model; example
• Goal-Driven Self-Optimization: probabilistic; probabilistic goal reasoning; goal model; experimental study
• REAssuRE: fuzzy goal reasoning; goal model; example
• (N. Bencomo & Belaggoun, 2013): probabilistic goal reasoning; decision model; experimental study
• RCU (this work): fuzzy, probabilistic, control (C/E/I); Bayesian learning; constraint evaluation; fuzzy reasoning; mode change; Markov models + fuzzy rules; experimental study

These approaches rely on partial satisfaction of requirements at design time (claim, goal realization), with resolution at runtime.
70. Internal:
• Rainbow: probabilistic; constraint evaluation; architecture model; experimental study
• POISED: fuzzy; fuzzy optimization; architecture model; experimental study
• (Cámara et al., 2014): probabilistic game analysis; architecture model; experimental study
• ADC: probabilistic utility reasoning; utility; case study

These approaches use different theories and reasoning mechanisms to determine the impact of a system change on the quality properties, e.g., the effect of replacing a component on response time.
71. External:
• FUSION: probabilistic; learning; feature model; experimental study
• RESIST: probabilistic; learning; Markov models; experimental study
• ADAM: probabilistic; learning; Markov models; experimental study
• KAMI: probabilistic; learning; constraint evaluation; Markov models; experimental study
• Veritas/Loki: probabilistic; test case verification; test plan verification and optimization; test cases; experimental study

White-box, black-box, or gray-box learning approaches are used to mitigate environmental uncertainty.
72. Control:
• (Antonio Filieri et al., 2014): controller synthesis; regression models; experimental study
• (Zhu et al., 2009): integral controller; regression models; experimental study

Fuzzy (knowledge-based) control vs. classic (model-based) control; this contrast is receiving increasing attention in the SE community (Dagstuhl seminar, ICSE'14, ...).
73. Design-time
GuideArch Fuzzy (utility) --
Optimization
(arch. selection)
-- Case study
EAGLE Prob. -- Goal verification Synthesis -- Example
MAVO --
Partial model
reasoning
-- Case study
(H. Yang et al., 2012) --
Machine
learning
Rule reasoning --
Experimental
study
(Arora et al., 2012) --
Feature
interaction
-- Case study
(Letier & van Lamsweerde, 2004) Prob. --
Partial goal
verification
--
(Letier et al., 2014) --
Monte-Carlo
simulation
Pareto-based
optimization
--
Experimental
study
Framework
Source of Uncertainty Feedback Control Loop (MAPE-K)
EvaluationAdaptation
policy
Noisy
data
Simplificatio
n
Change
enactment
Users in
the loop
Dynamic
environment
M A P E K
C/E/I
RCU (This Work) Fuzzy Prob. Control √ (Bayesian learning)
constraint
evaluation
Fuzzy reasoning
Mode
change
Markov
models +
Fuzzy rule
Experimental
study
User involvement and optimization based approaches
may not be necessarily applicable for runtime reasoning
73