The document discusses the challenges of managing elasticity in cloud architectures under unpredictable demand and uncertain measurements. It proposes RobusT2Scale, a self-learning fuzzy controller that uses type-2 fuzzy logic to let users specify thresholds qualitatively and to make robust scaling decisions despite uncertainty. Experimental results show that RobusT2Scale guarantees service-level agreements while avoiding the over- and under-provisioning of resources seen in alternative approaches.
Transfer Learning for Improving Model Predictions in Highly Configurable Software - Pooyan Jamshidi
Modern software systems are now being built to be used in dynamic environments utilizing configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact and, therefore, we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: Instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate performance of the real system at low cost.
Transfer Learning for Improving Model Predictions in Robotic Systems - Pooyan Jamshidi
Modern software systems are now being built to be used in dynamic environments utilizing configuration capabilities to adapt to changes and external uncertainties. In a self-adaptation context, we are often interested in reasoning about the performance of the systems under different configurations. Usually, we learn a black-box model based on real measurements to predict the performance of the system given a specific configuration. However, as modern systems become more complex, there are many configuration parameters that may interact and, therefore, we end up learning an exponentially large configuration space. Naturally, this does not scale when relying on real measurements in the actual changing environment. We propose a different solution: Instead of taking the measurements from the real system, we learn the model using samples from other sources, such as simulators that approximate performance of the real system at low cost.
An Uncertainty-Aware Approach to Optimal Configuration of Stream Processing Systems - Pooyan Jamshidi
https://arxiv.org/abs/1606.06543
Finding optimal configurations for Stream Processing Systems (SPS) is a challenging problem due to the large number of parameters that can influence their performance and the lack of analytical models to anticipate the effect of a change. To tackle this issue, we consider tuning methods where an experimenter is given a limited budget of experiments and needs to carefully allocate this budget to find optimal configurations. We propose in this setting Bayesian Optimization for Configuration Optimization (BO4CO), an auto-tuning algorithm that leverages Gaussian Processes (GPs) to iteratively capture posterior distributions of the configuration spaces and sequentially drive the experimentation. Validation based on Apache Storm demonstrates that our approach locates optimal configurations within a limited experimental budget, with an improvement of SPS performance typically of at least an order of magnitude compared to existing configuration algorithms.
Alex Smola, Director of Machine Learning, AWS/Amazon, at MLconf SF 2016 - MLconf
Alex Smola is the Manager of the Cloud Machine Learning Platform at Amazon. Prior to his role at Amazon, Smola was a Professor in the Machine Learning Department of Carnegie Mellon University and cofounder and CEO of Marianas Labs. Prior to that he worked at Google Strategic Technologies, Yahoo Research, and National ICT Australia. Prior to joining CMU, he was professor at UC Berkeley and the Australian National University. Alex obtained his PhD at TU Berlin in 1998. He has published over 200 papers and written or coauthored 5 books.
Abstract summary
Personalization and Scalable Deep Learning with MXNET: User return times and movie preferences are inherently time dependent. In this talk I will show how this can be accomplished efficiently using deep learning by employing an LSTM (Long Short-Term Memory) model. Moreover, I will show how to train large-scale distributed parallel models using MXNet efficiently. This includes a brief overview of key components of defining networks, of optimization, and a walkthrough of the steps required to allocate machines and to train a model.
AI optimizing HPC simulations (presentation from 6th EULAG Workshop) - byteLAKE
See our presentation from the 6th International EULAG Users Workshop. We talked about taking HPC toward "Industry 4.0" by implementing smart techniques to optimize codes in terms of performance and energy consumption. The presentation explains how machine learning can dynamically optimize HPC simulations and introduces byteLAKE's software auto-tuning solution.
Find out more about byteLAKE at: www.byteLAKE.com
Towards a Unified Data Analytics Optimizer with Yanlei Diao - Databricks
Today’s big data analytics systems are best effort only: despite their wide adoption, they still lack the ability to take user monetary constraints and performance goals into account and automatically configure an analytic job to achieve those goals. Our work aims to take a step further towards building a new data analytics optimizer that works for arbitrary dataflow programs and determines the job configuration in an automated manner based on user objectives regarding latency, throughput, monetary cost, etc.
At the core of the optimizer are a principled multi-objective optimization framework that enables one to explore the tradeoffs between different objectives, and a deep learning-based modeling approach that can learn a model for each user objective as complex as necessary for the user computing environment. Using both SQL-like and machine learning jobs in Spark, we show that our techniques can learn a model of each objective with high accuracy, and the multi-objective optimizer can automatically recommend new configurations that significantly improve performance from the configurations manually set by engineers.
Sergei Vassilvitskii, Research Scientist, Google at MLconf NYC - 4/15/16 - MLconf
Teaching K-Means New Tricks: Over 50 years old, the k-means algorithm remains one of the most popular clustering algorithms. In this talk we’ll cover some recent developments, including better initialization, the notion of coresets, clustering at scale, and clustering with outliers.
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures - MLAI2
Regularization and transfer learning are two popular techniques to enhance generalization on unseen data, which is a fundamental problem of machine learning. Regularization techniques are versatile, as they are task- and architecture-agnostic, but they do not exploit a large amount of data available. Transfer learning methods learn to transfer knowledge from one domain to another, but may not generalize across tasks and architectures, and may introduce new training cost for adapting to the target task. To bridge the gap between the two, we propose a transferable perturbation, MetaPerturb, which is meta-learned to improve generalization performance on unseen data. MetaPerturb is implemented as a set-based lightweight network that is agnostic to the size and the order of the input, which is shared across the layers. Then, we propose a meta-learning framework, to jointly train the perturbation function over heterogeneous tasks in parallel. As MetaPerturb is a set-function trained over diverse distributions across layers and tasks, it can generalize to heterogeneous tasks and architectures. We validate the efficacy and generality of MetaPerturb trained on a specific source domain and architecture, by applying it to the training of diverse neural architectures on heterogeneous target datasets against various regularizers and fine-tuning. The results show that the networks trained with MetaPerturb significantly outperform the baselines on most of the tasks and architectures, with a negligible increase in the parameter size and no hyperparameters to tune.
Final project, Machine Learning Having it Deep and Structured, NTU
- Rank 1/25 in peer review, original score: 16.2/17
- 2nd presentation prize (voted by audience)
These are the slides from the workshop "Introduction to Machine Learning with R", which I gave at the University of Heidelberg, Germany, on June 28th 2018.
The accompanying code to generate all plots in these slides (plus additional code) can be found on my blog: https://shirinsplayground.netlify.com/2018/06/intro_to_ml_workshop_heidelberg/
The workshop covered the basics of machine learning. With an example dataset I went through a standard machine learning workflow in R with the packages caret and h2o:
- reading in data
- exploratory data analysis
- missingness
- feature engineering
- training and test split
- model training with Random Forests, Gradient Boosting, Neural Nets, etc.
- hyperparameter tuning
Implementation of linear regression and logistic regression on Spark - Dalei Li
This presentation was developed for a course project at the Technical University of Madrid. The course, Massively Parallel Machine Learning, was supervised by Alberto Mozo and Bruno Ordozgoiti.
- POSTECH EECE695J, "Fundamentals of Deep Learning and Its Application to Steel Manufacturing Processes" (딥러닝 기초 및 철강공정에의 활용), Week 5
- Contents: Restricted Boltzmann Machine (RBM), various activation functions, data preprocessing, regularization methods, training of a neural network
- Video: https://youtu.be/v4rGPl-8wdo
Introduction to Deep Learning with Python - indico data
A presentation by Alec Radford, Head of Research at indico Data Solutions, on deep learning with Python's Theano library.
The emphasis of the presentation is high performance computing, natural language processing (using recurrent neural nets), and large scale learning with GPUs.
Video of the talk available here: https://www.youtube.com/watch?v=S75EdAcXHKk
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Architecture - Pooyan Jamshidi
A look at the searches related to the term "microservices" on Google Trends reveals that the top searches are now technology driven, implying that the era of general queries such as "What is microservices?" has long passed. Not only are software vendors (for example, IBM and Microsoft) using microservices and DevOps practices, but content providers (for example, Netflix and the BBC) have also adopted them.
I report on experiences and lessons learned during the incremental migration and architectural refactoring of a commercial mobile back end as a service to a microservices architecture. I explain how we adopted DevOps and how this facilitated a smooth migration towards a microservices architecture.
Cloud Migration Patterns: A Multi-Cloud Architectural Perspective - Pooyan Jamshidi
Cloud migration requires an engineered, verifiable, measurable, transparent, and repeatable approach rather than an ad-hoc approach based on trial and error.
We describe a comprehensive set of (multi-)cloud migration patterns from an architectural perspective. In this work, we focus on application components and their migration to multi-cloud environments. We define and characterize the patterns with concrete usage scenarios. We also describe the process for migration pattern selection, composition, and extension.
Sustaining and innovating amidst change is the hallmark of exemplary leadership, and Pelmar Group has displayed this leadership for the last 50 years. In this special edition, we showcase Pelmar Eng Ltd and two other knowledge-enhancing articles.
Slideshow for the eighth lecture in my summer course, English 10, "Introduction to Literary Studies: Deception, Dishonesty, Bullshit."
http://patrickbrianmooney.nfshost.com/~patrick/ta/m15/
In this Dagstuhl talk, I presented my current research on cloud auto-scaling and component connector self-adaptation and how I employed type-2 fuzzy control to tame the uncertainty regarding knowledge specification.
Autonomic Resource Provisioning for Cloud-Based Software - Pooyan Jamshidi
9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS'14) @ ICSE 2014, for more information please refer to: http://computing.dcu.ie/~pjamshidi/PDF/SEAMS2014.pdf
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Connectors - Pooyan Jamshidi
We enable reliable and dependable self‐adaptations of component connectors in unreliable environments with imperfect monitoring facilities and conflicting user opinions about adaptation policies by developing a framework which comprises: (a) mechanisms for robust model evolution, (b) a method for adaptation reasoning, and (c) tool support that allows an end‐to‐end application of the developed techniques in real‐world domains.
Alex Smola, Professor in the Machine Learning Department, Carnegie Mellon University - MLconf
Fast, Cheap and Deep – Scaling Machine Learning: Distributed high throughput machine learning is both a challenge and a key enabling technology. Using a Parameter Server template we are able to distribute algorithms efficiently over multiple GPUs and in the cloud. This allows us to design very fast recommender systems, factorization machines, classifiers, and deep networks. This degree of scalability allows us to tackle computationally expensive problems efficiently, yielding excellent results e.g. in visual question answering.
The Machine Learning behind the Autonomous Database (ILOUG Feb 2020) - Sandesh Rao
Autonomous Database is one of the hottest Oracle products, and we have applied machine learning to several aspects of the service. We review the current state of ML in the Autonomous Database Cloud and how we process this data in ADW/ATP with Zeppelin notebooks to find anomalies and troubleshoot them at a scale of several petabytes a year, enabling AIOps. We will cover sample notebooks for several use cases, including a log-anomaly timeline in which we reduce significant amounts of logs using semi-supervised machine learning and match them in near real time. Other use cases include the use of convolution filters...
Microservices have emerged as an architectural style for developing maintainable and scalable applications. Understanding the performance of alternative deployment configurations is challenging and must be aligned with the system usage in the production environment. In this talk I present an approach for automatically assessing the scalability of microservice configuration alternatives. The talk briefly introduces the concept of microservices, presents the deployment and evaluation approach based on the open-source tool locust.io, and presents the tool PPTAM used to conduct the experiments and the data analysis performed.
Mine Your Simulation Model: Automated Discovery of Business Process Simulation Models - Marlon Dumas
Keynote talk by Marlon Dumas at the SIMULTECH 2021 conference. The talk gives an overview of ongoing research on automated construction of simulation models / digital twins from business process execution logs, including approaches that combine discrete event simulation with deep learning methods.
Reliability Evaluation of Reconfigurable NMR Architecture Supported with Hot Standby Spares - Koorosh Aslansefat
Reliability is a major issue for fault-tolerant systems used in critical applications. N-modular redundancy (NMR) is one of the traditional approaches used for fault masking in fault-tolerant systems. A reconfigurable NMR architecture supported with hot or cold standby spares is a common industrial method. So far, no systematic method for creating the Markov model of reconfigurable NMR systems supported with hot standby spares has been presented. Likewise, there is no explicit parametric formula for the reliability of these systems in the literature. This paper focuses on two issues: the systematic construction of the Markov model of a reconfigurable NMR system, and its evaluation through a precise and explicit formula introduced in this paper. The formula gives system designers a good view of the reliability behaviour of reconfigurable NMR systems.
https://doi.org/10.1007/978-3-030-58920-2_4
Tsinghua University: Two Exemplary Applications in China - DataStax Academy
In this talk, we will share the experiences of applying Cassandra with two real customers in China. In the first use case, we deployed Cassandra at Sany Group, a leading machinery-manufacturing company, to manage the sensor data generated by construction machinery. By designing a specific schema and optimizing the write process, we successfully managed over 1.5 billion historical data records and achieved an online write throughput of 10k write operations per second with 5 servers. MapReduce is also used on Cassandra for value-added services, e.g. operations management, machine failure prediction, and abnormal behavior mining. In the second use case, Cassandra is deployed in the China Meteorological Administration to manage meteorological data. We designed a hybrid schema to support both slice queries and time-window-based queries efficiently, and we explored optimized compaction and deletion strategies for meteorological data in this case.
Learning LWF Chain Graphs: A Markov Blanket Discovery Approach - Pooyan Jamshidi
LWF Chain graphs were introduced by Lauritzen, Wermuth, and Frydenberg as a generalization of graphical models based on undirected graphs and DAGs. From the causality point of view, in an LWF CG: Directed edges represent direct causal effects. Undirected edges represent causal effects due to interference, which occurs when an individual’s outcome is influenced by their social interaction with other population members, e.g., in situations that involve contagious agents, educational programs, or social networks. The construction of chain graph models is a challenging task that would be greatly facilitated by automation.
Markov blanket discovery plays an important role in the structure learning of Bayesian networks. It is surprising, however, how little attention it has attracted in the context of learning LWF chain graphs. In this work, we provide a graphical characterization of Markov blankets in chain graphs. The characterization is different from the well-known one for Bayesian networks and generalizes it. We provide a novel scalable and sound algorithm for Markov blanket discovery in LWF chain graphs. We also provide a sound and scalable constraint-based framework for learning the structure of LWF CGs from faithful causally sufficient data. With the use of our algorithm, the problem of structure learning is reduced to finding an efficient algorithm for Markov blanket discovery in LWF chain graphs. This greatly simplifies the structure-learning task and makes a wide range of inference/learning problems computationally tractable because our approach exploits locality.
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Autonomous Robots - Pooyan Jamshidi
Modern cyber-physical systems (e.g., robotics systems) are typically composed of physical and software components, the characteristics of which are likely to change over time. Assumptions about parts of the system made at design time may not hold at run time, especially when a system is deployed for long periods (e.g., over decades). Self-adaptation is designed to find reconfigurations of systems to handle such run-time inconsistencies. Planners can be used to find and enact optimal reconfigurations in such an evolving context. However, for systems that are highly configurable, such planning becomes intractable due to the size of the adaptation space. To overcome this challenge, in this paper we explore an approach that (a) uses machine learning to find Pareto-optimal configurations without needing to explore every configuration and (b) restricts the search space to such configurations to make planning tractable. We explore this in the context of robot missions that need to consider task timeliness and energy consumption. An independent evaluation shows that our approach results in high-quality adaptation plans in uncertain and adversarial environments.
Paper: https://arxiv.org/abs/1903.03920
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural Networks - Pooyan Jamshidi
Despite achieving state-of-the-art performance across many domains, machine learning systems are highly vulnerable to subtle adversarial perturbations. Although defense approaches have been proposed in recent years, many have been bypassed by even weak adversarial attacks. Previous studies showed that ensembles created by combining multiple weak defenses (i.e., input data transformations) are still weak. In this talk, I will show that it is indeed possible to construct effective ensembles using weak defenses to block adversarial attacks. However, to do so requires a diverse set of such weak defenses. Based on this motivation, I will present Athena, an extensible framework for building effective defenses to adversarial attacks against machine learning systems. I will talk about the effectiveness of ensemble strategies with a diverse set of many weak defenses that comprise transforming the inputs (e.g., rotation, shifting, noising, denoising, and many more) before feeding them to target deep neural network classifiers. I will also discuss the effectiveness of the ensembles with adversarial examples generated by various adversaries in different threat models. In the second half of the talk, I will explain why building defenses based on the idea of many diverse weak defenses works, when it is most effective, and what its inherent limitations and overhead are.
Transfer Learning for Performance Analysis of Configurable Systems: A Causal Analysis - Pooyan Jamshidi
Modern systems (e.g., deep neural networks, big data analytics, and compilers) are highly configurable, which means they expose different performance behavior under different configurations. The fundamental challenge is that one cannot simply measure all configurations due to the sheer size of the configuration space. Transfer learning has been used to reduce the measurement efforts by transferring knowledge about performance behavior of systems across environments. Previously, research has shown that statistical models are indeed transferable across environments. In this work, we investigate identifiability and transportability of causal effects and statistical relations in highly-configurable systems. Our causal analysis agrees with previous exploratory analysis (Jamshidi et al., 2017) and confirms that the causal effects of configuration options can be carried over across environments with high confidence. We expect that the ability to carry over causal relations will enable effective performance analysis of highly-configurable systems.
Integrated Model Discovery and Self-Adaptation of Robots - Pooyan Jamshidi
Machine-learn models efficiently under budget constraints to adapt to perturbations such as environmental changes or changes in internal resources.
Modern software-intensive systems are composed of components that are likely to change their behaviour over time (e.g., adding/removing components).
For software to continue to operate under such changes, we must account for the fact that assumptions about parts of the system made at design time may not hold at runtime due to uncertainty.
Mechanisms must be put in place that can dynamically learn new models of these assumptions and use them to make decisions about missions, configurations, etc.
Transfer Learning for Performance Analysis of Highly-Configurable Software - Pooyan Jamshidi
A wide range of modern software-intensive systems (e.g., autonomous systems, big data analytics, robotics, deep neural architectures) are built configurable. These systems offer a rich space for adaptation to different domains and tasks. Developers and users often need to reason about the performance of such systems, making tradeoffs to change specific quality attributes or detecting performance anomalies. For instance, developers of image recognition mobile apps are not only interested in learning which deep neural architectures are accurate enough to classify their images correctly, but also which architectures consume the least power on the mobile devices on which they are deployed. Recent research has focused on models built from performance measurements obtained by instrumenting the system. However, the fundamental problem is that the learning techniques for building a reliable performance model do not scale well, simply because the configuration space is exponentially large and impossible to explore exhaustively. For example, it would take over 60 years to explore the whole configuration space of a system with 25 binary options.
In this talk, I will start motivating the configuration space explosion problem based on my previous experience with large-scale big data systems in industry. I will then present my transfer learning solution to tackle the scalability challenge: instead of taking the measurements from the real system, we learn the performance model using samples from cheap sources, such as simulators that approximate the performance of the real system, with a fair fidelity and at a low cost. Results show that despite the high cost of measurement on the real system, learning performance models can become surprisingly cheap as long as certain properties are reused across environments. In the second half of the talk, I will present empirical evidence, which lays a foundation for a theory explaining why and when transfer learning works by showing the similarities of performance behavior across environments. I will present observations of environmental changes‘ impacts (such as changes to hardware, workload, and software versions) for a selected set of configurable systems from different domains to identify the key elements that can be exploited for transfer learning. These observations demonstrate a promising path for building efficient, reliable, and dependable software systems. Finally, I will share my research vision for the next five years and outline my immediate plans to further explore the opportunities of transfer learning.
Related Papers:
https://arxiv.org/pdf/1709.02280
https://arxiv.org/pdf/1704.00234
https://arxiv.org/pdf/1606.06543
Architectural Tradeoff in Learning-Based Software - Pooyan Jamshidi
In classical software development, developers write explicit instructions in a programming language to hardcode the explicit behavior of software systems. By writing each line of code, the programmer instructs the software to have the desirable behavior by exploring a specific point in program space.
Recently, however, software systems are adding learning components that, instead of hardcoding an explicit behavior, learn a behavior through data. The learning-intensive software systems are written in terms of models and their parameters that need to be adjusted based on data. In learning-enabled systems, we specify some constraints on the behavior of a desirable program (e.g., a data set of input–output pairs of examples) and use the computational resources to search through the program space to find a program that satisfies the constraints. In neural networks, we restrict the search to a continuous subset of the program space.
This talk provides experimental evidence of making tradeoffs for deep neural network models, using the Deep Neural Network Architecture system as a case study. Concrete experimental results are presented; also featured are additional case studies in big data (Storm, Cassandra), data analytics (configurable boosting algorithms), and robotics applications.
Sensitivity Analysis for Building Adaptive Robotic Software - Pooyan Jamshidi
P. Jamshidi, M. Velez, C. Kästner, N. Siegmund, and P. Kawthekar. Transfer learning for improving model predictions in highly configurable software. Int’l Symp. Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2017.
Autonomic Resource Provisioning for Cloud-Based Software - Pooyan Jamshidi
The Third National Conference on Cloud Computing and Commerce (NC4), for more information please refer to: http://computing.dcu.ie/~pjamshidi/PDF/SEAMS2014.pdf
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot) - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
4. Motivation
Really like this??
[Figure: demand over time; the question is whether provisioned capacity can track the demand curve.]
Auto-scaling enables you to realize this ideal of on-demand provisioning.
However, enacting changes in cloud resources is not real-time.
9. An Example of an Auto-scaling Rule
Rule-based auto-scaling offerings include Amazon Auto Scaling, Microsoft Azure Watch, and the Microsoft Azure Auto-scaling Application Block.
The quantitative values in such rules must be determined by the user:
⇒ requires deep knowledge of the application (CPU, memory, thresholds)
⇒ requires performance-modeling expertise (when and how to scale)
⇒ requires a unified opinion of the user(s)
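For concreteness, here is a hypothetical sketch of the kind of rule these services let users write; every number in it is one of the quantitative values the slide says the user must somehow pick (the metric, thresholds, step size, and cooldown below are illustrative, not recommendations):

```python
from dataclasses import dataclass

# A hypothetical threshold-based scaling rule in the style of rule-based
# auto-scaling services; each constant is a user-supplied quantitative value.
@dataclass
class ScalingRule:
    metric: str = "cpu_utilization"  # which metric? (application knowledge)
    upper: float = 0.70              # scale out above 70% (threshold choice)
    lower: float = 0.30              # scale in below 30% (threshold choice)
    step: int = 2                    # how many instances? (modeling expertise)
    cooldown_s: int = 300            # how long to wait? (modeling expertise)

    def decide(self, value: float) -> int:
        """Return the change in instance count for the observed metric value."""
        if value > self.upper:
            return +self.step
        if value < self.lower:
            return -self.step
        return 0

print(ScalingRule().decide(0.85))  # -> +2 instances
```

RobusT2Scale's point of departure is that users can rarely justify these crisp numbers, which motivates specifying them qualitatively instead.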
11. Sources of Uncertainty in Elastic Software
P. Jamshidi, C. Pahl, N. Mendonca, "Managing Uncertainty in Autonomic Cloud Elasticity Controllers," IEEE Cloud Computing, 2016.
P. Jamshidi, C. Pahl, "Software Architecture for the Cloud – a Roadmap towards Control-Theoretic, Model-Based Cloud Architecture," LNCS, 2015.
12. A Concrete Example of Uncertainty in the Cloud
Uncertainty related to enactment latency: the same scaling action (adding/removing a VM of precisely the same size) took different amounts of time to be enacted on the cloud platform (here, Microsoft Azure) at different points in time, and the differences were significant (up to a couple of minutes). The enactment latency also differs across cloud platforms.
16. Why We Decided to Use Type-2 Fuzzy Logic
[Figure: possibility versus performance index, divided into a region of definite satisfaction, a region of definite dissatisfaction, and a region of uncertain satisfaction in between.]
Words can mean different things to different people, and different users often recommend different elasticity policies.
[Figure: a type-2 membership function (a blurred band) compared with a crisp type-1 membership function.]
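As a rough illustration of the difference, the sketch below builds an interval type-2 membership function by blurring a type-1 triangle; the band between the lower and upper bounds captures disagreement among users (all parameter values here are made up for illustration):

```python
import numpy as np

def tri(x, a, b, c):
    """Type-1 triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def interval_type2(x, a, b, c, blur=0.3):
    """Interval type-2 MF built by blurring the feet of a type-1 triangle.
    Returns (lower, upper) membership bounds for each x."""
    upper = tri(x, a - blur, b, c + blur)  # widest plausible triangle
    lower = tri(x, a + blur, b, c - blur)  # narrowest plausible triangle
    return lower, upper

x = np.linspace(0, 3, 301)
lo, up = interval_type2(x, a=0.5, b=1.5, c=2.5)
# Any type-1 MF lying between `lo` and `up` is consistent with the experts'
# divergent opinions; that band is exactly what a single type-1 MF cannot
# represent.
```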
19. Fuzzy Rule Elicitation
Each rule was elicited from 10 experts. The antecedents are the workload and response-time levels; the columns labelled -2 to +2 are the candidate scaling actions (number of instances to remove/add), and each cell counts how many of the 10 experts voted for that action. The consequent c_avg(l) is the vote-weighted average.

| Rule (l) | Workload | Response time | -2 | -1 | 0 | +1 | +2 | c_avg(l) |
|---|---|---|---|---|---|---|---|---|
| 1 | Very low | Instantaneous | 7 | 2 | 1 | 0 | 0 | -1.6 |
| 2 | Very low | Fast | 5 | 4 | 1 | 0 | 0 | -1.4 |
| 3 | Very low | Medium | 0 | 2 | 6 | 2 | 0 | 0 |
| 4 | Very low | Slow | 0 | 0 | 4 | 6 | 0 | 0.6 |
| 5 | Very low | Very slow | 0 | 0 | 0 | 6 | 4 | 1.4 |
| 6 | Low | Instantaneous | 5 | 3 | 2 | 0 | 0 | -1.3 |
| 7 | Low | Fast | 2 | 7 | 1 | 0 | 0 | -1.1 |
| 8 | Low | Medium | 0 | 1 | 5 | 3 | 1 | 0.4 |
| 9 | Low | Slow | 0 | 0 | 1 | 8 | 1 | 1 |
| 10 | Low | Very slow | 0 | 0 | 0 | 4 | 6 | 1.6 |
| 11 | Medium | Instantaneous | 6 | 4 | 0 | 0 | 0 | -1.6 |
| 12 | Medium | Fast | 2 | 5 | 3 | 0 | 0 | -0.9 |
| 13 | Medium | Medium | 0 | 0 | 5 | 4 | 1 | 0.6 |
| 14 | Medium | Slow | 0 | 0 | 1 | 7 | 2 | 1.1 |
| 15 | Medium | Very slow | 0 | 0 | 1 | 3 | 6 | 1.5 |
| 16 | High | Instantaneous | 8 | 2 | 0 | 0 | 0 | -1.8 |
| 17 | High | Fast | 4 | 6 | 0 | 0 | 0 | -1.4 |
| 18 | High | Medium | 0 | 1 | 5 | 3 | 1 | 0.4 |
| 19 | High | Slow | 0 | 0 | 1 | 7 | 2 | 1.1 |
| 20 | High | Very slow | 0 | 0 | 0 | 6 | 4 | 1.4 |
| 21 | Very high | Instantaneous | 9 | 1 | 0 | 0 | 0 | -1.9 |
| 22 | Very high | Fast | 3 | 6 | 1 | 0 | 0 | -1.2 |
| 23 | Very high | Medium | 0 | 1 | 4 | 4 | 1 | 0.5 |
| 24 | Very high | Slow | 0 | 0 | 1 | 8 | 1 | 1 |
| 25 | Very high | Very slow | 0 | 0 | 0 | 4 | 6 | 1.6 |
Each rule l has the form:

R(l): IF the workload (x1) is F1(l) AND the response time (x2) is G2(l), THEN add/remove c_avg(l) instances.

The consequent is the vote-weighted average of the experts' responses:

c_avg(l) = Σ_k w_k(l) · C_k / Σ_k w_k(l),  k = 1, ..., 5,

where C_k ∈ {-2, -1, 0, +1, +2} are the candidate scaling actions and w_k(l) is the number of experts who chose C_k for rule l.

Goal: pre-compute the costly calculations so that elasticity reasoning at runtime, based on fuzzy inference, is efficient.
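As a sanity check, the consequents in the table are exactly these vote-weighted averages; a small sketch recomputing them (the vote tuple below is rule 12 from the table):

```python
# Recompute the rule consequents from the elicitation table above. Votes are
# the per-rule expert counts for the five scaling actions C = (-2, -1, 0, +1, +2).
ACTIONS = (-2, -1, 0, +1, +2)

def consequent(votes):
    """Vote-weighted average of the candidate actions -> c_avg for one rule."""
    assert sum(votes) == 10, "each rule was elicited from 10 experts"
    return sum(w * c for w, c in zip(votes, ACTIONS)) / sum(votes)

print(consequent((2, 5, 3, 0, 0)))  # rule 12 -> -0.9, matching the table
```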
20. Elasticity Reasoning at Runtime
At runtime the controller turns monitoring data into scaling actions via interval type-2 fuzzy inference.
Q. Liang and J. M. Mendel, "Interval type-2 fuzzy logic systems: theory and design," IEEE Transactions on Fuzzy Systems, 8(5):535-550, 2000.
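Putting slides 19 and 20 together, here is a toy sketch of one runtime step: fuzzify the monitored workload and response time, fire the pre-computed rules, and defuzzify into a scaling action. For simplicity this uses type-1 firing levels and a weighted-average defuzzifier; the actual controller uses interval type-2 membership functions with type reduction (Liang and Mendel, 2000). The two rules below are toy values, not the elicited rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# (workload MF params, response-time MF params, pre-computed consequent)
RULES = [
    ((-0.5, 0.0, 0.5), (-0.4, 0.0, 0.6), -1.6),  # low load, fast -> remove VMs
    ((0.5, 1.0, 1.5), (0.4, 1.0, 1.6), +1.6),    # high load, slow -> add VMs
]

def scaling_action(workload, response_time):
    """Weighted-average defuzzification over the fired rules."""
    firings = [(tri(workload, *wp) * tri(response_time, *rp), c)
               for wp, rp, c in RULES]
    den = sum(f for f, _ in firings)
    return 0 if den == 0 else round(sum(f * c for f, c in firings) / den)

print(scaling_action(0.9, 0.9))  # mostly the second rule fires -> add instances
```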
28. Workload Prediction and Its Accuracy
[Figure: forecasting the number of hits over time with double exponential smoothing, showing observed data, smoothed data, and the forecast.]
[Figure: root mean squared error (RMSE) of the forecast as a function of the smoothing parameters alpha and gamma.]
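A minimal sketch of double exponential smoothing (Holt's method), the forecasting technique named in the figure; the series and the alpha/gamma values below are illustrative:

```python
# Holt's double exponential smoothing: alpha smooths the level, gamma the trend.
def double_exponential_smoothing(series, alpha, gamma, horizon=1):
    level, trend = series[0], series[1] - series[0]
    smoothed = [level]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = gamma * (level - last_level) + (1 - gamma) * trend
        smoothed.append(level)
    forecast = [level + (h + 1) * trend for h in range(horizon)]
    return smoothed, forecast

hits = [150, 180, 210, 260, 320, 400, 470, 500]  # made-up request counts
smoothed, forecast = double_exponential_smoothing(hits, alpha=0.9, gamma=0.2)
# In-sample RMSE as a rough fit measure; sweeping alpha and gamma over a grid
# reproduces the kind of RMSE surface shown in the slide's second figure.
rmse = (sum((s - y) ** 2 for s, y in zip(smoothed, hits)) / len(hits)) ** 0.5
```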
29. Effectiveness of RobusT2Scale
SLA: rt_95 ≤ 600 ms, checked for every 10 s control interval. The columns are the workload patterns applied to the system under test (SUT); for each policy, the first row reports the response time and the second row the number of VMs used.

| Policy | Big spike | Dual phase | Large variations | Quickly varying | Slowly varying | Steep tri-phase |
|---|---|---|---|---|---|---|
| RobusT2Scale (response time) | 973 ms | 537 ms | 509 ms | 451 ms | 423 ms | 498 ms |
| RobusT2Scale (VMs) | 3.2 | 3.8 | 5.1 | 5.3 | 3.7 | 3.9 |
| Over-provisioning (response time) | 354 ms | 411 ms | 395 ms | 446 ms | 371 ms | 491 ms |
| Over-provisioning (VMs) | 6 | 6 | 6 | 6 | 6 | 6 |
| Under-provisioning (response time) | 1465 ms | 1832 ms | 1789 ms | 1594 ms | 1898 ms | 2194 ms |
| Under-provisioning (VMs) | 2 | 2 | 2 | 2 | 2 | 2 |

• RobusT2Scale is superior to under-provisioning: it guarantees the SLA without requiring excessive resources.
• RobusT2Scale is superior to over-provisioning: it uses only the resources required while still guaranteeing the SLA.
35. Fuzzy Q-Learning

Algorithm 1: Fuzzy Q-Learning
Require: γ (discount factor), η (learning rate), ε (exploration probability)
1: Initialize q-values: q[i, j] = 0, 1 ≤ i ≤ N, 1 ≤ j ≤ J
2: Select an action for each fired rule:
   a_i = argmax_k q[i, k] with probability 1 − ε (Eq. 5)
   a_i = random{a_k, k = 1, 2, ..., J} with probability ε
3: Calculate the control action of the fuzzy controller:
   a = Σ_{i=1}^{N} α_i(s) × a_i (Eq. 1), where α_i(s) is the firing level of rule i
4: Approximate the Q function from the current q-values and the firing levels of the rules:
   Q(s(t), a) = Σ_{i=1}^{N} α_i(s) × q[i, a_i],
   where Q(s(t), a) is the value of the Q function for the current state s(t) in iteration t and the action a
5: Take action a and let the system go to the next state s(t + 1)
6: Observe the reinforcement signal r(t + 1) and compute the value of the new state:
   V(s(t + 1)) = Σ_{i=1}^{N} α_i(s(t + 1)) · max_k q[i, k]
7: Calculate the error signal:
   ΔQ = r(t + 1) + γ · V(s(t + 1)) − Q(s(t), a) (Eq. 4)
8: Update the q-values:
   q[i, a_i] = q[i, a_i] + η · ΔQ · α_i(s(t)) (Eq. 4)
9: Repeat the process for the new state until it converges
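A compact Python sketch of one iteration of Algorithm 1, assuming the rule firing levels α_i(s) are computed elsewhere by the fuzzy controller; the function names and the data layout (q as an N×J list of lists) are my own scaffolding, not the paper's code:

```python
import random

def select_actions(q, alpha, actions, epsilon=0.1):
    """Steps 2-4: epsilon-greedy action per rule, global action a, and Q(s, a)."""
    ks = [random.randrange(len(actions)) if random.random() < epsilon
          else max(range(len(actions)), key=lambda k: q[i][k])
          for i in range(len(q))]
    a = sum(al * actions[k] for al, k in zip(alpha, ks))        # Eq. 1
    Q = sum(al * q[i][ks[i]] for i, al in enumerate(alpha))     # step 4
    return a, ks, Q

def update_q(q, ks, Q, alpha, alpha_next, reward, gamma=0.8, eta=0.1):
    """Steps 6-8: value of the new state, TD error, then per-rule q update."""
    V = sum(al * max(row) for al, row in zip(alpha_next, q))    # step 6
    dQ = reward + gamma * V - Q                                 # step 7
    for al, k, row in zip(alpha, ks, q):                        # step 8
        row[k] += eta * dQ * al

# Toy usage: 2 rules, 5 candidate actions (-2..+2), made-up firing levels.
q = [[0.0] * 5 for _ in range(2)]
a, ks, Q = select_actions(q, alpha=[0.5, 0.5], actions=[-2, -1, 0, 1, 2])
update_q(q, ks, Q, alpha=[0.5, 0.5], alpha_next=[0.3, 0.7], reward=0.2)
```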
[Figure: membership functions of the two antecedents. Workload is partitioned into Low, Medium, and High with breakpoints α, β, γ, δ; response time is partitioned into Bad, OK, and Good with breakpoints λ, μ, ν; membership grades range from 0 to 1.]
The controller receives the current values of w and rt, which correspond to the state of the system, s(t) (cf. Step 4 in Algorithm 1). The control signal sa represents the action a that the controller takes at each loop. We define the reward signal r(t) based on three criteria: (i) the number of violations of the desired response time, (ii) the amount of resources acquired, and (iii) throughput, as follows:

r(t) = U(t) − U(t − 1),   (6)

where U(t) is the utility value of the system at time t. Hence, if a controlling action leads to an increased utility, the action was appropriate. If the reward is close to zero, the action was not effective. A negative reward (punishment) warns that the situation became worse after taking the action. The utility function is defined as:

U(t) = w1 · th(t)/th_max + w2 · (1 − vm(t)/vm_max) + w3 · (1 − H(t)),   (7)

where

H(t) = (rt(t) − rt_des)/rt_des   if rt_des ≤ rt(t) ≤ 2·rt_des
H(t) = 1                         if rt(t) ≥ 2·rt_des
H(t) = 0                         if rt(t) ≤ rt_des

and th(t), vm(t), and rt(t) are the throughput, the number of worker roles, and the response time of the system, respectively; w1, w2, and w3 are their corresponding weights, determining their relative importance.
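Read literally, Eqs. (6) and (7) translate into a few lines; a sketch under assumed constants (th_max, vm_max, rt_des, and the weights below are placeholders, with rt in seconds so rt_des = 0.6 matches the 600 ms SLA):

```python
def H(rt, rt_des):
    """Penalty for violating the desired response time, per the piecewise Eq."""
    if rt <= rt_des:
        return 0.0
    if rt >= 2 * rt_des:
        return 1.0
    return (rt - rt_des) / rt_des

def utility(th, vm, rt, th_max=1000.0, vm_max=8, rt_des=0.6,
            w1=0.4, w2=0.3, w3=0.3):
    """Eq. (7): weighted mix of throughput, resource frugality, and SLA health."""
    return w1 * th / th_max + w2 * (1 - vm / vm_max) + w3 * (1 - H(rt, rt_des))

def reward(curr, prev):
    """Eq. (6): the change in utility after taking a scaling action."""
    return utility(*curr) - utility(*prev)

# Positive reward: throughput went up and rt dropped back under rt_des.
print(reward((900, 4, 0.45), (700, 3, 0.9)))
```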
Code:
https://github.com/pooyanjamshidi/Fuzzy-Q-Learning
50. Submit to CloudWays 2016!
Paper submission deadline July 1st, 2016
Decision notification August 1st, 2016
Final version due August 8th, 2016
Workshop date September 5th, 2016
Collocated with ESOCC, Vienna
Topics: Cloud Architecture, Big Data, DevOps
- Details: https://sites.google.com/site/cloudways2016/call-for-papers