SigOpt provides an optimization platform and techniques to help practitioners solve machine learning problems more efficiently. It allows users to frame problems as black box optimization and optimize multiple competing metrics. SigOpt's parallel optimization capabilities also help fully utilize available compute resources to accelerate results. Case studies demonstrated significant speedups and accuracy gains from SigOpt compared to random search and grid search.
This talk discusses the intuition behind Bayesian optimization with and without multiple metrics. Tobias Andreassen, who supports a number of our systematic trading customers, presented the intuition behind Bayesian optimization for model optimization with a single metric or with multiple (often competing) metrics. It often makes sense to analyze a second metric to avoid myopic training runs that overfit your data or otherwise fail to reflect real-world performance.
Tuning for Systematic Trading: Talk 2: Deep Learning (SigOpt)
This talk explains how to train deep learning and other expensive models with parallelism and multitask optimization to reduce wall-clock time. Tobias Andreassen, who supports a number of our systematic trading customers, presented the intuition behind Bayesian optimization for model optimization with a single metric or with multiple (often competing) metrics. It often makes sense to analyze a second metric to avoid myopic training runs that overfit your data or otherwise fail to reflect real-world performance.
Tuning for Systematic Trading: Talk 3: Training, Tuning, and Metric Strategy (SigOpt)
This talk explains how you can train and tune efficiently using metric strategy to assign, store, and optimize a variety of metrics, even changing them over time. Tobias Andreassen, who supports a number of our systematic trading customers, explained how he helps customers tune more efficiently with these SigOpt features in real-world scenarios.
Agile analytics: An exploratory study of technical complexity management (Agnirudra Sikdar)
The thesis reviewed various case studies to determine the types of modelling, choice of algorithm, and types of analytical approaches involved, and to identify the various complexities arising from these cases. From these reviews, procedures are proposed to improve efficiency and manage the various types of complexity from an agile methodological perspective. The focus was mostly on customer segmentation and clustering, with the purpose of bridging Big Data and Business Intelligence through analytics.
Common Problems in Hyperparameter Optimization (SigOpt)
Originally given at MLConf NYC 2017.
All large machine learning pipelines have tunable parameters, commonly referred to as hyperparameters. Hyperparameter optimization is the process by which we find the values for these parameters that cause our system to perform the best. SigOpt provides a Bayesian optimization platform that is commonly used for hyperparameter optimization, and I’m going to share some of the common problems we’ve seen when integrating into machine learning pipelines.
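To make the idea of hyperparameter optimization concrete, here is a minimal, generic sketch (not SigOpt's actual API): even the simplest baseline, random search, just samples hyperparameter values and keeps the best-scoring configuration. The `validation_score` function and the parameter ranges below are invented for illustration.

```python
import random

def validation_score(params):
    # Stand-in for training a model and scoring it on a validation set;
    # by construction the best settings here are lr ~ 0.1 and depth = 6.
    return -((params["lr"] - 0.1) ** 2) - (params["depth"] - 6) ** 2

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {"lr": rng.uniform(0.001, 1.0), "depth": rng.randint(1, 12)}
        score = validation_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(200)
```

Bayesian optimization platforms improve on this loop by using the scores already observed to choose which configuration to try next, rather than sampling blindly.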
Interactive Tradeoffs Between Competing Offline Metrics with Bayesian Optimiz... (SigOpt)
Many real-world applications - machine learning models, simulators, etc. - have multiple competing metrics that define performance; these require practitioners to carefully consider potential tradeoffs. However, assessing and ranking these tradeoffs is nontrivial, especially when there are more than two metrics. Oftentimes, practitioners scalarize the metrics into a single objective, e.g., using a weighted sum.
In this talk, we pose this problem as a constrained multi-objective optimization problem. By setting and updating the constraints, we can efficiently explore only the region of the Pareto efficient frontier of the model/system of most interest. We motivate this problem with an application in experimental design, where we are trying to fabricate high-performance glass substrates for solar cell panels.
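As a rough illustration of the underlying idea (the numbers are hypothetical, not the talk's actual data or SigOpt's algorithm): a point is Pareto efficient if no other point is at least as good on every metric, and constraints simply restrict which part of that frontier we explore.

```python
def pareto_front(points):
    """Return the points not dominated by any other point (maximize both metrics)."""
    front = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (accuracy, throughput) measurements for five configurations.
observed = [(0.90, 120), (0.92, 80), (0.88, 200), (0.95, 40), (0.91, 80)]

front = pareto_front(observed)
# Constrain the search to the region of interest, e.g. accuracy >= 0.90.
region = [p for p in front if p[0] >= 0.90]
```

Here (0.91, 80) is dominated by (0.92, 80) and drops out of the frontier; tightening or loosening the accuracy constraint changes which frontier points remain candidates.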
SigOpt at O'Reilly - Best Practices for Scaling Modeling Platforms (SigOpt)
Companies are increasingly building modeling platforms to empower their researchers to efficiently scale the development and productionalization of their models. Scott Clark and Matt Greenwood share a case study from a leading algorithmic trading firm to illustrate best practices for building these types of platforms in any industry. Join in to learn how Two Sigma, a leading quantitative investment and technology firm, solved its model optimization problem.
In this presentation I review various data science techniques and discuss their usefulness to pricing actuaries working in general insurance.
This presentation was originally given at the TIGI webinar in 2020.
https://www.actuaries.org.uk/learn-develop/attend-event/tigi-2020-technical-issues-general-insurance
SigOpt CEO Scott Clark provides insights for modeling at scale in systematic trading. SigOpt works with algorithmic trading firms that collectively represent $300 billion in assets under management (AUM). In this presentation, Scott draws on this experience to provide a few critical insights to how these companies effectively model at scale. Alongside these insights, Scott shares a more specific case study from working with Two Sigma, a leading systematic investment manager.
Tuning the Untunable - Insights on Deep Learning Optimization (SigOpt)
Patrick Hayes originally gave this talk at ODSC West in 2018. During this talk, Patrick discusses a couple of key barriers to deep learning optimization and how SigOpt solves them. First, Patrick discusses the problem of lengthy training cycles and how novel techniques like multitask optimization are designed to use partial information to solve this challenge. Second, Patrick discusses automated cluster management and how solving this problem makes it much easier to manage training cycles for these models.
Kaggle Higgs Boson Machine Learning Challenge (Bernard Ong)
What it took to score in the top 2% on the Higgs Boson Machine Learning Challenge: a journey into advanced machine learning model ensembles and stacking methods.
Valencian Summer School in Machine Learning 2017 - Day 1
Lectures Review: Summary Day 1 Sessions. By Mercè Martín (BigML).
https://bigml.com/events/valencian-summer-school-in-machine-learning-2017
Plotcon 2016 Visualization Talk by Alexandra Johnson (SigOpt)
Machine learning is full of ideas that are far abstracted away from the underlying data and difficult to understand. Luckily, this represents an amazing opportunity for visualization! These slides dive into the machine learning meta-problem of hyperparameter optimization. We'll show 4 opportunities for visualization in helping people understand, implement, and evaluate hyperparameter optimization strategies.
Evolving Reinforcement Learning Algorithms, J. D. Co-Reyes et al., 2021 (Chris Ohk)
At an RL paper review study group, I summarized and presented the paper Evolving Reinforcement Learning Algorithms. The paper designs a language for expressing the loss functions of value-based, model-free RL agents and proposes loss functions that are better optimized than the standard DQN loss. I hope many people find it helpful.
AWS makes it easy to build, train, tune, and deploy Machine Learning (ML) models. If you're excited to get started with ML on AWS but want a refresher on the ML concepts behind build, train, tune, and deploy, this Dev Chat is for you.
Originally delivered as a Dev Chat at AWS Summit SF by Software Engineer Alexandra Johnson
SigOpt helps your algorithmic traders and data scientists build better models faster. Learn how to integrate SigOpt into your modeling platform for quick ROI for your data science team.
This webinar, hosted by SigOpt co-founder and CEO Scott Clark, explains how advanced features can help you achieve your modeling goals. These features include metric definition and multimetric optimization, conditional parameters, and multitask optimization for long training cycles.
SigOpt founder and CEO, Scott Clark, PhD, explains the tradeoffs you'll want to consider when designing your modeling platform and integrating hyperparameter optimization to enhance data scientist productivity.
In this video I’m going to show you how SigOpt can help you amplify your machine learning and AI models by optimally tuning them using our black-box optimization platform.
Video: https://youtu.be/EjGrRxXWg8o
The SigOpt platform provides an ensemble of state-of-the-art Bayesian and Global optimization algorithms via a simple Software-as-a-Service API.
Metric Management: a SigOpt Applied Use Case (SigOpt)
These slides accompany a recorded live webcast demonstrating SigOpt's Metric Management functionality, used to keep model size down while increasing validation accuracy on a road-sign image classification problem.
In this deck I’m going to show you how SigOpt can help you amplify your trading models by optimally tuning them using our black-box optimization platform.
Production model lifecycle management, 2016-09 (Greg Makowski)
This talk covers the various stages of building data mining models, putting them into production, and eventually replacing them. A common theme throughout is three attributes of predictive models: accuracy, generalization, and description. I assert you can have it all, and having all three is important for managing the lifecycle. A subtle point is that this is a step toward developing embedded, automated data mining systems that can figure out themselves when they need to be updated.
Evolving the Optimal Relevancy Ranking Model at Dice.com (Simon Hughes)
This is a talk about gathering a golden test set of relevancy judgements, either using manual annotators or search log mining, to use in either an automated or manual relevancy tuning process. We also discuss the dangers of positive feedback loops when building closed-loop machine learning models for search and recommendation.
Business analytics (BA) is used to gain insights that inform business decisions, and it can be used to automate and optimize business processes. Data-driven companies treat their data as a corporate asset and leverage it for a competitive advantage. Successful business analytics depends on data quality, skilled analysts who understand the technologies and the business, and an organizational commitment to data-driven decision-making.
Business analytics examples
Business analytics techniques break down into two main areas. The first is basic business intelligence. This involves examining historical data to get a sense of how a business department, team or staff member performed over a particular time. This is a mature practice that most enterprises are fairly accomplished at using.
Please find the pre-print online:
https://hal.inria.fr/hal-01334851
Feature models are widely used to encode the configurations of a software product line in terms of mandatory, optional and exclusive features as well as propositional constraints over the features. Numerous computationally expensive procedures have been developed to model check, test, configure, debug, or compute relevant information of feature models. In this paper we explore the possible improvements gained by relying on the enumeration of all configurations when performing automated analysis operations. The key idea is to pre-compile configurations so that reasoning operations (queries and transformations) can then be performed in polytime. We tackle the challenge of how to scale the existing enumeration techniques. We show that the use of distributed computing techniques might offer practical solutions to previously unsolvable problems and open new perspectives for the automated analysis of software product lines.
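The flavor of such pre-compilation can be sketched on a toy feature model (the model and its constraints below are invented for illustration; real analyses use SAT/BDD-based tooling at far larger scale). Enumerating the valid configurations once makes subsequent queries simple scans over the list.

```python
from itertools import product

# Hypothetical feature model: Base is mandatory; GUI and CLI are optional
# front-ends but mutually exclusive; Logging requires GUI.
features = ["Base", "GUI", "CLI", "Logging"]

def valid(cfg):
    return (
        cfg["Base"]                               # mandatory root feature
        and not (cfg["GUI"] and cfg["CLI"])       # exclusive features
        and (not cfg["Logging"] or cfg["GUI"])    # Logging implies GUI
    )

# Pre-compile: enumerate every assignment of features that satisfies the
# propositional constraints, so later reasoning operations (counting,
# filtering, queries) need no constraint solving.
configs = [
    dict(zip(features, bits))
    for bits in product([True, False], repeat=len(features))
    if valid(dict(zip(features, bits)))
]
```

For this toy model the enumeration yields four valid configurations out of sixteen candidate assignments; the paper's contribution is making this enumeration step scale via distributed computing.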
Using SigOpt to Tune Deep Learning Models with Nervana Cloud (SigOpt)
In this talk I'll show how the Bayesian Optimization methods used by SigOpt, coupled with the incredibly scalable deep learning architecture provided with ncloud and neon, allow anyone to easily tune their models and quickly achieve higher accuracy. I'll walk through the techniques and show an explicit example with results.
These are the slides from my talk at the FULokoja Ingressive meetup.
XGBoost is a decision-tree-based ensemble machine learning algorithm that uses a gradient boosting framework. In prediction problems involving unstructured data (images, text, etc.), artificial neural networks tend to outperform all other algorithms or frameworks. However, when it comes to small-to-medium structured/tabular data, decision-tree-based algorithms are considered best-in-class right now. The XGBoost model offers the best combination of prediction performance and processing time compared to other algorithms.
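For intuition about the gradient boosting framework itself, here is a deliberately tiny pure-Python sketch on made-up 1-D data. This is not XGBoost, which adds regularization, second-order gradients, and heavy systems optimizations; it only shows the core loop: each round fits a weak learner (here a regression stump) to the current residuals and adds a damped copy of it to the ensemble.

```python
def fit_stump(xs, residuals):
    """Best single-split regression stump (threshold plus two leaf means)."""
    best = None
    for t in xs:
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, n_rounds=20, lr=0.5):
    pred = [sum(ys) / len(ys)] * len(xs)  # start from the mean prediction
    for _ in range(n_rounds):
        # For squared loss, the negative gradient is just the residual.
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return pred

xs = [1, 2, 3, 4, 5, 6]
ys = [1.0, 1.2, 0.9, 3.0, 3.1, 2.9]
pred = gradient_boost(xs, ys)
mse = sum((y - p) ** 2 for y, p in zip(ys, pred)) / len(ys)
```

A real implementation would of course keep the fitted stumps so the ensemble can predict on new data; here we only track the in-sample fit to show the error shrinking round by round.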
For a company like Blue Apron that is radically transforming the way we buy, prepare and eat meals, experimentation is mission critical for delivering a great customer experience. Blue Apron doesn’t just think about experimenting to improve short term conversion, they focus on ways to impact longer term metrics like retention, referrals, and lifetime value.
Join John Cline, engineering manager at Blue Apron, to learn how his team has built their experimentation program on Optimizely’s platform.
Attend this webinar to learn:
-How Blue Apron built their experimentation program on top of Optimizely Full Stack
-How developers play a critical role in experimentation
-The key considerations for developers when thinking about experimentation
Scott Clark, Co-Founder and CEO, SigOpt at MLconf SF 2016 (MLconf)
Using Bayesian Optimization to Tune Machine Learning Models: In this talk we briefly introduce Bayesian Global Optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time-consuming or expensive. We will motivate the problem and give example applications.
We will also talk about our development of a robust benchmark suite for our algorithms including test selection, metric design, infrastructure architecture, visualization, and comparison to other standard and open source methods. We will discuss how this evaluation framework empowers our research engineers to confidently and quickly make changes to our core optimization engine.
We will end with an in-depth example of using these methods to tune the features and hyperparameters of a real world problem and give several real world applications.
Similar to Advanced Optimization for the Enterprise Webinar
Optimizing BERT and Natural Language Models with SigOpt Experiment Management (SigOpt)
SigOpt Machine Learning Engineer Meghana Ravikumar explains how she used a "distillation" process, optimized with SigOpt's Experiment Management functionality, to reduce the size of a BERT natural language model trained on the SQuAD 2.0 question-answering dataset while maintaining performance.
SigOpt's Fay Kallel, Head of Product, and Jim Blomo, Head of Engineering, describe the latest updates to SigOpt, a suite of features that help you manage your modeling process.
Efficient NLP by Distilling BERT and Multimetric Optimization (SigOpt)
SigOpt ML Engineer Meghana Ravikumar explains how to use multimetric optimization to achieve a more efficient, compact BERT model to perform on a question-answering task.
SigOpt Research Engineer Michael McCourt and DarwinAI CTO Alexander Wong explain how they used SigOpt and hyperparameter optimization to successfully improve accuracy of detecting COVID-19 cases from chest X-Rays, using the COVID-Net model and the COVIDx open dataset.
Tuning Data Augmentation to Boost Model Performance (SigOpt)
In this webinar, SigOpt ML Engineer Meghana Ravikumar builds an image classifier trained on the Stanford Cars dataset to evaluate two approaches to transfer learning - fine-tuning and feature extraction - and the impact of multitask optimization, a more efficient form of Bayesian optimization, on these techniques. Once we define the most performant transfer learning technique for Stanford Cars, we use image augmentation to double the size of the dataset to boost the classifier's performance. Instead of manually tuning the hyperparameters associated with image augmentation, we use multitask optimization to learn these hyperparameters, with the downstream image classifier's performance as the guide. In conjunction with model performance, we also explore the features of these augmented images and the downstream implications for our image classifier.
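The core idea behind multitask optimization - use cheap, partial evaluations to decide where to spend the full training budget - can be sketched as follows. The cost model, noise model, and numbers here are hypothetical and are not SigOpt's actual method, which builds a probabilistic model across tasks rather than a simple top-k screen.

```python
import random

def evaluate(params, fraction, rng):
    """Stand-in for a training run using `fraction` of the full budget;
    cheaper fractions return a noisier estimate of true quality."""
    true_quality = -abs(params["lr"] - 0.05) * 10
    return true_quality + rng.gauss(0, (1 - fraction) * 0.5)

rng = random.Random(1)
candidates = [{"lr": rng.uniform(0.001, 0.5)} for _ in range(30)]

# Screen every candidate with a cheap, partial evaluation (10% budget)...
screened = sorted(candidates, key=lambda p: evaluate(p, 0.1, rng), reverse=True)[:3]
# ...then spend the full budget only on the survivors.
best = max(screened, key=lambda p: evaluate(p, 1.0, rng))
```

The appeal for expensive models is that 30 cheap runs plus 3 full runs cost far less than 30 full runs, while the noisy cheap task still carries enough signal to discard most poor configurations.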
As data science workloads grow, so does their need for infrastructure. But, is it fair to ask data scientists to also become infrastructure experts? If not the data scientists, then, who is responsible for spinning up and managing data science infrastructure? This talk will address the context in which ML infrastructure is emerging, walk through two examples of ML infrastructure tools for launching hyperparameter optimization jobs, and end with some thoughts for building better tools in the future.
Originally given as a talk at the PyData Ann Arbor meetup (https://www.meetup.com/PyData-Ann-Arbor/events/260380989/)
SigOpt at Uber Science Symposium - Exploring the spectrum of black-box optimi... (SigOpt)
At the inaugural Uber science symposium, SigOpt research engineer Bolong (Harvey) Cheng shares insights on black-box optimization from his experience working with both leading academics and innovative enterprises.
Training and tuning models with lengthy training cycles like those in deep learning can be extremely expensive and may sometimes involve techniques that degrade performance. We'll explore recent research on optimization strategies to efficiently tune these types of deep learning models. We will provide benchmarks and comparisons to other popular methods for optimizing the models, and we'll recommend valuable areas for further applied research.
SigOpt at GTC - Reducing operational barriers to optimization (SigOpt)
Advanced hardware like NVIDIA technology lowers technical barriers to model size and scope, but issues remain in areas like model performance and training infrastructure management. We'll discuss operational challenges to training models at scale, with a particular focus on how training management and hyperparameter tuning can inform each other to accomplish specific goals. We'll also explore techniques like parallelism and scheduling, discuss their impact on model optimization, compare various approaches, and evaluate the results. In particular, we'll focus on how new tools that automate training orchestration accelerate model development and increase the volume and quality of models in production.
SigOpt at MLconf - Reducing Operational Barriers to Model Training (SigOpt)
In this talk at MLconf NYC, Alexandra Johnson, platform engineering lead at SigOpt, discusses common operational challenges with scaling model training and how solutions are designed to address them.
Machine learning infrastructure solves data scientists' problems using infrastructure tools. This talk shows the case study of building SigOpt Orchestrate, an ML infrastructure tool, and highlights how data scientists' concerns as users mapped to solutions with some of today's most popular infrastructure tools.
To learn more about SigOpt Orchestrate: https://sigopt.com/orchestrate
Originally given as a talk for UC Berkeley's Women in Electrical Engineering and Computer Science group on January 24, 2019.
Tips and techniques for hyperparameter optimization (SigOpt)
All machine learning and artificial intelligence pipelines - from reinforcement agents to deep neural nets - have tunable hyperparameters. Optimizing these hyperparameters can take a model from scrappy prototype to production-ready system. This presentation shows techniques for performing hyperparameter optimization from an engineer who builds advanced and widely used optimization tools.
MLconf 2017 Seattle Lunch Talk - Using Optimal Learning to tune Deep Learning... (SigOpt)
In this talk we introduce Bayesian Optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time-consuming or expensive. Deep learning pipelines are notoriously expensive to train and often have many tunable parameters, including hyperparameters, the architecture, and feature transformations, that can have a large impact on the efficacy of the model.
We will motivate the problem by giving several example applications using multiple open source deep learning frameworks and open datasets. We’ll compare the results of Bayesian Optimization to standard techniques like grid search, random search, and expert tuning.
Using Optimal Learning to Tune Deep Learning Pipelines (SigOpt)
SigOpt talk from NVIDIA GTC 2017 and AWS SF AI Day
We'll introduce Bayesian optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time consuming or expensive. Deep learning pipelines are notoriously expensive to train and often have many tunable parameters, including hyperparameters, the architecture, and feature transformations, that can have a large impact on the efficacy of the model. We'll provide several example applications using multiple open source deep learning frameworks and open datasets. We'll compare the results of Bayesian optimization to standard techniques like grid search, random search, and expert tuning. Additionally, we'll present a robust benchmark suite for comparing these methods in general.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring of JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
3. SigOpt. Confidential.
Abstract
SigOpt provides an extensive set of advanced features that help you, the expert, save time while increasing performance.
Today, we will share some of the intuition behind these features, combining and applying them to tackle real-world problems.
4. How experimentation impacts your modeling
[Diagram: the modeling lifecycle runs from Data Preparation (transformation, labeling, pre-processing, pipeline development, feature engineering, feature stores) through Experimentation, Training, and Evaluation (model search and hyperparameter tuning; insights, tracking, and collaboration; resource scheduling and management; and more) to Model Productionalization (validation, serving, deploying, monitoring, managing, inference, online testing), all on top of a notebook and model framework and an on-premise, hybrid, or multi-cloud hardware environment.]
5. Motivation
1. How to solve a black box optimization problem
2. Why you should optimize using multiple competing metrics
3. How to continuously and efficiently employ your project's dedicated compute infrastructure
4. How to tune models with expensive training costs
6. SigOpt Features
Enterprise Platform: reproducibility; intuitive web dashboards; cross-team permissions and collaboration; infrastructure agnostic; REST API; libraries for Python, Java, R, and MATLAB.
Optimization Engine: multimetric optimization; continuous, categorical, or integer parameters; constraints and failure regions; up to 10k observations and 100 parameters; multitask optimization and high parallelism; conditional parameters; parallel resource scheduler.
Experiment Insights: advanced experiment visualizations; usage insights; parameter importance analysis; ...and more.
Black-Box Interface: tunes without accessing any data.
8. Black Box Optimization
Why black box optimization?
SigOpt was designed to empower you, the practitioner, to re-frame most machine learning problems as black box optimization problems, with the added benefits of:
• Amplified performance — incremental gains in accuracy or other success metrics
• Productivity gains — a consistent platform across tasks that facilitates sharing
• Accelerated modeling — early elimination of non-scalable tasks
• Compute efficiency — continuous, full utilization of infrastructure
SigOpt uses an ensemble of Bayesian and global optimization methods to solve these black box optimization problems.
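The suggest-evaluate-report loop described above can be sketched in Python. The `SuggestionEngine` below is a hypothetical stand-in that samples uniformly at random; SigOpt's actual engine replaces this with its ensemble of Bayesian and global optimization methods, accessed over the REST API. The objective is a toy function standing in for a real training run.

```python
import random

# Hypothetical stand-in for an optimization engine. SigOpt's ensemble of
# Bayesian and global methods would replace the random sampling used here.
class SuggestionEngine:
    def __init__(self, bounds):
        self.bounds = bounds          # {name: (low, high)}
        self.history = []             # (config, metric) pairs

    def suggest(self):
        # A real engine would fit a statistical model to self.history;
        # here we just sample uniformly within the bounds.
        return {k: random.uniform(lo, hi) for k, (lo, hi) in self.bounds.items()}

    def observe(self, config, metric):
        self.history.append((config, metric))

    def best(self):
        return max(self.history, key=lambda pair: pair[1])

# The black box: the engine never sees training data, only configurations
# and the resulting objective metric.
def evaluate(config):
    # Toy objective standing in for "train model, return validation metric".
    return -(config["lr"] - 0.1) ** 2 - (config["reg"] - 1.0) ** 2

random.seed(0)
engine = SuggestionEngine({"lr": (0.0, 1.0), "reg": (0.0, 2.0)})
for _ in range(50):
    config = engine.suggest()          # 1. ask for a new configuration
    metric = evaluate(config)          # 2. evaluate it behind your firewall
    engine.observe(config, metric)     # 3. report the objective metric

best_config, best_metric = engine.best()
```

The key property is the interface, not the sampler: nothing crosses the firewall except configurations and metric values.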
9. Black Box Optimization
[Diagram: behind your firewall, training data and testing data feed an AI/ML/DL/simulation model, whose evaluation or backtest produces an objective metric. The metric is reported to SigOpt over the REST API; SigOpt's Optimization Engine ("explore and exploit with a variety of techniques") returns new configurations (parameters or hyperparameters), while Experiment Insights ("track, organize, analyze and reproduce any model") and the Enterprise Platform ("built to fit any stack and scale with your needs") support the loop, yielding better results.]
10. Black Box Optimization
[Diagram repeated from slide 9, highlighting the flow from the REST API through the platform components to better results.]
11. Black Box Optimization
A graphical depiction of the iterative process:
1. Build a statistical model from the observations so far.
2. Choose the next point to maximize the acquisition function.
3. Repeat.
12. Black Box Optimization
Gaussian processes: a powerful tool for modeling in spatial statistics
A standard tool for building statistical models is the Gaussian process [Fasshauer et al, 2015; Frazier, 2018].
• Assume that function values are jointly normally distributed.
• Apply prior beliefs about mean behavior and covariance between observations.
• Posterior beliefs about unobserved locations can then be computed rather easily.
Different prior assumptions produce different statistical models.
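The posterior computation can be sketched in a few lines of NumPy. This is a minimal illustration with a squared-exponential (RBF) kernel and a zero-mean prior, not SigOpt's internal model; the kernel choice and lengthscale are assumptions for the example.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D points."""
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-8):
    """Posterior mean and variance at x_test under a zero-mean GP prior."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    K_ss = rbf_kernel(x_test, x_test)
    mean = K_s @ np.linalg.solve(K, y_train)
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.clip(np.diag(cov), 0.0, None)

# Three observations of a toy function; query on a grid that includes them.
x = np.array([0.0, 0.25, 0.75])
y = np.sin(2 * np.pi * x)          # [0, 1, -1]
xs = np.linspace(0.0, 1.0, 5)
mu, var = gp_posterior(x, y, xs)
```

Note how the posterior interpolates the observed values (near-zero variance at the data) and is uncertain away from them, which is exactly what the acquisition function on the next slide exploits.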
13. Black Box Optimization
Acquisition function: given a model, how should we choose the next point?
An acquisition function defines the utility of a future sample, given the current samples, while balancing exploration and exploitation [Shahriari et al, 2016]:
• Exploration — learning about the whole function f
• Exploitation — further resolving regions where good f values have already been observed
Different acquisition functions choose different points (EI, PI, KG, etc.).
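For intuition, Expected Improvement (the EI named above) has a closed form given the model's posterior mean and standard deviation at a candidate point. This is a textbook sketch for a maximization problem, not SigOpt's implementation.

```python
import math

def expected_improvement(mean, std, best_so_far):
    """EI for maximization: E[max(f - best_so_far, 0)] under N(mean, std^2)."""
    if std == 0.0:
        return max(mean - best_so_far, 0.0)
    z = (mean - best_so_far) / std
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return (mean - best_so_far) * cdf + std * pdf

# Exploitation: a point whose predicted mean already beats the incumbent.
ei_exploit = expected_improvement(mean=1.2, std=0.1, best_so_far=1.0)
# Exploration: a worse mean but high uncertainty can still be attractive...
ei_explore = expected_improvement(mean=0.8, std=1.0, best_so_far=1.0)
# ...while the same mean with near-zero uncertainty is nearly worthless.
ei_certain_bad = expected_improvement(mean=0.8, std=0.01, best_so_far=1.0)
```

The three calls make the exploration/exploitation trade-off concrete: uncertainty alone can make a point worth sampling.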
14. Some Relevant SigOpt Blog Posts: Intuition Behind Bayesian Optimization
● Intuition Behind Covariance Kernels
● Approximation of Data
● Likelihood for Gaussian Processes
● Profile Likelihood vs. Kriging Variance
● Intuition Behind Gaussian Processes
● Dealing with Troublesome Metrics
For more blog posts, visit: https://sigopt.com/blog/
16. Multiple Competing Metrics
Why optimize against multiple competing metrics?
SigOpt allows the user to specify multiple competing metrics, for either optimization or tracking, to better align modeling success with business value, with the additional benefits of:
• Multiple metrics — the option to define multiple metrics, which can yield new and interesting results
• Insights, metric storage — insight through tracking of both optimized and unoptimized metrics
• Thresholds — the ability to define thresholds for success to better guide the optimizer
We believe this process yields models that deliver more reliable business outcomes and are better tied to real-world applications.
17. Multiple Competing Metrics
[Diagram repeated from slide 9: the same optimization loop, now evaluating AI/ML/DL/simulation models against an objective metric.]
18. Multiple Competing Metrics
[Diagram repeated from slide 9, now returning multiple optimized and unoptimized metrics through the REST API.]
19. Multiple Competing Metrics
Balancing competing metrics to find the Pareto frontier
Most problems of practical relevance involve two or more competing metrics:
• Neural networks — balancing accuracy and inference time
• Materials design — balancing performance and maintenance cost
• Control systems — balancing performance and safety
With competing metrics, the set of all efficient points (the Pareto frontier) is the solution.
[Figure: a feasible region whose boundary of efficient points forms the Pareto frontier.]
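A minimal sketch of extracting the Pareto frontier from a finite set of observed points. All metrics are treated as maximized, so a metric to be minimized (like inference time) is negated; the example values are hypothetical.

```python
def pareto_frontier(points):
    """Return the subset of points not dominated by any other point.
    Each point is a tuple of metrics, all to be maximized."""
    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical observations as (accuracy, -inference_time) pairs.
points = [(0.90, -10.0), (0.85, -3.0), (0.80, -12.0), (0.92, -15.0), (0.85, -9.0)]
frontier = pareto_frontier(points)
```

The dominated points, such as the slower model with the same accuracy, drop out; what remains is the set of efficient trade-offs.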
20. Multiple Competing Metrics
Balancing competing metrics to find the Pareto frontier
The goal in multi-objective (multi-criteria) optimization is to find the optimal set of solutions across a set of functions [Knowles, 2006]:
• This is formulated as finding the maximum of the functions f1 to fn over the same domain x.
• No single point exists as the solution; instead, we actively try to maximize the size of the efficient frontier, which represents the set of solutions.
• The solution is found through scalarization methods such as convex combination and epsilon-constraint.
21. Multiple Competing Metrics
Intuition: Convex Combination Scalarization
Idea: if we can convert the multimetric problem into a scalar problem, we can solve it using Bayesian optimization. One possible scalarization is a convex combination of the objectives.
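A minimal sketch of convex-combination scalarization, assuming the metrics have been pre-scaled to comparable ranges (the candidate values below are hypothetical). Sweeping the weights selects different points on the Pareto frontier:

```python
def scalarize(metrics, weights):
    """Convex combination: weights are nonnegative and sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    return sum(w * m for w, m in zip(weights, metrics))

# Hypothetical efficient candidates: (accuracy, -scaled_inference_time),
# both metrics pre-scaled to comparable ranges and maximized.
candidates = {"fast": (0.85, -0.03), "balanced": (0.90, -0.10), "accurate": (0.92, -0.15)}

def best_under(weights):
    return max(candidates, key=lambda k: scalarize(candidates[k], weights))

# Different weightings pick out different efficient points.
speed_pick = best_under((0.1, 0.9))     # mostly cares about latency
accuracy_pick = best_under((0.9, 0.1))  # mostly cares about accuracy
```

Each choice of weights reduces the multimetric problem to a single scalar objective that a Bayesian optimizer can maximize directly.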
22. Multiple Competing Metrics
Balancing competing metrics to find the Pareto frontier with thresholds
As before, the goal in multi-objective (multi-criteria) optimization is to find the optimal set of solutions across a set of functions [Knowles, 2006]:
• This is formulated as finding the maximum of the functions f1 to fn over the same domain x.
• No single point exists as the solution; instead, we actively try to maximize the size of the efficient frontier, which represents the set of solutions.
• The solution is found through constrained scalarization methods such as convex combination and epsilon-constraint.
• Users may change the constraints as the search progresses [Letham et al, 2019].
23. Multiple Competing Metrics
Constrained Scalarization
1. Model all metrics independently.
• Requires no prior beliefs about how the metrics interact.
• Missing data is removed on a per-metric basis if unrecorded.
2. Expose the efficient frontier through constrained scalar optimization.
• Enforce user constraints when given.
• Iterate through sub-constraints to better resolve the efficient frontier, if desired.
• Consider different regions of the frontier when parallelism is possible.
3. Allow users to change constraints as the search progresses.
• Allow the problems and goals to evolve as the user's understanding changes.
Constraints give customers more control over the circumstances and a better ability to understand our actions.
[Figure: a variation on Expected Improvement, per Letham et al, 2019.]
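The epsilon-constraint idea in step 2 can be sketched as follows: maximize one metric subject to a user-chosen threshold on the other, and move the threshold as goals evolve. The points below are hypothetical (accuracy, -scaled_inference_time) pairs.

```python
def epsilon_constraint_best(points, min_second):
    """Maximize the first metric subject to second metric >= min_second.
    Returns None when no point satisfies the constraint."""
    feasible = [p for p in points if p[1] >= min_second]
    return max(feasible, key=lambda p: p[0]) if feasible else None

points = [(0.85, -0.03), (0.90, -0.10), (0.92, -0.15)]

# Tightening the threshold walks along the efficient frontier; the user can
# revise it mid-search as their understanding of the problem changes.
loose = epsilon_constraint_best(points, min_second=-0.20)
tight = epsilon_constraint_best(points, min_second=-0.05)
```

A loose latency threshold admits the most accurate model; a tight one forces the search toward the fast end of the frontier.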
26. Use Case: Balancing Speed & Accuracy in Deep Learning
Multimetric Use Case 1
● Category: Time Series
● Task: Sequence Classification
● Model: CNN
● Data: Diatom Images
● Analysis: Accuracy-Time Tradeoff
● Result: Similar accuracy at 33% of the inference time
Multimetric Use Case 2
● Category: NLP
● Task: Sentiment Analysis
● Model: CNN
● Data: Rotten Tomatoes Movie Reviews
● Analysis: Accuracy-Time Tradeoff
● Result: ~2% accuracy gain at 50% of the training time
Learn more: https://devblogs.nvidia.com/sigopt-deep-learning-hyperparameter-optimization/
27. Design: Question answering data and memory networks
[Figure panels: the data and the model.]
Sources:
Facebook AI Research (FAIR) bAbI dataset: https://research.fb.com/downloads/babi/
Sukhbaatar et al.: https://arxiv.org/abs/1503.08895
28. Setup: Hyperparameter Optimization
Comparison of Bayesian optimization and random search
[Figure panels: standard parameters vs. conditional parameters.]
29. Result: Significant boost in consistency and accuracy
Comparison of random search versus Bayesian optimization with conditionals
30. Result: Highly cost-efficient accuracy gains
Comparison of random search versus Bayesian optimization with conditionals: SigOpt is 18.5x as efficient.
31. How to continuously and efficiently utilize your project's allotted compute infrastructure
32. Continuously and efficiently utilize infrastructure
Utilize compute through asynchronous parallel optimization
SigOpt natively handles parallel function evaluation with the primary goal of minimizing overall wall-clock time. Parallelism also provides:
• Faster time-to-results — minimized overall wall-clock time
• Full resource utilization — asynchronous parallel optimization
• Scaling with infrastructure — optimization across the number of available compute resources
We believe this is essential to increasing research productivity: it lowers time-to-results and scales with available infrastructure.
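The asynchronous pattern can be sketched with Python's standard library: keep every worker busy, and as soon as one evaluation finishes, submit a new configuration without waiting for the rest of the batch. The random `suggest` below is a stand-in for the optimizer's next-point logic, and the sleep stands in for a real training run.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import random
import time

random.seed(1)

def train_and_evaluate(config):
    time.sleep(random.uniform(0.01, 0.05))   # stand-in for a training run
    return -(config - 0.3) ** 2              # toy objective metric

def suggest():
    # Stand-in suggestion; a real engine conditions on results so far.
    return random.uniform(0.0, 1.0)

results = []
budget = 12        # total observations
workers = 4        # parallel bandwidth: available compute resources
with ThreadPoolExecutor(max_workers=workers) as pool:
    pending = {pool.submit(train_and_evaluate, suggest()) for _ in range(workers)}
    submitted = workers
    while pending:
        done, pending = wait(pending, return_when=FIRST_COMPLETED)
        for fut in done:
            results.append(fut.result())
            if submitted < budget:
                # Immediately refill the freed worker: no idle compute.
                pending.add(pool.submit(train_and_evaluate, suggest()))
                submitted += 1
```

Because workers are refilled one at a time as they finish, slow evaluations never block fast ones and utilization stays near 100%.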
33. Continuously and efficiently utilize infrastructure
[Diagram repeated from slide 9: the same optimization loop behind your firewall.]
34. Continuously and efficiently utilize infrastructure
[Diagram repeated from slide 9, with a single worker evaluating configurations.]
35. Continuously and efficiently utilize infrastructure
[Diagram repeated from slide 9, scaled out to four parallel workers.]
36. Continuously and efficiently utilize infrastructure
Parallel function evaluations
Parallel function evaluations are a way of efficiently maximizing a function while using all available compute resources [Ginsbourger et al, 2008; Garcia-Barcos et al, 2019]:
• Choosing points by jointly maximizing criteria over the entire set
• Asynchronously evaluating over a collection of points
• Fixing points which are currently being evaluated while sampling new ones
[Figure panels: 1-D and 2-D acquisition functions.]
37. Continuously and efficiently utilize infrastructure
Parallel optimization: multiple worker nodes jointly optimize over a given function
[Figure: the statistical model and the next point(s) to evaluate for parallel bandwidths 1 through 5, where parallel bandwidth represents the number of available compute resources.]
38. Use Case: Fast CNN Tuning with AWS GPU Instances
Parallelism Use Case
● Category: NLP
● Task: Sentiment Analysis
● Model: CNN
● Data: Rotten Tomatoes Movie Reviews
● Analysis: Predicting Positive vs. Negative Sentiment
● Result: 400x speedup
Learn more: https://aws.amazon.com/blogs/machine-learning/fast-cnn-tuning-with-aws-gpu-instances-and-sigopt/
39. Variables we tested in this experiment
Axes of exploration:
• 6 hyperparameters versus 10 parameters (including SGD and alternate architectures)
• CPU versus GPU compute
• Grid search versus random search versus Bayesian optimization
Results that we considered:
• Accuracy
• Compute cost
• Wall clock time
41. Results: Speed and accuracy
SigOpt helps you train your model faster and achieve higher accuracy, resulting in higher practitioner productivity and better business outcomes. While random and grid search for hyperparameters do yield an accuracy improvement, SigOpt achieves better results on both dimensions.
42. Results: 8x more cost-efficient performance boost
Detailed performance across different optimization processes:

| Experiment Type | Accuracy | Trials | Epochs | CPU Time | CPU Cost | GPU Time | GPU Cost | Link | Percent Change | % Δ per Comp $ |
| Default (No Tuning) | 75.70 | 1 | 50 | 2 hours | $1.82 | 0 hours | $0.04 | NA | 0 | 0 |
| Grid Search (SGD Only) | 79.30 | 729 | 38394 | 64 days | $1401.38 | 32 hours | $27.58 | here | 4.60 | 0.13 |
| Random Search (SGD Only) | 79.94 | 2400 | 127092 | 214 days | $4638.86 | 106 hours | $91.29 | here | 4.24 | 0.05 |
| SigOpt Search (SGD Only) | 80.40 | 240 | 15803 | 27 days | $576.81 | 13 hours | $11.35 | here | 4.70 | 0.42 |
| Grid Search (SGD + Architecture) | Not Feasible | 59049 | 3109914 | 5255 days | $113511.86 | 107 days | $2233.95 | NA | NA | NA |
| Random Search (SGD + Architecture) | 80.12 | 4000 | 208729 | 353 days | $7618.61 | 174 hours | $149.94 | here | 4.42 | 0.03 |
| SigOpt Search (SGD + Architecture) | 81.00 | 400 | 30060 | 51 days | $1097.19 | 25 hours | $21.59 | here | 5.30 | 0.25 |

% Δ per Comp $ is calculated using GPU compute.
45. Expensive Training Cost
How to efficiently minimize the time to optimize any function
SigOpt's multitask feature is an efficient way for modelers to tune models with expensive training costs, with the benefits of:
• Faster time-to-market — the ability to bring expensive models into production faster
• Reduction in infrastructure cost — intelligently leverage infrastructure while reducing cost
Through novel research, SigOpt helps the user lower the overall time-to-market while reducing the overall compute budget.
46. Expensive Training Cost
[Diagram repeated from slide 9: the same optimization loop behind your firewall.]
47. Expensive Training Cost
[Diagram repeated from slide 9, viewed from the platform side.]
48. Expensive Training Cost
Using cheap or free information to speed learning
SigOpt allows the user to define lower-cost tasks in order to quickly optimize expensive functions:
• Cheaper-cost tasks can be flexible (fewer epochs, subsampled data, other custom features)
• Use cheaper tasks earlier in the tuning process to explore
• Inform more expensive tasks later by exploiting what we learn
• In the process, reduce the full time required to tune an expensive model
Source: Aaron Klein, Frank Hutter, et al.: https://arxiv.org/abs/1605.07079
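A minimal sketch of the multitask idea: spend most observations on a cheap, noisy task, then re-evaluate only the most promising configurations at full cost. The cost fractions, toy objective, and noise model are illustrative assumptions, not SigOpt's API.

```python
import random

random.seed(2)

def evaluate(config, task_cost):
    """Stand-in for training: cheaper tasks (fewer epochs, subsampled data)
    return a noisier estimate of the full-cost metric."""
    true_metric = -(config - 0.4) ** 2
    noise = random.gauss(0.0, 0.2 * (1.0 - task_cost))
    return true_metric + noise

# Phase 1: explore broadly with a cheap task (10% of full training cost).
candidates = [random.uniform(0.0, 1.0) for _ in range(30)]
cheap_scores = {c: evaluate(c, task_cost=0.1) for c in candidates}

# Phase 2: exploit what the cheap runs taught us, re-evaluating only the
# most promising configurations at full cost.
top = sorted(candidates, key=lambda c: cheap_scores[c], reverse=True)[:5]
full_scores = {c: evaluate(c, task_cost=1.0) for c in top}
best = max(full_scores, key=full_scores.get)

# 30 cheap runs + 5 full runs, versus 30 full-cost runs without multitask.
total_cost = 30 * 0.1 + 5 * 1.0
```

Even with noisy cheap evaluations, the inexpensive phase points the expensive phase toward the right region at a fraction of the compute budget.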
49. Expensive Training Cost
Using cheap or free information to speed learning
We can build better models using inaccurate data to help point the actual optimization in the right direction at lower cost:
• Using a warm start through multi-task learning logic [Swersky et al, 2014]
• Combining good anytime performance with active learning [Klein et al, 2018]
• Accepting data from multiple sources without priors [Poloczek et al, 2017]
50. Use Case: Image Classification on a Budget
● Category: Computer Vision
● Task: Image Classification
● Model: CNN
● Data: Stanford Cars Dataset
● Analysis: Architecture Comparison
● Result: 2.4% accuracy gain with a much shallower model
Learn more: https://mlconf.com/blog/insights-for-building-high-performing-image-classification-models/
54. Insight: Multitask efficiency at the hyperparameter level
Example: learning rate accuracy and values by cost of task over time.
[Figure panels: progression of observations over time; accuracy and value for each observation; parameter importance analysis.]
55. Results: Optimizing and tuning the full network outperforms
Fine-tuning the smaller network (+3.92%) significantly outperforms feature extraction on a bigger network (+1.58%); multitask optimization drives significant performance gains.
56. Implication: Fine-tuning significantly outperforms
Cost breakdown for multitask optimization:

| Cost efficiency | Feature Extractor ResNet 50 | Fine-Tuning ResNet 18 |
| Hours per training | 4.08 | 4.2 |
| Observations | 220 | 220 |
| Number of Runs | 1 | 1 |
| Total compute hours | 898 | 924 |
| Cost per GPU-hour | $0.90 | $0.90 |
| % Improvement | 1.58% | 3.92% |
| Total compute cost | $808 | $832 |
| Cost ($) per % improvement | $509 | $212 |

Takeaway: similar compute cost and similar wall-clock time, but fine-tuning is significantly more efficient and effective.