AWS makes it easy to build, train, tune, and deploy Machine Learning (ML) models. If you're excited to get started with ML on AWS but want a refresher on the ML concepts behind build, train, tune, and deploy, this Dev Chat is for you.
Originally delivered as a Dev Chat at AWS Summit SF by Software Engineer Alexandra Johnson
Tuning the Untunable - Insights on Deep Learning Optimization (SigOpt)
Patrick Hayes originally gave this talk at ODSC West in 2018. During this talk, Patrick discusses a couple key barriers to deep learning optimization and how SigOpt solves them. First, Patrick discusses the problem of lengthy training cycles and how novel techniques like multitask optimization are designed to use partial information to solve this challenge. Second, Patrick discusses automated cluster management and how solving this problem makes it much easier to manage training cycles for these models.
In this video I’m going to show you how SigOpt can help you amplify your machine learning and AI models by optimally tuning them using our black-box optimization platform.
Video: https://youtu.be/EjGrRxXWg8o
The SigOpt platform provides an ensemble of state-of-the-art Bayesian and Global optimization algorithms via a simple Software-as-a-Service API.
MLconf 2017 Seattle Lunch Talk - Using Optimal Learning to tune Deep Learning... (SigOpt)
In this talk we introduce Bayesian Optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time-consuming or expensive. Deep learning pipelines are notoriously expensive to train and often have many tunable parameters, including hyperparameters, the architecture, and feature transformations, that can have a large impact on the efficacy of the model.
We will motivate the problem by giving several example applications using multiple open source deep learning frameworks and open datasets. We’ll compare the results of Bayesian Optimization to standard techniques like grid search, random search, and expert tuning.
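As a rough illustration of the comparison above, here is a minimal, self-contained sketch (pure Python with a toy objective; no SigOpt and no real training involved) of grid search versus random search over two hypothetical hyperparameters, `lr` and `reg`:

```python
# Toy comparison of grid search and random search for hyperparameter tuning.
# The objective is a stand-in for "validation accuracy"; it peaks at
# lr = 0.1, reg = 0.01. All names here are illustrative only.
import random

def objective(lr, reg):
    # Pretend validation accuracy as a smooth function of two hyperparameters.
    return 1.0 - (lr - 0.1) ** 2 - (reg - 0.01) ** 2

def grid_search(lrs, regs):
    # Evaluate every combination on a fixed grid; cost grows multiplicatively.
    return max((objective(lr, reg), lr, reg) for lr in lrs for reg in regs)

def random_search(n_trials, seed=0):
    # Sample hyperparameters uniformly at random; cost is just n_trials.
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        lr, reg = rng.uniform(0.0, 1.0), rng.uniform(0.0, 0.1)
        trials.append((objective(lr, reg), lr, reg))
    return max(trials)

grid_best = grid_search([0.01, 0.1, 1.0], [0.0, 0.05, 0.1])
rand_best = random_search(n_trials=9)
```

Both methods use nine evaluations here; Bayesian optimization would instead choose each new (lr, reg) point based on the results observed so far.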
Plotcon 2016 Visualization Talk by Alexandra Johnson (SigOpt)
Machine learning is full of ideas that are abstracted far away from the underlying data and difficult to understand. Luckily, this represents an amazing opportunity for visualization! These slides dive into the machine learning meta-problem of hyperparameter optimization. We'll show 4 opportunities for visualization in helping people understand, implement, and evaluate hyperparameter optimization strategies.
Using SigOpt to Tune Deep Learning Models with Nervana Cloud (SigOpt)
In this talk I'll show how the Bayesian Optimization methods used by SigOpt, coupled with the incredibly scalable deep learning architecture provided by ncloud and neon, allow anyone to easily tune their models and quickly achieve higher accuracy. I'll walk through the techniques and show an explicit example with results.
Using Optimal Learning to Tune Deep Learning Pipelines (Scott Clark)
SigOpt talk from NVIDIA GTC 2017 and AWS Popup Loft AI Day
We'll introduce Bayesian optimization as an efficient way to optimize machine learning model parameters, especially when evaluating different parameters is time consuming or expensive. Deep learning pipelines are notoriously expensive to train and often have many tunable parameters, including hyperparameters, the architecture, and feature transformations, that can have a large impact on the efficacy of the model. We'll provide several example applications using multiple open source deep learning frameworks and open datasets. We'll compare the results of Bayesian optimization to standard techniques like grid search, random search, and expert tuning. Additionally, we'll present a robust benchmark suite for comparing these methods in general.
Common Problems in Hyperparameter Optimization (SigOpt)
Originally given at MLConf NYC 2017.
All large machine learning pipelines have tunable parameters, commonly referred to as hyperparameters. Hyperparameter optimization is the process by which we find the values for these parameters that cause our system to perform the best. SigOpt provides a Bayesian optimization platform that is commonly used for hyperparameter optimization, and I’m going to share some of the common problems we’ve seen when integrating into machine learning pipelines.
Winning Kaggle 101: Introduction to Stacking (Ted Xiao)
An Introduction to Stacking by Erin LeDell, from H2O.ai
Presented as part of the "Winning Kaggle 101" event, hosted by Machine Learning at Berkeley and Data Science Society at Berkeley. Special thanks to the Berkeley Institute of Data Science for the venue!
H2O.ai: http://www.h2o.ai/
ML@B: ml.berkeley.edu
DSSB: http://dssberkeley.org
BIDS: http://bids.berkeley.edu/
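The core mechanics of stacking covered in the talk can be sketched in a few lines. This toy example (pure Python; the base learners and data are illustrative, not H2O's implementation) blends two base regressors with a meta-learner trained on leave-one-out, out-of-fold predictions, which is the key trick that keeps the meta-learner from overfitting to the base learners' training error:

```python
# Toy stacking (stacked generalization) sketch with two base regressors
# and a least-squares meta-learner fit on out-of-fold predictions.

def mean_model(train_y):
    # Base learner 1: always predict the training mean.
    m = sum(train_y) / len(train_y)
    return lambda x: m

def slope_model(train_x, train_y):
    # Base learner 2: least-squares line through the origin, y ~ b * x.
    b = sum(x * y for x, y in zip(train_x, train_y)) / sum(x * x for x in train_x)
    return lambda x: b * x

def stack(xs, ys):
    # 1) Leave-one-out (out-of-fold) predictions from each base learner.
    oof = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        oof.append((mean_model(ty)(xs[i]), slope_model(tx, ty)(xs[i])))
    # 2) Meta-learner: least-squares blend weights over the two base outputs
    #    (2x2 normal equations solved in closed form).
    a11 = sum(p * p for p, q in oof)
    a12 = sum(p * q for p, q in oof)
    a22 = sum(q * q for p, q in oof)
    b1 = sum(p * y for (p, q), y in zip(oof, ys))
    b2 = sum(q * y for (p, q), y in zip(oof, ys))
    det = a11 * a22 - a12 * a12
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det
    # 3) Refit base learners on all data; blend with the learned weights.
    m1, m2 = mean_model(ys), slope_model(xs, ys)
    return lambda x: w1 * m1(x) + w2 * m2(x)

xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]
predict = stack(xs, ys)
```

On this perfectly linear data the meta-learner learns to put essentially all of its weight on the slope model.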
Week 4 advanced labeling, augmentation and data preprocessing (Ajay Taneja)
These are notes for Week 4 of Machine Learning Data Life Cycle in Production, the second course of the Machine Learning Engineering in Production (MLOps) specialization on Coursera.
"Automated machine learning (AutoML) is the process of automating the end-to-end process of applying machine learning to real-world problems. In a typical machine learning application, practitioners must apply the appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods that make the dataset amenable for machine learning. Following those preprocessing steps, practitioners must then perform algorithm selection and hyperparameter optimization to maximize the predictive performance of their final machine learning model. As many of these steps are often beyond the abilities of non-experts, AutoML was proposed as an artificial intelligence-based solution to the ever-growing challenge of applying machine learning. Automating the end-to-end process of applying machine learning offers the advantages of producing simpler solutions, faster creation of those solutions, and models that often outperform models that were designed by hand."
In this talk we will discuss how QuSandbox and the Model Analytics Studio can be used in the selection of machine learning models. We will also illustrate AutoML frameworks through demos and examples and show you how to get started.
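The AutoML loop described above (search over model families and their hyperparameters, then keep the configuration with the best validation score) can be sketched as follows. The candidate "models" here are toy stand-ins, not QuSandbox or any real AutoML framework:

```python
# Minimal AutoML-style search: jointly choose a model family and its
# hyperparameters by validation score. Toy learners, pure Python.
import random

def fit_constant(train):
    # Family 1: predict the training mean (no hyperparameters).
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_linear(train, reg):
    # Family 2: ridge-style slope through the origin, y ~ b * x.
    num = sum(x * y for x, y in train)
    den = sum(x * x for x, _ in train) + reg
    return lambda x, b=num / den: b * x

def automl_search(train, valid, n_trials=20, seed=0):
    rng = random.Random(seed)

    def score(model):
        # Negative mean squared error on the validation set (higher is better).
        return -sum((model(x) - y) ** 2 for x, y in valid) / len(valid)

    best = (float("-inf"), None, None)
    for _ in range(n_trials):
        family = rng.choice(["constant", "linear"])
        if family == "constant":
            model, cfg = fit_constant(train), {}
        else:
            cfg = {"reg": rng.uniform(0.0, 1.0)}
            model = fit_linear(train, cfg["reg"])
        s = score(model)
        if s > best[0]:
            best = (s, family, cfg)
    return best

train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
valid = [(4.0, 8.0)]
best_score, best_family, best_cfg = automl_search(train, valid)
```

Real AutoML systems add data preprocessing and feature engineering to the search space and replace random sampling with smarter strategies, but the select-by-validation-score loop is the same.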
Achieving Algorithmic Transparency with Shapley Additive Explanations (H2O Lo...) (Sri Ambati)
Abstract:
Explainability in the age of the EU GDPR is becoming an increasingly pertinent consideration for Machine Learning. At QuantumBlack, we address the traditional Accuracy vs. Interpretability trade-off by leveraging modern XAI techniques such as LIME and SHAP to enable individualised explanations without necessarily limiting the utility and performance of the otherwise ‘black-box’ models. The talk focuses on Shapley additive explanations (Lundberg et al. 2017), which integrate Shapley values from game theory for consistent and locally accurate explanations; it provides illustrative examples and touches upon the wider XAI theory.
Bio:
Dr Torgyn Shaikhina is a Data Scientist at QuantumBlack, STEM Ambassador, and the founder of the Next Generation Programmers outreach initiative. Her background is in decision support systems for Healthcare and Biomedical Engineering with a focus on Machine Learning with limited information.
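The Shapley values that SHAP builds on can be computed exactly for small games by averaging each player's marginal contribution over all join orders. This illustrative sketch (pure Python, not the SHAP library, where "players" stand in for model features) does so for a three-player additive game:

```python
# Exact Shapley values for a tiny cooperative game, the quantity that SHAP
# approximates for model features. Exponential cost: small games only.
import math
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    n_orders = math.factorial(len(players))
    return {p: v / n_orders for p, v in phi.items()}

# For an additive game each player's Shapley value equals its own
# contribution, and the values sum to the grand-coalition value (efficiency).
contrib = {"a": 1.0, "b": 2.0, "c": 3.0}
phi = shapley_values(list(contrib), lambda s: sum(contrib[p] for p in s))
```

The efficiency property (attributions sum to the total prediction minus the baseline) is what makes Shapley-based explanations "consistent and locally accurate" in the sense the abstract describes.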
LinkedIn talk at Netflix ML Platform meetup Sep 2019 (Faisal Siddiqi)
In this talk at the Netflix Machine Learning Platform Meetup on 12 Sep 2019, Kinjal Basu from LinkedIn discussed Online Parameter Selection for Web-Based Ranking via Bayesian Optimization.
Nikhil Garg, Engineering Manager, Quora at MLconf SF 2016 (MLconf)
Building a Machine Learning Platform at Quora: Each month, over 100 million people use Quora to share and grow their knowledge. Machine learning has played a critical role in enabling us to grow to this scale, with applications ranging from understanding content quality to identifying users’ interests and expertise. By investing in a reusable, extensible machine learning platform, our small team of ML engineers has been able to productionize dozens of different models and algorithms that power many features across Quora.
In this talk, I’ll discuss the core ideas behind our ML platform, as well as some of the specific systems, tools, and abstractions that have enabled us to scale our approach to machine learning.
Mining model for hotel recommendations (Kaggle Challenge) (Arjun Varma)
The presentation describes an approach we devised for hotel recommendation systems and what could be done to improve it. It also covers a few obstacles I faced while programming it.
Kaggle Higgs Boson Machine Learning Challenge (Bernard Ong)
What it took to score in the top 2% on the Higgs Boson Machine Learning Challenge: a journey into advanced machine learning model ensembles and stacking methods.
Accelerate Machine Learning with Ease using Amazon SageMaker (Amazon Web Services)
Organizations are using machine learning (ML) to address a host of business challenges, from product recommendations to demand forecasting. Until recently, developing these ML models took much time and effort, and it required expertise. In this session, we introduce Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models quickly and easily. We walk through the features and benefits of Amazon SageMaker and discuss the uniquely designed ML algorithms that allow for optimized model training, getting you to production fast.
Build Your Recommendation Engine on AWS Today! (AWS Germany)
Recommender systems are an important mechanism to personalize and enhance customer experience. At Amazon, we have been researching recommender systems for over two decades, and nowadays AWS customers can use the same technologies to develop, train, and deploy their own recommender systems in just a couple of hours. In my presentation, I will give an overview of the most recent recommender systems papers and techniques, and demonstrate how to train and deploy a recommendation system on AWS in less than 15 minutes.
Talk by Sangeetha Krishnan, MTS at Adobe, on "Build, train and deploy your ML models with Amazon SageMaker" at AWS Community Day, Bangalore 2018
Building Applications with Apache MXNet (Apache MXNet)
This deck quickly walks through the fundamentals of Deep Learning and describes how MXNet's symbolic engine implements such networks. It then introduces gluon and provides code examples. The last section introduces the latest developments in the gluon family of tools, including GluonNLP, an NLP toolkit with SOTA implementations of NLP algorithms; GluonCV, a Computer Vision toolkit with SOTA implementations of vision algorithms; and an MXNet backend for Keras.
Recommendation is one of the most popular applications in machine learning (ML). In this workshop, we’ll show you how to build a movie recommendation model based on factorization machines — one of the built-in algorithms of Amazon SageMaker — and the popular MovieLens dataset.
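The factorization machine model behind this workshop scores an example as a linear term plus pairwise feature interactions expressed through k-dimensional factor vectors. This sketch (toy weights, pure Python; not SageMaker's implementation) shows the naive O(n^2 k) scoring form and the equivalent O(nk) reformulation that makes FMs practical on sparse data:

```python
# Factorization machine (FM) scoring in two equivalent forms.
# x: feature vector, w0: bias, w: linear weights, V: n x k factor matrix.

def fm_naive(x, w0, w, V):
    """Direct form: w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j."""
    n, k = len(x), len(V[0])
    s = w0 + sum(w[i] * x[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            s += sum(V[i][f] * V[j][f] for f in range(k)) * x[i] * x[j]
    return s

def fm_fast(x, w0, w, V):
    """Equivalent form: 0.5 * sum_f [(sum_i v_if x_i)^2 - sum_i (v_if x_i)^2]."""
    n, k = len(x), len(V[0])
    s = w0 + sum(w[i] * x[i] for i in range(n))
    for f in range(k):
        a = sum(V[i][f] * x[i] for i in range(n))
        b = sum((V[i][f] * x[i]) ** 2 for i in range(n))
        s += 0.5 * (a * a - b)
    return s

# Toy parameters: 3 features, 2 latent factors.
x = [1.0, 0.5, 2.0]
w0, w = 0.1, [0.2, -0.3, 0.4]
V = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.5]]
```

In a recommender, the features would be one-hot user and item IDs, so the pairwise term learns a user-item affinity from the dot product of their factor vectors.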
ML Workflows with Amazon SageMaker and AWS Step Functions (API325) - AWS re:I... (Amazon Web Services)
Learn how to build, train, and deploy machine learning workflows for Amazon SageMaker on AWS Step Functions: how to stitch together services such as AWS Glue with your Amazon SageMaker model training to build feature-rich machine learning applications, and how to build serverless ML workflows with less code. Cox Automotive also shares how it combined Amazon SageMaker and Step Functions to improve collaboration between data scientists and software engineers. We also share some new features to build and manage ML workflows even faster.
Amazon SageMaker Ground Truth: Build High-Quality and Accurate ML Training Da... (Amazon Web Services)
Successful machine learning models are built on high-quality training datasets. Labeling raw data to produce accurate training datasets involves a lot of time and effort, because sophisticated models can require thousands of labeled examples to learn from before they can produce good results. Typically, the task of labeling is distributed across a large number of humans, adding significant overhead and cost. Join us as we introduce Amazon SageMaker Ground Truth, a new service that reduces this cost and complexity using a machine learning technique called active learning. Active learning reduces the time and manual effort required for data labeling by continuously training machine learning algorithms based on labels from humans. By iterating through ambiguous data points, Ground Truth improves its ability to automatically label data, resulting in high-quality training datasets.
Level: 300
Speaker: Kris Skrinak - Partner Solutions Architect, ML Global Lead, AWS
Supercharge Your ML Model with SageMaker - AWS Summit Sydney 2018 (Amazon Web Services)
Supercharge Your Machine Learning Model with Amazon SageMaker
In this session you will learn how to use Amazon SageMaker to build, train, test, and deploy a machine learning model. We will use a real life use case to share the simplicity of building and deploying ML models on Amazon SageMaker.
Koorosh Lohrasbi, Solutions Architect, Amazon Web Services
Build Your Recommendation Engine on AWS Today - AWS Summit Berlin 2018 (Yotam Yarden)
Recommender systems are an important mechanism to personalize and enhance customer experience. At Amazon, we have been researching recommender systems for over two decades, and nowadays AWS customers can use the same technologies to develop, train, and deploy their own recommender systems in just a couple of hours. In my presentation, I will give an overview of the most recent recommender systems papers and techniques, and demonstrate how to train and deploy a recommendation system on AWS in less than 15 minutes.
Presented at the AWS Summit in Berlin, June 6th 2018
Learning Objectives:
- Learn how Amazon SageMaker can be used for exploratory data analysis before training
- Learn how Amazon SageMaker provides managed distributed training with flexibility
- Learn how easy it is to deploy your models for hosting within Amazon SageMaker
Building Deep Learning Applications with TensorFlow and SageMaker on AWS - Te... (Amazon Web Services)
Deep learning continues to push the state of the art in domains such as computer vision, natural language understanding, and recommendation engines. One of the key reasons for this progress is the availability of highly flexible and developer-friendly deep learning frameworks. In this workshop, we provide an overview of deep learning, focusing on getting started with the TensorFlow framework on AWS.
Building Your Own ML Application with AWS Lambda and Amazon SageMaker (SRV404...) (Amazon Web Services)
In this workshop, we step through the process of deploying and hosting machine learning (ML) models with AWS Lambda and get on-demand inferences. Given a demonstrative dataset, we build and train a simple ML classification model with Amazon SageMaker. Then, we host this model in an AWS Lambda function and expose an inference endpoint through Amazon API Gateway. Finally, we build a pipeline for automating model deployment to Lambda leveraging AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline.
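To make the Lambda-hosted inference step concrete, here is a minimal, hypothetical handler shaped like an API Gateway proxy integration (JSON body in, JSON body out). The "model" is a hard-coded logistic-regression stand-in loaded at cold start; the workshop's actual code, model artifacts, and endpoint wiring will differ:

```python
# Hypothetical AWS Lambda inference handler behind Amazon API Gateway.
# A real deployment would load trained model artifacts instead of the
# hard-coded coefficients below.
import json
import math

# Stand-in "model": logistic regression coefficients loaded once per container.
WEIGHTS = [0.5, -0.25, 0.1]
BIAS = -0.2

def _predict(features):
    # Logistic regression score: sigmoid of the linear combination.
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

def handler(event, context=None):
    """API Gateway proxy-style event: JSON request body, JSON response body."""
    body = json.loads(event["body"])
    score = _predict(body["features"])
    return {
        "statusCode": 200,
        "body": json.dumps({"score": score, "label": int(score >= 0.5)}),
    }
```

Because the handler is a plain function, it can be invoked locally with a fabricated event for testing before wiring it to API Gateway.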
DataRobot Cloud, built on AWS, helped Trupanion create an automated method for building data models using machine learning that reduced the time required to process claims from minutes to seconds. Join our webinar to hear how Trupanion transformed itself into an AI-driven organization, with robust data analysis and data science project prototyping that empowered the company to make better decisions and optimize business processes in less time and at a reduced cost.
Join our webinar to learn:
- Why you don’t need to be an expert in data science to create accurate predictive models.
- How you can build and deploy predictive models in less time on AWS.
- How to take full advantage of AI and machine learning to make better predictions faster and improve your bottom line.
Accelerate Machine Learning with Ease Using Amazon SageMaker - BDA301 - Chica... (Amazon Web Services)
Organizations are using machine learning (ML) to address a host of business challenges, from product recommendations to demand forecasting. Until recently, developing these ML models took much time and effort, and it required expertise. In this session, we discuss and dive deep into Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models quickly and easily. We walk through the features and benefits of Amazon SageMaker and discuss the uniquely designed ML algorithms that allow for optimized model training, getting you to production fast.
Work with Machine Learning in Amazon SageMaker - BDA203 - Toronto AWS Summit (Amazon Web Services)
Organizations are using machine learning (ML) to address a host of business challenges, from product recommendations to demand forecasting. Until recently, developing these ML models took considerable time and effort, and it required expertise. In this session, we dive deep into Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models quickly and easily. We walk through the features and benefits of Amazon SageMaker to get your ML models from concept to production.
Quickly and easily build, train, and deploy machine learning models at any scale (AWS Germany)
The machine learning process often feels much harder than it should to most developers, because building and training models and then deploying them into production is too complicated and too slow.
This workshop starts with a brief review of the machine learning process, followed by an introduction and deep dive into the individual components of Amazon SageMaker. As part of the workshop we will train artificial neural networks, get insight into some of the built-in machine learning algorithms of SageMaker that you can use for a variety of problem types, and after successfully training a model, look at options on how to deploy and scale a model as a service.
This workshop is aimed at developers who are new to machine learning, as well as data scientists who still face the operational challenges of the machine learning process. Bring your own laptop with Python and Jupyter Notebook, and (ideally) your own activated AWS account to follow along with the examples.
How Peak.AI Uses Amazon SageMaker for Product Personalization (GPSTEC316) - A... (Amazon Web Services)
In this session, learn how Peak’s Artificial Intelligence System (AIS) embeds Amazon SageMaker to solve business problems with outstanding results. We show you how Peak worked backwards from two customer problems to create a machine learning (ML) solution that used multiple models trained and deployed on Amazon SageMaker. We highlight the challenges of classifying PII data and integrating data from multiple sources. Next, we walk through the ML model training phase for each customer, showing you how new data sources were used to improve the accuracy of the ML models. Finally, the results: Regit and Footasylum were able to use the intelligent predictions provided by Peak.AI to deliver a personalized service to their customers, resulting in a 30% increase in revenue.
Optimizing BERT and Natural Language Models with SigOpt Experiment Management (SigOpt)
SigOpt Machine Learning Engineer Meghana Ravikumar explains how she used a "distillation" process, optimized with SigOpt's Experiment Management functionality, to reduce the size of a BERT natural language model trained on the SQuAD 2.0 question-answering dataset while maintaining its performance.
SigOpt's Fay Kallel, Head of Product, and Jim Blomo, Head of Engineering, describe the latest updates to SigOpt, a suite of features that help you manage your modeling process.
Efficient NLP by Distilling BERT and Multimetric Optimization (SigOpt)
SigOpt ML Engineer Meghana Ravikumar explains how to use multimetric optimization to achieve a more efficient, compact BERT model to perform on a question-answering task.
SigOpt Research Engineer Michael McCourt and DarwinAI CTO Alexander Wong explain how they used SigOpt and hyperparameter optimization to successfully improve accuracy of detecting COVID-19 cases from chest X-Rays, using the COVID-Net model and the COVIDx open dataset.
Metric Management: a SigOpt Applied Use Case (SigOpt)
These slides correspond to a recording of a live webcast of a demo of Metric Management functionality in SigOpt, keeping model size down while increasing validation accuracy for a road sign image classification problem.
Tuning for Systematic Trading: Talk 3: Training, Tuning, and Metric Strategy (SigOpt)
This talk explains how you can train and tune efficiently using metric strategy to assign, store, and optimize a variety of metrics, even changing them over time. Tobias Andreassen, who supports a number of our systematic trading customers, explained how he helps customers tune more efficiently with these SigOpt features in real-world scenarios.
Tuning for Systematic Trading: Talk 2: Deep Learning (SigOpt)
This talk explains how to train deep learning and other expensive models with parallelism and multitask optimization to reduce wall clock time. Tobias Andreassen, who supports a number of our systematic trading customers, presented the intuition behind Bayesian optimization for model optimization with a single or multiple (often competing) metrics. Many times it makes sense to analyze a second metric to avoid myopic training runs that overfit on your data, or otherwise don’t represent or impede performance in real-world scenarios.
This talk discusses the intuition behind Bayesian optimization with and without multiple metrics, also presented by Tobias Andreassen.
Tuning Data Augmentation to Boost Model Performance - SigOpt
In this webinar, SigOpt ML Engineer Meghana Ravikumar builds an image classifier trained on the Stanford Cars dataset to evaluate two approaches to transfer learning - fine tuning and feature extraction - and the impact of multitask optimization, a more efficient form of Bayesian optimization, on these techniques. Once we identify the most performant transfer learning technique for Stanford Cars, we use image augmentation to double the size of the dataset and boost the classifier's performance. Instead of manually tuning the hyperparameters associated with image augmentation, we use multitask optimization to learn these hyperparameters, guided by the downstream image classifier's performance. Alongside model performance, we also explore the features of these augmented images and the downstream implications for our image classifier.
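The core idea of multitask optimization described above is to let a cheap task (e.g., a short training run) guide where the expensive full task is evaluated. A minimal sketch of that screening pattern follows; the two augmentation parameters and the quadratic stand-in for classifier accuracy are invented for illustration and are not the webinar's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical candidates over two augmentation hyperparameters in [0, 1],
# e.g. (rotation_range, zoom_range).
candidates = rng.uniform(0, 1, (50, 2))

def full_metric(p):
    """Expensive task: full training run (quadratic stand-in for accuracy)."""
    return -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2)

def cheap_metric(p):
    """Cheap task: short training run, a noisy estimate of the full metric."""
    return full_metric(p) + rng.normal(0, 0.01)

# Screen all candidates with the cheap task, then spend the expensive
# full evaluations only on the top survivors.
scores = np.array([cheap_metric(p) for p in candidates])
top = candidates[np.argsort(scores)[-5:]]
best = max(top, key=full_metric)
```

The budget saving comes from running the expensive metric 5 times instead of 50; SigOpt's multitask feature applies the same principle with a principled model of the cost/fidelity tradeoff rather than a fixed top-k cutoff.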
Advanced Optimization for the Enterprise Webinar - SigOpt
Building on the TWIML eBook, TWIMLcon event and TWIML podcast series that explore Machine Learning Platforms in great detail, this webinar examines the machine learning platforms that power enterprise leaders in AI. SigOpt CEO Scott Clark will provide an overview of critical technical capabilities that our customers have prioritized in their ML platforms.
Review these slides to learn about:
- Critical capabilities for data, experiment and model management
- Tradeoffs between building and buying these capabilities
- Lessons from the implementation of these platforms by AI leaders
Why focus on these platforms and the capabilities that power them? Nearly every company is investing in machine learning that differentiates products or generates revenue. These so-called "differentiated models" represent the biggest opportunity for AI to transform the business. Most of these teams find success hiring expert data scientists and machine learning engineers who can build these models. But most of these teams also struggle to create a more sustainable, scalable and reproducible process for model development, and have begun building ML platforms to tackle this challenge.
SigOpt founder and CEO, Scott Clark, PhD, explains the tradeoffs you'll want to consider when designing your modeling platform and integrating hyperparameter optimization to enhance data scientist productivity.
This webinar, hosted by SigOpt co-founder and CEO Scott Clark, explains how advanced features can help you achieve your modeling goals. These features include metric definition and multimetric optimization, conditional parameters, and multitask optimization for long training cycles.
SigOpt helps your algorithmic traders and data scientists build better models faster. Learn how to integrate SigOpt into your modeling platform for quick ROI for your data science team.
Interactive Tradeoffs Between Competing Offline Metrics with Bayesian Optimiz... - SigOpt
Many real-world applications - machine learning models, simulators, etc. - have multiple competing metrics that define performance; these require practitioners to carefully consider potential tradeoffs. However, assessing and ranking these tradeoffs is nontrivial, especially when there are more than two metrics. Oftentimes, practitioners scalarize the metrics into a single objective, e.g., using a weighted sum.
In this talk, we pose this problem as a constrained multi-objective optimization problem. By setting and updating the constraints, we can efficiently explore only the region of the Pareto-efficient frontier of the model or system that is of most interest. We motivate this problem with an experimental design application, where we are trying to fabricate high-performance glass substrates for solar cell panels.
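To make the contrast above concrete, here is a minimal sketch of the two framings: weighted-sum scalarization picks a single pre-chosen point, while a metric constraint retains every Pareto-efficient point in the region of interest. The two metrics (error vs. latency), the weights, and the constraint threshold are hypothetical, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidates with two competing metrics to minimize,
# e.g. model error vs. inference latency (lower error costs latency).
error = rng.uniform(0.0, 1.0, 200)
latency = 1.0 / (error + 0.1) + rng.normal(0, 0.05, 200)
points = np.column_stack([error, latency])

def pareto_mask(pts):
    """Boolean mask of Pareto-efficient points (minimizing every metric)."""
    mask = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        # i is dominated if some point is <= in all metrics, < in at least one
        dominated = np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1)
        if dominated.any():
            mask[i] = False
    return mask

frontier = points[pareto_mask(points)]

# Weighted-sum scalarization collapses the tradeoff to one pre-chosen point...
w = np.array([0.7, 0.3])
scalarized_best = points[np.argmin(points @ w)]

# ...while a constraint keeps the whole efficient region of interest.
latency_budget = 4.0
feasible_frontier = frontier[frontier[:, 1] <= latency_budget]
```

Updating `latency_budget` interactively, rather than re-guessing `w`, is what lets a practitioner walk along only the interesting portion of the frontier.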
As data science workloads grow, so does their need for infrastructure. But is it fair to ask data scientists to also become infrastructure experts? If not the data scientists, then who is responsible for spinning up and managing data science infrastructure? This talk will address the context in which ML infrastructure is emerging, walk through two examples of ML infrastructure tools for launching hyperparameter optimization jobs, and end with some thoughts for building better tools in the future.
Originally given as a talk at the PyData Ann Arbor meetup (https://www.meetup.com/PyData-Ann-Arbor/events/260380989/)
SigOpt at Uber Science Symposium - Exploring the spectrum of black-box optimi... - SigOpt
At the inaugural Uber science symposium, SigOpt research engineer Bolong (Harvey) Cheng shares insights on black-box optimization from his experience working with both leading academics and innovative enterprises.
SigOpt at O'Reilly - Best Practices for Scaling Modeling Platforms - SigOpt
Companies are increasingly building modeling platforms to empower their researchers to efficiently scale the development and productionalization of their models. Scott Clark and Matt Greenwood share a case study from a leading algorithmic trading firm to illustrate best practices for building these types of platforms in any industry. Join in to learn how Two Sigma, a leading quantitative investment and technology firm, solved its model optimization problem.
Training and tuning models with lengthy training cycles like those in deep learning can be extremely expensive and may sometimes involve techniques that degrade performance. We'll explore recent research on optimization strategies to efficiently tune these types of deep learning models. We will provide benchmarks and comparisons to other popular methods for optimizing the models, and we'll recommend valuable areas for further applied research.
SigOpt at GTC - Reducing operational barriers to optimization - SigOpt
Advanced hardware like NVIDIA technology lowers technical barriers to model size and scope, but issues remain in areas like model performance and training infrastructure management. We'll discuss operational challenges to training models at scale, with a particular focus on how training management and hyperparameter tuning can inform each other to accomplish specific goals. We'll explore techniques like parallelism and scheduling, compare them, discuss their impact on model optimization, and evaluate the results of this approach. In particular, we'll focus on how new tools that automate training orchestration accelerate model development and increase the volume and quality of models in production.
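The interplay between parallelism and tuning mentioned above can be sketched as a batch of suggested configurations evaluated concurrently, with the tuner observing all results before proposing the next batch. The parameter tuples and toy metric below are invented for illustration; a real run would launch actual training jobs.

```python
from concurrent.futures import ThreadPoolExecutor

def train_and_evaluate(params):
    """Stand-in for one training run; returns (params, metric)."""
    lr, batch_size = params
    # Toy metric peaking at lr=0.01, batch_size=64 (higher is better).
    return params, -((lr - 0.01) ** 2) - ((batch_size - 64) ** 2) * 1e-6

# A batch of hypothetical (learning_rate, batch_size) suggestions.
suggestions = [(0.001, 32), (0.01, 64), (0.1, 128), (0.05, 64)]

# Evaluate the whole batch in parallel instead of one run at a time.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(train_and_evaluate, suggestions))

best_params, best_metric = max(results, key=lambda r: r[1])
```

Batch parallelism trades some sample efficiency (each suggestion is chosen without seeing its siblings' results) for a large reduction in wall-clock time, which is usually the binding constraint for deep learning workloads.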
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers, without pulling teeth or pulling your hair out: practical tips and strategies for successful relationship-building that lead to closing the deal.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GraphRAG is All You Need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our beloved cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss which cloud or on-premise strategy we may need to make AI work on our own infrastructure from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk aims to encourage a more independent use of PHP frameworks, moving towards more flexible and future-proof PHP development.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
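To make “semantics as predictable inference” concrete: with RDFS-style subclass semantics, the entailed facts follow deterministically from the asserted ones. The toy ontology below is a hypothetical illustration of that idea, not an example from the talk.

```python
from itertools import product

# Asserted symbolic structure: a tiny class hierarchy and one type assertion.
subclass_of = {("Dog", "Mammal"), ("Mammal", "Animal"), ("Cat", "Mammal")}
instance_of = {("rex", "Dog")}

def entailed_types(instances, hierarchy):
    """Forward-chain two RDFS-style rules to a fixed point."""
    facts = set(instances)
    closure = set(hierarchy)
    changed = True
    while changed:
        changed = False
        # Rule 1: subClassOf is transitive.
        for (a, b), (c, d) in product(list(closure), list(closure)):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
        # Rule 2: x type C and C subClassOf D entail x type D.
        for (x, c) in list(facts):
            for (a, d) in closure:
                if a == c and (x, d) not in facts:
                    facts.add((x, d))
                    changed = True
    return facts

facts = entailed_types(instance_of, subclass_of)
```

The point of the talk's argument, in these terms, is that a learner trained over such a structure can only exploit it if the entailments above are actually defined, i.e., the inference is predictable.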
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Deliver the message to managers and peers, along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host