The document discusses Amazon SageMaker and TensorFlow for deep learning applications. It provides an overview of SageMaker's capabilities for building, training, and deploying machine learning models at scale. TensorFlow is discussed as an open-source deep learning framework that is a first-class citizen on SageMaker. The document then presents a case study of how Advanced Microgrid Solutions used SageMaker and TensorFlow to develop deep learning models for forecasting electricity prices and optimizing participation in energy markets.
[REPEAT] Deep Learning Applications Using TensorFlow (AIM401-R) - AWS re:Invent 2018 - Amazon Web Services
The TensorFlow deep learning framework is used for developing diverse AI applications including computer vision, natural language, speech, and translation. In this session, learn how to use TensorFlow within the Amazon SageMaker machine learning platform. This code-level session also includes tutorials and examples using TensorFlow.
Work with Machine Learning in Amazon SageMaker - BDA203 - Atlanta AWS Summit - Amazon Web Services
Organizations are using machine learning (ML) to address a host of business challenges, from product recommendations to demand forecasting. Until recently, developing these ML models took considerable time and effort, and it required expertise. In this session, we dive deep into Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models quickly and easily. We walk through the features and benefits of Amazon SageMaker to get your ML models from concept to production.
Predicting the Future with Amazon SageMaker - AWS Summit Sydney 2018 - Amazon Web Services
Predicting the Future with Amazon SageMaker
Amazon SageMaker removes the barriers that typically slow down developers who want to use machine learning. In this session, you will learn how to use built-in, high-performance machine learning algorithms for predictions and computer vision within your application. We will deploy machine learning models into production and start generating classifications with a few API calls using the SageMaker SDK. Additionally, we will demonstrate how to run your custom-trained machine learning model directly out of your web application to classify incoming user-generated content.
Steve Shirkey, ASEAN Solutions Architect, Amazon Web Services
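The "few API calls" workflow described in that session can be sketched with the SageMaker Python SDK. This is a hedged illustration only: the role ARN, S3 paths, and instance types are placeholders (not values from the talk), the code assumes the `sagemaker` package and AWS credentials are available, and it is wrapped in a function rather than executed here.

```python
def deploy_image_classifier(role_arn, train_s3, output_s3):
    """Illustrative sketch: train and deploy a SageMaker built-in
    image-classification model in a handful of SDK calls.
    All names below are placeholders, not values from the talk."""
    import sagemaker
    from sagemaker import image_uris
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    # 1. Resolve the built-in algorithm's container for this region
    image = image_uris.retrieve("image-classification", session.boto_region_name)
    # 2. Configure and run the training job
    est = Estimator(
        image_uri=image,
        role=role_arn,
        instance_count=1,
        instance_type="ml.p3.2xlarge",
        output_path=output_s3,
        sagemaker_session=session,
    )
    est.fit({"train": train_s3})
    # 3. Deploy the trained model behind a real-time endpoint
    predictor = est.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
    return predictor  # predictor.predict(payload) returns classifications
```

In practice a call like `deploy_image_classifier("arn:aws:iam::123456789012:role/MyRole", "s3://my-bucket/train/", "s3://my-bucket/output/")` would start the job; the point is that training, deployment, and inference each reduce to a single SDK call.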
Supercharge Your ML Model with SageMaker - AWS Summit Sydney 2018 - Amazon Web Services
Supercharge Your Machine Learning Model with Amazon SageMaker
In this session, you will learn how to use Amazon SageMaker to build, train, test, and deploy a machine learning model. We will use a real-life use case to show how simple it is to build and deploy ML models on Amazon SageMaker.
Koorosh Lohrasbi, Solutions Architect, Amazon Web Services
Recommendation is one of the most popular applications in machine learning (ML). In this workshop, we’ll show you how to build a movie recommendation model based on factorization machines — one of the built-in algorithms of Amazon SageMaker — and the popular MovieLens dataset.
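For readers unfamiliar with the algorithm behind that workshop, here is a framework-free sketch of the second-order factorization machine score that SageMaker's built-in implementation computes. All numbers are toy values; a real recommender learns `w0`, `w`, and `V` from interaction data such as the MovieLens ratings mentioned above.

```python
# Toy factorization machine (FM) scoring in plain NumPy.
import numpy as np

def fm_score(x, w0, w, V):
    """Second-order FM: w0 + <w, x> + sum over pairs i<j of <V_i, V_j> x_i x_j.

    Uses the O(n*k) identity
        0.5 * sum_f [(sum_i V_if x_i)^2 - sum_i V_if^2 x_i^2]
    instead of the naive O(n^2) pairwise loop.
    """
    s = V.T @ x                   # per-factor weighted sums, shape (k,)
    s_sq = (V ** 2).T @ (x ** 2)  # per-factor sums of squares, shape (k,)
    return w0 + w @ x + 0.5 * float(np.sum(s ** 2 - s_sq))

rng = np.random.default_rng(0)
n_features, n_factors = 6, 3
# One-hot encoding of (user 0, item 1) in a toy 3-user / 3-item setup
x = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
w0, w, V = 0.1, rng.normal(size=n_features), rng.normal(size=(n_features, n_factors))
print(fm_score(x, w0, w, V))
```

The pairwise factor term is what lets the model generalize to user/item pairs it has never seen together, which is why the technique suits sparse recommendation data.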
Intelligence of Things: IoT, AWS DeepLens and Amazon SageMaker - AWS Summit S... - Amazon Web Services
Intelligence of Things: IoT, AWS DeepLens and Amazon SageMaker
With IoT, machine learning is going everywhere. With Amazon SageMaker, it has never been easier to build intelligent things. In this session, we look at how to push intelligence from cloud-trained models to the edge using AWS Greengrass, and we explore how devices such as AWS DeepLens make it easy to bring intelligence to your things.
Jan Haak, Global Solutions Architect, Amazon Web Services
Machine Learning with Amazon SageMaker - Algorithms and Frameworks - BDA304 -... - Amazon Web Services
Algorithms and frameworks form a fundamental part of machine learning (ML). These critical components enable developers and data scientists to quickly and easily build ML models with well-defined interfaces for a range of use cases. The most commonly used algorithms and frameworks come built into Amazon SageMaker, making it easier to address these use cases with ML. In this session, we discuss the built-in algorithms and frameworks and how you can leverage them for your ML models. We also discuss the flexibility of bringing your own algorithm into Amazon SageMaker, depending on your needs.
Building Machine Learning Inference Pipelines at Scale (July 2019) - Julien SIMON
Talk at OSCON, Portland, 18/07/2019
Real-life Machine Learning applications require more than a single model. Data may need pre-processing: normalization, feature engineering, dimensionality reduction, etc. Predictions may need post-processing: filtering, sorting, combining, etc.
Our goal: build scalable ML pipelines with open source (Spark, Scikit-learn, XGBoost) and managed services (Amazon EMR, AWS Glue, Amazon SageMaker)
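The pre-process, predict, post-process pattern that talk describes can be sketched locally with a scikit-learn Pipeline. This is an illustrative sketch only: the iris dataset, logistic regression model, and 0.9 confidence threshold are assumptions for the example, not the talk's actual pipeline, which targets managed services such as EMR, Glue, and SageMaker inference pipelines.

```python
# Minimal local sketch of an ML inference pipeline: normalization up front,
# a model in the middle, filtering/sorting afterwards.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Pre-processing and model chained into one deployable unit
pipe = Pipeline([
    ("scale", StandardScaler()),          # pre-processing: normalization
    ("clf", LogisticRegression(max_iter=1000)),
])
pipe.fit(X, y)

# Post-processing: keep only confident predictions, sorted by confidence
proba = pipe.predict_proba(X)
confident = sorted(
    ((p.max(), int(p.argmax())) for p in proba if p.max() > 0.9),
    reverse=True,
)
print(f"{len(confident)} of {len(X)} predictions above 0.9 confidence")
```

Chaining the steps into one Pipeline object is the key design choice: the same artifact that was fit on training data can be deployed as a unit, so the normalization applied at inference time always matches the one used in training.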
Work with Machine Learning in Amazon SageMaker - BDA203 - Toronto AWS Summit - Amazon Web Services
Organizations are using machine learning (ML) to address a host of business challenges, from product recommendations to demand forecasting. Until recently, developing these ML models took considerable time and effort, and it required expertise. In this session, we dive deep into Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models quickly and easily. We walk through the features and benefits of Amazon SageMaker to get your ML models from concept to production.
Optimize your Machine Learning Workloads on AWS (July 2019) - Julien SIMON
Talk at Floor 28, Tel Aviv.
Infrastructure, tips to speed up training, hyperparameter optimization, model compilation, Amazon SageMaker Neo, cost optimization, Amazon Elastic Inference
Build a Custom Model for Object & Logo Detection (AIM421) - AWS re:Invent 2018 - Amazon Web Services
Detecting specific objects and logos is a feature that can help companies in any industry, from media and entertainment to financial services. However, detecting new objects or logos requires building a custom model. In this chalk talk, learn how to use Amazon Rekognition and Amazon SageMaker to build a custom model to detect logos, objects, or even inappropriate content.
Integrating Amazon SageMaker into your Enterprise - AWS Online Tech Talks - Amazon Web Services
Learning Objectives:
- Get an introduction to Amazon SageMaker
- Learn how to integrate Amazon SageMaker and other AWS Services within an Enterprise environment
- View a walkthrough of the machine learning lifecycle to cover best practices in the ML process
Building a Recommender System Using Amazon SageMaker's Factorization Machine Algorithm - Amazon Web Services
Machine Learning Week at the San Francisco Loft: Building a Recommender System Using Amazon SageMaker's Factorization Machine Algorithm
The factorization machine is a powerful algorithm in the click-prediction and recommendation space. Amazon SageMaker has a nearly infinitely scalable implementation, and we'll show you how to use it to build a recommender of your own.
Speaker: David Arpin - AI Platform Selections Leader, AI Platforms
How Peak.AI Uses Amazon SageMaker for Product Personalization (GPSTEC316) - A... - Amazon Web Services
In this session, learn how Peak’s Artificial Intelligence System (AIS) embeds Amazon SageMaker to solve business problems with outstanding results. We show you how Peak worked backwards from two customer problems to create a machine learning (ML) solution that used multiple models, trained and then deployed on Amazon SageMaker. We highlight the challenges: classifying PII data and integrating data from multiple sources. Next, we walk through the ML model training phase for each customer, showing you how new data sources were used to improve the accuracy of the ML models. Finally, the results: Regit and Footasylum were able to use the intelligent predictions provided by Peak.AI to deliver a personalized service to their customers, resulting in a 30% increase in revenue.
Deep Learning Applications Using TensorFlow, ft. Advanced Microgrid Solutions... - Amazon Web Services
The TensorFlow deep learning framework is used for developing diverse artificial intelligence (AI) applications, including computer vision, natural language, speech, and translation. In this session, learn how to use TensorFlow within the Amazon SageMaker machine learning platform. Then, hear from Advanced Microgrid Solutions about how they implemented a deep neural network architecture with Keras and TensorFlow to forecast energy prices in near real time.
Machine Learning and Amazon SageMaker: Algorithms, Models, and Inference - MCL... - Amazon Web Services
Today, organizations are using machine learning (ML) to address a host of business challenges, from product recommendations and price predictions to tracking disease progression and demand forecasting. Until recently, developing these ML models took a significant amount of time and effort, and it required expertise in this field. In this session, we introduce Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models more quickly and easily. We walk through the features and benefits of Amazon SageMaker and discuss the uniquely designed ML algorithms that allow for optimized model training, to get you to production fast.
BDA301 Working with Machine Learning in Amazon SageMaker: Algorithms, Models,... - Amazon Web Services
Today, organizations are using machine learning (ML) to address a host of business challenges, from product recommendations and pricing predictions, to tracking disease progression and demand forecasting. Until recently, developing these ML models took a significant amount of time and effort, and it required expertise in this field. In this session, we introduce you to Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models more quickly and easily. We walk through the features and benefits of Amazon SageMaker and discuss the uniquely designed ML algorithms that allow for optimized model training, to get you to production fast.
Build Deep Learning Applications Using Apache MXNet - Featuring Chick-fil-A (... - Amazon Web Services
The Apache MXNet deep learning framework is used for developing, training, and deploying diverse AI applications at scale, including computer vision, speech recognition, natural language processing, and more. In this session, learn how to get started with Apache MXNet on the Amazon SageMaker machine learning platform. Chick-fil-A shares how they got started with MXNet on Amazon SageMaker to measure waffle-fry freshness and how they leverage AWS services to improve the Chick-fil-A guest experience.
Build Deep Learning Applications Using Apache MXNet, Featuring Workday (AIM40... - Amazon Web Services
The Apache MXNet deep learning framework is used for developing, training, and deploying diverse AI applications, including computer vision, speech recognition, and natural language processing at scale. In this session, learn how to get started with MXNet on the Amazon SageMaker machine learning platform. Hear from Workday about how they built computer vision and natural language processing (NLP) models using MXNet to automatically extract information from paper documents, such as expense receipts and populate data records. Workday also shares its experience using Sockeye, an MXNet toolkit for quickly prototyping sequence-to-sequence NLP models.
AWS offers a wide selection of compute platforms. In this session, we highlight key platform features of different Amazon EC2 instance families, and we provide a framework in which to choose the best compute resource (including Amazon EC2 instances, AWS Fargate containers, and AWS Lambda functions) for your workloads based on metrics and workload profiles. We also share best practices and performance tips for getting the most out of your Amazon EC2 instances to help you reduce unnecessary spending and improve application performance.
Running Lean Architectures: How to Optimize for Cost Efficiency (ARC202-R2) -... - Amazon Web Services
Aimed at solutions architects and technical managers, this session focuses on the practical ways our customers achieve cost-efficient architectures through service selection and configuration. We start by discussing the building block services. We cover the main trends, such as containers and serverless, and we explore some of the specific services and configurations customers have used. We also take you through real-life examples that can be implemented to minimize costs while driving innovation and business output. After you attend this session, you will understand what is possible on AWS, and you will know ways in which you can deploy new workloads or modify existing workloads for optimization.
Cost-Effectively Running Distributed Systems at Scale in the Cloud (CMP349) -... - Amazon Web Services
At Mist, we believe the best way of achieving reliability is by embracing uncontrolled unreliability. We run our microservices and stream processing entirely on Amazon EC2 Spot Instances, which introduces uncontrolled chaos. In this session, we start with an overview of our system and discuss how we achieve high reliability on our largest service, Live Aggregators (LA). LA is our in-house, real-time time-series aggregation system that processes over a billion messages a day while aggregating over 100 million time series. We also share how we autoscale LA by predicting and adapting to changing loads, which results in server utilization of over 70%.
by Francesco Ruffino, Sr. HPC Specialized Solutions Architect, AWS
High Performance Computing (HPC) has been driving technology advancements for many decades. HPC enables performance-demanding applications and workloads to solve complex problems while dramatically reducing time to solution. With a history of requiring very large data centers, HPC is now on the edge of a paradigm shift. The AWS Cloud gives customers access to near-infinite compute and storage resources, without the overhead of running their own data centers. A vast number of HPC segments and verticals are already seeing great success running their workloads on AWS: Life Sciences, Financial Services, Energy & Geo Sciences, and Manufacturing are all successfully deploying their applications on AWS. In these two sessions, we discuss how AWS can help you run HPC workloads in the cloud. The first session is a general introduction to HPC on AWS.
Running a High-Performance Kubernetes Cluster with Amazon EKS (CON318-R1) - A... - Amazon Web Services
How do you ensure that a containerized system can handle the needs of your application? Designing and testing for performance is a critical aspect of operating containerized architectures at scale. In this session, we cover best practices for designing performant containerized applications on AWS using Kubernetes. We also show you how State Street deployed a high-performance database at scale using Amazon Elastic Container Service for Kubernetes (Amazon EKS).
Architecting for Real-Time Insights with Amazon Kinesis (ANT310) - AWS re:Inv... - Amazon Web Services
Amazon Kinesis makes it easy to speed up the time it takes for you to get valuable, real-time insights from your streaming data. In this session, we walk through the most popular applications that customers implement using Amazon Kinesis, including streaming extract-transform-load, continuous metric generation, and responsive analytics. Our customer Autodesk joins us to describe how they created real-time metrics generation and analytics using Amazon Kinesis and Amazon Elasticsearch Service. They walk us through their architecture and the best practices they learned in building and deploying their real-time analytics solution.
Achieving Global Consistency Using AWS CloudFormation StackSets - AWS Online ... - Amazon Web Services
Learning Objectives:
- Understand how AWS CloudFormation StackSets work and how to use them
- Use StackSets to manage AWS resources across multiple regions in multiple AWS accounts
- Incorporate StackSets into Disaster Recovery and account isolation strategies
Accelerate Machine Learning Workloads using Amazon EC2 P3 Instances - SRV201 ... - Amazon Web Services
Organizations are tackling exponentially complex questions across advanced scientific, energy, high tech, and medical fields. Machine learning (ML) makes it possible to quickly explore a multitude of scenarios and generate the best answers, ranging from image, video, and speech recognition to autonomous vehicle systems and weather prediction. Learn how Amazon EC2 P3 instances can help data scientists, researchers, and developers significantly lower their time and cost to train ML models, speed up their development process, and bring innovations to market sooner.
[NEW LAUNCH!] Introducing Amazon Elastic Inference: Reduce Deep Learning Infe... - Amazon Web Services
Deploying deep learning applications at scale can be cost prohibitive due to the need for hardware acceleration to meet latency and throughput requirements of inference. Amazon Elastic Inference helps you tackle this problem by reducing the cost of inference by up to 75% with GPU-powered acceleration that can be right-sized to your application’s inference needs. In this session, learn about how to deploy TensorFlow, Apache MXNet, and ONNX models with Amazon Elastic Inference on Amazon EC2 and Amazon SageMaker. Hear from Autodesk on the positive impact of AI on tools used to design and make a better world. Learn about how Autodesk and the Autodesk AI Lab are using Amazon Elastic Inference to make it cost efficient to run these tools at scale.
NLP in Healthcare to Predict Adverse Events with Amazon SageMaker (AIM346) - ... - Amazon Web Services
In healthcare, pharmacovigilance is key to improving patient outcomes. Predicting adverse events enables pharmaceutical companies and drug distributors to accurately meet their pharmacovigilance requirements and scale their operations. In this chalk talk, we discuss how Amazon SageMaker can be used to classify large-scale agent and reporter interaction summaries. We also discuss natural language processing (NLP) methods and results.
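A toy sketch of the general idea, classifying short interaction summaries with TF-IDF features: the four summaries, their labels, and the model choice below are invented for illustration, and the session's actual data and NLP methods are not reproduced here.

```python
# Toy text-classification sketch in the spirit of the session: label short
# interaction summaries as possible adverse events (1) or routine (0).
# The data and model below are invented placeholders, not the talk's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

summaries = [
    "patient reported severe headache after second dose",
    "caller asked about refill pricing and availability",
    "patient experienced rash and dizziness following use",
    "reporter requested a product brochure by mail",
]
labels = [1, 0, 1, 0]  # 1 = possible adverse event

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(summaries, labels)

print(clf.predict(["patient had nausea and dizziness after the medication"]))
```

At production scale, the same vectorize-then-classify structure would be trained and hosted on SageMaker rather than run locally, and a real system would use far richer features and validation than this toy setup.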
In this session, learn how to seamlessly combine Amazon EC2 On-Demand, Spot, and Reserved Instances. Also learn how to use the best practices deployed by customers all over the world for the most common applications and workloads. Discover multiple ways to grow your compute capacity and enable new types of cloud computing applications—without it costing you a lot of money.
Amazon EC2 T Instances – Burstable, Cost-Effective Performance (CMP209) - AWS... - Amazon Web Services
In this session, you will learn how to utilize low-cost T2 and T3 instances while still having access to high performance when needed. Designed for applications with variable CPU usage that experience occasional spikes in demand, T instances enable customers’ applications to burst seamlessly to meet temporary traffic peaks and then scale back down to operate at typical traffic levels. The next-generation T3 instances provide up to 30% better price/performance than T2 instances and include unlimited bursting by default, making them a cost-effective choice for general-purpose computing.
Mainframe Modernization with AWS: Patterns and Best Practices (GPSTEC305) - A... - Amazon Web Services
Customers have compelling business reasons to modernize and migrate mainframe workloads to AWS. Mainframes typically process complex and critical applications. Fortunately, we have accumulated experience and learned lessons based on the many successful customer modernization projects to AWS. In this session, we present patterns and best practices that facilitate successful mainframe to AWS initiatives.
Similar to AWS re:Invent 2018 - AIM401-R2 - Deep Learning Applications with TensorFlow (20)
An introduction to computer vision with Hugging Face - Julien SIMON
In this code-level talk, Julien will show you how to quickly build and deploy computer vision applications based on Transformer models. Along the way, you'll learn about the portfolio of open source and commercial Hugging Face solutions, and how they can help you deliver high-quality solutions faster than ever before.
Starting your AI/ML project right (May 2020) - Julien SIMON
In this talk, we’ll see how you can put your AI/ML project on the right track from the get-go. Applying common sense and proven best practices, we’ll discuss skills, tools, methods, and more. We’ll also look at several real-life projects built by AWS customers in different industries and startups.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to, plus whatever makes up our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to ops, infra, and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share these foundational concepts to build on:
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report was prepared by the Threat Research Team at Sectrio, using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
AWS is the place of choice for TensorFlow workloads:

“Of 388 projects, 80 percent using TensorFlow and other frameworks are running exclusively on AWS. 88% using only TensorFlow are running exclusively on AWS.”
— Nucleus Research report, December 2017
https://aws.amazon.com/tensorflow
Amazon SageMaker removes the complexity that holds back developer success with each of these steps. Amazon SageMaker includes modules that can be used together or independently to build, train, and deploy your machine learning models.
SageMaker makes it easy to build ML models and get them ready for training by providing everything you need to quickly connect to your training data, and to select and optimize the best algorithm and framework for your application. Amazon SageMaker includes hosted Jupyter notebooks that make it easy to explore and visualize your training data stored in Amazon S3. You can connect directly to data in S3, or use AWS Glue to move data from Amazon RDS, Amazon DynamoDB, and Amazon Redshift into S3 for analysis in your notebook.
To help you select your algorithm, Amazon SageMaker includes the 10 most common machine learning algorithms which have been pre-installed and optimized to deliver up to 10 times the performance you’ll find running these algorithms anywhere else. Amazon SageMaker also comes pre-configured to run TensorFlow and Apache MXNet, two of the most popular open source frameworks, or you have the option of using your own framework.
You can begin training your model with a single click in the Amazon SageMaker console. The service manages all of the underlying infrastructure for you and can easily scale to train models at petabyte scale. To make the training process even faster and easier, Amazon SageMaker can automatically tune your model to achieve the highest possible accuracy.
Once your model is trained and tuned, SageMaker makes it easy to deploy in production so you can start generating predictions on new data (a process called inference). Amazon SageMaker deploys your model on an auto-scaling cluster of Amazon EC2 instances that are spread across multiple availability zones to deliver both high performance and high availability. It also includes built-in A/B testing capabilities to help you test your model and experiment with different versions to achieve the best results.
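As an illustration of how weighted traffic splitting between model variants works, here is a minimal sketch in plain Python. This is not the SageMaker API; the function `route_request` and the variant names are invented for illustration only.

```python
import random

def route_request(variants, rng=random):
    """Route one inference request to a model variant in proportion
    to its traffic weight, as in a weighted A/B test (sketch).

    variants: list of (variant_name, traffic_weight)
    """
    total = sum(weight for _, weight in variants)
    draw = rng.uniform(0, total)
    cumulative = 0.0
    for name, weight in variants:
        cumulative += weight
        if draw <= cumulative:
            return name
    return variants[-1][0]  # guard against floating-point edge cases

# Send roughly 90% of traffic to the current model, 10% to a candidate.
variants = [("model-v1", 0.9), ("model-v2", 0.1)]
```

In SageMaker terms, each variant would be a separately deployed model version, and the weights would be adjusted as the experiment produces results.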
For maximum versatility, we designed Amazon SageMaker in three modules – Build, Train, and Deploy – that can be used together or independently as part of any existing ML workflow you might already have in place.
https://www.businesswire.com/news/home/20180404006122/en/Tens-Thousands-Customers-Flocking-AWS-Machine-Learning
Edmunds.com is a car-shopping website that offers detailed, constantly updated information about vehicles to 20 million monthly visitors. “We have a strategic initiative to put machine learning into the hands of all our engineers,” said Stephen Felisan, Chief Information Officer at Edmunds.com. “Amazon SageMaker is key to helping us achieve this goal, making it easier for engineers to build, train, and deploy machine learning models and algorithms at scale. We are excited to see how we can use Amazon SageMaker to innovate new solutions across the organization for our customers.”
The Move, Inc. network, which includes Realtor.com, Doorsteps, and Moving.com, provides real estate information, tools, and professional expertise across a family of websites and mobile experiences for consumers and real estate professionals. “We believe that Amazon SageMaker is a transformative addition to the realtor.com toolset as we support consumers along their homeownership journey," said Vineet Singh, Chief Data Officer and Senior Vice President at Move, Inc. "Machine learning workflows that have historically taken a long time, like training and optimizing models, can be done with greater efficiency and by a broader set of developers, empowering our data scientists and analysts to focus on creating the richest experience for our users."
Dow Jones is a publishing and financial information firm that publishes the world's most trusted business news and financial information in a variety of media. It delivers breaking news, exclusive insights, expert commentary and personal finance strategies. “As Dow Jones continues to focus on integrating machine learning into our products and services, AWS has been a great resource,” said Ramin Beheshti, Group Chief Product and Technology Officer. “Leading up to our recent Machine Learning Hackathon, the AWS team provided training to participants on Amazon SageMaker and Amazon Rekognition, and offered day-of support to all the teams. The result was that our teams developed some great ideas for how we can apply machine learning, many of which we’ll continue to develop on AWS. The event was a huge success, and an example of what a great relationship can look like.”
Every day Grammarly’s algorithms help millions of people communicate more effectively by offering writing assistance on multiple platforms across devices. Through a combination of natural language processing and advanced machine learning technologies, Grammarly is tackling critical communication and business challenges. “Amazon SageMaker makes it possible for us to develop our TensorFlow models in a distributed training environment,” said Stanislav Levental, Technical Lead at Grammarly. “Our workflows also integrate with Amazon EMR for pre-processing, so we can get our data from Amazon Simple Storage Service (Amazon S3), filtered with Amazon EMR and Spark from a Jupyter notebook, and then train in Amazon SageMaker with the same notebook. Amazon SageMaker is also flexible for our different production requirements. We can run inferences on Amazon SageMaker itself, or if we need just the model, we download it from Amazon S3 and run inferences on our mobile device implementations for iOS and Android customers.”
Cookpad is Japan’s largest recipe sharing service, with about 60 million monthly users in Japan and about 90 million monthly users globally. “With the increasing demand for easier use of Cookpad’s recipe service, our data scientists will be building more machine learning models in order to optimize the user experience,” said Mr. Yoichiro Someya, Research Engineer at Cookpad. “Attempting to minimize the number of training job iterations for best performance, we recognized a significant challenge in the deployment of machine learning inference endpoints, which was slowing down our development processes. To automate the machine learning model deployment such that data scientists could deploy models by themselves, we used Amazon SageMaker inference APIs and proved that Amazon SageMaker would eliminate the need for application engineers to deploy machine learning models. We anticipate automating this process with Amazon SageMaker in production.”
Before diving into the use case, I want to establish some fundamental concepts around the “industry-specific” terms used.
Electricity is traded in regional “wholesale energy markets”; for example, there is a market that covers California. One of the primary purposes of this financial market is to ensure that supply exactly equals demand at all times, as failing to do so would disrupt the physical power systems, which could result in grid failure and widespread power outages.
This “at all times” element is complicated by large variations in demand through time, generally dictated by changes in behavior and environmental factors like weather.
So the market operator’s role is to procure the right amount of energy supply to match this changing target.
Suppliers participating in this market bid into the market the quantity they are willing to generate at different price points (more on how they select their price point in a moment).
The outcome of this matching process is the market clearing price, which is the price of the most expensive energy source procured to meet demand, and is the price paid to all suppliers.
All energy is bought and sold in regional “wholesale energy markets”. As demand and supply varies dramatically through time, so does the corresponding price of energy.
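The clearing mechanism described above can be sketched as a uniform-price auction in a few lines of Python. This is a simplified illustration; the function name and the example bids are hypothetical, not from the talk.

```python
def clear_market(bids, demand):
    """Uniform-price auction sketch: accept the cheapest supplier bids
    until demand is met; the last (most expensive) accepted bid sets
    the clearing price paid to every accepted supplier.

    bids: list of (supplier, quantity_mw, price_per_mwh)
    demand: total quantity to procure, in MW
    """
    accepted = []
    remaining = demand
    clearing_price = None
    for supplier, quantity, price in sorted(bids, key=lambda b: b[2]):
        if remaining <= 0:
            break
        take = min(quantity, remaining)
        accepted.append((supplier, take))
        remaining -= take
        clearing_price = price
    if remaining > 0:
        raise ValueError("insufficient supply to meet demand")
    return clearing_price, accepted

# Cheap solar clears first; the marginal gas unit sets the price.
bids = [("gas", 100, 40), ("solar", 50, 0), ("peaker", 50, 120)]
price, dispatch = clear_market(bids, demand=120)
```

With 120 MW of demand, the zero-cost solar bid clears first, gas covers the remainder, and every accepted supplier, including solar, is paid the 40 $/MWh set by the marginal gas unit.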
There are several types of technologies that generate and sell energy that end-users ultimately buy and consume. These include thermal (gas-fired) generators, renewable resources like wind and solar, hydroelectric dams, and the new kid on the block – battery energy storage.
Each of these differs in the manner it generates power, but they also have very different costs to generate (comparable to cost of goods sold).
Some technology types’ costs are rather straightforward; thermal generators, for example, are subject to the price of the gas burned to generate electricity.
Others are much more complex. A battery’s cost to generate comes in the form of charging, so it must make price-arbitrage decisions weighing charging cost against discharge revenue. Simplistically, this means discharging during high prices to maximize revenue and charging during lower prices to minimize cost.
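The charge/discharge arbitrage decision can be sketched as a brute-force search over price intervals. This is a toy illustration only; the function name, the 1 MWh battery size, and the round-trip efficiency value are assumptions.

```python
def best_arbitrage(prices, efficiency=0.9):
    """For a 1 MWh battery that charges in one interval and discharges
    in a later one, find the (charge, discharge) interval pair that
    maximizes profit. Round-trip efficiency discounts the energy sold.

    prices: price per MWh in each time interval, in order
    """
    best_profit, charge_at, discharge_at = 0.0, None, None
    for i, buy_price in enumerate(prices):
        for j in range(i + 1, len(prices)):
            profit = prices[j] * efficiency - buy_price
            if profit > best_profit:
                best_profit, charge_at, discharge_at = profit, i, j
    return best_profit, charge_at, discharge_at

# Best plan: charge during the $20 interval, discharge at $50.
prices = [30, 20, 50, 45]
```

Note that the search needs the whole price horizon up front, which is exactly why a forecast of future prices is central to profitable battery operation.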
Accurate understanding of future prices is critical to correct decision making for profitable energy market participation.
Image Credits
Coal Plant: https://commons.wikimedia.org/wiki/File:Sherco_Generating_Station_-_Xcel_Energy_Sherburne_County_Coal-Fired_Power_Plant_-_Sunset_(24077210421).jpg
Solar Farm: https://commons.wikimedia.org/wiki/File:Taean_Solar_Farm_at_7pm_-_panoramio.jpg
Wind Farm: https://commons.wikimedia.org/wiki/File:Sunset_at_Royd_Moor_Wind_Farm.jpg
Dam: https://commons.wikimedia.org/wiki/File:Diablo_Dam_(from_WA_SR_20).jpg
Our use case is to optimize the market participation, or manner of market bidding, for energy storage assets in Australia’s National Energy Market. This market serves 9M customers spread across the five eastern Australian states. It is one of the largest energy markets in the world in terms of both volume and value, with 200 TWh and $16.6B traded annually.
The Australian market is designed as a “spot” market where all supply and demand bid to generate and consume energy during the upcoming 5-minute time window. Additionally, there are 9 different avenues in which you can bid. In practice, all parties submit a bid (quantity and price) to the market. The market operator considers all bids and awards the suppliers with the best bids the ability to generate during this time window.
For batteries, the value of discharging at a later point in time has to be weighed against discharging now. So while the upcoming 5 minutes is the focus of the market, proper bidding of the battery requires an understanding of future time intervals as well.
Image Credit: https://www.electranet.com.au/what-we-do/network/national-electricity-market-and-rules/
Provide motivation for developing our solution before getting into how we are implementing it.
AEMO is unique in that it publishes its own market forecast.
We didn’t know whether any or all of these features were important, which makes the problem well suited to neural networks, as the model decides what is important.
The complexity and volatility of the market are also increasing, so we wanted a framework that would work well into the future.
There were also a few papers that showed promising results.
Before getting into the model, we want to take a step back to look at the components we ended up using and briefly describe the final architecture.
We walk through the development from a simple model to the more complicated, highly parameterized model.
The previous model will give you a point estimate, but we also want to model uncertainty.
This can be done by adding a dimension to the output that represents the quantiles.
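A common way to train such quantile outputs is the pinball (quantile) loss, with one output unit and one loss term per quantile. Here is a minimal sketch; the function names and the default quantile set are illustrative, not from the talk.

```python
def pinball_loss(y_true, y_pred, quantile):
    """Pinball (quantile) loss for one prediction: under-prediction is
    penalized with weight `quantile`, over-prediction with weight
    `1 - quantile`, so minimizing it pushes y_pred toward that quantile
    of the target distribution."""
    error = y_true - y_pred
    return max(quantile * error, (quantile - 1) * error)

def multi_quantile_loss(y_true, predictions, quantiles=(0.1, 0.5, 0.9)):
    """Average pinball loss over one output dimension per quantile."""
    return sum(pinball_loss(y_true, p, q)
               for p, q in zip(predictions, quantiles)) / len(quantiles)
```

In a TensorFlow model, this becomes the training objective for an output layer with one unit per quantile, yielding a prediction interval around the point estimate instead of a single number.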