This document discusses deep learning at the edge using Apache MXNet and AWS services. It describes how models can be experimented with flexibly and trained at scale in the cloud using MXNet and Amazon SageMaker, then deployed for low-latency prediction at the edge using technologies such as MXNet, AWS Greengrass, and AWS DeepLens. Models trained in SageMaker can be deployed to edge devices through Greengrass, enabling local prediction and model updates even without connectivity.
Machine Learning Inference at the Edge - Julien SIMON
Machine Learning (ML) works by using powerful algorithms to discover patterns in data and construct complex mathematical models from these patterns. Once a model is built, you perform inference by applying new data to the trained model to make predictions for your application. Building and training ML models requires massive computing resources, so it is a natural fit for the cloud. Inference, however, takes far less computing power and is typically done in real time as new data becomes available, so getting inference results with very low latency is important to making sure your applications can respond quickly to local events. AWS Greengrass ML Inference gives you the best of both worlds: you use ML models that are built and trained in the cloud, and you deploy and run ML inference locally on connected devices. For example, autonomous cars need to identify road signs in real time, and drones need to recognize objects with or without network connectivity.
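The cloud-training/edge-inference split described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (not AWS or MXNet code): the weights stand in for parameters that a cloud-trained model would ship down to the device, and inference is just a small fixed computation on new data.

```python
import math

# Hypothetical parameters produced by training in the cloud and
# deployed to the device (e.g. via a Greengrass deployment).
WEIGHTS = [0.8, -1.2, 0.5]
BIAS = 0.1

def predict(features):
    """Local inference: one dot product plus a sigmoid, no network round-trip."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))  # probability of the positive class

# A new sensor reading arrives on the device; predict locally, with low latency.
score = predict([0.5, 0.2, 1.0])
```

Even offline, the device can keep predicting like this; only model updates require connectivity.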
Machine Learning Models with Apache MXNet and AWS Fargate - Amazon Web Services
by Ahmad Khan, Sr. Solutions Architect, AWS
Deep learning has been delivering state-of-the-art results across a growing number of domains and use cases. Correspondingly, deep learning models are being deployed across a growing number of applications and segments. In this session, we will dive deep into serving machine learning models in production, and demonstrate how to efficiently deploy and serve models over serverless infrastructure using the open-source Model Server for Apache MXNet, containers, and AWS Fargate.
Orchestrating Machine Learning Training for Netflix Recommendations - MCL317 ... - Amazon Web Services
At Netflix, we use machine learning (ML) algorithms extensively to recommend relevant titles to our 100+ million members based on their tastes. Everything on the member home page is an evidence-driven, A/B-tested experience that we roll out backed by ML models. These models are trained using Meson, our workflow orchestration system. Meson distinguishes itself from other workflow engines by handling more sophisticated execution graphs, such as loops and parameterized fan-outs. Meson can schedule Spark jobs, Docker containers, bash scripts, gists of Scala code, and more. Meson also provides a rich visual interface for monitoring active workflows and inspecting execution logs. It has a powerful Scala DSL for authoring workflows, as well as a REST API. In this session, we focus on how Meson trains recommendation ML models in production, and how we have re-architected it to scale up for the growing need for broad ETL applications within Netflix. As a driver for this change, we have had to evolve the persistence layer for Meson. We talk about how we migrated from Cassandra to Amazon RDS backed by Amazon Aurora.
Machine Learning is increasingly being used by organisations to move from analysis to prediction. This session covers how AWS and open-source technology can help you perform both deep learning and machine learning.
Scalable Deep Learning on AWS with Apache MXNet - Julien SIMON
Session @ AWS Summit Stockholm, 03/04/2017
AI: The Story So Far
Applications of Deep Learning
Apache MXNet Overview
Apache MXNet API
Code and Demos
Tools and Resources
Amazon SageMaker is a fully managed platform that lets developers and data scientists build and scale machine learning solutions. First, we'll show you how SageMaker Ground Truth helps you label large training datasets. Then, using Jupyter notebooks, we'll show you how to build, train, and deploy models using built-in algorithms and frameworks (TensorFlow, Apache MXNet, etc.). Finally, we'll show you how to use third-party models from the AWS Marketplace.
Optimize your Machine Learning Workloads on AWS (July 2019) - Julien SIMON
Talk at Floor 28, Tel Aviv.
Infrastructure, tips to speed up training, hyperparameter optimization, model compilation, Amazon SageMaker Neo, cost optimization, Amazon Elastic Inference
Risk Management and Particle Accelerators: Innovating with New Compute Platfo... - Amazon Web Services
What does risk modeling and analytics in financial services have in common with large-scale computing in high energy physics? Come to this session to hear how financial services customers like Aon are taking advantage of new approaches like predictive analytics and AI/deep learning on AWS to perform risk modeling, and how Brookhaven National Laboratory is using tens of thousands of cores to do large-scale grid computing for Monte Carlo simulations in high energy physics. In addition, we will showcase how the CSIRO eHealth team in Australia is innovating with serverless architectures using AWS Lambda for personalized medicine and genomics.
Speaker: Adrian White, Sr SciCo Technical Manager, Amazon Web Services
Intel and Amazon - Powering your innovation together - Eran Shlomo
In these slides we go over the current joint offering from Intel and Amazon, the great technologies to come, and how the two companies are creating synergy that boosts your innovation and productivity.
Presented at the AWS Loft in Tel Aviv, March 2017.
Using Amazon SageMaker to build, train, and deploy your ML Models - Amazon Web Services
By Hagay Lupesko, SDM, AWS
Amazon SageMaker is a fully managed service that enables data scientists and developers to quickly and easily build, train, and deploy machine learning models at scale. This session will introduce you to the features of Amazon SageMaker, including a one-click training environment, highly optimized machine learning algorithms with built-in model tuning, and deployment without engineering effort. With zero setup required, Amazon SageMaker significantly decreases your training time and the overall cost of building production machine learning systems.
AWS Machine Learning Week SF: Build, Train & Deploy ML Models Using SageMaker - Amazon Web Services
AWS Machine Learning Week at the San Francisco Loft: Build, Train, and Deploy ML Models Using SageMaker
In this deck from the HPC User Forum at Argonne, Ian Colle from Amazon presents: What Can HPC on AWS Do?
"AWS provides the most elastic and scalable cloud infrastructure to run your HPC applications. With virtually unlimited capacity, engineers, researchers, and HPC system owners can innovate beyond the limitations of on-premises HPC infrastructure. AWS delivers an integrated suite of services that provides everything needed to quickly and easily build and manage HPC clusters in the cloud to run the most compute-intensive workloads across various industry verticals. These workloads span traditional HPC applications, like genomics, computational chemistry, financial risk modeling, computer-aided engineering, weather prediction, and seismic imaging, as well as emerging applications, like machine learning, deep learning, and autonomous driving."
Watch the video: https://wp.me/p3RLHQ-kUh
Learn more: https://aws.amazon.com/hpc/
and
http://hpcuserforum.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Using Amazon SageMaker to build, train, & deploy your ML Models - Amazon Web Services
Machine Learning Workshops at the San Francisco Loft
Build, Train, and Deploy ML Models Using SageMaker
Level: 200-300
Speaker: Martin Schade - R&D Engineer, AWS Solutions Architecture
by Mahendra Bairagi, AI Specialist Solutions Architect, AWS
As the CTO of a new startup, you have taken up the challenge of improving the EDM music festival experience. At venues with multiple stages, festival-goers are always looking to identify the DJ stage areas with the liveliest atmosphere. This causes them to constantly move around between different stages and miss out on having fun. You are looking to use machine learning and IoT technologies to solve this unique problem.
Do you accept the Challenge?
The objective of this task is to help festival-goers quickly identify the DJ stage where the crowd is happiest. You've seen a lot of buzz around computer vision, machine learning, and IoT, and want to use this technology to detect crowd emotions. From your initial research, there are existing ML models that you can leverage for face and emotion detection, but there are two ways the predictions (inference) can be done: in the cloud, or on the camera itself. Which one will work best for your needs at the festival? You are going to test both approaches and find out!
In this workshop you will use AWS and Intel technologies to learn how to build, deploy, and run ML inference on the cloud as well as on the IoT Edge. You will learn to use Amazon SageMaker with Intel C5 Instances, AWS DeepLens, AWS Greengrass, Amazon Rekognition, and AWS Lambda to build an end-to-end IoT solution that performs machine learning.
DataPalooza at the San Francisco Loft: In this workshop you will use AWS and Intel technologies to learn how to build, deploy, and run ML inference on the cloud as well as on the IoT Edge. You will learn to use Amazon SageMaker with Intel C5 Instances, AWS DeepLens, AWS Greengrass, Amazon Rekognition, and AWS Lambda to build an end-to-end IoT solution that performs machine learning.
Enabling Deep Learning in IoT Applications with Apache MXNet - Amazon Web Services
by Pratap Ramamurthy, SDM, and Hagay Lupesko, SDM
Many state-of-the-art deep learning models have hefty compute, storage, and power requirements that make them impractical or difficult to use on resource-constrained devices. In this TechTalk, you'll learn why Apache MXNet, an open-source library for deep learning, is IoT-friendly in many ways. In addition, you'll learn how services like Amazon SageMaker, AWS Lambda, AWS Greengrass, and AWS DeepLens make it easy to deploy MXNet models on edge devices.
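The deployment pattern these services encourage can be sketched without any AWS dependencies: load the heavy model artifacts once when the function's container starts, then serve many cheap predictions per event. `load_model` and the threshold model below are hypothetical stand-ins for real framework calls, such as loading an MXNet checkpoint fetched by a Greengrass deployment.

```python
def load_model():
    # Stand-in for loading real model artifacts (e.g. an MXNet checkpoint
    # that a Greengrass deployment placed on the device).
    return {"threshold": 0.5}

# Module scope: runs once per container/process, not once per event.
MODEL = load_model()

def lambda_handler(event, context=None):
    """Handle one event with a lightweight local prediction."""
    score = event["score"]
    label = "positive" if score >= MODEL["threshold"] else "negative"
    return {"label": label, "score": score}

result = lambda_handler({"score": 0.9})
```

Keeping model loading out of the handler is what makes per-event latency low on a constrained device.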
If you're like most of the world, you're in an aggressive race to implement machine learning applications and on a path to deep learning. If you can give better service at a lower cost, you will be among the winners in 2030. But infrastructure is a key challenge to getting there. What does the technology infrastructure look like over the next decade as you move from petabytes to exabytes? How are you budgeting for colossal data growth over the next decade? How do your data scientists share data today, and will it scale for 5-10 years? Do you have the appropriate security, governance, back-up, and archiving processes in place? This session will address these issues and discuss strategies for customers as they ramp up their AI journey with a long-term view.
This session presents a perspective on how Intel cloud solutions enable a deployment model that is workload-optimized for every application.
Speaker: Kavitha Mohammad, Director Industry Solutions Group, Intel
AWS re:Invent 2016: Bringing Deep Learning to the Cloud with Amazon EC2 (CMP314) - Amazon Web Services
Algorithmia is a startup with a mission to make state-of-the-art machine learning discoverable by everyone: they offer the largest algorithm marketplace in the world, with over 2,500 algorithms supporting tens of thousands of application developers. Algorithmia is the first company to make deep learning, one of the most conceptually difficult areas of computing, accessible to any company via microservices. In this session, you learn how this startup has selected and optimized Amazon EC2 instances for various algorithms (including the latest generation of GPU-optimized instances) to create a flexible and scalable platform. They also share their architecture and best practices for getting any computationally intensive application started quickly.
OS for AI: Elastic Microservices & the Next Gen of ML - Nordic APIs
AI has been a hot topic lately. While advances are constantly being made in what is possible, there has not been as much discussion of the infrastructure and scaling challenges that come with it. How do you support dozens of different languages and frameworks, and make them interoperate invisibly? How do you scale to run abstract code from thousands of different developers, simultaneously and elastically, while maintaining less than 15ms of overhead?
At Algorithmia, we’ve built, deployed, and scaled thousands of algorithms and machine learning models, using every kind of framework (from scikit-learn to TensorFlow). We’ve seen many of the challenges faced in this area, and in this talk I’ll share some insights into the problems you’re likely to face and how to approach solving them.
In brief, we’ll examine the need for, and implementations of, a complete “Operating System for AI” – a common interface for different algorithms to be used and combined, and a general architecture for serverless machine learning which is discoverable, versioned, scalable and sharable.
Qualcomm is an at-scale company. It powered the smartphone revolution and connected billions of people. It pioneered 3G and 4G, and now it is leading the way to 5G and a new era of intelligent, connected devices. Mobile is going to be the largest machine learning platform on the planet. Come learn how Qualcomm is making efficient on-device machine learning possible, how Qualcomm and Facebook worked closely to support machine learning in Facebook applications, and what’s next for Qualcomm and AI.
AI for an intelligent cloud and intelligent edge: Discover, deploy, and manag... - James Serra
Discover, manage, deploy, monitor – rinse and repeat. In this session we show how Azure Machine Learning can be used to create the right AI model for your challenge and then easily customize it using your development tools while relying on Azure ML to optimize them to run in hardware accelerated environments for the cloud and the edge using FPGAs and Neural Network accelerators. We then show you how to deploy the model to highly scalable web services and nimble edge applications that Azure can manage and monitor for you. Finally, we illustrate how you can leverage the model telemetry to retrain and improve your content.
Backend.AI Technical Introduction (19.09 / 2019 Autumn) - Lablup Inc.
This slide introduces technical specs and details about Backend.AI 19.09.
* On-premise clustering / container orchestration / scaling on cloud
* Container-level fractional GPU technology to use one GPU as many GPUs on many containers at the same time.
* NVIDIA GPU Cloud integrations
* Enterprise features
An introduction to computer vision with Hugging Face - Julien SIMON
In this code-level talk, Julien will show you how to quickly build and deploy computer vision applications based on Transformer models. Along the way, you'll learn about the portfolio of open source and commercial Hugging Face solutions, and how they can help you deliver high-quality solutions faster than ever before.
Starting your AI/ML project right (May 2020) - Julien SIMON
In this talk, we’ll see how you can put your AI/ML project on the right track from the get-go. Applying common sense and proven best practices, we’ll discuss skills, tools, methods, and more. We’ll also look at several real-life projects built by AWS customers in different industries and startups.
Building Machine Learning Inference Pipelines at Scale (July 2019) - Julien SIMON
Talk at OSCON, Portland, 18/07/2019
Real-life Machine Learning applications require more than a single model. Data may need pre-processing: normalization, feature engineering, dimensionality reduction, etc. Predictions may need post-processing: filtering, sorting, combining, etc.
Our goal: build scalable ML pipelines with open source (Spark, Scikit-learn, XGBoost) and managed services (Amazon EMR, AWS Glue, Amazon SageMaker)
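The pre-process / predict / post-process chain described above can be sketched as plain composable Python functions. The stand-in model is purely illustrative; a real pipeline would plug a scikit-learn Pipeline, a Spark job, or a SageMaker inference pipeline into each step.

```python
def normalize(xs):
    """Pre-processing: rescale features to [0, 1]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def model(xs):
    """Stand-in for a trained model's predictions."""
    return [round(x, 2) for x in xs]

def top_k(scores, k=2):
    """Post-processing: indices of the k highest scores."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

def pipeline(xs):
    # Pre-process -> predict -> post-process, exposed as one callable unit.
    return top_k(model(normalize(xs)))

result = pipeline([3.0, 9.0, 6.0, 12.0])
```

Packaging the three stages behind a single entry point is what lets a managed service deploy and scale the whole chain as one endpoint.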
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also ran a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Search and Society: Reimagining Information Access for Radical FuturesBhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
2. Deep Learning at the Edge
1. Flexible experimentation in the Cloud.
2. Scalable training in the Cloud.
3. Good prediction performance at the Edge.
4. Simple deployment of code and model.
3. Flexible experimentation in the Cloud
• Apache MXNet: Python, R, Perl, Matlab, Scala, C++.
• Gluon
• Imperative programming aka ‘define-by-run’.
• Inspect, debug and modify models during training.
• Extensive model zoo
• Pre-trained computer vision models.
• MobileNet, SqueezeNet for resource-constrained devices.
4. Scalable training in the Cloud
• Amazon SageMaker
• AWS Deep Learning AMI
• Amazon EC2 C5 and P3 instances
5. Good prediction performance at the Edge
• MXNet is written in C++.
• Gluon networks can be ‘hybridized’ for additional speed.
• Two libraries boost performance on CPU-only devices
• Fast implementation of math primitives
• Hardware-specific instructions, e.g. Intel AVX or ARM NEON
• Intel Math Kernel Library https://software.intel.com/en-us/mkl
• NNPACK https://github.com/Maratyszcza/NNPACK
• Mixed precision training
• Use float16 instead of float32 for weights and activations
• Almost 2x reduction in model size, no loss of accuracy, faster inference
• https://devblogs.nvidia.com/parallelforall/mixed-precision-training-deep-neural-networks/
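The storage side of mixed precision is easy to see with a plain NumPy sketch (the array here is a stand-in for a trained weight tensor):

```python
import numpy as np

# A fake float32 weight tensor, as produced by training.
w32 = np.random.uniform(-1, 1, size=(1000, 1000)).astype(np.float32)

# Casting to float16 halves the bytes needed to store the weights.
w16 = w32.astype(np.float16)
print(w32.nbytes / w16.nbytes)  # 2.0

# The rounding error introduced by the cast is tiny for typical
# weight magnitudes (float16 keeps ~3 decimal digits of precision).
err = np.abs(w32 - w16.astype(np.float32)).max()
print(err < 1e-3)  # True
```

This is only the memory-footprint half of the story; the inference speedup additionally depends on hardware with native float16 support.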
6. Simple deployment of code and model
• Train a model in SageMaker (or bring your own).
• Write a Lambda function performing prediction.
• Add both as resources in your Greengrass group.
• Let Greengrass handle deployment and updates.
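A hedged sketch of the Lambda side of these steps: the labels, the topic name, and the stubbed `predict` below are all hypothetical stand-ins — on a real Greengrass core, the model artifact is deployed next to the function and the result is published via `greengrasssdk`.

```python
import json

# Hypothetical class labels; on the device these ship with the model
# artifact that Greengrass deploys alongside this function.
LABELS = ["cat", "dog", "bird"]

def predict(frame):
    # Stand-in for the real MXNet forward pass (e.g. net(frame).asnumpy()).
    return [0.1, 0.7, 0.2]

def lambda_handler(event, context):
    probs = predict(event.get("frame"))
    # Rank classes by score, highest first.
    ranked = sorted(zip(LABELS, probs), key=lambda pair: pair[1], reverse=True)
    result = {"predictions": [{"label": l, "prob": p} for l, p in ranked]}
    # On a Greengrass core you would publish the result to AWS IoT, e.g.:
    #   greengrasssdk.client('iot-data').publish(topic='inference/results',
    #                                            payload=json.dumps(result))
    return json.dumps(result)
```

Because Greengrass treats both the model and the function as group resources, updating either one is just a new deployment — no device-side scripting.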
Best when
• You want the same programming model in the Cloud and at the Edge.
• Code and models need to be updated, even if network connectivity is infrequent or unreliable.
• One device in the group should be able to perform prediction on behalf of other devices.
Requirements
• Devices are powerful enough to run Greengrass (XXX HW requirements).
• Devices are provisioned in AWS IoT (certificate, keys).
7. ML Inference using AWS Greengrass
[Architecture diagram: the model and the Lambda function are deployed separately from the AWS Cloud; the device predicts and takes actions locally, and sends insights back to the cloud.]
8. AWS DeepLens
• Intel Atom CPU with Gen9 graphics
• Ubuntu 16.04 LTS
• 100 GFLOPS performance
• Dual band Wi-Fi
• 8 GB RAM, 16 GB storage (eMMC), 32 GB SD card
• 4 MP camera with MJPEG, H.264 encoding at 1080p resolution
• 2 USB ports, micro HDMI, audio out
• AWS Greengrass preconfigured
• Intel clDNN for Apache MXNet
9. AWS DeepLens Architecture
[Architecture diagram: projects are managed in the AWS Cloud (console, project management) and deployed to the device; the Intel Model Optimizer prepares the model, inference runs locally, and the device provides video and data outputs.]