Machine learning at the edge?
Leveraging AWS services
Case study: Toyota Connected Data Services
Alternative scenarios
Optimizing for inference at the edge
Getting started
Working with Amazon SageMaker Algorithms for Faster Model Training - Amazon Web Services
Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models at any scale. Amazon SageMaker provides high-performance machine learning algorithms, optimized for speed, scale, and accuracy, that can perform training on petabyte-scale data sets. This webinar will introduce you to the collection of distributed streaming ML algorithms that come with Amazon SageMaker. You will learn about the difference between streaming and batch ML algorithms, and how SageMaker has been architected to run these algorithms at scale. We will demo Neural Topic Modeling of text documents using a sample SageMaker notebook, which will be made available to attendees.
Level: 300-400
Speaker: Zlatan Dzinic - Partner Solutions Architect, AWS
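The streaming-versus-batch distinction this webinar covers can be illustrated with a toy example (purely illustrative, not SageMaker's internals): a batch statistic needs the whole dataset at once, while a streaming estimator updates constant-size state one record at a time and never revisits the data.

```python
# Hypothetical sketch contrasting batch and streaming estimation.

def batch_mean(values):
    """Batch: requires the full dataset in memory."""
    return sum(values) / len(values)

class StreamingMean:
    """Streaming: constant memory, one pass; records can keep arriving."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # running-mean update
        return self.mean

data = [2.0, 4.0, 6.0, 8.0]
stream = StreamingMean()
for x in data:
    stream.update(x)

# Both approaches agree, but the streaming one never held the whole list.
assert abs(stream.mean - batch_mean(data)) < 1e-12
```

The same pattern (replace "mean" with gradient or sufficient-statistic updates) is what lets streaming algorithms train on datasets far larger than memory.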
Starting your AI/ML project right (May 2020) - Julien SIMON
In this talk, we’ll see how you can put your AI/ML project on the right track from the get-go. Applying common sense and proven best practices, we’ll discuss skills, tools, methods, and more. We’ll also look at several real-life projects built by AWS customers in different industries and startups.
Recommendation is one of the most popular applications in machine learning (ML). In this workshop, we’ll show you how to build a movie recommendation model based on factorization machines — one of the built-in algorithms of Amazon SageMaker — and the popular MovieLens dataset.
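As a rough sketch of the model family behind this workshop (not SageMaker's actual implementation), a factorization machine scores a feature vector with a bias, linear terms, and pairwise interactions factored through low-rank vectors; all names and numbers below are illustrative.

```python
# Minimal factorization machine scorer:
# score(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j

def fm_score(x, w0, w, V):
    """x: features; w0: bias; w: linear weights; V: n x k factor matrix."""
    linear = w0 + sum(wi * xi for wi, xi in zip(w, x))
    k = len(V[0])
    pairwise = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(len(x)))
        s_sq = sum((V[i][f] * x[i]) ** 2 for i in range(len(x)))
        pairwise += 0.5 * (s * s - s_sq)  # O(n*k) instead of O(n^2) pairs
    return linear + pairwise

# One-hot features: [user_0, user_1, movie_0] -- user 0 rated movie 0
x = [1.0, 0.0, 1.0]
w0, w = 0.5, [0.1, 0.2, 0.3]
V = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # one k=2 factor row per feature
# fm_score(x, w0, w, V) is approximately 1.07
# (0.9 from the linear part, 0.17 from the user/movie factor interaction)
```

In a MovieLens-style setup, the one-hot user and movie indicators are exactly the sparse features such a model consumes.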
Simon Poile, GM of AWS Digital User Engagement, presented at Digital Summit Seattle February 26, 2019. Over the last 10 years, the only thing that hasn’t changed in customer engagement is the value of a trusted relationship. At Amazon, we believe strongly in customer-driven innovation and are constantly striving to provide the best experience for our customers. In our experience, customers are always going to adopt new devices and channels, want personalized outreach, and demand timely and relevant communication on matters they care about the most. Marketers concerned about engaging their own customers must challenge themselves in the same way, evolving and innovating while never losing focus on the most important thing, your relationship with your customer.
Session attendees learned how to leverage innovative technology to:
- Learn more about their customers through a single view derived from disparate data sources
- Create highly personalized engagement experiences
- Better understand when and on which channel to engage their users
In this session, we show how engineers, analysts, and scientists can develop, train, tune, and deploy machine learning models via Amazon SageMaker, a Docker-based, framework-agnostic orchestrator dedicated to machine learning with strong primitives for automation, monitoring, and deployment. We will cover the following topics:
- Amazon SageMaker architecture and key functionality
- Scaling and deploying open source frameworks on Amazon SageMaker (scikit-learn, MXNet, Keras, TensorFlow, PyTorch, R)
- Amazon SageMaker built-in algorithm library: 18 state-of-the-art algorithms covering broad use cases, from computer vision to recommender systems
- ML model deployment
Optimize your Machine Learning Workloads on AWS (July 2019) - Julien SIMON
Talk at Floor 28, Tel Aviv.
Infrastructure, tips to speed up training, hyperparameter optimization, model compilation, Amazon SageMaker Neo, cost optimization, Amazon Elastic Inference
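One of the techniques listed above, hyperparameter optimization, can be sketched in its simplest form as random search (illustrative only; managed HPO services typically use smarter, Bayesian-style strategies). Every name below is hypothetical.

```python
import random

def random_search(objective, space, n_trials, seed=0):
    """Sample hyperparameters uniformly from `space`, keep the best trial.
    space: {name: (low, high)}; objective: params -> score (lower is better)."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)  # in practice: validation loss of a training job
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with a known optimum at lr=0.1, wd=0.01
def toy_objective(p):
    return (p["lr"] - 0.1) ** 2 + (p["wd"] - 0.01) ** 2

best, score = random_search(toy_objective, {"lr": (0.0, 1.0), "wd": (0.0, 0.1)}, 200)
```

The objective here stands in for launching a full training job and reading back its validation metric; the search loop itself is unchanged either way.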
Amazon SageMaker Ground Truth: Build High-Quality and Accurate ML Training Da... - Amazon Web Services
Successful machine learning models are built on high-quality training datasets. Labeling raw data to get accurate training datasets involves a lot of time and effort because sophisticated models can require thousands of labeled examples to learn from, before they can produce good results. Typically, the task of labeling is distributed across a large number of humans, adding significant overhead and cost. Join us as we introduce Amazon SageMaker Ground Truth, a new service that provides an effective solution to reduce this cost and complexity using a machine learning technique called active learning. Active learning reduces the time and manual effort required to do data labeling, by continuously training machine learning algorithms based on labels from humans. By iterating through ambiguous data points, Ground Truth improves the ability to automatically label data resulting in high-quality training datasets.
Level: 300
Speaker: Kris Skrinak - Partner Solutions Architect, ML Global Lead, AWS
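The active-learning idea described in this abstract can be sketched as a simple routing rule (illustrative only, not the Ground Truth service): the model auto-labels points it is confident about and sends only ambiguous ones to human annotators.

```python
def route_for_labeling(predictions, threshold=0.8):
    """predictions: list of (item_id, probability_of_positive).
    Returns (auto_labeled, needs_human)."""
    auto_labeled, needs_human = [], []
    for item_id, p in predictions:
        confidence = max(p, 1 - p)      # confidence in the argmax class
        if confidence >= threshold:
            auto_labeled.append((item_id, 1 if p >= 0.5 else 0))
        else:
            needs_human.append(item_id)  # ambiguous: send to annotators
    return auto_labeled, needs_human

preds = [("a", 0.95), ("b", 0.55), ("c", 0.05), ("d", 0.70)]
auto, human = route_for_labeling(preds)
# auto  -> [("a", 1), ("c", 0)]
# human -> ["b", "d"]
```

In the full loop, the human labels would be fed back into training, raising the model's confidence and shrinking the human queue on each iteration.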
Learn how to get started with Amazon SageMaker—our fully-managed service that spans the entire machine learning (ML) workflow—so you can build, train, and deploy models quickly. Use Amazon SageMaker to label and prepare your data, choose an algorithm, train, tune, and optimize it for deployment, make predictions, and take action. Get your models to production faster with Amazon SageMaker SDKs, builder tools, and APIs tailored to your programming language or platform. Also, discover how Amazon SageMaker Ground Truth can aid in the adoption of ML technology for your organization.
Workshop slides for the introduction to Amazon SageMaker, and integration of Amazon SageMaker with other tools within your AWS environment. Visit https://aws.amazon.com/sagemaker for more information.
Machine Learning at the Edge (AIM302) - AWS re:Invent 2018 - Amazon Web Services
Video-based tools have enabled advancements in computer vision, such as in-vehicle use cases for AI. However, it is not always possible to send this data to the cloud to be processed. In this session, learn how to train machine learning models using Amazon SageMaker and deploy them to an edge device using AWS Greengrass, enabling you to process data quickly at the edge, even when there is no connectivity.
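The "inference at the edge, even without connectivity" pattern above can be sketched as local inference plus a store-and-forward buffer (a hypothetical stand-in for the Greengrass/IoT Core plumbing; `model` and `publish` are placeholders).

```python
from collections import deque

class EdgeInference:
    """Run inference locally; buffer results while offline, flush when back."""
    def __init__(self, model, publish):
        self.model = model      # locally deployed model (e.g. trained in SageMaker)
        self.publish = publish  # cloud publish function; raises when offline
        self.buffer = deque()   # store-and-forward queue

    def process(self, sample):
        result = self.model(sample)  # inference happens at the edge, always
        self.buffer.append(result)
        self.flush()
        return result

    def flush(self):
        while self.buffer:
            try:
                self.publish(self.buffer[0])
            except ConnectionError:
                return               # still offline; keep buffering in order
            self.buffer.popleft()

# --- simulate flaky connectivity ---
sent, online = [], {"up": False}

def publish(msg):
    if not online["up"]:
        raise ConnectionError("no uplink")
    sent.append(msg)

edge = EdgeInference(model=lambda x: x * 2, publish=publish)
edge.process(1)
edge.process(2)      # offline: results accumulate locally
online["up"] = True
edge.process(3)      # back online: buffer drains in order
```

The key property is that `process` never blocks on the network, so predictions stay fast regardless of connectivity.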
Quickly and easily build, train, and deploy machine learning models at any scale - AWS Germany
The machine learning process often feels a lot harder than it should be to most developers because the process to build and train models, and then deploy them into production is too complicated and too slow.
This workshop starts with a brief review of the machine learning process, followed by an introduction and deep dive into the individual components of Amazon SageMaker. As part of the workshop we will train artificial neural networks, get insight into some of the built-in machine learning algorithms of SageMaker that you can use for a variety of problem types, and after successfully training a model, look at options on how to deploy and scale a model as a service.
This workshop is aimed at developers who are new to machine learning, as well as data scientists who still face the operational challenges of the machine learning process. Bring your own laptop with Python and Jupyter Notebook, and (ideally) your own activated AWS account to follow along with the examples.
Perform Machine Learning at the IoT Edge using AWS Greengrass and Amazon Sage... - Amazon Web Services
Learning Objectives:
- Develop intelligent IoT edge solutions using AWS Greengrass
- Develop data science models in the cloud with Amazon SageMaker
- Learn how AWS Greengrass and Amazon SageMaker enable you to perform machine learning at the edge
Learning Objectives:
- Learn how Amazon SageMaker can be used for exploratory data analysis before training
- Learn how Amazon SageMaker provides managed distributed training with flexibility
- Learn how easy it is to deploy your models for hosting within Amazon SageMaker
Build Your Recommendation Engine on AWS Today - AWS Summit Berlin 2018 - Yotam Yarden
Recommender systems are an important mechanism to personalize and enhance customer experience. At Amazon, we have been researching recommender systems for over two decades, and today AWS customers can use the same technologies to develop, train, and deploy their own recommender systems in just a couple of hours. In my presentation, I will give an overview of the most recent recommender systems papers and techniques, and demonstrate how to train and deploy a recommendation system on AWS in less than 15 minutes.
Presented at the AWS Summit in Berlin, June 6th 2018
Remove Undifferentiated Heavy Lifting from CI/CD Toolsets with Corteva Agrisc... - Amazon Web Services
Cloud engineering teams at Corteva Agriscience, Agriculture Division of DowDuPont, have a challenge: how to support a global business of research scientists and software developers in building a world-class innovation organization. Modern agriculture produces larger and more varied data types, so their approach must not only be scalable and flexible but also committed to operational excellence while remaining adoptable. This session will walk through how Corteva Agriscience builds container-based infrastructures with CI/CD pipelines that remove undifferentiated heavy lifting and allow teams to empower developers. Members of the cloud engineering team will discuss problems they face, solutions they implement, and show an example of how they leverage AWS services (AWS CodeCommit, AWS CodePipeline, AWS CloudFormation, AWS Fargate) to deploy a novel machine learning algorithm for scoring genetic markers.
Build Your Recommendation Engine on AWS Today! - AWS Germany
Recommender systems are an important mechanism to personalize and enhance customer experience. At Amazon, we have been researching recommender systems for over two decades, and today AWS customers can use the same technologies to develop, train, and deploy their own recommender systems in just a couple of hours. In my presentation, I will give an overview of the most recent recommender systems papers and techniques, and demonstrate how to train and deploy a recommendation system on AWS in less than 15 minutes.
CI/CD for Your Machine Learning Pipeline with Amazon SageMaker (DVC303) - AWS... - Amazon Web Services
Amazon SageMaker is a powerful tool that enables us to build, train, and deploy our machine learning workloads at scale. With help from AWS CI/CD tools, we can speed up this pipeline. In this talk, we discuss how to integrate Amazon SageMaker into a CI/CD pipeline, as well as how to orchestrate it with other serverless components.
This session is part of re:Invent Developer Community Day, a series led by AWS enthusiasts who share first-hand, technical insights on trending topics.
Your road to a Well Architected solution in the Cloud - Tel Aviv Summit 2018 - Amazon Web Services
So what's the right way to operate a cloud environment? This session will guide you through best practices and the core principles of a well-architected environment.
We will demonstrate this by looking at some real-world use cases and evaluating them against criteria across the Performance, Reliability, Security, Cost, and Operations pillars.
Building Serverless IoT solutions - EPAM SEC 2018 Minsk - Boaz Ziniman
AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. With the power of serverless and AWS Lambda, it allows you to build IoT solutions at any scale without dealing with any servers.
In this tech talk, we will discuss the challenges of running IoT devices and how constrained devices can leverage AWS IoT. We will use AWS IoT Button and other devices to demonstrate building a real, securely connected, product with AWS IoT.
Container technology provides unparalleled improvements in the efficiency and agility of packaging and deploying applications, and containers have hence become the de facto method for deploying microservices. However, running containerized services at scale has required operations teams to handle complex, dynamically changing infrastructure requirements, or run the risk of under- or over-provisioning infrastructure. Let's explore together best practices for developing containerized applications with AWS services while running them at scale.
Speaker: Donnie Prakoso, ASEAN Technical Evangelist, Amazon Web Services
AWS Customer Speaker: Achmad Nazmy, GM of Technology, MOKA
Work with Machine Learning in Amazon SageMaker - BDA203 - Atlanta AWS Summit - Amazon Web Services
Organizations are using machine learning (ML) to address a host of business challenges, from product recommendations to demand forecasting. Until recently, developing these ML models took considerable time and effort, and it required expertise. In this session, we dive deep into Amazon SageMaker, a fully managed ML service that enables developers and data scientists to develop and deploy deep learning models quickly and easily. We walk through the features and benefits of Amazon SageMaker to get your ML models from concept to production.
Serverless AI with Scikit-Learn (GPSWS405) - AWS re:Invent 2018 - Amazon Web Services
Take advantage of serverless technologies for artificial intelligence (AI) by making a prediction on the fly. There is no model hosting and no servers to maintain. In this session, we show how to train a model in scikit-learn, an open source machine learning library for Python. Then we load and call the trained model from an AWS Lambda function, and finally we demonstrate how to load the library and send the data for prediction.
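The load-and-predict pattern this session describes can be sketched as follows. This is a hypothetical stand-in: a tiny hand-rolled linear model replaces the scikit-learn estimator from the talk, the artifact lives in memory rather than in S3, and the handler simply mirrors Lambda's `(event, context)` signature.

```python
import io
import pickle

class LinearModel:
    """Minimal stand-in for a fitted scikit-learn estimator."""
    def __init__(self, coef, intercept):
        self.coef, self.intercept = coef, intercept

    def predict(self, features):
        return sum(c * f for c, f in zip(self.coef, features)) + self.intercept

# "Training" side: serialize the fitted model (uploaded to S3 in the real setup).
artifact = pickle.dumps(LinearModel(coef=[0.5, -0.25], intercept=1.0))

# Handler side: deserialize once at cold start, reuse across invocations.
MODEL = pickle.load(io.BytesIO(artifact))

def handler(event, context=None):
    """Lambda-style entry point: predict on the fly, no model server to run."""
    prediction = MODEL.predict(event["features"])
    return {"statusCode": 200, "prediction": prediction}

# handler({"features": [2.0, 4.0]}) -> {"statusCode": 200, "prediction": 1.0}
```

Loading the model at module scope (not inside `handler`) is what keeps warm invocations fast: the deserialization cost is paid once per container, not per request.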
Get Started with Deep Learning and Computer Vision Using AWS DeepLens (AIM316... - Amazon Web Services
If you're new to deep learning, this workshop is for you. Learn how to build and deploy computer vision models using the AWS DeepLens deep learning-enabled video camera. Also learn to build a machine learning application and a model from scratch using Amazon SageMaker. Finally, learn to extend that model to Amazon SageMaker to build an end-to-end AI application.
Safeguard the Integrity of Your Code for Fast and Secure Deployments (DEV349-... - Amazon Web Services
As companies employ DevOps practices to push applications faster into production through better collaboration and automated testing, security is often seen as an inhibitor to speed. The challenge for many organizations is getting applications delivered at a fast pace while embedding security at the speed of DevOps. In this session, learn how AWS Marketplace products and customers help make DevSecOps a well-orchestrated methodology to ensure the speed, stability, and security of your applications.
This session highlights how Earth observation data shared in the cloud is accelerating machine learning research that can have a dramatic impact on the effectiveness of future warfighting capability. Come learn about SpaceNet, a project sponsored by CosmiQ Works, DigitalGlobe, and NVIDIA that makes commercial satellite imagery available for machine learning research on AWS. You will learn how AWS machine learning services like Amazon SageMaker, hyperscale GPU compute capacity, and datasets shared in the cloud can combine to produce such models. As the DoD strives to bring innovation directly to the warfighter, global open data sets coupled with easily built ML models can give warfighters access to critical information when the mission needs it most.
See how public sector organisations and AWS Partners are leveraging smart devices and artificial intelligence to create flexible, secure, and cost-effective solutions. By integrating learning models with live video/audio, cameras can be transformed into flexible IoT devices that perform critical functions around public safety, security, property management, smart parking, and environmental management. Observe how to architect these solutions using AWS services such as AWS IoT Core, AWS Greengrass, AWS DeepLens, Amazon SageMaker, and Amazon Alexa.
Speaker: Craig Lawton, Smart Australia & IoT Specialist, AWS
Accelerate ML Training on Amazon SageMaker Using GPU-Based EC2 P3 Instances (... - Amazon Web Services
In this workshop, you gain hands-on experience in training deep learning neural networks with Amazon SageMaker using GPU-based EC2 P3 instances. Amazon EC2 P3 instances, powered by NVIDIA Tesla V100 GPUs, offer the highest-performing GPU-based instances in the cloud for efficient model training. We discuss and work through building convolution neural networks for solving common computer vision problems. All attendees must bring their own laptop (Windows, macOS, and Linux all supported). Tablets are not appropriate. We also recommend having the current version of Chrome or Firefox installed.
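The core operation of the convolution neural networks covered in this workshop can be written out directly; the sketch below is an illustrative valid-mode 2D convolution (strictly, cross-correlation, as deep learning frameworks implement it), and it is exactly this kind of dense multiply-accumulate arithmetic that GPU instances accelerate.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Slide the kernel over the image and multiply-accumulate.
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A 3x3 input with a bright top-left patch, filtered by a 2x2 averaging kernel
img = [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
k = [[0.25, 0.25], [0.25, 0.25]]
# conv2d(img, k) -> [[1.0, 0.5], [0.5, 0.25]]
```

A real CNN stacks many such filters per layer, which is why the arithmetic throughput of V100-class GPUs dominates training time.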
Similar to AWS re:Invent 2018 - AIM302 - Machine Learning at the Edge (20)
An introduction to computer vision with Hugging FaceJulien SIMON
In this code-level talk, Julien will show you how to quickly build and deploy computer vision applications based on Transformer models. Along the way, you'll learn about the portfolio of open source and commercial Hugging Face solutions, and how they can help you deliver high-quality solutions faster than ever before.
Building Machine Learning Inference Pipelines at Scale (July 2019)Julien SIMON
Talk at OSCON, Portland, 18/07/2019
Real-life Machine Learning applications require more than a single model. Data may need pre-processing: normalization, feature engineering, dimensionality reduction, etc. Predictions may need post-processing: filtering, sorting, combining, etc.
Our goal: build scalable ML pipelines with open source (Spark, Scikit-learn, XGBoost) and managed services (Amazon EMR, AWS Glue, Amazon SageMaker)
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
DevOps and Testing slides at DASA ConnectKari Kakkonen
Slides by me and Rik Marselis for the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI?
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.