This document outlines an ECS workshop agenda covering: continuous delivery using infrastructure as code; containerizing services; deploying to ECS and maintaining uptime; platform thinking for infrastructure as code; ECS service and cluster scaling; blue/green deployments; and demos of cluster updates, multiple environments, and on-demand environments. The workshop walks through developing a set of microservices (an Asgard portal backed by Odin and Thor services), containerizing them, deploying them to ECS, and scaling both the cluster and the services. It then addresses shortcomings of the initial solution and introduces platform thinking to provision clusters and services together in an adaptive, disposable way.
by Harrell Stiles, Sr. Consultant, AWS ProServe
Batch computing is a common way to run a series of programs, called batch jobs, on a large pool of shared compute resources, such as servers, virtual machines, and containers. But running batch workloads at scale is a challenging task: configuring and scaling a cluster of virtual machines to process complex batch jobs is difficult and resource intensive. In this session, we’ll discuss options and best practices for running batch jobs on AWS, including AWS Batch, a fully managed batch-processing service, and building batch processing architectures with the Amazon EC2 Container Service. We’ll also discuss best practices for ensuring efficient and opportunistic scheduling, fine-grained monitoring, compute resource auto-scaling, and security for batch jobs. Level 200
by Nathan Wray, Sr. Technical Account Manager, AWS
Managing the code testing and deployment lifecycle for containerized applications is a complex task. In this session, we will explore how to build effective CI/CD workflows to manage containerized code deployments using Amazon EC2 Container Service, Amazon EC2 Container Registry, and AWS Code Suite tools. We will explore best practices for CI/CD architectures used by our customers to deploy containers onto AWS, including how to create an accessible CI/CD platform and how to execute Blue/Green and Canary deployments for containerized apps. Level 300
Microservices is a software architecture style in which you decompose complex applications into smaller, independent services. Containers are great for running small decoupled services, but how do you coordinate running microservices in production at scale, and which AWS services do you use?
In this session, we will explore the reasoning and concepts behind microservices and how containers simplify building microservices based applications. We will also demonstrate how you can easily deploy and monitor microservices on Amazon EC2 Container Service.
We will walk through the exploration, training and serving of a machine learning model by leveraging Kubeflow's main components. We will use Jupyter notebooks on the cluster to train the model and then introduce Kubeflow Pipelines to chain all the steps together, to automate the entire process.
Learn how to use Amazon Web Services (AWS). This "how-to" webinar will cover the basics to get started with AWS. After a brief overview, this session will dive into discussions of core AWS services and provide demonstrations of how to set up and utilize those services. Demonstrations and discussions will include:
- Setting up and connecting to your first Elastic Compute Cloud (EC2) virtual machine
- How to back up and restore your virtual machine instance
- How to set an email alert for changes in your virtual machine instance
- How to upload files to Amazon's Simple Storage Service (S3) and make them publicly available on the Internet
Amazon EC2 Container Service is a new AWS service that makes it easy to run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. Amazon EC2 Container Service lets you define, schedule, and stop sets of containers. You have access to the state of your resources, making it easy to confirm that tasks are running or view the utilization of Amazon EC2 instances in your cluster. This session will describe the benefits of containers, introduce the Amazon EC2 Container Service, and demonstrate how to use Amazon EC2 Container Service for your applications.
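In ECS, the "sets of containers" a session like this defines are expressed as a task definition. A minimal sketch of one, where the family name, image URI, and resource sizes are illustrative rather than taken from the session:

```json
{
  "family": "web-app",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [{"containerPort": 80, "hostPort": 0}]
    }
  ]
}
```

A `hostPort` of 0 asks ECS to pick a dynamic host port, which lets multiple copies of the same task share one EC2 instance behind a load balancer.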
Speakers:
Ian Massingham, AWS Technical Evangelist and
Boyan Dimitrov, Platform Automation Lead, Hailo Cabs
Getting Started with Docker on AWS: AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers to its users. This session will cover the benefits of containers, introduce Amazon EC2 Container Service, and demonstrate how to use Amazon ECS to run containerized applications at scale in production.
Deep learning is an implementation of machine learning that uses neural networks to solve difficult and complex problems, such as computer vision, natural language processing, and recommendations. Due to the availability of deep learning libraries and frameworks, developers have the ability to enhance the capabilities of their applications and projects. In this workshop, you learn how to build and deploy a powerful deep learning framework called MXNet on containers. The portability and resource management benefit of containers means developers can focus less on infrastructure and more on building. The labs start by demonstrating the automation capabilities of AWS CloudFormation to stand up core infrastructure; as an added bonus, you use Spot Fleet to leverage the cost benefits of using Spot Instances, especially for developer environments. Then, you walk through creating an MXNet container in Docker and deploying it with Amazon ECS. Finally, you walk through an image classification demo of MXNet to validate that everything is working as expected. Note: This workshop focuses on containerizing MXNet. The features of MXNet and capabilities of deep learning in general are vast, and there are recorded sessions from re:Invent that dive deeper on these topics. All you need to participate is a laptop and AWS account. Pizza will be provided.
(APP309) Running and Monitoring Docker Containers at Scale | AWS re:Invent 2014 | Amazon Web Services
If you have tried Docker but are unsure about how to run it at scale, you will benefit from this session. Like virtualization before, containerization (à la Docker) is increasing the elastic nature of cloud infrastructure by an order of magnitude. But maybe you still have questions: How many containers can you run on a given Amazon EC2 instance type? Which metric should you look at to measure contention? How do you manage fleets of containers at scale?
Datadog is a monitoring service for IT, operations, and development teams who write and run applications at scale. In this session, the cofounder of Datadog presents the challenges and benefits of running containers at scale and how to use quantitative performance patterns to monitor your infrastructure at this magnitude and complexity. Sponsored by Datadog.
Wild Rydes (www.wildrydes.com), the world’s leading unicorn transportation startup, needs your help! After building the first iteration of its serverless web application, Wild Rydes needs serverless DevOps experts like yourself to help it rapidly build and iterate upon its web app. In this workshop, you’ll help Wild Rydes set up a CI/CD pipeline that enables the company to rapidly build, test, and deploy changes to its serverless application. You’ll also learn to monitor and diagnose issues for its application. This workshop will teach you how to model and deploy serverless apps with the AWS Serverless Application Model. You’ll learn to use AWS CodePipeline and AWS CodeBuild to create a CI/CD pipeline for AWS Lambda and other services. Finally, you’ll learn to use AWS X-Ray to diagnose issues in your Lambda functions.
Batch Processing with Containers on AWS - June 2017 AWS Online Tech Talks | Amazon Web Services
Learning Objectives:
- Learn about the options for running batch workloads on AWS
- Learn how to architect a containerized batch processing service on Amazon ECS
- Learn best practices for optimizing and scaling complex batch workload requirements
Batch processing is useful when you need to periodically analyze large amounts of data, but configuring and scaling a cluster of virtual machines to process complex batch jobs can be difficult. Containers provide a great solution for running batch jobs by providing easily managed, scalable, and portable code environments.
In this tech talk, we’ll show you how to use containers on AWS for batch processing jobs that can scale quickly and cost-effectively. We’ll discuss AWS Batch, our fully managed batch-processing service, and show you how to architect your own batch processing service using the Amazon EC2 Container Service. We’ll also discuss best practices for ensuring efficient and opportunistic scheduling, fine-grained monitoring, compute resource auto-scaling, and security for your batch jobs.
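With AWS Batch, the container, resources, and command for a job are captured in a job definition. A minimal sketch, where the definition name, image, command, and sizes are illustrative, not from the talk:

```json
{
  "jobDefinitionName": "nightly-report",
  "type": "container",
  "containerProperties": {
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/report-job:latest",
    "vcpus": 2,
    "memory": 2048,
    "command": ["python", "run_report.py", "--date", "Ref::date"]
  }
}
```

Jobs submitted against this definition land in a queue, and Batch launches and scales the underlying compute to drain it.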
In this presentation, we will discuss the evolution of IaaS, PaaS, CaaS, and FaaS, the benefits of serverless computing, and the challenges we have faced so far.
This talk is about KSQL, an open source streaming SQL engine for Apache Kafka. KSQL aims to make stream processing available to everybody without the need to write Java or Scala code. Streaming SQL makes it easy to get started with a wide range of stream processing applications such as real-time ETL, sessionization, monitoring and alerting, or fraud detection. We will give a general introduction to KSQL covering its SQL dialect, core concepts, and architecture, including some technical deep dives into how it works under the hood.
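To make the "streaming SQL" idea concrete, a continuous query in KSQL reads roughly as follows (the topic, stream, and column names are illustrative, not taken from the talk):

```sql
-- Register a stream over an existing Kafka topic
CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

-- Continuously count views per page in 1-minute tumbling windows;
-- the result is a table that updates as new events arrive
CREATE TABLE views_per_page AS
  SELECT page, COUNT(*) AS views
  FROM pageviews
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY page;
```

Unlike a one-shot SQL query, these statements keep running, updating their results as new records hit the topic.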
This is a basic workshop for Amazon ECS. In this workshop you will learn:
AWS computing services overview
Monolith and Microservices
What is Docker
How to dockerize your app on your local laptop
How to run your Docker app in Amazon ECS and ECR
How to use ecs-cli
Best practices for designing your Dockerfile
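As a taste of the Dockerfile best practices covered, a minimal sketch applying two common ones, layer-cache-friendly ordering and a non-root user (the base image and file names are illustrative):

```dockerfile
# Pin a small base image
FROM python:3.9-slim
WORKDIR /app

# Copy the dependency manifest first so this layer is cached
# across code changes and dependencies aren't reinstalled every build
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code last, since it changes most often
COPY . .

# Run as a non-root user
RUN useradd --create-home appuser
USER appuser

CMD ["python", "app.py"]
```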
AWS Atlanta meetup group: slides from the March 20, 2015 presentation, with CloudCheckr COO Aaron Klein speaking about tracking, allocating, and optimizing AWS costs.
Subtopics include instance and service tagging strategies in AWS for master and child account management.
Amazon EC2 Container Service: Manage Docker-Enabled Apps in EC2 | Amazon Web Services
Amazon EC2 Container Service (Amazon ECS) is a new AWS service that makes it easy to run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. Amazon ECS lets you define, schedule, and stop sets of containers. You have access to the state of your resources, making it easy to confirm that tasks are running or view the utilization of EC2 instances in your cluster. This session will describe the benefits of containers, introduce ECS, and demonstrate how to use ECS for your applications.
A brief introduction to Amazon ECS, Dockerization of a Spring Boot application, and CI/CD with notifications via Slack.
This deck also explains how a CI/CD pipeline can be built using Jenkins.
Using Deep Learning Toolkits with Kubernetes clusters | Joy Qiao
Slides for the talk at the O'Reilly AI Conference San Francisco 2017 - https://conferences.oreilly.com/artificial-intelligence/ai-ca/public/schedule/detail/59613
What do you do when 60,000 jobs arrive in the blink of an eye? In the machine learning world, it is normal to receive a huge load of jobs almost instantly. We will walk you through our journey of scaling out a Kubernetes cluster to handle them: the tools we used, load testing, how to measure it, and our solution.
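The core shape of the problem, a sudden burst of jobs drained by a bounded pool of workers, can be sketched in a few lines. This is not the speakers' code, and the job and worker counts are illustrative; in their cluster the "workers" would be pods pulling from a queue:

```python
# A burst of jobs lands on a shared queue; a fixed pool of workers
# drains it. Scaling out means raising the worker count.
import queue
import threading

def drain_burst(num_jobs: int, num_workers: int) -> int:
    jobs: queue.Queue = queue.Queue()
    for i in range(num_jobs):
        jobs.put(i)

    processed = 0
    lock = threading.Lock()

    def worker() -> None:
        nonlocal processed
        while True:
            try:
                jobs.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            # Real work (e.g. an ML inference) would happen here.
            with lock:
                processed += 1

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed

if __name__ == "__main__":
    print(drain_burst(60_000, 16))  # prints 60000
```

The interesting engineering, and the subject of the talk, is what happens when the worker pool itself must grow and shrink elastically with the burst.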
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1stYuc2.
Brennan Saeta covers aspects of Coursera’s architecture that enable them to rapidly build sophisticated features for their learning platform. Saeta also discusses their experience running containers in production: what works, what doesn’t, and why. He briefly touches upon container threat models and how to architect a defense-in-depth strategy to mitigate both known and unknown vulnerabilities. Filmed at qconlondon.com.
Brennan Saeta is a Lead Infrastructure Engineer, leading the ‘Cour’ (core) group responsible for the development environment, core libraries, and the common infrastructure powering Coursera.
OS for AI: Elastic Microservices & the Next Gen of ML | Nordic APIs
AI has been a hot topic lately, with advances constantly being made in what is possible, but there has not been as much discussion of the infrastructure and scaling challenges that come with it. How do you support dozens of different languages and frameworks, and make them interoperate invisibly? How do you scale to run abstract code from thousands of different developers, simultaneously and elastically, while maintaining less than 15ms of overhead?
At Algorithmia, we’ve built, deployed, and scaled thousands of algorithms and machine learning models, using every kind of framework (from scikit-learn to tensorflow). We’ve seen many of the challenges faced in this area, and in this talk I’ll share some insights into the problems you’re likely to face, and how to approach solving them.
In brief, we’ll examine the need for, and implementations of, a complete “Operating System for AI” – a common interface for different algorithms to be used and combined, and a general architecture for serverless machine learning which is discoverable, versioned, scalable and sharable.
By David Smith. Presented at Microsoft Build (Seattle), May 7 2018.
Your data scientists have created predictive models using open-source tools, proprietary software, or some combination of both, and now you are interested in lifting and shifting those models to the cloud. In this talk, I'll describe how data scientists can transition their existing workflows — while using mostly the same tools and processes — to train and deploy machine learning models based on open source frameworks to Azure. I'll provide guidance on keeping connections to data sources up-to-date, evaluating and monitoring models, and deploying applications that make use of those models.
With Docker, it became easy to start applications locally without installing any dependencies. Even running a local cluster is not a big thing anymore. AWS, on the other side, offers with ECS a managed container service that promises to schedule containers based on resource needs, isolation policies, and availability requirements. But what happens in between? Is it really that easy? In this talk you’ll see which existing services can already be used to deploy your containers automatically, and what still needs to be done to get them running on AWS.
Cloud providers like Amazon or Google offer a great user experience for creating and managing PaaS and IaaS services. But is it possible to reproduce the same experience and flexibility locally, in an on-premise datacenter? This talk describes the success story of building a private cloud based on a DC/OS cluster. It is used to host and share services like Hadoop or Kafka across development teams, and to dynamically manage services and resource pools with GKE integration.
Democratizing machine learning on Kubernetes | Docker, Inc.
One of the largest challenges facing the machine learning community today is understanding how to build a platform to run common open-source machine learning libraries such as TensorFlow. Joy and Lachie are both passionate about making machine learning accessible to the masses using Kubernetes. In this session they'll share how to deploy a distributed TensorFlow training cluster, complete with GPU scheduling, on Kubernetes. They'll also explain how distributed TensorFlow training works, the various options for distributed training, and when to choose which option, along with best practices for running distributed TensorFlow on top of Kubernetes, based on their latest performance tests on public cloud providers. All work presented in this session will be accessible via a public GitHub repository.
Versioning an API can be a somewhat daunting task for the uninitiated. Even worse, some of the most common approaches are less than ideal. In this session I discuss the struggles and outcomes of my first foray into versioning and deploying. I will show how a combination of immutable Docker containers, nginx, and a few other friendly tools made for a fully automated versioning and deployment system at the push of a button.
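One common shape for the nginx side of such a setup is path-based routing, where each API version maps to its own immutable container. A minimal sketch (the ports and paths are illustrative, not the speaker's actual configuration):

```nginx
# Each version is served by its own immutable container; rolling out v3
# means starting a new container and adding one location block, while
# the existing versions are never modified.
upstream api_v1 { server 127.0.0.1:8001; }
upstream api_v2 { server 127.0.0.1:8002; }

server {
    listen 80;
    location /v1/ { proxy_pass http://api_v1; }
    location /v2/ { proxy_pass http://api_v2; }
}
```

Because old versions keep running untouched, clients pinned to `/v1/` are unaffected by a `/v2/` rollout or rollback.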
Design principles to modularise a monolith codebase | Prashant Kalkar
In this talk, we will discuss
What a good module looks like (a discussion of module boundaries and reusable code).
Properties of good reusable code.
Principles that help get the module boundaries right (cohesion principles).
Principles to manage dependencies between the modules (coupling principles).
Tools that can be used to decouple the code base (we will use the IntelliJ IDEA DSM to visualise code base dependencies and break them).
Techniques that can help prevent future issues.
We won't cover all the principles, or every aspect of the given principles, due to time constraints. The main goal of the talk is to provide actionable points and to have a healthy discussion around them.
Exploring the flow of network traffic through a Kubernetes cluster | Prashant Kalkar
https://www.meetup.com/devday_pune/events/287898343/
A modern Kubernetes cluster hosts anywhere from 10 to 100 services. Every service runs multiple pods and might scale out or in. Further, a unified application URL is exposed to the outside world, masking the complexity of the microservices. How does a request flow through Kubernetes and the cloud infrastructure and reach a single pod? How does Kubernetes networking route the request to the destination pod?
What role do concepts like pod IPs, ClusterIP services, NodePort services, ingress records, etc. play in exposing Kubernetes applications? In this talk, we will explore:
Inter pod connectivity with Pod IPs.
The need for kubernetes services.
What is a clusterIP service?
What is a NodePort service?
What is a load balancer service?
Why do we need ingress records and ingress controller?
How does ingress controller receive traffic?
How does AWS load balancer send traffic to kubernetes services from outside of cluster (without really knowing that traffic is going to kubernetes cluster)?
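The Service types listed above can be illustrated with a single manifest. A minimal sketch (names and ports are illustrative, not from the talk):

```yaml
# Pods get their own Pod IPs; this Service gives them a stable virtual IP
# and load-balances across them via the label selector. Changing the type
# changes how far outside the cluster the service is reachable.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer   # ClusterIP = in-cluster only; NodePort = a port on every node;
                       # LoadBalancer = additionally provisions a cloud LB (e.g. an AWS ELB)
  selector:
    app: web
  ports:
    - port: 80         # the Service's stable port
      targetPort: 8080 # the containerPort inside each pod
```

An ingress controller typically sits behind exactly such a Service: the cloud load balancer sends traffic to the node ports, which forward to the controller's pods, which then route by host and path to other ClusterIP services.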
Uncover the mysteries of infrastructure as code (IaC)! | Prashant Kalkar
In the era of cloud and containerisation, infrastructure as code (IaC) is invaluable. In this talk, we will explore the evolution of infrastructure practices and tools, starting with those that predate the cloud, and then look at how the rise of the cloud changed infrastructure automation and made IaC a mainstream practice.
We will also explore what it means to treat infrastructure as code. We will talk about code vs. configuration, versioning, configurability vs. standardisation, and modularity and code organisation for infrastructure code.
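"Treating infrastructure as code" is easiest to see in a concrete fragment. A minimal sketch in Terraform (the resource names, CIDR, and variable are illustrative): the infrastructure is plain text that can be versioned, reviewed, and parameterised like any other code.

```hcl
# A network defined as reviewable, versionable text.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    # Configurability vs. standardisation: the environment varies,
    # the shape of the VPC does not.
    Environment = var.environment
  }
}

variable "environment" {
  default = "dev"
}
```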
Generating a custom Ruby SDK for your web service or Rails API using Smithy
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Securing your Kubernetes cluster: a step-by-step guide to success!
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
4. Tools you need
● Git
● Git Bash (Windows only)
● GitHub account (with an SSH key configured)
● VirtualBox
● Vagrant
● IntelliJ IDEA or Eclipse
5. Development Setup
● Fork the workshop repository and clone it on your local machine.
● Log in to AWS and download the access key CSV.
● Generate a GitHub token. (refer here)
● Copy workshop_config.template to your user home and populate it.
● Build the Vagrant image.
● SSH into the Vagrant box.
● Upload the Vagrant box's GitHub SSH key to your GitHub account.
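As a rough sketch of the credentials step above: the CSV downloaded from the AWS console has an `Access key ID` / `Secret access key` header row, and it needs to end up in `~/.aws/credentials` INI format. The `workshop` profile name and the sample values below are illustrative, not from the workshop itself.

```python
import csv
import io

def credentials_from_csv(csv_text, profile="workshop"):
    """Turn the access-key CSV exported by the AWS console into
    an ~/.aws/credentials INI section."""
    row = next(csv.DictReader(io.StringIO(csv_text)))
    return (
        f"[{profile}]\n"
        f"aws_access_key_id = {row['Access key ID']}\n"
        f"aws_secret_access_key = {row['Secret access key']}\n"
    )

# Placeholder values, not real credentials.
sample = "Access key ID,Secret access key\nAKIAEXAMPLE,wJalrEXAMPLEKEY\n"
print(credentials_from_csv(sample))
```

In practice `aws configure` does the same thing interactively; the point is only to show what the CSV maps to.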
7. Example details - Asgard Portal
Provides information about Asgard and its Gods - Odin & Thor.
A microservice will be deployed for each God.
We will have fun deploying these services.
14. Service Deployment
Exercise 4 - Deploy the Odin service on the ECS cluster.
Exercise 5 - Deploy the Thor service on the ECS cluster.
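Deploying a service like Odin boils down to two ECS API calls: register a task definition, then create a service that runs it. A minimal sketch of the request bodies, assuming names (`odin`, `asgard-cluster`) and sizes that are illustrative rather than the workshop's actual values:

```python
def task_definition(service, image, cpu=256, memory=512, port=8080):
    """Build the request body for ecs:RegisterTaskDefinition."""
    return {
        "family": service,
        "containerDefinitions": [{
            "name": service,
            "image": image,
            "cpu": cpu,             # CPU units (1024 = one vCPU)
            "memory": memory,       # hard limit in MiB
            "portMappings": [{"containerPort": port}],
            "essential": True,
        }],
    }

def service_definition(service, cluster, desired_count=2):
    """Build the request body for ecs:CreateService."""
    return {
        "cluster": cluster,
        "serviceName": service,
        "taskDefinition": service,  # latest revision of the family
        "desiredCount": desired_count,
    }

odin_task = task_definition("odin", "registry.example.com/odin:1.0")
odin_service = service_definition("odin", "asgard-cluster")
```

With boto3 these dicts would be passed to `ecs.register_task_definition(**odin_task)` and `ecs.create_service(**odin_service)`; the workshop's pipeline presumably drives the same calls per service.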
15. Shortcomings of the CD solution so far
One deployment pipeline per service - an issue when there are many services.
Provisioning a fully functional cluster takes time, since every service has to be deployed.
Different services with different needs (CPU/memory) require frequent changes to the cluster.
Provisioning a new environment requires a new pipeline per service.
16. Platform Thinking
A solution that is adaptive.
The cluster owns its services.
Provisioning or restoring a cluster also provisions its services.
A uniform way of mapping services to clusters.
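One way to picture "the cluster owns the services": a single manifest declares the cluster and everything in it, so provisioning or restoring the cluster is just a loop over the manifest. This is a sketch with made-up names, not the workshop's actual format; in real life the `deploy` hook would be CloudFormation/Terraform or the ECS API.

```python
# One manifest per cluster: the cluster declaration owns its services.
ASGARD = {
    "cluster": "asgard",
    "services": [
        {"name": "odin", "image": "registry.example.com/odin:1.0", "count": 2},
        {"name": "thor", "image": "registry.example.com/thor:1.0", "count": 2},
    ],
}

def provision(manifest, deploy):
    """Provision every service the cluster owns via the given deploy hook."""
    return [deploy(manifest["cluster"], svc) for svc in manifest["services"]]

# A dry-run hook just records what would be deployed.
plan = provision(ASGARD, lambda cluster, svc: f"{cluster}/{svc['name']} x{svc['count']}")
print(plan)  # → ['asgard/odin x2', 'asgard/thor x2']
```

The payoff is the last shortcoming above: a new environment is a new manifest, not a new pipeline per service.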
19. Cluster Scaling
ECS is responsible for container (task) scaling.
The ASG is responsible for instance scaling.
We need a way to scale the cluster as containers scale.
There is a scaling-speed mismatch between EC2 and ECS.
20. Ahead-of-time Cluster Scaling
Ref: https://engineering.depop.com/ahead-of-time-scheduling-on-ecs-ec2-d4ef124b1d9e
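The core idea of the referenced post, sketched below: rather than scaling on CPU utilization, count how many more copies of your largest container each instance could still schedule, and add instances before that headroom reaches zero. The capacity numbers here are illustrative.

```python
def schedulable_containers(instances, container_cpu, container_mem):
    """How many more copies of the largest container fit on the cluster?

    instances: list of (free CPU units, free memory MiB) per instance."""
    total = 0
    for free_cpu, free_mem in instances:
        total += min(free_cpu // container_cpu, free_mem // container_mem)
    return total

def needs_scale_out(instances, container_cpu, container_mem, headroom=2):
    """Scale out ahead of time once headroom drops below the threshold."""
    return schedulable_containers(instances, container_cpu, container_mem) < headroom

# Two instances with spare capacity; largest task wants 512 CPU units / 1024 MiB.
cluster = [(1024, 2048), (512, 1024)]
print(schedulable_containers(cluster, 512, 1024))  # → 3
print(needs_scale_out(cluster, 512, 1024))         # → False
```

In practice this number would be published as a custom CloudWatch metric and a scale-out alarm attached to the ASG, which is how the EC2/ECS speed mismatch gets absorbed: instances arrive before ECS runs out of room.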
21. Exercise 8 - Cluster scaling
Add auto scaling policies and alarms to scale the cluster.
Try out a few scenarios.
27. Cons
This design requires a new ALB and cluster for running a different version of the services.
DNS changes may take time to propagate (TTL).
A misbehaving client might still send traffic to the old load balancer even after the switch is complete.
28. Final Thoughts
ECS can provide a path to transition to containerization.
Think in terms of a platform and service teams.
Let containers scale early.
Treat everything as disposable.
Adopt an on-demand, self-service model.
29. References & Credits
Ahead-of-time cluster scaling
https://engineering.depop.com/ahead-of-time-scheduling-on-ecs-ec2-d4ef124b1d9e
How to Automate Container Instance Draining in Amazon ECS
https://aws.amazon.com/blogs/compute/how-to-automate-container-instance-draining-in-amazon-ecs/