With AWS CodeDeploy, you can automate your code deployments to one Amazon EC2 instance or thousands. AWS CodeDeploy eliminates the need for error-prone manual operations and helps you get new features to your customers faster. The service also lets you build on your existing investments in Ansible, Chef, Puppet, and SaltStack; and it’s integrated with popular developer tools like GitHub and Jenkins. Join us in this breakout to learn how AWS CodeDeploy works and to see a live demonstration of the service in action.
We’ll also illustrate AWS CodeDeploy’s integration with the forthcoming AWS CodeCommit, a scalable, redundant, and durable Git repository; as well as AWS CodePipeline, a continuous delivery and release automation service that automates your release process.
Speakers:
Shaun Pearce, AWS Solutions Architect
Ansible is a simple open-source IT automation engine that automates application deployment, intra-service orchestration, cloud provisioning, and many other IT tasks. We will discuss what Ansible is, its features and architecture, writing Ansible playbooks, Ansible roles, and Ansible vs. Chef.
Serverless Design Patterns for Rethinking Traditional Enterprise Application ...Amazon Web Services
AWS Lambda is a powerful and flexible tool for solving diverse business problems, from traditional grid computing to scheduled batch processing workflows. Cloud native solutions using AWS Lambda enable architectures that depart from traditional enterprise application design. These new design patterns can provide substantially increased performance and reduced costs. In this session, learn how Fannie Mae re-architected one of their mission-critical traditional grid computing applications to a modern serverless solution using AWS Lambda. Learn More: https://aws.amazon.com/government-education/
Hands On Introduction To Ansible Configuration Management With Ansible Comple...SlideTeam
Hands On Introduction To Ansible Configuration Management With Ansible Complete Deck is designed for the upper and mid-level management. Take advantage of the informative visuals of this PPT slideshow to elucidate the application deployment tool. With the help of our intuitive PowerPoint template deck, explain the advantages of the Ansible automation tool. This viewer-friendly PPT theme is perfect to elaborate on the architecture of Ansible software. This is because of the state-of-the-art diagrams that simplify the explanation. Consolidate the characteristics and capabilities of Ansible applications such as configuration management and cloud provisioning. This PowerPoint presentation features an Ansible installation flowchart for an organization. Employ the neat tabular format to compile the differences between Ansible and Puppet. This will assist your organization to implement Ansible and its configuration in an effective manner. Hit the download icon and begin instant personalization. https://bit.ly/3mLQJtJ
[AWS Dev Day] App Modernization | Using Serverless Containers with AWS Fargate - Samsung Electronics Developer Portal Case Study - 정영준...Amazon Web Services Korea
The Samsung Electronics Developer Portal is a platform that lets developers use developer tools to build applications for Samsung's application ecosystem, such as SmartThings Cloud and Bixby. How much easier would deployment and management be if you could build this platform as containers and focus only on the application logic packaged into them? Learn about the advantages of a container environment powered by Fargate through Samsung Electronics' real-world case.
Learn how you can achieve a sophisticated level of standardization, configuration compliance, and monitoring using a combination of AWS Service Catalog, AWS Config, and AWS CloudTrail.
Learn how to use Lambda to build web, mobile, or IoT backends and voice-enabled apps, and see how to extend both AWS and third-party services by triggering Lambda functions.
Data-driven companies have a need to make their data easily accessible to those who analyze it. Many organizations have adopted Looker on AWS, a centralized analytical platform with a user-friendly interface that allows employees to ask and answer their own questions to make informed business decisions.
Join our webinar to learn how our customer, Casper, an online mattress retailer, made the switch from a transactional database to Looker’s data analytics program on Amazon Redshift. Looker on Amazon Redshift can help you greatly reduce your analytics lifecycle with a simplified infrastructure and rapid cloud scaling.
Join us to learn:
• How to utilize LookML to build reusable definitions and logic for your data
• Best practices for architecting a centralized analytical database
• How Casper leveraged Looker and Amazon Redshift to provide all their employees access to their data and metrics
Who should attend: Heads of Analytics, Heads of BI, Analytics Managers, BI Teams, Senior Analysts
Log Analytics with Amazon Elasticsearch Service - September Webinar SeriesAmazon Web Services
Elasticsearch is a popular open-source search and analytics engine used for log analytics. With Amazon Elasticsearch Service, you can easily run Elasticsearch on AWS. In this webinar, we will provide an overview of Amazon Elasticsearch Service and demo how to set up and configure an Amazon Elasticsearch domain for the log analytics use case.
Learning Objectives:
- Understand Amazon Elasticsearch Service use cases and key features
- Learn how to secure your Amazon Elasticsearch cluster for access from Kibana and other plug-ins
- Learn best practices for scaling, monitoring, and troubleshooting Amazon Elasticsearch domains
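The domain setup the webinar walks through can be sketched as a request to the Amazon Elasticsearch Service CreateElasticsearchDomain API; the instance types, counts, and version below are illustrative placeholders, not sizing guidance:

```python
def log_analytics_domain(domain_name, data_nodes=3, volume_gb=100):
    """Request body for CreateElasticsearchDomain (values are illustrative)."""
    return {
        "DomainName": domain_name,
        "ElasticsearchVersion": "6.3",  # pick the version your tooling supports
        "ElasticsearchClusterConfig": {
            "InstanceType": "m4.large.elasticsearch",
            "InstanceCount": data_nodes,
            "ZoneAwarenessEnabled": True,    # replicate shards across AZs
            "DedicatedMasterEnabled": True,  # keep masters off the data path
            "DedicatedMasterType": "m4.large.elasticsearch",
            "DedicatedMasterCount": 3,
        },
        "EBSOptions": {"EBSEnabled": True, "VolumeType": "gp2",
                       "VolumeSize": volume_gb},
    }

# With boto3 (sketch, not executed here):
# boto3.client("es").create_elasticsearch_domain(**log_analytics_domain("app-logs"))
```

Dedicated master nodes and zone awareness are the usual starting points for a production log-analytics domain; adjust data-node count and EBS volume size to your ingest rate.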
Data Warehousing in the Era of Big Data: Intro to Amazon RedshiftAmazon Web Services
An overview of how Amazon Redshift uses columnar technology, massively parallel processing, and other techniques to deliver fast query performance on petabyte-size datasets.
In dynamic cloud environments, many organizations need to implement a unified threat management solution that enhances visibility across their workloads. Learn how REAN Cloud adopted Sophos Unified Threat Management (UTM) for increased simplicity, visibility, and security of their AWS workloads. Sophos is an Advanced Technology Partner in the AWS Partner Network that provides a reliable, unified security solution capable of scaling to meet the agility and speed of the AWS Cloud. Join the upcoming webinar to hear Sri Vasireddy from REAN Cloud, Bryan Nairn from Sophos, and Nick Matthews from AWS discuss security innovations on the AWS Cloud.
Join us to learn:
• Why Sophos end user REAN Cloud trusts Sophos UTM for simplicity, visibility, and security
• How easy it can be to protect your AWS workloads with a proven and scalable solution designed for the AWS Cloud
• AWS security innovations, including support across multiple Availability Zones and UTM Auto Scaling
Who should attend: Security Managers, Security Engineers, Security Architects, IT System Administrators, System Administrators, IT Administrators, IT Managers, DevOps, Architects, IT Architects, IT Security Engineers, Business Decision Makers
The Getting Started on AWS deck introduces Amazon users and prospective customers to Amazon VPC, Amazon EC2, and the concepts and components necessary for building fault-tolerant and highly available environments on AWS. It also introduces services like AWS Direct Connect, Amazon Route 53 (the Amazon DNS service), and one of our new additions, the
Application Load Balancer (ALB). After perusing this deck, users should have a better understanding of what these services are and their proposed benefits.
Analyzing big data quickly and efficiently requires a data warehouse optimized to handle and scale for large datasets. Amazon Redshift is a fast, petabyte-scale data warehouse that makes it simple and cost-effective to analyze big data for a fraction of the cost of traditional data warehouses. By following a few best practices, you can take advantage of Amazon Redshift’s columnar technology and parallel processing capabilities to minimize I/O and deliver high throughput and query performance. This webinar will cover techniques to load data efficiently, design optimal schemas, and use workload management.
Learning Objectives:
• Get an inside look at Amazon Redshift's columnar technology and parallel processing capabilities
• Learn how to migrate from existing data warehouses, optimize schemas, and load data efficiently
• Learn best practices for managing workload, tuning your queries, and using Amazon Redshift's interleaved sorting features
Who Should Attend:
• Data Warehouse Developers, Big Data Architects, BI Managers, and Data Engineers
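One of the loading techniques the webinar covers, parallel COPY from Amazon S3, can be sketched as follows; the table, bucket, and IAM role names are hypothetical:

```python
def s3_copy_statement(table, s3_prefix, iam_role_arn):
    """Build a COPY statement that loads all objects under an S3 prefix.

    Redshift parallelizes COPY across slices, so splitting the input into
    roughly (number of slices) gzipped files maximizes load throughput.
    """
    return (
        f"COPY {table} "
        f"FROM '{s3_prefix}' "
        f"IAM_ROLE '{iam_role_arn}' "
        "FORMAT AS CSV GZIP "
        "COMPUPDATE ON;"  # let Redshift choose column compression encodings
    )

# Example (all identifiers hypothetical):
# sql = s3_copy_statement("events", "s3://my-bucket/events/",
#                         "arn:aws:iam::123456789012:role/RedshiftCopy")
```

A single COPY over a prefix of many compressed files is generally far faster than row-by-row INSERTs, which is the core of the efficient-loading best practice.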
Deep Dive Amazon Redshift for Big Data Analytics - September Webinar SeriesAmazon Web Services
Analyzing big data quickly and efficiently requires a data warehouse optimized to handle and scale for large datasets. Amazon Redshift is a fast, petabyte-scale data warehouse that makes it simple and cost-effective to analyze big data for a fraction of the cost of traditional data warehouses. By following a few best practices, you can take advantage of Amazon Redshift’s columnar technology and parallel processing capabilities to minimize I/O and deliver high throughput and query performance. This webinar will cover techniques to load data efficiently, design optimal schemas, and tune query and database performance.
An overview of the Amazon ElastiCache managed service, with examples of how it can be used to increase performance, lower costs, and augment other database services to make things faster, easier, and less expensive.
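A minimal illustration of the cache-aside pattern that makes a cache like ElastiCache pay off: check the cache first, fall back to the database on a miss, then populate the cache with a TTL. In practice the dict below would be a Redis or Memcached client; the TTL and key names are arbitrary:

```python
import time

class CacheAside:
    """Sketch of cache-aside: repeated reads skip the database entirely."""

    def __init__(self, ttl_seconds=300):
        self.cache = {}          # key -> (value, expires_at)
        self.ttl = ttl_seconds

    def get(self, key, load_from_db):
        hit = self.cache.get(key)
        if hit and hit[1] > time.time():
            return hit[0]                      # cache hit: no DB round trip
        value = load_from_db(key)              # cache miss: query the database
        self.cache[key] = (value, time.time() + self.ttl)
        return value
```

The expensive `load_from_db` call runs only once per TTL window, which is how a cache in front of a relational store lowers both latency and database load.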
Rackspace provides a comprehensive set of tooling and expertise on AWS that further unlocks your ability to secure your environment efficiently and cost effectively. The dynamic environment of data, applications, and infrastructure can pose challenges for businesses trying to manage security while following compliance regulations. To mitigate these challenges, businesses need a scalable security solution to ensure their data is safe, secure, and stable. In this webinar, Brad Schulteis, Jarret Raim and Todd Gleason will discuss the topic of security control requirements on AWS through the lens of three common compliance scenarios: HIPAA, PCI-DSS, and generalized security compliance based on the NIST Risk Management Framework. Watch our webinar to learn how Rackspace combines AWS and security expertise with tools like AWS CloudFormation, AWS CodeCommit and AWS CodeDeploy to help customers meet their security and compliance needs.
Join us to learn:
• Best practices for securely operating workloads on the AWS Cloud
• Architecting a secure environment for dynamic workloads
• How to incorporate Security by Design principles to address compliance needs across 3 use cases: HIPAA, PCI-DSS and generalized security compliance based on the NIST Risk Management Framework
Who should attend: Directors and Managers of Security, IT Administrators, IT Architects, and IT Security Engineers
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer ToolsAmazon Web Services
Today’s cutting-edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share the processes that Amazon’s engineers use to practice DevOps and discuss how you can bring these processes to your company by using a new set of AWS tools (AWS CodePipeline and AWS CodeDeploy). These services were inspired by Amazon's own internal developer tools and DevOps culture.
Managing the deployment of code to multiple AWS Lambda functions and updating your API Gateway methods can be manual and time consuming.
In this session, we will show you how to build a deployment pipeline to AWS Lambda using AWS CodePipeline, a continuous delivery service based on Amazon’s internal release automation tooling. We will discuss how to use versioning, which enables you to better manage the different variations of your Lambda functions and API Gateway methods in your development workflow (e.g., development, staging, and production). We will walk through how to automate the entire release process of your application from development, to staging, and finally to production; performing automated integration tests at each stage.
AWS DevDay San Francisco, June 21, 2016.
Presenter: Andrew Baird, Solutions Architecture
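The versioning workflow described above can be sketched with the Lambda API: publish an immutable version, then repoint a stage alias at it. The function and alias names are hypothetical, and the boto3 calls are shown but not executed here:

```python
def alias_update(function_name, alias, version):
    """Request for UpdateAlias: repoint a stage alias (e.g. "staging")
    at a newly published, immutable function version."""
    return {"FunctionName": function_name,
            "Name": alias,
            "FunctionVersion": version}

# Typical promotion flow with boto3 (sketch, not run here):
# lam = boto3.client("lambda")
# version = lam.publish_version(FunctionName="orders-api")["Version"]
# lam.update_alias(**alias_update("orders-api", "staging", version))
# ...run integration tests against the staging alias, then:
# lam.update_alias(**alias_update("orders-api", "production", version))
```

Because each stage alias is a stable ARN, API Gateway stage variables can reference the alias while the pipeline moves version numbers underneath it.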
Continuous Delivery with AWS Lambda - AWS April 2016 Webinar SeriesAmazon Web Services
Managing the deployment of code to multiple AWS Lambda functions and updating your API Gateway methods can be manual and time consuming.
In this webinar, we will show you how to build a deployment pipeline to AWS Lambda using AWS CodePipeline. We will discuss how to use versioning, allowing you to better manage the different variations of your Lambda function and API Gateway methods in your development workflow, such as development, staging, and production. We will walk through how to automate the entire release process of your application from development to staging and finally to production, performing automated integration tests at each stage.
Learning Objectives:
Understand the basics of AWS CodePipeline
Learn how to version AWS Lambda functions and API Gateway methods
Build a deployment pipeline to AWS Lambda
Continuous delivery makes teams more agile and quickens the pace of innovation. Too often, though, teams adopt continuous delivery without defining the meta components or putting the right safety mechanisms in place. In this session, we'll start from the meta components and transform a typical software release process into one that will scale and is safe. We'll use DevOps techniques like continuous integration, a variety of non-production testing stages, rollbacks, redundancy, canary deployments and synthetic tests. We'll use AWS services such as Lambda, CloudFormation, CodePipeline, CodeBuild, CodeDeploy and CloudWatch alarms and dashboards.
Speakers:
Brent Maxwell, Partner Solutions Architect, Amazon Web Services
Daniel Zoltak, Solutions Architect, Amazon Web Services
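One of the safety mechanisms mentioned above, canary deployments, can be sketched with Lambda alias weighted routing; the version numbers and step size below are illustrative:

```python
def routing_config(canary_version, weight):
    """RoutingConfig for UpdateAlias: send `weight` (0..1) of traffic to
    the canary version; the alias's primary version gets the rest."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be between 0 and 1")
    return {"AdditionalVersionWeights": {canary_version: weight}}

def linear_rollout(canary_version, step_pct=10):
    """Weight schedule for a linear traffic shift; between steps you would
    check CloudWatch alarms and roll back (weight 0) on any failure."""
    return [routing_config(canary_version, p / 100)
            for p in range(step_pct, 100, step_pct)]

# With boto3 (sketch, not run here), for each cfg in linear_rollout("8"):
# boto3.client("lambda").update_alias(FunctionName="checkout", Name="live",
#                                     FunctionVersion="7", RoutingConfig=cfg)
```

Pairing each weight step with a CloudWatch alarm check is what turns a plain deployment into the safe, reversible rollout the session describes.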
Today’s cutting edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous integration and delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share the processes followed by Amazon engineers and discuss how you can bring them to your company by using a set of application lifecycle management tools from AWS: the newly announced AWS CodeBuild service, AWS CodePipeline, and AWS CodeDeploy.
by Nick Brandaleone, Solutions Architect AWS
Join us to learn about continuous integration, continuous delivery, and DevOps. The AWS Developer Tools have been designed based on the tools used by Amazon engineers to rapidly and reliably deliver products and features to customers. We’ll provide overviews of the services and best practices followed by a hands-on workshop to help you learn how to automate your software release processes, deploy application code, and monitor your application and infrastructure performance.
Software release cycles are now measured in days instead of months. Cutting-edge companies are continuously delivering high-quality software at a fast pace. In this session, we will cover how you can begin your DevOps journey by sharing best practices and tools used by the engineering teams at Amazon. We will showcase how you can accelerate developer productivity by implementing continuous integration and delivery workflows. We will also cover an introduction to AWS CodeStar, AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, AWS CodeDeploy, AWS Cloud9, and AWS X-Ray, the services inspired by Amazon's internal developer tools and DevOps practice.
Level: 200
Speaker: Nick Brandaleone - Solutions Architect, AWS
Learn how to use AWS services to automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps. In this session, we will provide an overview of the various AWS development and deployment services and when best to use them. We will show how to build a fully automated infrastructure and software delivery pipeline with AWS CodePipeline, AWS CodeBuild, AWS CloudFormation and AWS CodeDeploy. At the end of the session, a GitHub repository of AWS CloudFormation templates will be provided so you can quickly deploy the same pipeline to your AWS account(s).
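Deploying such a pipeline from a CloudFormation template can be sketched as follows; the stack name and template URL are hypothetical placeholders:

```python
def pipeline_stack_request(stack_name, template_url):
    """Request for CloudFormation CreateStack to stand up a delivery
    pipeline (the CodePipeline, CodeBuild, and CodeDeploy resources live
    in the template; the URL is a placeholder)."""
    return {
        "StackName": stack_name,
        "TemplateURL": template_url,
        # The template creates IAM roles for the pipeline stages, so the
        # caller must explicitly acknowledge the IAM capability.
        "Capabilities": ["CAPABILITY_IAM"],
    }

# boto3 sketch (not run here):
# boto3.client("cloudformation").create_stack(
#     **pipeline_stack_request("delivery-pipeline",
#                              "https://s3.amazonaws.com/my-bucket/pipeline.yml"))
```

Driving the pipeline itself from a template is what makes the setup repeatable across accounts, which is the point of publishing the templates in a GitHub repository.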
Serverless in production, an experience report (London js community)Yan Cui
AWS Lambda has changed the way we deploy and run software, but this new serverless paradigm has created new challenges to old problems - how do you test a cloud-hosted function locally? How do you monitor them? What about logging and config management? And how do we start migrating from existing architectures?
In this talk Yan and Diana will discuss solutions to these challenges by drawing from real-world experience running Lambda in production and migrating from an existing monolithic architecture.
The morning session: building out a facial recognition solution, with results ultimately stored in a blockchain database, using the AWS platform.
Johannesburg Pop-up Loft Workshop 14 March 2019.
Why does DevOps matter? How can you use continuous integration to build your product faster, make it more highly available, and be able to recover from bugs quickly? Let one of our solutions architects walk you through continuous integration and continuous delivery on AWS. This session includes live demos of our tools AWS CodeCommit, AWS CodePipeline, and AWS CodeDeploy.
Speaker: Leo Zhandovsky, Solutions Architect, Amazon Web Services
Recordings of the Canberra Summit sessions can be found here:
https://aws.amazon.com/events/anz/on-demand/canberra-summit/
Serverless in production, an experience report (microservices london)Yan Cui
AWS Lambda has changed the way we deploy and run software, but the serverless paradigm has created new challenges to old problems: How do you test a cloud-hosted function locally? How do you monitor them? What about logging and config management? And how do we start migrating from existing architectures?
Yan Cui shares solutions to these challenges, drawing on his experience running Lambda in production and migrating from an existing monolithic architecture.
Today’s cutting edge companies have software release cycles measured in days instead of months. This agility is enabled by the DevOps practice of continuous delivery, which automates building, testing, and deploying all code changes. This automation helps you catch bugs sooner and accelerates developer productivity. In this session, we’ll share best practices (including ones followed internally at Amazon) and how you can bring them to your company by using open source and AWS services.
Speaker: Raghuraman Balachandran, Solutions Architect, Amazon India
Itb 2021 - Building Quick APIs by Gavin PickinGavin Pickin
In this session we will use ColdBox’s built-in REST BaseHandler, and with CBSecurity and Quick ORM we will set up a secure API using a fluent query language - and you’ll see how quick Quick development can be!
How to Build Forecasting Services Using ML and Deep Learning Algorithms...Amazon Web Services
Forecasting is an important process for many companies and is used in many areas to accurately predict the growth and distribution of a product, the use of resources required on production lines, financial projections, and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, starting from the type of data analyzed, produces an accurate forecast.
Big Data for Startups: How to Create Big Data Applications in Serverless Mode...Amazon Web Services
The variety and quantity of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: creating large-scale Big Data clusters appears to be an investment accessible only to established companies. But the elasticity of the Cloud and, in particular, Serverless services allow us to break through these limits.
Let's see how it is possible to develop Big Data applications quickly, without worrying about the infrastructure, dedicating all our resources to developing our ideas and creating innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago Amazon went through a radical transformation aimed at increasing the pace of innovation. Over that period we learned how changing our approach to application development allowed us to greatly increase agility and release speed and, ultimately, enabled us to build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture but also the organizational structure, the development release pipelines, and even the operating model. We will also describe common approaches to modernization, including the approach used by Amazon.com itself.
How to Spend Up to 90% Less with Containers and Spot Instances Amazon Web Services
The use of containers is growing steadily.
If designed correctly, container-based applications are very often stateless and flexible.
The AWS services ECS, EKS, and Kubernetes on EC2 can take advantage of Spot Instances, leading to an average saving of 70% compared to On-Demand Instances. In this session we will explore the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of different types, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetize Open APIs, simplify Fintech integrations, and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda :
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Make Your Startup's Market Offering Unique with Machine Learning Services...Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and create the differentiating elements of your offering.
Focusing on Machine Learning technologies, we will see how to select the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: Automate the Management and Deployment of...Amazon Web Services
With the traditional approach to IT, implementing DevOps techniques was difficult for many years; until now they have often involved manual activities, occasionally leading to application downtime and interrupting user operations. With the advent of the cloud, DevOps techniques are now within everyone's reach at low cost for any kind of workload, guaranteeing greater system reliability and resulting in significant improvements to business continuity.
AWS provides AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Discover how to leverage AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory on AWS to Support Your Windows WorkloadsAmazon Web Services
Do you want to know the options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support Group Policy management, authentication, and authorization. In this session, we will discuss options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and deploying Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis leveraging artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore the possibilities offered by AWS services for applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are organizing a free virtual event next Wednesday, October 14th, from 12:00 to 13:00, dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Molte organizzazioni sfruttano i vantaggi del cloud migrando i propri carichi di lavoro Oracle e assicurandosi notevoli vantaggi in termini di agilità ed efficienza dei costi.
La migrazione di questi carichi di lavoro, può creare complessità durante la modernizzazione e il refactoring delle applicazioni e a questo si possono aggiungere rischi di prestazione che possono essere introdotti quando si spostano le applicazioni dai data center locali.
Crea la tua prima serverless ledger-based app con QLDB e NodeJSAmazon Web Services
Molte aziende oggi, costruiscono applicazioni con funzionalità di tipo ledger ad esempio per verificare lo storico di accrediti o addebiti nelle transazioni bancarie o ancora per tenere traccia del flusso supply chain dei propri prodotti.
Alla base di queste soluzioni ci sono i database ledger che permettono di avere un log delle transazioni trasparente, immutabile e crittograficamente verificabile, ma sono strumenti complessi e onerosi da gestire.
Amazon QLDB elimina la necessità di costruire sistemi personalizzati e complessi fornendo un database ledger serverless completamente gestito.
In questa sessione scopriremo come realizzare un'applicazione serverless completa che utilizzi le funzionalità di QLDB.
Con l’ascesa delle architetture di microservizi e delle ricche applicazioni mobili e Web, le API sono più importanti che mai per offrire agli utenti finali una user experience eccezionale. In questa sessione impareremo come affrontare le moderne sfide di progettazione delle API con GraphQL, un linguaggio di query API open source utilizzato da Facebook, Amazon e altro e come utilizzare AWS AppSync, un servizio GraphQL serverless gestito su AWS. Approfondiremo diversi scenari, comprendendo come AppSync può aiutare a risolvere questi casi d’uso creando API moderne con funzionalità di aggiornamento dati in tempo reale e offline.
Inoltre, impareremo come Sky Italia utilizza AWS AppSync per fornire aggiornamenti sportivi in tempo reale agli utenti del proprio portale web.
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareAmazon Web Services
Molte organizzazioni sfruttano i vantaggi del cloud migrando i propri carichi di lavoro Oracle e assicurandosi notevoli vantaggi in termini di agilità ed efficienza dei costi.
La migrazione di questi carichi di lavoro, può creare complessità durante la modernizzazione e il refactoring delle applicazioni e a questo si possono aggiungere rischi di prestazione che possono essere introdotti quando si spostano le applicazioni dai data center locali.
In queste slide, gli esperti AWS e VMware presentano semplici e pratici accorgimenti per facilitare e semplificare la migrazione dei carichi di lavoro Oracle accelerando la trasformazione verso il cloud, approfondiranno l’architettura e dimostreranno come sfruttare a pieno le potenzialità di VMware Cloud ™ on AWS.
Amazon Elastic Container Service (Amazon ECS) è un servizio di gestione dei container altamente scalabile, che semplifica la gestione dei contenitori Docker attraverso un layer di orchestrazione per il controllo del deployment e del relativo lifecycle. In questa sessione presenteremo le principali caratteristiche del servizio, le architetture di riferimento per i differenti carichi di lavoro e i semplici passi necessari per poter velocemente migrare uno o più dei tuo container.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
2. What to Expect from the Session
1. What we learned as we evolved our release processes
2. Overview of release process terminology
3. A tour of AWS CodePipeline
4. Look under the hood of AWS CodePipeline
5. Extending AWS CodePipeline
13. Release Processes have four major phases
Source → Build → Test → Production
• Source: check in source code (such as .java files); peer review of new code
• Build: compile code; unit tests; style checkers (FindBugs, CheckStyle); code metrics (Cobertura, EMMA); packaging (Docker)
• Test: integration tests with other systems; load testing; UI tests; penetration testing
• Production: incremental rollout to production environments
15. A real pipeline of a simple service
Build and Unit Test → Validation → Deployments
With increased confidence we increase the blast radius:
• Does it compile and pass unit tests?
• Does it integrate in an isolated stack?
• Does it integrate against prod?
• Does it integrate in production region 1?
• Does it integrate in production region 2?
• Deploy to prod
18. A build service is not enough
• Our release processes emphasize safety, so we have more steps
• Many CI systems hide the release process, making failures hard to find
• CI systems don’t provide needed modeling primitives
  • Serial and parallel execution
  • Easily add a new step to your process
  • Pause for manual approvals
  • Multiple deployment actions
  • Multiple source actions
• CI systems don’t allow multiple changes concurrently through the release process
23. CodePipeline concepts on the pipeline page
• Pipeline
• Stage
• Action
• Pipeline Run
• Source change:
  • starts a run; and
  • creates an artifact to be used by other actions.
• Manual Approval
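As a sketch of how these concepts nest, a pipeline is a list of stages, and each stage contains actions that consume and produce artifacts. The JSON below follows the shape used by CodePipeline's create-pipeline input; all names, the ARN, and the bucket are illustrative placeholders:

```json
{
  "pipeline": {
    "name": "MyAppPipeline",
    "roleArn": "arn:aws:iam::111111111111:role/CodePipelineRole",
    "artifactStore": { "type": "S3", "location": "my-artifact-bucket" },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "Source",
            "actionTypeId": { "category": "Source", "owner": "ThirdParty",
                              "provider": "GitHub", "version": "1" },
            "outputArtifacts": [ { "name": "MyApp" } ]
          }
        ]
      },
      {
        "name": "Build",
        "actions": [
          {
            "name": "Jenkins",
            "actionTypeId": { "category": "Build", "owner": "Custom",
                              "provider": "Jenkins", "version": "1" },
            "inputArtifacts": [ { "name": "MyApp" } ],
            "outputArtifacts": [ { "name": "MyAppBuild" } ]
          }
        ]
      }
    ]
  }
}
```

A source change produces the first artifact (MyApp here), which downstream actions consume by name.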
24. CI is a great start. CD with CodePipeline is better.
• Visualizes your release process so it can be understood
• Allows powerful modeling of your release process
  • Serial and parallel execution
  • Easily add a new step to your process
  • Pause for manual approvals
  • Multiple deployment steps
• Allows multiple changes to be processed concurrently
28. Extend AWS CodePipeline Using Custom Actions
• Update tickets
• Provision resources
• Update dashboards
• Mobile testing
• Send notifications
• Security scan
29. How would we send a message to Slack?
CodePipeline app pipeline:
• Source stage: Source action (GitHub)
• Build stage: JenkinsForReinvent action (Jenkins)
• Deploy stage: RailsApp action (Elastic Beanstalk)
30. AWS CodePipeline extension options
Per-account extensions – for customers
• Option 1: AWS Lambda function
• Option 2: Custom Actions
Global extensions – for AWS partners
• Option 3: Third Party Actions
32. Extend AWS CodePipeline with Lambda
Push a message to Slack when our pipeline run completes:
1. Add a Lambda Invoke stage
2. Select a “send message to Slack” function
3. Run the pipeline
35. Lambda example – include libs and handler
var AWS = require('aws-sdk');
var https = require('https');

exports.handler = function(event, context) {
  var cp = new AWS.CodePipeline();
  …
};
36. Lambda example – setup HTTP config
var httpParams = {
  hostname: 'slack.com',
  path: '/api/chat.postMessage?token=MYTOKEN&text=Hello&channel=%23testing',
  method: 'GET'
};
37. Lambda example – send message to Slack
// Send message to Slack
var req = https.request(httpParams, function(response) {
  response.on('data', function(c) {});
  response.on('end', sendResultToCodePipeline);
  response.resume();
});
req.end();
38. Lambda example – notify AWS CodePipeline
var sendResultToCodePipeline = function() {
  var jobId = event["CodePipeline.job"].id;
  cp.putJobSuccessResult({ jobId: jobId }, function(err, data) {
    if (err) { context.fail(err); }
    else { context.succeed("Passed"); }
  });
};
40. Custom Actions and Job Workers collaborate
A Custom Action in a pipeline stage is serviced by a Job Worker running on an EC2 instance:
1. Poll for Job
2. Acknowledge Job
3. Put Success
41. Creating a custom action and job worker takes 3 easy steps
1. Register your custom action in CodePipeline
2. Write your Custom Action
  • Integrate with an external service
  • Write a stand-alone Custom Action
3. Deploy the custom action
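The poll → acknowledge → put-success protocol above can be sketched as a single polling pass. The method names match the AWS SDK for JavaScript's CodePipeline client (pollForJobs, acknowledgeJob, putJobSuccessResult); the client and the processing task are injected here — an illustrative choice, not from the deck — so the protocol can be read and exercised without AWS credentials:

```javascript
// One pass of a job worker for a custom action: poll for a job matching
// our action type, acknowledge it, run the processing task, report success.
function runJobWorkerOnce(client, actionTypeId, processJob, done) {
  // 1. Poll for a job that matches our custom action type
  client.pollForJobs({ actionTypeId: actionTypeId, maxBatchSize: 1 }, function (err, data) {
    if (err || !data.jobs || data.jobs.length === 0) { return done(err || null, false); }
    var job = data.jobs[0];

    // 2. Acknowledge the job so no other worker processes it
    client.acknowledgeJob({ jobId: job.id, nonce: job.nonce }, function (err2) {
      if (err2) { return done(err2, false); }

      processJob(job); // e.g. post a notification to Slack

      // 3. Report success so the pipeline can move to the next action
      client.putJobSuccessResult({ jobId: job.id }, function (err3) {
        done(err3 || null, true);
      });
    });
  });
}
```

A real worker would run this pass in a loop with a delay between polls, and call putJobFailureResult when processing throws.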
43. Make your Custom Action available to users
1. Register your custom action in CodePipeline
2. Write your Custom Action
  • Integrate with an external service
  • Combine Custom Action and processing task
3. Deploy the custom action
44. CodePipeline
App pipeline with a custom action:
• Source stage: Source action (GitHub)
• Build stage: JenkinsOnEc2 action (Jenkins)
• Deploy stage: RailsApp action (Elastic Beanstalk) and a Custom Action

RegisterCustomAction.json
{
  "category": "Deploy",
  "provider": "Slack-Notifier",
  "version": "2",
  "settings": {
    "entityUrlTemplate": "https://codepipeline-demo.slack.com/messages/general/",
    "executionUrlTemplate": "https://codepipeline-demo.slack.com/archives/general/{ExternalExecutionId}"
  },
  "inputArtifactDetails": {
    "maximumCount": 0,
    "minimumCount": 0
  },
  "outputArtifactDetails": {
    "maximumCount": 0,
    "minimumCount": 0
  }
}
• category, provider, version: unique identifier information
• inputArtifactDetails: files to consume during the action
• outputArtifactDetails: files to produce during the action
45. Use the AWS CLI to register the Custom Action
$ aws codepipeline create-custom-action-type \
    --cli-input-json file://lib/custom_action/RegisterCustomAction.json
46. Write the code to talk to AWS CodePipeline and Slack
1. Register your custom action in CodePipeline
2. Write your Job Worker
  • Integrate with an external service, e.g. Slack
  • Combine with a processing task
3. Deploy the Job Worker
51. Deploy Job Worker code to a compute instance
1. Register your custom action in CodePipeline
2. Write your Job Worker
  • Integrate with an external service
  • Combine Job Worker and processing task
3. Deploy the Job Worker
52. Recap: creating a custom action and job worker
1. Register your custom action in CodePipeline
2. Write your Custom Action
  • Integrate with an external service
  • Combine Custom Action and processing task
3. Deploy the custom action
53. What extension method should I use?
Lambda:
• Short-running tasks are easy to build
• Long-running tasks need more work
• Node.js, Python and Java support
• Runs on AWS
• No servers to provision or manage
Custom Action:
• Can perform any type of workload
• Control over links displayed in console
• Any language support
• Can run on-premises
• Requires compute resources
54. What did we cover today?
• The benefits of moving to Continuous Delivery
  • We can get our software out in front of our users much more rapidly
  • By moving faster we can actually ensure better quality
• CodePipeline allows for integration with almost any service or tool you can think of!
• Plus visualization of what’s going on!
55. How you can try AWS CodePipeline
• Use your AWS account to create a free pipeline
• We have examples and a tutorial
• There is thorough documentation too
• We provide support in the forums
• More CodePipeline code in awslabs on github.com
59. Images
Haystack rock - https://commons.wikimedia.org/wiki/File:Haystack_rock_00022.jpg
Heatpipe tunnel copenhagen 2009 - https://commons.wikimedia.org/wiki/File:Heatpipe_tunnel_copenhagen_2009.jpg
Lewis Hine, Boy studying - https://commons.wikimedia.org/wiki/File:Lewis_Hine,_Boy_studying,_ca._1924.jpg
Cells - https://pixabay.com/en/stem-cell-sphere-163711/
Editor's Notes
- welcome everyone
- my name is Rob Brigham, and I'm here with members from the AWS Developer Tools group
- we build the tools that developers inside of Amazon use, as well as a new set of AWS tools that all of our customers can use
- today, we're going to talk about DevOps at Amazon, and give you an inside peek at how Amazon develops our web applications and services
I’ll share what Amazon learned as we adopted continuous delivery practices.
Next we’ll take a tour of AWS CodePipeline. I’ll start with an overview of CodePipeline concepts and then we’ll walkthrough of the product in the console.
We’ll then look under the hood of how work is coordinated and executed in CodePipeline.
Finally, we’ll show the extensibility and flexibility of CodePipeline by integrating a new service into AWS CodePipeline.
- now to make this more concrete, let's look at the story of Amazon's transformation to DevOps
- like most companies, we did not start out this way
In 2001, Amazon had already been a successful company for quite a few years.
We were growing fast and we were listening to what customers wanted, resulting in a continuous stream of new functionality to the Amazon.com website.
As the retail website continued to become more feature rich, we experienced an increasing number of issues when building and maintaining the site.
Building the website became hard, testing the website became hard and deploying also became hard.
Over time we had lost the agility of a startup.
We decided to divide
We broke into small teams
We broke our software into small pieces that could be fully owned and managed by each of our small teams
And We invented very flexible Build, Test and Deployment tools, that put control back in the hands of each team.
This meant that teams could now manage the end-to-end software development cycle by themselves and at their own pace.
These structural changes removed a bottleneck in our processes, and we continued to grow.
8 years later
In 2009, we had continued to grow rapidly
Teams had long since moved to being fully independent, small teams with the ability to prioritize their own work and deliver software at their own cadence.
We’d come a long way, but we felt we could be more efficient.
The problem is, that we weren’t sure of where our bottlenecks were.
So we conducted a study. “Understanding Amazon’s Software Development Process Through Data” in 2009 by John Rauser
We wanted to find out the steps, and timing of the steps, that were taken from code check-in through to code being available in production. This included the time it took to build, test and deploy our software.
We learned that this was taking a long time. In the order of weeks. And we didn’t want it take weeks to get a code change out to production.
What we did discover was our processes had lot of human, manual work in them which were taking most of the time. Developers would use tickets or emails to track their release process. Developers would ticket or email other developers to run a build at which point a bunch of requests would batch up before being run. Once the build was done, new tickets were cut to deploy their software. Those requests may also batch up, increasing the time it took for a change to reach production.
This was the problem we needed to solve. We needed to automate the production line of developer work so that humans were no longer causing developers to wait when that work could be automated away.
To solve our problem with manual coordination of software delivery we created Pipelines.
Pipelines automates the orchestration of work in the software development life cycle. The software production line.
Pipelines allowed teams to model their software release process. We built pipelines to be incredibly flexible. It is flexible enough for some of our largest products to use including S3, EC2 and the Amazon.com website.
With Pipelines, we now had a platform that could automate the coordination of build, test and deployments of software for all of Amazon.
Very successful internally. Used by over 90% of the teams.
The combination of:
the organizational changes
architectural changes; and
new tools like pipelines
Amazon was able to perform 50 million deployments last year or 1 every 1.6 seconds.
We learned a lot when moving to an automated release process.
We delivered software to customers Faster:
Pipelines allowed us to automate away the waiting time between tasks that had been present.
We reduced the amount of boring, error prone and repetitive work that humans had to do. And instead gave the repetitive work to computers. Computers are great at this type of work because they don’t get bored and they don’t make mistakes.
Teams that adopted pipelines saw the time it took to from code check-in to seeing that code in customer’s hands was now in the order of minutes instead of weeks.
We found that automated release processes were Safer:
In theory, continuous delivery does not reduce the number of mistakes that developers make, so we do not expect the rate of bugs per line of code checked in to change when teams adopt continuous delivery.
Generally, teams that automated their processes with Pipelines were seeing a reduction in customer facing errors
Because it now took minutes or hours for a change to get to customers, teams were pushing out smaller changes, more often. Smaller changes carry less risk of introducing new defects, which contributes to safer releases.
Another reason teams saw fewer errors was that many had decided to automate their existing test processes. The automated tests were integrated into the teams’ pipelines and the tests were continuously improved over time.
Visualizing your release process was key to improving it.
PROBLEM: Documenting your process is important, but sometimes words can be confusing.
Visualizing the processes made it easier to understand.
Once a team had modelled their process in pipelines, they could iterate on it. Inefficiencies could easily be identified as they were usually the manual steps, and teams could work on automating each manual step, one at a time until there were no manual steps left.
Release processes could now be inspected. More experienced engineers could help make other people’s release processes better. This built trust within the company that a team was following good deployment practices.
// The Agile community has been using visualization techniques, known as “Big Visible Charts”, such as burndown charts, story walls, parking lot diagram and story mapping to achieve the same results.
Much of the benefit of CD comes from process simplification and standardization
PROBLEM: In the past, teams would have one process for a bug fix and another process for a feature release. This could lead a team to push out a bug fix that would pass its targeted tests but fail in some other part of the system, causing a customer-impacting event. Teams often developed not just two, but many ways to release code, each with different quality standards, some of which would lead to outages as different people typed in different commands to test and deploy software.
In Pipelines there is one release process for one application. Whether you’re shipping a bug fix or a large new feature, the process for releasing software for an application is always the same.
(simplification) This caused many teams to revisit their release processes as they now needed a single, standardized process in order to automatically release software. A common outcome for teams was that their release processes were simplified, often dramatically.
(standardization) Standardizing a release process meant that all software releases now went through the same quality checks every time. This was a contributor to increased quality as any confusion around the processes was removed.
(consistency) Automating the process also meant that there were fewer opportunities for people to make errors. Builds, tests and deployments were always triggered the same way with the same parameters. Teams wouldn’t build the wrong software or make mistakes due to humans typing in different information from one release to the next. This was another factor in increasing quality.
I want to take a moment to talk about different release processes.
Each team’s release process takes a different shape to accommodate the needs of each team.
Nearly all release processes can be simplified down to four stages – source, build, test and production. Each phase of the process provides increase confidence that the code being made available to customers will work in the way that was intended.
During the source phase, developers check changes into a source code repository. Many teams require peer feedback on code changes before shipping code into production. Some teams use code reviews to provide peer feedback on the quality of code change. Others use pair programming as a way to provide real time peer feedback.
During the Build phase an application’s source code is built and the quality of the code is tested on the build machine. The most common type of quality check are automated tests that do not require a server in order to execute and can be initiated from a test harness. Some teams extend their quality tests to include code metrics and style checks. There is an opportunity for automation any time a human is needed to make a decision on the code.
The goal of the test phase is to perform tests that cannot be done during the build phase and require the software to be deployed to production-like stages. Often these tests include testing integration with other live systems, load testing, UI testing and penetration testing. At Amazon we have many different pre-production stages we deploy to. A common pattern is for engineers to deploy builds to a personal development stage where they can poke and prod their software running in a mini production-like stage to check that their automated tests are working correctly. Teams then deploy to pre-production stages where their application interacts with other systems to ensure that the newly changed software works in an integrated environment.
Finally code gets deployed to production. Different teams have different deployment strategies though we all share a goal of reducing risk when deploying new changes and minimizing the impact if a bad change does get out to production.
Each of these steps can be automated without the entire release process being automated. There are several levels of release automation that I’ll step through.
Continuous Integration
Continuous Integration is the practice of checking in your code to the mainline branch on a daily basis and verifying each change with an automated build and test process. Over the past 10 years Continuous Integration has gained popularity in the software community. In the past developers were working in isolation for an extended period of time and only attempting to merge their changes into the mainline of their code once their feature was completed. Batching up changes to merge back into the mainline made not only merging the business logic hard, but it also made merging the test logic difficult. Continuous Integration practices have made teams more productive and allowed them to develop new features faster. Continuous Integration requires teams to write automated tests which, as we learned, improve the quality of the software being released and reduce the time it takes to validate that the new version of the software is good.
There are different definitions of Continuous Integration, but the one we hear from our customers is that CI stops at the build stage, so I’m going to use that definition.
Continuous Delivery
Continuous Delivery extends Continuous Integration to include testing out to production-like stages and running verification testing against those deployments. Continuous Delivery may extend all the way to a production deployment, but they have some form of manual intervention between a code check-in and when that code is available for customers to use.
Continuous Delivery is a big step forward over Continuous Integration, allowing teams to gain a greater level of certainty that their software will work in production.
Continuous Deployment
Continuous Deployment extends continuous delivery and is the automated release of software to customers from check-in through to production without human intervention. Many of the teams at Amazon have reached a state of continuous deployment. Continuous Deployment reduces the time for your customers to get value from the code your team has just written, with the team getting faster feedback on the changes you’ve made. This fast customer feedback loop allows you to iterate quickly, delivering more valuable software to your customers, sooner.
Let’s look at a real pipeline at Amazon. This pipeline deploys code to production.
Our goal is to release software to production quickly and safely. The pipeline is designed to build increasing confidence that our change is safe.
The first check completed is to ensure the change builds and the unit tests pass.
The next check ensures the new artifact works when integrated with other, dependent services. We run integration tests against an isolated stack to achieve this goal. Engineers also use this stack to debug integration issues.
We then check that each of the services to be deployed to production works against the current production stack. Here we combine the new code with the production configuration to identify both code and config bugs.
We then deploy to a single-host subset of production, a one-box. We always run more than one server in production so that we have redundancy, and we always strive to provide the best experience for our customers.
Once we gain confidence we then deploy to the rest of the production fleet within the region.
We repeat the one-box and production deployments for each region.
AWS has wide pipelines because we incrementally roll out to each region.
CI servers have grown beyond simple compilation and unit test execution. Build servers, or Continuous Integration servers, now have pluggable deployment functionality too.
But… CI systems made it hard for us to see the root cause of a release failure because it was all hidden inside the build logs.
When our AWS customers asked what we do that may be useful to them, we looked internally and realized that we had a lot of great tools that allowed us to move quickly and safely. We realized that we'd made something very special with Pipelines and we wanted to make it available to you. CodePipeline is the externalization of Pipelines, and it allows you to release software like Amazon does.
CodePipeline builds, tests and deploys your code, every time there is a code change, based on the release process you define.
We have partnered with popular source, build, test and deployment systems to provide out-of-the-box integrations.
Jenkins, CloudBees, and Solano offer CI services for build stages.
BlazeMeter, Apica, HP StormRunner, and Runscope are load-testing partners.
GhostInspector is a user-interface testing partner.
GitHub is a source code partner.
Xebia Labs is a deployment partner.
(only show this screen briefly while I bring up the console)
https://console.aws.amazon.com/codepipeline/home?region=us-east-1#/view/SampleAppPipeline
OPEN BROWSER
Dashboard page:
The CodePipeline homepage shows the pipelines that your team has already built. You can also create a new Pipeline from this page. Let’s take a closer look at a pipeline.
Pipelines:
A pipeline represents the workflow of your release process. We've built CodePipeline to be very flexible in the way you can configure your workflow.
Artifact:
Artifacts are the files that are passed through a pipeline. For instance, when a pipeline is first triggered, a source artifact is created and placed in an S3 bucket. I'll talk more about these when we extend CodePipeline.
Pipeline Revisions/Run:
Each time a new change is committed to your source location, a new revision is triggered. The new code change passes through all steps in the pipeline. A pipeline can have multiple revisions flowing through it at the same time. Pipeline runs can also be started manually by releasing a change.
Stage:
A stage is a collection of one or more actions.
Transitions:
Stages in a pipeline are connected by transitions, which are represented by arrows on the console. Transitions between stages can be disabled or enabled.
Action:
An action, or plugin, is a task that acts upon the current revision running through the pipeline. You can configure actions to execute in a specific order, either in serial or in parallel.
Each action has two links. The first link is underneath the action name and links back to the action's webpage. CodePipeline provides a summary of the action's information, but for a more detailed look at the configuration of your action you can go to the action's page. For example, if I had a test action, this link would take me to my test suite definition.
The second link shows the details of the last pipeline run. Here you can get details on what occurred the last time the action performed its task. Keeping with the previous example, if I had a test action, this link would take me to the results of the last execution of my test suite.
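Putting these concepts together, here is a minimal sketch of how stages, actions, and artifacts relate inside a pipeline definition, roughly the shape `get-pipeline` returns. All names here ("MyApp", "SourceBundle", and so on) are hypothetical, not from the demo:

```python
# A sketch of a two-stage pipeline: a Source stage producing an artifact
# that a Build stage consumes. Names are made up for illustration.
pipeline = {
    "name": "MyApp",
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "Checkout",
                "actionTypeId": {"category": "Source", "owner": "ThirdParty",
                                 "provider": "GitHub", "version": "1"},
                "outputArtifacts": [{"name": "SourceBundle"}],
            }],
        },
        {
            "name": "Build",
            "actions": [{
                "name": "JenkinsBuild",
                "actionTypeId": {"category": "Build", "owner": "Custom",
                                 "provider": "Jenkins", "version": "1"},
                "inputArtifacts": [{"name": "SourceBundle"}],   # consumes Source's output
                "outputArtifacts": [{"name": "BuildOutput"}],
            }],
        },
    ],
}

def artifact_flow(pipeline):
    """Return (producer_stage, artifact, consumer_stage) triples."""
    producers = {}
    flows = []
    for stage in pipeline["stages"]:
        for action in stage["actions"]:
            for art in action.get("inputArtifacts", []):
                flows.append((producers[art["name"]], art["name"], stage["name"]))
            for art in action.get("outputArtifacts", []):
                producers[art["name"]] = stage["name"]
    return flows
```

The key idea is that stages don't pass data directly; artifacts named by one action's outputs and another action's inputs are what connect them through the S3 bucket.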
Configurable Workflow
CodePipeline is also easy to configure. We can edit this pipeline and modify an existing action or add in a new one. You'll see we have actions categorized into source, build, test, and deployment actions, with many partners to choose from. You can also add your own actions to these lists, as I'll show when we extend CodePipeline.
I just showed you what CodePipeline looks like from the outside
Let's look inside and see how CodePipeline processes a run.
Let's take a look at an example pipeline. I've created a simple 3-stage pipeline to talk through my example.
Source actions are special actions. They continuously poll the source providers, such as GitHub and S3, in order to detect changes. Once a change is detected, a new pipeline run is created and begins executing. The source action retrieves a copy of the source and places it into a customer-owned S3 bucket.
Once the source action is completed, the Source stage is marked as successful and we transition to the Build stage.
In the Build stage we have one action, Jenkins. Jenkins was integrated into CodePipeline as a custom action and has the same lifecycle as all custom actions. (Talk through the interaction.)
Once the build action is completed, the Build stage is marked as successful and we transition to the Deploy stage
The Deploy stage contains one action, an AWS Elastic Beanstalk deployment action. The Beanstalk action retrieves the build artifact from the customer’s S3 bucket and deploys it to the Elastic Beanstalk web container.
Talk about why we want to extend CodePipeline – provide some reasoning.
This is the user experience
Why?
Shows how easy it is to integrate into AWS CodePipeline
It's fun
The user experience that we’ll get when using a Lambda Function
Let’s quickly run through what occurs in a custom action.
Here is a Custom Action as shown in CodePipeline.
CLICK.
Here is an EC2 instance with a service that processes an artifact in a pipeline
Poll for job
Ack job
Do custom logic, the magic.
Put Success
When the custom action polls for a job, the job contains information on the input and output artifacts, if there are any. The custom action can then download a copy of the input artifact and produce an output artifact as specified in the action definition through the console.
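The poll/ack/work/succeed loop above can be sketched as follows. This is a minimal sketch assuming a boto3-style CodePipeline client (`poll_for_jobs`, `acknowledge_job`, and `put_job_success_result` are the real API operations); the client is injected, so in practice you'd pass `boto3.client("codepipeline")`, and `do_work` stands in for your custom logic:

```python
def process_one_job(cp, action_type_id, do_work):
    """One iteration of a custom action worker: poll, ack, work, report success.

    cp       -- a CodePipeline-style client, e.g. boto3.client("codepipeline")
    do_work  -- callable given the job's input-artifact metadata; returns an
                external execution id used for the run's details link
    """
    jobs = cp.poll_for_jobs(actionTypeId=action_type_id,
                            maxBatchSize=1).get("jobs", [])
    if not jobs:
        return None                                  # nothing to do this poll
    job = jobs[0]
    # Claim the job so no other worker processes it.
    cp.acknowledge_job(jobId=job["id"], nonce=job["nonce"])
    # Do the custom logic -- the "magic" step.
    execution_id = do_work(job["data"].get("inputArtifacts", []))
    # Report success; externalExecutionId renders as the run-details link.
    cp.put_job_success_result(
        jobId=job["id"],
        executionDetails={"externalExecutionId": execution_id,
                          "percentComplete": 100},
    )
    return job["id"]
```

A real worker would run this in a loop with a sleep between polls, and call `put_job_failure_result` on errors.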
That’s a quick run through. We’ll revisit this again in a few moments when we build our own custom action.
What do we need to do in order to build a custom action?
Register an Action
Write the code to post a message to our messaging app
Deploy the code
We’re going to add a new custom action in the deploy stage that will send a message to our messaging App, Slack.
The Custom Action will
poll for jobs
acknowledge the job
send a motivational message to our messaging app
and then return successfully.
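The "send a motivational message" step itself can be as small as an HTTP POST to a Slack incoming webhook. A stdlib-only sketch; the webhook URL and the message wording are placeholders, not from the demo:

```python
import json
import urllib.request

def build_payload(pipeline_name, stage):
    """Payload for a Slack incoming webhook -- the wording is made up."""
    return {"text": f"Nice work! {pipeline_name} just reached the {stage} stage."}

def post_to_slack(webhook_url, payload):
    """POST the JSON payload to the webhook and return the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (placeholder URL -- create a real one on Slack's incoming-webhooks page):
# post_to_slack("https://hooks.slack.com/services/...",
#               build_payload("SampleAppPipeline", "Deploy"))
```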
This is the high level architecture of what we’re going to build.
What do we need to do in order to build a custom action that keeps our team at Bespoke Suits for Dogs motivated?
Let’s talk through what we’re going to build
Register an Action
Write the code to post a message to our messaging app
Deploy the code
OPEN BROWSER – start codepipeline run
aws codepipeline create-custom-action-type --cli-input-json file://lib/custom_action/RegisterCustomAction.json
Go to CodePipeline and add in the custom action
Start a run.
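The `create-custom-action-type` call above reads the action definition from `lib/custom_action/RegisterCustomAction.json`. That file isn't shown in the talk; here is a plausible minimal version, generated from Python, with hypothetical category, provider, and property values:

```python
import json

# A plausible minimal RegisterCustomAction.json -- all values here
# (provider name, configuration property, URL template) are hypothetical.
custom_action = {
    "category": "Deploy",                  # Source | Build | Test | Deploy
    "provider": "DogbotNotifier",          # appears in the console's action list
    "version": "1",
    "settings": {
        # Rendered as the link under the action's name in the console.
        "entityUrlTemplate": "https://example.com/dogbot/{Config:ProjectName}",
    },
    "configurationProperties": [{
        "name": "ProjectName", "required": True, "key": True,
        "secret": False, "queryable": False,
        "description": "Project to announce", "type": "String",
    }],
    # A notification action needs at most one input and no outputs.
    "inputArtifactDetails": {"minimumCount": 0, "maximumCount": 1},
    "outputArtifactDetails": {"minimumCount": 0, "maximumCount": 0},
}

with open("RegisterCustomAction.json", "w") as f:
    json.dump(custom_action, f, indent=2)
```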
The dogbot_says_hi method contains the heart of the logic of the custom action. It's not important what occurs in here, so I'm going to skip over it and keep our focus on the work needed for CodePipeline integration.
The execution_id is passed back to CodePipeline and is used to render the URL for the pipeline run on the action.
It's deployed to an Elastic Beanstalk worker. I'm using the EB worker because it has built-in pollers and is a good fit for custom actions that don't need a UI.
I’m actually deploying the custom action with another pipeline that I’ve previously setup. I won’t show it today as I don’t want to introduce more complexity to the talk.
To wrap up:
We showed you how easy it is to extend CodePipeline
and how straightforward it is to integrate with an existing service.
We’ve had one customer write their custom action in cron and bash.
What we learned as we evolved our release processes
Overview of release processes
A tour of AWS CodePipeline
Look under the hood of AWS CodePipeline
Extending AWS CodePipeline
Give CodePipeline a try. The first pipeline is free.
We have good documentation online on how our product works, getting started and diving deeper into building custom actions
Come and talk to us in the forums. We're active in the AWS forums and we're always happy to help.
We have more code samples in awslabs, including a custom action example.