I presented this at JavaOne 2011 on October 6th. It discusses some of the problems related to environment provisioning that enterprise Java developers face and how the new Platform-as-a-Service (PaaS) product from Amazon Web Services called Elastic Beanstalk can solve some of those problems.
Amazon Webservices for Java Developers - UCI Webinar, by Craig Dickson
Amazon Web Services (AWS) offers IT infrastructure services to businesses in the form of web services - now commonly known as cloud computing. AWS is an ideal platform to develop on and host enterprise Java applications, due to the zero up front costs and virtually infinite scalability of resources. Learn basic AWS concepts and work with many of the available services. Gain an understanding of how existing JavaEE applications can be migrated to the AWS environment and what the advantages are. Discover how to architect a new JavaEE application from the ground up to leverage the AWS environment for maximum benefit.
Java PaaS Vendor Survey - September 2011, by Craig Dickson
Cloud computing is revolutionizing the software development industry, no more so than in the Java application space.
The first generation of cloud computing has been focused on virtualizing and managing infrastructure resources such as machines, networks, operating systems and servers.
The emerging 2nd generation of cloud computing brings an abstraction layer over that 1st generation where we see a movement away from low level system resources and instead focus on the application layer. The Platform-as-a-Service model allows developers to concentrate more on application development and then deploy that application to a managed application execution environment in the cloud without needing to deal with provisioning and configuring machines, operating systems and application servers.
The Platform-as-a-Service market for Java applications has exploded in 2011 with a flurry of vendors announcing offerings and a lot of merger and acquisition activity.
Let's take a look at where Java Platform-as-a-Service stands today.
Best Practices for Large-Scale Web Sites, by Craig Dickson
This document outlines best practices for designing large-scale websites based on lessons learned from eBay's architecture and operations. The key principles discussed are: (1) Partition everything into manageable chunks by data, load, or usage to improve scalability, availability, and manageability; (2) Use asynchrony wherever possible to improve scalability, availability, and latency; (3) Automate everything to improve scalability, availability, and reduce costs; (4) Assume everything will fail and design for resilience, rapid failure detection and recovery, and graceful degradation.
There are several different deployment services on Amazon Web Services, including OpsWorks, ECS, and Elastic Beanstalk. The speaker shares his company's experience with these services and some real-world use cases.
Eric Holmes from Remind discussed building an internal Platform as a Service (PaaS) called Empire using Docker and Amazon EC2 Container Service (ECS). Remind started on Heroku but encountered issues with scaling and visibility. Empire provides a management layer on top of ECS for deploying and scaling microservices. It implements a subset of the Heroku API and provides a single binary and CLI. Empire is running 15 of Remind's production services on ECS with improved performance over Heroku. A demo was shown of deploying a sample app with Empire.
The “Twelve-Factor” application model has come to represent twelve best practices for building modern, cloud-native applications. With guidance on things like configuration, deployment, runtime, and multiple service communication, the Twelve-Factor model prescribes best practices that apply to everything from web applications to APIs to data processing applications. Although serverless computing and AWS Lambda have changed how application development is done, the “Twelve-Factor” best practices remain relevant and applicable in a serverless world. In this talk, we’ll apply the “Twelve-Factor” model to serverless application development with AWS Lambda and Amazon API Gateway and show you how these services enable you to build scalable, low cost, and low administration applications.
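To make factor III (store config in the environment) concrete, here is a minimal sketch of a Java Lambda handler that reads its settings from environment variables instead of from code or bundled files. The variable names TABLE_NAME and STAGE are illustrative assumptions, not part of any API; the handler interface comes from the aws-lambda-java-core library.

// Hypothetical handler illustrating Twelve-Factor config (factor III):
// all deploy-specific settings come from environment variables, which
// Lambda lets you set per function without changing the artifact.
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

public class ConfigAwareHandler implements RequestHandler<Map<String, String>, String> {

    // TABLE_NAME and STAGE are assumed names used only for this sketch.
    private final String tableName = System.getenv("TABLE_NAME");
    private final String stage = System.getenv().getOrDefault("STAGE", "dev");

    @Override
    public String handleRequest(Map<String, String> input, Context context) {
        context.getLogger().log("stage=" + stage + " table=" + tableName);
        return "handled in stage " + stage;
    }
}

Because the values live in the function's configuration, the same build can be promoted unchanged from staging to production, which is exactly the separation the Twelve-Factor model asks for.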
Building a data warehouse with Amazon Redshift … and a quick look at Amazon ..., by Julien SIMON
This document provides a summary of a presentation about building data warehouses with Amazon Redshift and using Amazon Machine Learning. The presentation discusses how Amazon Redshift can be used to build a petabyte-scale data warehouse with SQL and no system administration. Case studies are presented showing companies saving on total cost of ownership by migrating to Amazon Redshift. It also briefly introduces Amazon Machine Learning for building predictive models with managed services. Demo examples are shown of loading data into Redshift and using ML to train a regression model and create a real-time prediction API.
Running Microservices and Docker on AWS Elastic Beanstalk - August 2016 Month..., by Amazon Web Services
In this session, we introduce you to a solution for easily running a Docker-powered microservices architecture on AWS using Elastic Beanstalk. We will also cover the fundamentals of Elastic Beanstalk and how it benefits developers looking for a quick and scalable way to get their applications running on AWS with no infrastructure work required.
Building a microservices architecture using Docker can require a lot of work, from launching and operating the underlying infrastructure to installing and maintaining cluster management software. With AWS Elastic Beanstalk’s multicontainer support feature, many of these tasks are simplified and abstracted away so you can focus on your application code. AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker.
Learning Objectives:
• Learn the basics of AWS Elastic Beanstalk
• Understand how to use Elastic Beanstalk to run containerized applications
• Learn how to use Elastic Beanstalk to start architecting microservices-based applications
Microservices Architecture for Web Applications using Amazon AWS Cloud, by Mitoc Group
Large Web Applications are by nature resource intensive, expensive to customize, and difficult to manage at scale. What if we could change this perception and help developers architect web applications that are high performance and low cost, high security and low maintenance? This talk will focus on 3 key topics: 1) serverless infrastructure, 2) microservices architecture and 3) hands-on demos. We will describe a serverless solution and propose a scalable architecture that will help the Generator Hub community adopt a cloud-native approach without huge effort or expensive resource allocation.
The document provides best practices and recommendations for securing resources in AWS. It advises that users should:
1) Grant least privilege to IAM roles and policies, use private subnets, and avoid public buckets or open security groups.
2) Rely on managed AWS services instead of maintaining resources like databases on EC2 instances directly.
3) Implement infrastructure as code and immutable infrastructure to ensure consistency and reliability of deployments.
4) Keep application state in services like ElastiCache instead of on individual instances to ensure high availability.
5) Leverage AWS services, documentation, and community resources to continuously improve security practices.
Scale Your Application while Improving Performance and Lowering Costs (SVC203..., by Amazon Web Services
Scaling your application as you grow should not mean slow to load and expensive to run. Learn how you can use different AWS building blocks such as Amazon ElastiCache and Amazon CloudFront to “cache everything possible” and increase the performance of your application by caching your frequently-accessed content. This means caching at different layers of the stack: from HTML pages to long-running database queries and search results, from static media content to application objects. And how can caching more actually cost less? Attend this session to find out!
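As a hedged illustration of the "cache everything possible" idea, the sketch below applies the classic cache-aside pattern against an ElastiCache Redis endpoint using the open-source Jedis client. The endpoint, key scheme, TTL, and loadFromDatabase() helper are all assumptions for the sketch, not details from the talk.

// A minimal cache-aside sketch: try Redis first, fall back to the slow
// source on a miss, then populate the cache with an expiry.
import redis.clients.jedis.Jedis;

public class ProductCache {

    private static final int TTL_SECONDS = 300; // expire entries after 5 minutes

    // Assumed ElastiCache endpoint; replace with your cluster's address.
    private final Jedis jedis = new Jedis("my-cluster.abc123.use1.cache.amazonaws.com", 6379);

    public String getProductJson(String productId) {
        String key = "product:" + productId;
        String cached = jedis.get(key);              // 1. try the cache first
        if (cached != null) {
            return cached;                           // cache hit: no database work
        }
        String fresh = loadFromDatabase(productId);  // 2. cache miss: hit the source
        jedis.setex(key, TTL_SECONDS, fresh);        // 3. populate with a TTL
        return fresh;
    }

    private String loadFromDatabase(String productId) {
        // Placeholder for the long-running query the talk suggests caching.
        return "{\"id\":\"" + productId + "\"}";
    }
}

The same shape works at other layers of the stack the session mentions: the "database" call could just as well be a search query or a rendered HTML fragment.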
Deploy, scale and manage your application with AWS Elastic Beanstalk, by Amazon Web Services
AWS Elastic Beanstalk provides an easy way to quickly deploy, manage, and scale applications in the AWS cloud. Through interactive demos, this session will discuss the best practices for deploying and scaling your application, provisioning additional AWS resources and performance tuning.
(DVO306) AWS CodeDeploy: Automating Your Software Deployments, by Amazon Web Services
So you’ve written some code. Now what? How do you make it available to your customers in an efficient and reliable manner? Learn how you can use AWS CodeDeploy to easily and quickly push your application updates. This talk will introduce you to the basics of CodeDeploy: key concepts, how it works, where it fits in your release process, and some deployment strategies to get you started on the right foot. We’ll walk through several demos, going from a basic sample deployment to a live update of a large multi-instance fleet, giving you a sense for how CodeDeploy can grow with your needs.
Webinar - Big Data: Let's SMACK - Jorg Schad, by Codemotion
The document discusses big data processing and the SMACK stack. It introduces Mesosphere and Apache Mesos as enabling distributed applications by multiplexing workloads across servers. It then covers the components of the SMACK stack - Apache Kafka for ingestion, Apache Spark for analysis, Apache Cassandra for storage, and Akka for acting on data. It discusses choosing messaging and stream processing systems and highlights Mesos support.
This document discusses options for running SQL Server workloads on AWS, including using Amazon RDS and Amazon EC2. It provides a high-level overview of the features and capabilities of SQL Server when used with each AWS service. Key points include:
- Amazon RDS provides a managed service for deploying SQL Server, handling tasks like maintenance, patching, backups and high availability. EC2 provides an unmanaged option where the customer handles these tasks.
- Both RDS and EC2 support multiple versions of SQL Server. RDS automates tasks while EC2 gives more control over the SQL instance.
- High availability options with RDS include multi-AZ deployments for automatic failover. With EC2, customers configure high availability themselves, for example with SQL Server Always On Availability Groups.
Continuous Delivery to Amazon ECS - AWS August Webinar Series, by Amazon Web Services
Keeping consistent environments across your development, test, and production systems can be a complex task. Docker containers offer a way to develop and test your application in the same environment in which it runs in production. You can use tools such as Docker Compose for local testing of applications; Jenkins and AWS CodePipeline for building and workflow orchestration; and Amazon EC2 Container Service to manage and scale containers. In this session, you will learn how to build containers into your continuous deployment workflow and orchestrate container deployments using Amazon ECS. Join us to: - Learn to integrate containers into CI/CD flows - Orchestrate continuous delivery workflows using AWS CodePipeline - Schedule containers on production clusters using Amazon ECS Who should attend: Developers, DevOps engineers, and admins who want to understand how to integrate containers in a CI/CD workflow. Working knowledge of containers and Docker is required. Knowledge of AWS services is preferred, but not required.
Managing WorkSpaces at Scale | AWS Public Sector Summit 2016, by Amazon Web Services
Amazon WorkSpaces provides businesses with secure, managed desktops in the Amazon cloud, and offers an enhanced security posture, the ability to support the needs of a modern mobile workforce, and the flexibility to scale globally. In this session, you’ll hear about how organizations can simplify end user computing by moving desktops to the cloud. The session will cover identity and access management, network access and design, integration with on-premises IT infrastructure, application delivery, and the end user experience, including a generalized deployment model and an "office in a box" with a deconstructed network. You will also hear first-hand from customers who have implemented WorkSpaces, along with best practices for deploying Amazon WorkSpaces at scale.
Deep Dive on AWS Lambda - January 2017 AWS Online Tech Talks, by Amazon Web Services
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service - all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app. In this session, we dive deep into AWS Lambda to learn about capabilities, features and benefits.
Learning Objectives:
• Dive deep into AWS Lambda
• Learn about the capabilities, features and benefits of AWS Lambda
• Learn about the different use cases
• Learn how to get started using AWS Lambda
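The session description above notes that Lambda functions can be called directly from any web or mobile app. As a rough sketch under that premise, the snippet below invokes a function synchronously with the AWS SDK for Java (v1); the function name hello-world and the payload are assumptions.

// Invoke a Lambda function from application code and decode its reply.
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.model.InvokeResult;

import java.nio.charset.StandardCharsets;

public class InvokeLambdaExample {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

        InvokeRequest request = new InvokeRequest()
                .withFunctionName("hello-world")          // assumed function name
                .withPayload("{\"name\":\"JavaOne\"}");   // JSON event payload

        InvokeResult result = lambda.invoke(request);     // synchronous invocation
        String response = StandardCharsets.UTF_8
                .decode(result.getPayload())
                .toString();
        System.out.println("Lambda returned: " + response);
    }
}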
Presented at ServerlessConf NYC 2016.
There are a number of open source projects built around closed platforms like AWS Lambda/Google Cloud Functions and open serverless projects like OpenWhisk and LeverOS. In this talk we'll cover what motivates contributors, what sends them running the other direction, and how you can help your project grow. Building a project on top of closed technology is an extra challenge without insight into where it's going. Learn how to manage continuous integration with your project against your (closed) dependencies and make sure bugs stay fixed.
(DVO313) Building Next-Generation Applications with Amazon ECS, by Amazon Web Services
Two trends are driving app development: The shift from the server-based web to rich applications that run on a diverse set of mobile devices and modern browsers, and the growth of microservices running in the cloud that serve these clients. The results are “connected clients” - apps with the processing power of the device that are statefully connected and scaled to the cloud. In this session, you will learn about the architecture for Meteor's JavaScript app platform, Galaxy, which uses Amazon ECS, Elastic Load Balancing, and AWS CloudFormation to provide highly available, scalable, isolated environments for stateful apps across browsers and devices. We will discuss the essential characteristics of the platform, how those are provided for, and why we decided to use Amazon ECS instead of alternatives, such as Kubernetes. We will also demonstrate the Galaxy system in production.
A 60-mn tour of AWS compute (March 2016), by Julien SIMON
This document summarizes a 60-minute talk on AWS compute technologies including EC2, ECS, Lambda, and Elastic Beanstalk. The talk provides an introduction to each service, demos of launching EC2 instances, deploying apps with Elastic Beanstalk and ECS, and implementing APIs with Lambda. It also lists upcoming user group events and a new book on AWS Lambda.
(DVO305) Turbocharge Your Continuous Deployment Pipeline with Containers, by Amazon Web Services
This document outlines best practices for using containers in a continuous delivery pipeline. It recommends using containers with tools like Docker, Docker Compose, Amazon ECS, Jenkins, and AWS CodePipeline to build, test, and deploy applications. The workflow involves developing code in a source code repository, building Docker images, running tests inside containers, and deploying containers to production using Amazon ECS and AWS services for automation and orchestration of the pipeline. Demo applications and architectures are presented to illustrate container-based continuous delivery.
This document discusses the rise of serverless architectures. It begins by defining serverless computing and functions as a service (FaaS), where code is deployed and automatically scales in response to events or triggers, with the vendor handling provisioning and management of servers. Examples of use cases for FaaS include APIs, bots, file processing, and more. While advantages include scalability and paying only for usage, limitations include statelessness and cold starts. The document outlines the serverless ecosystem and frameworks and how serverless is changing business models, architectures, and operations practices in a more distributed, event-driven way.
Continuous Delivery with AWS Lambda - AWS April 2016 Webinar Series, by Amazon Web Services
Managing the deployment of code to multiple AWS Lambda functions and updating your API Gateway methods can be manual and time consuming.
In this webinar, we will show you how to build a deployment pipeline to AWS Lambda using AWS CodePipeline. We will discuss how to use versioning, allowing you to better manage the different variations of your Lambda function and API Gateway methods in your development workflow, such as development, staging, and production. We will walk through how to automate the entire release process of your application from development to staging and finally to production, performing automated integration tests at each stage.
Learning Objectives:
Understand the basics of AWS CodePipeline
Learn how to version AWS Lambda functions and API Gateway methods
Build a deployment pipeline to AWS Lambda
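Here is a hedged sketch of the versioning flow this webinar describes, using the AWS SDK for Java (v1): publish the current code as an immutable numbered version, then repoint a prod alias at it so that API Gateway stages tracking the alias pick up the release. The function name orders-api and the alias name are assumptions for the sketch.

// Promote the current $LATEST code to production via versions and aliases.
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.PublishVersionRequest;
import com.amazonaws.services.lambda.model.PublishVersionResult;
import com.amazonaws.services.lambda.model.UpdateAliasRequest;

public class PromoteToProd {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

        // Freeze the current code as a numbered, immutable version.
        PublishVersionResult version = lambda.publishVersion(
                new PublishVersionRequest().withFunctionName("orders-api"));

        // Repoint the prod alias; staging and production aliases can
        // track different versions of the same function.
        lambda.updateAlias(new UpdateAliasRequest()
                .withFunctionName("orders-api")
                .withName("prod")
                .withFunctionVersion(version.getVersion()));

        System.out.println("prod now serves version " + version.getVersion());
    }
}

In a pipeline, a step like this would run after the automated integration tests the webinar mentions, so only verified versions ever reach the prod alias.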
Building Python Serverless Applications with AWS Chalice - AWS Online Tech..., by Amazon Web Services
AWS Lambda makes it easy for you to run your code in the cloud, without managing servers. In this session, we will show you how to build a development pipeline for a serverless application using AWS Chalice and AWS Lambda. Using Chalice, we will show you how to author a RESTful service and deploy the application to multiple stages using AWS CodePipeline, AWS CodeBuild and the Serverless Application Model. We will teach you how to test your code and troubleshoot issues. By the end of the session, you will have enough information to build a solid continuous delivery pipeline for your Python serverless application.
AWS Architecting Cloud Apps - Best Practices and Design Patterns, by Jinesh Varia (Amazon Web Services)
Jinesh Varia, Technology Evangelist, discusses AWS architecture best practices and design patterns at the AWS Enterprise Tour - SF - 2010
http://jineshvaria.s3.amazonaws.com/public/cloudbestpractices-jvaria.pdf
This document summarizes a technical briefing on development and testing in the cloud by Jeff Barr, Senior Web Services Evangelist at AWS. The briefing discusses using AWS services like EC2, RDS, EBS, and CloudFormation for development, continuous integration, load testing, and compatibility testing. Key points include spinning up dev environments on demand, treating infrastructure as code, and leveraging services like EC2 Spot Instances and snapshots to repeatably test applications at low cost under varying conditions. Q&A followed the briefing.
The document discusses architectural patterns and best practices for building scalable and resilient applications on Amazon Web Services (AWS). It provides examples of how to design for failure, implement loose coupling between components, and build elasticity into applications using AWS services like Auto Scaling, Elastic Load Balancing, and Amazon EC2. The document also outlines three approaches for creating standardized technology stacks and managed development environments on AWS.
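To make the loose-coupling principle tangible, here is a minimal sketch in which a producer hands work to an Amazon SQS queue instead of calling a worker directly, so the two components can fail and scale independently. It uses the AWS SDK for Java (v1); the queue name and message format are assumptions.

// Loose coupling via a queue: the producer returns immediately, and a
// separately scaled consumer fleet drains the queue at its own pace.
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;

public class LooseCouplingExample {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = sqs.getQueueUrl("image-resize-jobs").getQueueUrl();

        // Producer: hand off the job without waiting for it to finish.
        sqs.sendMessage(queueUrl, "{\"imageKey\":\"uploads/cat.jpg\"}");

        // Consumer (typically an Auto Scaling worker fleet): poll, process, delete.
        for (Message m : sqs.receiveMessage(queueUrl).getMessages()) {
            System.out.println("processing " + m.getBody());
            sqs.deleteMessage(queueUrl, m.getReceiptHandle());
        }
    }
}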
NWCloud Cloud Track - Best Practices for Architecting in the Cloud, by nwcloud
The document discusses best practices for cloud architecture based on lessons learned from Amazon Web Services customers. It provides guidance on designing systems for failure, loose coupling, elasticity, security, leveraging constraints, parallelism, and different storage options. The key lessons are applied to migrating a sample web application architecture to AWS.
This presentation compares three modern architecture patterns that startups are building their businesses around. It includes a realistic analysis of cost, team management, and security implications of each approach. It covers AWS Elastic Beanstalk, Amazon ECS, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, and Amazon CloudFront. Attendees will also hear from venture capital investor Third Rock Ventures (TRV), which has launched 40+ biotech startups over the last 10 years. TRV will outline how it launches cloud-native startups that turn bleeding-edge science into new treatments across the spectrum of disease, with highlights drawn from Relay Therapeutics and Tango Therapeutics.
The immutable pattern in IT infrastructure architecture: building your own OSes and containers to deliver software.
Examples of delivery pipelines, and the pros and cons of containers versus configuration managers: Docker, Ansible, Chef, AWS CloudFormation, GCE, Terraform.
This document provides an overview of topics that will be covered at a Microsoft Dev Camp in 2015. The topics include introductions to ASP.NET, Visual Studio web tools, ASP.NET Web API, building real-time web applications with SignalR, and Azure services. Sessions will cover web front-end development, ASP.NET updates, consuming and building Web APIs, and real-world scenarios for scaling, updating, and deploying applications on Azure.
by Itzik Paz, Solutions Architect & Rich Cowper, Solutions Architect Manager, AWS
This presentation compares three modern architecture patterns that startups are building their businesses around. It includes a realistic analysis of cost, team management, and security implications of each approach. It covers AWS Elastic Beanstalk, Amazon ECS, Docker, Amazon API Gateway, AWS Lambda, Amazon DynamoDB, and Amazon CloudFront.
Leo Zhadanovsky - Building Web Apps with AWS CodeStar and AWS Elastic Beansta..., by Amazon Web Services
Developers need to quickly develop, build, and deploy web applications. In this session, we show you how AWS CodeStar makes it easy for you to set up a continuous delivery toolchain and start developing on AWS in minutes. We also share best practices for managing and deploying web applications using AWS Elastic Beanstalk.
Speaker: Leo Zhadanovsky
This document discusses how to deploy a Java web application to Windows Azure Cloud Services. It covers:
- Setting up the development environment with Java, Eclipse, and the Azure SDK.
- Creating a dynamic web project and adding the Azure deployment project.
- Configuring the deployment to include the JDK, Tomcat, and WAR files.
- Testing the application locally using the Azure emulator.
- Publishing the application to the Azure cloud.
- Additional topics like remote debugging, managing the cloud service, and using Azure services like SQL, storage, caching and CDN.
Developers need to quickly develop, build, and deploy web applications. In this session, we show you how AWS CodeStar makes it easy for you to set up a continuous delivery toolchain and start developing on AWS in minutes. We also share best practices for managing and deploying web applications using AWS Elastic Beanstalk.
Len Henry
Sr. Solutions Architect, AWS
This document provides an overview of AWS Elastic Beanstalk, including:
- Elastic Beanstalk is a PaaS service that makes it easy to deploy and manage applications in the AWS cloud.
- It allows developers to focus on coding instead of managing infrastructure. Elastic Beanstalk automatically handles scaling and maintenance of the application.
- The key components of Elastic Beanstalk include applications, versions, environments, and tiers. Environments run specific versions of an application and can include a web server tier and worker tier, as the sketch below illustrates.
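As a rough illustration of those components, the sketch below uses the AWS SDK for Java (v1) to register a new application version from a WAR file in S3 and then roll an environment onto it. The application, environment, bucket, and key names are assumptions for the sketch.

// Create an immutable application version, then deploy it to an environment.
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClientBuilder;
import com.amazonaws.services.elasticbeanstalk.model.CreateApplicationVersionRequest;
import com.amazonaws.services.elasticbeanstalk.model.S3Location;
import com.amazonaws.services.elasticbeanstalk.model.UpdateEnvironmentRequest;

public class DeployNewVersion {
    public static void main(String[] args) {
        AWSElasticBeanstalk eb = AWSElasticBeanstalkClientBuilder.defaultClient();

        // An application groups versions; a version is an immutable code bundle.
        eb.createApplicationVersion(new CreateApplicationVersionRequest()
                .withApplicationName("my-java-app")
                .withVersionLabel("v42")
                .withSourceBundle(new S3Location("my-build-bucket", "builds/app-v42.war")));

        // An environment runs exactly one version; updating it triggers a deploy.
        eb.updateEnvironment(new UpdateEnvironmentRequest()
                .withEnvironmentName("my-java-app-prod")
                .withVersionLabel("v42"));
    }
}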
Sponsored Session: Please touch that dial! by Edward Burns
Enterprise Java on Azure, from PaaS to IaaS and everything in between. Join Java Champion and Principal Architect Ed Burns to learn how to select the right Enterprise Java on Azure solution for your needs. Whether you are moving your Java enterprise to the cloud, evolving once you get it there, or starting fully cloud native, there are many factors to consider. Of course, there are the usual suspects of price, time, and effort. But there are also additional factors such as balancing complexity and maintainability, staffing (the level of involvement of systems integrators, contractors, and in-house staff), license portability. Don't forget functional factors such as high availability and disaster recovery, and quality-of-service guarantees. Azure offers a complete range of enterprise Java solutions, like turning a dial. For maximum ease, let Azure manage all the complexity for you with Azure Spring Apps, Azure App Service, or Azure Functions Java. If you want more control, consider Jakarta EE solution templates, or running Spring on App Service. For maximum control, run your enterprise Java directly on Azure runtimes like Kubernetes, Open Shift, or Virtual Machines. Ed examines the tradeoffs in these choices from an enterprise architect's perspective.
AWS Webcast - Best Practices in Architecting for the Cloud, by Amazon Web Services
Join us to get a better understanding around architecting scalable, reliable applications for the cloud. You'll learn about monitoring, alarming, automatic scaling, load balancing, replication, and more, direct from AWS Senior Evangelist Jeff Barr.
AWS Summit 2013 | Auckland - Continuous Deployment Practices, with Production..., by Amazon Web Services
With AWS companies now have the ability to develop and run their applications with speed and flexibility like never before. Working with an infrastructure that can be 100% API driven enables businesses to use lean methodologies and realize these benefits. This in turn leads to greater success for those who make use of these practices. In this session we'll talk about some key concepts and design patterns for Continuous Deployment and Continuous Integration, two elements of lean development of applications and infrastructures.
In this session, we introduce you to a solution for easily running a Docker-powered microservices architecture on AWS using Elastic Beanstalk. We will also cover the fundamentals of Elastic Beanstalk and how it benefits developers looking for a quick and scalable way to get their applications running on AWS with no infrastructure work required. In the second half of the session Sean O’Brien, engineer at Prezi, will share how Prezi is using Elastic Beanstalk to build microservices for its entire development team.
Building a microservices architecture using Docker can require a lot of work, from launching and operating the underlying infrastructure to installing and maintaining cluster management software. With AWS Elastic Beanstalk’s multicontainer support feature, many of these tasks are simplified and abstracted away so you can focus on your application code. AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. Elastic Beanstalk leverages Amazon EC2 Container Service for its container management capabilities.
AWS DevDay San Francisco, June 21, 2016.
Presenter: Adhiraj Singh, Sr. Product Manager
Developers need to quickly develop, build, and deploy web applications. In this session, we show you how AWS CodeStar makes it easy for you to set up a continuous delivery toolchain and start developing on AWS in minutes. We also share best practices for managing and deploying web applications using AWS Elastic Beanstalk.
Similar to Dead-Simple Deployment: Headache-Free Java Web Applications in the Cloud
Rapid RESTful Web Applications with Apache Sling and Jackrabbit, by Craig Dickson
This is the presentation from JavaOne 2011 that Ruben Reusser and I worked on. The presentation was heavily demonstration based, so there are not as many slides.
The document discusses JDBC (Java Database Connectivity) basics including connecting to a database, executing SQL statements to create, update, query, and delete data. It covers establishing a connection using the DriverManager, executing SQL using Statement objects, and includes examples for creating a table, inserting rows, updating rows, selecting rows, and deleting rows. The document is intended as a 20 minute introduction to JDBC fundamentals.
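Here is a compact, runnable sketch of the fundamentals that summary lists: obtain a connection from DriverManager, create a table with a Statement, insert a row with a PreparedStatement, and read rows back through a ResultSet. The in-memory H2 URL is an assumption so the example runs without a database server; any JDBC driver on the classpath works the same way.

// JDBC basics: connect, create, insert, query - all with try-with-resources.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcBasics {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
            }
            try (PreparedStatement insert =
                         conn.prepareStatement("INSERT INTO users (id, name) VALUES (?, ?)")) {
                insert.setInt(1, 1);          // bind parameters by position
                insert.setString(2, "Duke");
                insert.executeUpdate();
            }
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT id, name FROM users")) {
                while (rs.next()) {           // iterate the result set row by row
                    System.out.println(rs.getInt("id") + " -> " + rs.getString("name"));
                }
            }
        }
    }
}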
How to test drive development using Linux, by Craig Dickson
This is a lightning presentation given by Cardell Rice that demonstrates how easy it is to test drive an Ubuntu Linux install from a USB drive, without disrupting the main OS on your machine.
This is a lightning presentation given by Anita Barabe to our team introducing the new Google Wave tool and got us talking about how we might leverage it to the team's benefit.
Flex 4 focused on design, developer productivity, and framework evolution. It included updates to Flash Builder, Flash Catalyst, new Spark components, improved layout and animation engines, 3D capabilities, FXG vector graphics, updated MXML, states functionality, ASDoc support, binding updates, and text engine improvements. Flash Builder provided an improved debugger and profiling support. Flash Catalyst allowed designing user interfaces without coding. Spark included around 30 new components. The layout model was decoupled from individual components and gained 2D rotations, scalability, and 3D capabilities. The animation engine improved effects, transitions, and complex animations.
This is a lightning presentation given by Gorkey Vemulapalli to our team introducing the basics of Palm's new WebOS platform being used on the Palm Pre device.
Java Persistence API (JPA) - A Brief Overview, by Craig Dickson
This is a lightning presentation given by Scott Rabon, a member of my development team. He presents a high-level overview of JPA based on his first exposure to it.
Fast and Free SSO: A Survey of Open-Source Solutions to Single Sign-on, by Craig Dickson
This document provides a summary of a presentation on single sign-on (SSO) solutions. It begins with an overview of the goals of presenting on open source SSO solutions and providing a comparison. The agenda then covers what SSO is, a survey of major open source SSO players like OpenSSO, JOSSO and CAS, head-to-head comparisons of the solutions, and leaves time for questions. Specific points covered include configurations, architectures, integration capabilities and customization options for each solution.
Building Social Applications using Zembly, by Craig Dickson
Zembly allows users to easily create and host social applications like widgets and Facebook apps directly in the browser. It provides an IDE-like editor for writing code in HTML, CSS, JavaScript and other languages to build applications that can tap into social networks. Users can also create reusable services containing business logic to be published and used by other applications. Once built, applications can be automatically published and hosted on Zembly for use on social platforms.
This is a lightning presentation given by Nhan Nguyen to our team for the purpose of knowledge sharing in support of our efforts to create a culture of learning.
Performance Analysis and Monitoring with Perf4j, by Craig Dickson
This is a lightning presentation given by Sudhan Kanade to our team for the purpose of knowledge sharing in support of our efforts to create a culture of learning.
This is a lightning presentation given by Sean Chung to our team to summarize a presentation he saw at JavaOne 2009. Sean also adds a slight spin to the original presentation by including Adobe Flex as an additional comparison axis.
AppSec PNW: Android and iOS Application Security with MobSF, by Ajin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Monitoring and Managing Anomaly Detection on OpenShift, by Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system (a minimal Java metrics sketch follows this outline).
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
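As a minimal sketch of topic 8 above (exposing application metrics for Prometheus to scrape), the snippet below uses the Prometheus Java simpleclient to publish a counter and a gauge over HTTP. The metric names, the port, and the random "score" standing in for a model's output are assumptions for the sketch.

// Expose anomaly-detection metrics on :8000/metrics for Prometheus to scrape.
// Requires the simpleclient and simpleclient_httpserver artifacts.
import io.prometheus.client.Counter;
import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.HTTPServer;

import java.io.IOException;
import java.util.Random;

public class AnomalyMetrics {
    static final Counter ANOMALIES = Counter.build()
            .name("anomalies_detected_total").help("Anomalies flagged so far.").register();
    static final Gauge LAST_SCORE = Gauge.build()
            .name("last_anomaly_score").help("Most recent anomaly score.").register();

    public static void main(String[] args) throws IOException, InterruptedException {
        HTTPServer server = new HTTPServer(8000); // scrape endpoint stays up for the process lifetime
        Random random = new Random();
        while (true) {
            double score = random.nextDouble();   // stand-in for a model's output
            LAST_SCORE.set(score);
            if (score > 0.95) {
                ANOMALIES.inc();                  // count events over the threshold
            }
            Thread.sleep(1000);
        }
    }
}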
[O'Reilly Superstream] Occupy the Space: A grassroots guide to engineering (an..., by Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-Efficiency, by ScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Fueling AI with Great Data with Airbyte Webinar, by Zilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
Essentials of Automations: Exploring Attributes & Automation Parameters, by Safe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
"Frontline Battles with DDoS: Best practices and Lessons Learned", Igor IvaniukFwdays
In this talk we will discuss DDoS protection tools and best practices, network architectures, and what AWS has to offer. We will also look into one of the largest DDoS attacks on Ukrainian infrastructure, which happened in February 2022, and see what techniques helped keep web resources available for Ukrainians and how AWS improved DDoS protection for all customers based on the experience in Ukraine.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, which go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
2. Abstract The cloud has promised a lot to Java Web developers but has delivered on only some of the hype. Many issues still exist that have the ability to kill many a project. Elastic Beanstalk, a Web service announced by Amazon in early 2011, takes the cloud to the next level for Java Web applications. It aims to eliminate the remaining issues the cloud presents. No hardware purchases? Check! Low setup costs? Check! No software installation? Check! Automatic resource scaling? Check! Resource monitoring? Check! This presentation takes a deep dive into Amazon's Elastic Beanstalk service, including what problems it can help solve and opportunities it provides to deliver better Java Web applications.
3. Speaker Bio Craig S. Dickson is a software engineering professional with over 15 years of experience. He has proven leadership experience in both domestic and multi-national start-up and Fortune 500 corporations in the United States, Australia and Europe. Craig specializes in enterprise Java development and cloud architecture and holds multiple certifications including Sun Certified Architect for JavaEE and Certified ScrumMaster. Craig brings specific expertise in enterprise software architecture and design, refining development processes and building development teams around Agile software engineering principles. Educated in Australia, Craig holds a BSc(Hons) in Computer Science. He is based in Huntington Beach, CA, and Brisbane, Australia.
4. Presentation Outline What are the common problems related to developing and deploying a Java web app? How does Amazon Elastic Beanstalk (EB) attempt to address those problems? Do I have to change how I write my Java apps? What existing applications will work on EB? How much does EB cost? What are the alternatives to EB?
5. The Downsides of being an Enterprise Java Engineer
6. Early Stage Development Goal: isolated self-contained local development, quick code-test-code cycles How to demonstrate an individual’s work? How to demonstrate combined work? What about clients outside your firewall? How to push quick (automated) updates?
7. What about Test and Stage? Goal: quick, consistent, production-like environment provisioning Enough resources to set up 2 more environments? Enough resources to make them production-like? What if your testers are not local? A clean environment for each round of tests? Will QA get angry if I run performance tests?
8. What about Production? Goal: minimize cost, high availability, high reliability, monitoring, automated zero downtime upgrades How do I migrate from staging to the production environment? What if it’s a success, how do I handle the traffic? What if it is a bomb, what do I do with all this hardware? It’s 2am, is my application up and running? The network card on one of my servers just died, how long will it take to replace?
9. How does Elastic Beanstalk address these problems?
10. What is Elastic Beanstalk? Platform-as-a-Service (PaaS) offering from Amazon Web Services (AWS) Built on top of existing AWS infrastructure Launched January 2011, still officially Beta First (and currently only) supported application architecture is Java Web Applications on Tomcat Ruby rumored to be next language to be supported
16. Currently Supported Platforms Application Servers: Tomcat 6, Tomcat 7. OS: Amazon Linux, 32- and 64-bit versions, RHEL/CentOS based (binary compatibility with RHEL 5); only interesting if you intend to use SSH
17. Logging Configured to use java.util.logging by default Can use a meta-framework like SLF4J Standard Tomcat logs are created (catalina.out etc) Can view snapshot of logs from management console Can be rotated out to S3 hourly Can SSH to instances to view live logs
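To make that portability concrete, here is a minimal sketch (not from the original deck) of coding against the SLF4J facade so the binding can route to java.util.logging on EB's default Tomcat setup; the class and messages are illustrative:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Illustrative class; any code in the WAR can log this way.
    public class CheckoutService {

        // SLF4J facade; with the slf4j-jdk14 binding on the classpath it
        // routes to java.util.logging and hence to the standard Tomcat
        // logs (catalina.out etc.) that EB can snapshot or rotate to S3.
        private static final Logger log = LoggerFactory.getLogger(CheckoutService.class);

        public void processOrder(String orderId) {
            log.info("Processing order {}", orderId);
            try {
                // ... business logic ...
            } catch (RuntimeException e) {
                log.error("Order {} failed", orderId, e); // last arg is logged as the exception
                throw e;
            }
        }
    }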
18. Database Integration EB does not directly manage your databases Can use Amazon SimpleDB Can also use Amazon RDS Can also use an RDBMS running on an Amazon EC2 instance Can also use a non-Amazon based database Pass JDBC connection info to app through console
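As a rough illustration of the last point, a sketch (not from the deck) of reading the connection string inside the application, assuming it is supplied as the JDBC_CONNECTION_STRING container property, which EB's Tomcat container exposes as a Java system property:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public final class DatabaseConfig {

        private DatabaseConfig() {}

        // Values entered in the Container Parameters section of the EB
        // console surface as JVM system properties; JDBC_CONNECTION_STRING
        // is the conventional one for the database URL.
        public static Connection openConnection() throws SQLException {
            String url = System.getProperty("JDBC_CONNECTION_STRING");
            if (url == null) {
                throw new IllegalStateException(
                        "JDBC_CONNECTION_STRING not set in the environment configuration");
            }
            return DriverManager.getConnection(url);
        }
    }

Running locally, the same property can be supplied with -DJDBC_CONNECTION_STRING=jdbc:mysql://... so no code changes are needed between environments.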
19. Monitoring and Notification Uses Amazon CloudWatch (CW) to monitor your environment Can trigger scaling events based on metrics: CPU utilization, network traffic, etc. EB adds a health check feature on top of CW Must have at least 1 URL that can be accessed without security Uses Amazon SNS to publish events: scaling up / down, health check failures
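Since the health check URL must respond without authentication, a minimal sketch (not from the deck) of a servlet the load balancer can probe; the /health mapping is an arbitrary choice:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Map this servlet to an open path (e.g. /health in web.xml) and point
    // the environment's health check URL at it.
    public class HealthCheckServlet extends HttpServlet {

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // Optionally verify critical dependencies (database, caches) here;
            // any non-2xx response marks the instance as unhealthy.
            resp.setStatus(HttpServletResponse.SC_OK);
            resp.setContentType("text/plain");
            resp.getWriter().write("OK");
        }
    }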
20. Auto-Scale Based on CW events, your environment can be scaled up or down automatically Can configure what CW events trigger scaling Can set minimum and maximum instance counts Can control up / down instance increment Also can control how scaling events are generated
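For illustration, a sketch of adjusting the instance bounds programmatically with the AWS SDK for Java; the aws:autoscaling:asg namespace carries the Auto Scaling group's MinSize/MaxSize options, and the environment name and sizes below are placeholders:

    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient;
    import com.amazonaws.services.elasticbeanstalk.model.ConfigurationOptionSetting;
    import com.amazonaws.services.elasticbeanstalk.model.UpdateEnvironmentRequest;

    public class ScalingConfig {
        public static void main(String[] args) {
            // Credentials come from the SDK's default provider chain.
            AWSElasticBeanstalk eb = new AWSElasticBeanstalkClient();

            // Set the minimum and maximum instance counts for an environment.
            eb.updateEnvironment(new UpdateEnvironmentRequest()
                    .withEnvironmentName("my-prod-env")
                    .withOptionSettings(
                            new ConfigurationOptionSetting("aws:autoscaling:asg", "MinSize", "2"),
                            new ConfigurationOptionSetting("aws:autoscaling:asg", "MaxSize", "8")));
        }
    }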
21. Environment Hot Swap Imagine you have an existing EB production environment Now you have a new version ready to go live sitting in another EB environment (e.g. staging) EB supports URL swapping: staging becomes production and production becomes staging Results in zero application downtime
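The same swap can be driven from code; a sketch using the AWS SDK for Java's SwapEnvironmentCNAMEs operation, with placeholder environment names:

    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient;
    import com.amazonaws.services.elasticbeanstalk.model.SwapEnvironmentCNAMEsRequest;

    public class UrlSwap {
        public static void main(String[] args) {
            AWSElasticBeanstalk eb = new AWSElasticBeanstalkClient();

            // Swaps the CNAMEs of the two environments so the production URL
            // points at the freshly tested staging stack, with zero downtime.
            eb.swapEnvironmentCNAMEs(new SwapEnvironmentCNAMEsRequest()
                    .withSourceEnvironmentName("myapp-staging")
                    .withDestinationEnvironmentName("myapp-prod"));
        }
    }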
23. How much do I have to change my development process?
24. Developing for EB Java WAR files only Stick to the standard JavaEE rules (don’t read/write the local file system, etc.) Code for a clustered environment Connections to external resources: configure via AWS management console
25. Eclipse Integration Eclipse plugin to integrate with EB Create new Applications and Environments right from Eclipse Deploy new Versions just by deploying to EB instead of a local Tomcat First time deployment can take several minutes Rapid code-test-code cycles should still be done locally Careful about deleting the wrong environment
29. Command Line & Maven Options All EB functions are available through a command line API Maven plugin written by Aldrin Leal http://beanstalker.ingenieux.com.br/ Wraps entire EB command line API Excellent for faking EB support in other IDEs (NetBeans etc) Excellent for command-line builds, continuous integration (Jenkins) etc
30. Which of my Existing Applications will work on Elastic Beanstalk?
32. Custom Amazon Machine Image (AMI) Category D applications from the previous slide need/want to access local resources and/or files Jenkins is an excellent example of this type Needs Maven, Ant and other tools to be available Needs to write build logs, archive artifacts etc Start with base EB AMI, then customize Specify your own AMI ID in the Environment Configuration
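As an illustration of the last bullet, a sketch of pointing an environment at a custom AMI via the launch configuration's ImageId option; the AMI ID and environment name are placeholders:

    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalk;
    import com.amazonaws.services.elasticbeanstalk.AWSElasticBeanstalkClient;
    import com.amazonaws.services.elasticbeanstalk.model.ConfigurationOptionSetting;
    import com.amazonaws.services.elasticbeanstalk.model.UpdateEnvironmentRequest;

    public class CustomAmiConfig {
        public static void main(String[] args) {
            AWSElasticBeanstalk eb = new AWSElasticBeanstalkClient();

            // The custom AMI must be derived from one of the official EB AMIs
            // so the EB health-check and scaling hooks remain present.
            eb.updateEnvironment(new UpdateEnvironmentRequest()
                    .withEnvironmentName("my-jenkins-env")
                    .withOptionSettings(new ConfigurationOptionSetting(
                            "aws:autoscaling:launchconfiguration", "ImageId", "ami-12345678")));
        }
    }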
33. How much is it going to cost me?
34. Cost Sources EB itself is FREE! Pay as you go - for all other AWS services EC2 for running environments S3 for storing the versions of the application Elastic Load Balancer instance and data charges EBS to back the EC2 instance plus for any custom AMIs you create Outbound traffic
35. Example Price Breakdown Adapted from: http://aws.amazon.com/elasticbeanstalk/#pricing
36. Are there alternatives to Elastic Beanstalk that I should consider?
37. Virtualization Can’t I just get the same benefits with an in-house virtualization solution? Still need to set up the OS and Tomcat environment at least once. What about security and other updates? Can an in-house solution scale appropriately / automatically?
38. Other PaaS Vendors Isn’t this just the same thing that other PaaS vendors already do? EB allows low level system access Some PaaS vendors are running on AWS infrastructure anyway If you need more than a Tomcat environment right now, then other vendors may be the way to go JavaEE server, Ruby, PHP etc
39. Summary Amazon Elastic Beanstalk is a PaaS solution built on top of the existing Amazon IaaS solutions Currently offers Tomcat 6 & 7 environments Often requires no modification to existing code Has good tooling support Can be extended through low-level system access Solves many day-to-day problems for developers
40. Next Steps Sign up for an EB account and deploy the sample application http://aws.amazon.com/elasticbeanstalk/ Join the EB community forum https://forums.aws.amazon.com/forum.jspa?forumID=86 Check EB related resources on my blog http://craigsdickson.com/tag/elastic-beanstalk/
41. Questions? Craig S. Dickson Email - craig@craigsdickson.com Blog - http://craigsdickson.com LinkedIn – http://bit.ly/csd-li Twitter – http://bit.ly/csd-tw
Editor's Notes
Good morning, and thank you for making it here this morning after last night's activities; I certainly appreciate the effort. My name is Craig Dickson. I am an independent enterprise Java consultant and an adjunct professor at the University of California, Irvine.
In today's presentation we will quickly look at some of the common problems that enterprise Java engineers face in their day-to-day work when setting up environments for development, testing and production. Then we will take a deep look at how Amazon Web Services' new tool, Elastic Beanstalk, can help with the problems we identified. Next we will examine the impact using EB will have on your development process and on how you write your Java applications. We will try to answer what it will cost you to get access to those cloud features. And finally, we will take a quick look at some of the alternatives to Elastic Beanstalk in the Java Platform-as-a-Service space.
Being an enterprise Java engineer is an excellent job. However, there are many little annoyances that slow us down and make our jobs harder than they need to be on a day-to-day basis. Let's take a quick look at a few of these related to provisioning and managing runtime environments and deploying applications into them.
Let's talk about the activities an enterprise Java developer has to complete during the early parts of the development process. The ideal scenario is a self-contained development environment hosted entirely on your laptop, which lets you write code, test it and write more code in a tight loop; the tighter that loop, the better. You may go around it more than a hundred times in a typical day, so any step that makes the loop take longer can be devastating to your productivity. Any solution therefore has to preserve your ability to do local development.

But once you have written your code, how do you demonstrate it? If you want to demo it to your buddy in the next cube, you will probably just shout at him to come over and look at the coding miracle you have just performed. But what if you need to demonstrate your work to someone further away, like a client in another city or even another country? You could start by asking your sys admin to punch a hole in the firewall and forward a port to your development machine. After they stop laughing at you, the next option might be to get them to set up a "demo" environment, which takes somewhere between three days and never, because they are busy too and may have slightly different goals. Next you have to hand over your WAR file, which is of course too big to send by email, so you have to get the right credentials to FTP the file up to some box and then email the sys admin to let them know it is there. Then the sys admin will deploy it, just as soon as they get back from lunch. At this point hours or even days have passed. If the deployment fails, you get to start the whole process over again.

What if another developer on your team needs to demonstrate their work to an external client at the same time? Maybe the two applications can be installed side by side in your demo environment, but more often than not they can't, especially when you are both working on the same project. Rare is the engineer who hasn't heard the cry from their sales team: "no one touch the demo environment for the rest of the day, Mega Client X is going to be onsite."

Of course, you work for a company that values quality software, so you have a continuous integration strategy in place, and your Jenkins server also needs a clean environment to push updates to and run your integration tests on. Just ask your friendly local sys admin to set up a continuous integration environment for you, and get ready to do a lot of explaining.
Depending on what kind of development process you are following, the testing stage may start earlier or later in your project, but no matter when it starts, some common problems often arise. While you were in the early stage of development, your application server was running locally on your dev machine; you were probably using it just as it came configured and had it nicely integrated with your IDE. When it comes to testing, though, you are going to need a whole separate environment, and hopefully one that is representative of, or identical to, your production environment. Hopefully your sys admin has an environment ready for your QA team.

What if you need to run some performance tests: can you run those at the same time the QA team is working, or do you need yet another environment? What if you are testing more than one version at a time? How do you make sure all of the QA environments are configured the same?

At some point, once the application reaches a certain level of quality, you might want a staging environment where you can do final customer demonstrations or maybe even some kind of beta-release testing. How do you make sure the staging environment is up and running quickly? How do you make sure it, too, is like production and like QA?
And finally we reach production. Once you have functionally tested, performance tested and end-user tested your application to death, not to mention private and public beta releases, how do you ensure that the production environment is identical to your testing and staging environments? Every little OS setting, every application server setting, every database setting: there is a lot to keep track of.

Let's say you do get your production environment up and running. What happens if you get slashdotted or become the next Twitter trending topic: can your environment handle a massive, rapid scale-up? Do you have the hardware available? How quickly can your sys admin get that hardware installed, the operating system running, the application server installed and configured, the application deployed, and so on? The list is quite long, and your 15 minutes of fame might be over before any of that can happen.

The more common problem, of course, is that your shiny new product is dead on arrival. If you spent a lot of your budget on hardware, all you have now is a really expensive set of room heaters, and to make things worse, you don't have any money left to hire the developers you need to fix the problems with your product.

What if you are in the middle, with some mild success, enough to keep going at least? You probably whipped your team to within an inch of their lives to get your product out there, so now they want to take a whole weekend off, get some sleep, play some World of Warcraft, or work on their own Facebook-killer side project. Is that really possible? Can your team walk away from your production environment for any length of time? How do you know if your application and your servers are running and performing as expected? Does someone need to monitor the log files for menacing-looking stack traces, or log in every couple of hours to see if the site is up? How long can that last?

Finally, your accountant might consider your physical hardware an asset, but when it breaks on a Saturday afternoon, it is going to feel a lot more like a liability to your IT team.
OK, that's a quick look at some of the problems with deploying and managing Java environments that engineering teams face during the normal development, test and release lifecycle of a software product. Now let's look at how Amazon Web Services' Elastic Beanstalk can help minimize or even eliminate these issues, so you can concentrate on developing your product instead of managing environments.
Firstly, what is Elastic Beanstalk? Elastic Beanstalk is a Platform-as-a-Service solution for Java development from Amazon Web Services. For clarity: at a minimum, a Java Platform-as-a-Service solution provides a JVM-based execution environment, built on top of production-ready cloud infrastructure, that allows for the easy deployment of applications without having to provision or configure low-level system resources. There are a few vendors in this space, and I will mention them on a later slide. Not surprisingly, Amazon has built its PaaS solution on top of its existing proven infrastructure: services like EC2 and S3 provide the functionality that Elastic Beanstalk delivers. Right now Elastic Beanstalk only supports Java web applications that can run in a Tomcat 6 or Tomcat 7 environment. Amazon is an investor in Engine Yard, a Platform-as-a-Service provider in the Ruby space among other things, and an Engine Yard employee at their booth at JavaOne this year confirmed that they are working with Amazon to bring Ruby to Elastic Beanstalk, but there are no public dates.
Amazon Web Services pioneered the Infrastructure-as-a-Service market with its initial EC2 offering and the additional services added over the last several years. The Platform-as-a-Service model is an abstraction layer over the Infrastructure-as-a-Service layer that hides many of the details of provisioning individual services. When working at the IaaS layer, developers must still talk in terms of servers, load balancers, redundancy and availability, even though it is all virtualized. When working at the PaaS layer, developers simply talk about applications and the environments those applications run on. Elastic Beanstalk uses many of Amazon's existing Infrastructure-as-a-Service products to deliver the kind of robust functionality necessary for a reliable Platform-as-a-Service product. In addition to the services Elastic Beanstalk uses directly, developers can make use of other Amazon Web Services products such as SimpleDB and the Relational Database Service.
The main component in Elastic Beanstalk is the Application. You can deploy multiple versions of your WAR file into a single Elastic Beanstalk Application; in this diagram there are four different versions of the same WAR file available. To run a Version of your application, you create an Environment. An Elastic Beanstalk Environment consists of a set of configuration options that control how the resources within your application are created, how they work and how they react to certain events, such as heavy load. In this example, four Environments are configured for the Application. The Environment represents all of the Infrastructure-as-a-Service resources your application will use. This is different from many of the other PaaS vendors, where you do not get to see and manipulate the underlying resources directly. Much of the time the default settings are acceptable, but you always have the option to change the configuration manually. Here, Version 1.0 is not deployed anywhere and is presumably obsolete; Version 2.0 is running in the Environment labeled Production; Version 3.0 Beta is running in both the Staging and Performance Test Environments; and Version 4.0 Alpha is running in the Environment labeled Test. All four Environments may be configured identically, but that is not a requirement: you can have Environments with very different configuration settings within the same Application. This is useful if you need to support more than one runtime configuration, or if you would like to test configuration changes before making them in production.
Each of the Environments on the previous slide represents many resources configured to work together to create a scalable and reliable runtime environment. The public face of an Elastic Beanstalk Environment is an Elastic Load Balancer instance; all public traffic goes through it to reach your application. The load balancer does common tasks like spreading requests across the available EC2 instances, but it is also responsible for monitoring the health of each EC2 instance and can terminate an instance that fails; we will look at how that works later. Behind the load balancer are the EC2 instances that host the Tomcat environments. The EC2 instances belong to what is called an Auto Scaling Group. In this example there are three instances in the group, and that number will increase or decrease based on conditions the Elastic Beanstalk system watches for. If there is high load on all three servers, Elastic Beanstalk can spin up more, up to a current maximum of 10,000; under light load, it can also start shutting instances down automatically. As an administrator you have some control over those scale-up and scale-down events as part of the Environment configuration settings. Technically speaking, the Elastic Load Balancer and the Auto Scaling Group are the end of the Elastic Beanstalk story. However, it is quite common for a Java web application to need some kind of datastore, traditionally a relational database. From your WAR file it is very easy to use Amazon's SimpleDB service, which is Amazon's NoSQL-like offering, or the Relational Database Service, where you can connect to MySQL or even Oracle databases. Your application does not have to use a database at all, though, and if it does, it does not have to use any of Amazon's database services.
So now that you have a rough idea of the main parts that make up Elastic Beanstalk, let's take a look at the web-based management console that Amazon provides for managing your Elastic Beanstalk resources.
Now let's deploy our first Java web application into our Elastic Beanstalk environment.
As we have seen in the demonstrations, Elastic Beanstalk currently supports both Tomcat 6 and Tomcat 7. In some PaaS environments, like Google App Engine, you have no insight into what is running under the covers because the vendor goes to great lengths to hide it from you, and there are some very good reasons for doing it that way. Elastic Beanstalk, on the other hand, gives you the option to look under the covers if you want to. The default configuration will be sufficient for the large majority of applications, but if you need to tweak the low-level resources, you can. If you do dig in, you will find that Elastic Beanstalk is made up of EC2 instances running what is called Amazon Linux, with both 32-bit and 64-bit options. Amazon Linux is basically a custom Linux distro based on a combination of Red Hat Enterprise Linux and CentOS, with settings and configuration that make the OS play nicely in Amazon's EC2 world. So if you are familiar with any of the Red Hat-based Linux flavors, you will feel right at home.
Logging seems like a benign thing, but when you want to debug an issue and you don't have access to log files, things get tough quickly. If you follow the EB community forums, you will see that questions about logging are among the most common. The Elastic Beanstalk environment supports logging in a variety of ways. By default, the Tomcat environment is configured to use the JDK logging API. Coding directly to that API is probably a bad idea, so using something like SLF4J to keep your code a little more portable is recommended. The Tomcat installations do create the standard log files like catalina.out, but unless you want to SSH to each of the EC2 instances to see them, that can be a pain. From the management console, as we saw in the demonstration, you can request snapshots and easily view them in a browser. Alternatively, you can have those snapshots posted to an S3 bucket, where you could download them automatically and run some kind of parser on them. Keep in mind, though, that the snapshots EB creates, whether manually from the console or automatically for the S3 dumps, only include the standard Tomcat logs. If your application creates its own log files, you will need to either log to the standard Tomcat logs instead, or come up with your own system for rotating them out to S3 or some other location.
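For application-specific log files, one possible shape for that do-it-yourself rotation (a sketch, not an official mechanism; the path, bucket and key are hypothetical) is to push the file to S3 with the AWS SDK for Java on a schedule:

    import java.io.File;
    import com.amazonaws.services.s3.AmazonS3;
    import com.amazonaws.services.s3.AmazonS3Client;

    public class LogShipper {
        public static void main(String[] args) {
            // Credentials come from the SDK's default provider chain.
            AmazonS3 s3 = new AmazonS3Client();

            // Hypothetical application log; run this from cron on the
            // instance or from a scheduled task inside the application.
            File log = new File("/var/log/myapp/app.log");
            s3.putObject("my-log-bucket",
                    "logs/" + System.currentTimeMillis() + "-app.log", log);
        }
    }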
EB does not manage databases, so you can connect to any database you like, including those that are part of other AWS products like SimpleDB and RDS. There are some nice synergies between Elastic Beanstalk and the RDS service in terms of reliability and failover that can add a lot of robustness to your application with very little work. Or you can connect to your own database running on a vanilla EC2 instance, or even a database running outside the Amazon environment altogether; EB does not restrict your database options in any way. Whatever database you choose, you can inject the connection settings into your application using the Container Parameters section of the Environment Configuration dialog.
Elastic Beanstalk uses Amazon's existing CloudWatch service for monitoring your EB environments, and you can configure with a high degree of precision which CloudWatch events trigger scaling up or down. In addition to the CloudWatch scaling checks, Elastic Beanstalk supports a health check feature: the Elastic Load Balancer monitors each EC2 instance by hitting a specified URL and waiting for a response. If the request times out or the response does not indicate success (a 404, for example), the Elastic Beanstalk system will shut down that EC2 instance and replace it with a new, healthy one. This health check URL feature means you must have at least one URL in your application that the load balancer can request. This is a common issue in the community forums as well: people have a non-existent URL configured, or a URL that requires authentication. What ends up happening is that EB starts the EC2 instance, is never able to complete a successful health check, and shuts the instance down. To the user it looks like their application failed to start at all, because the status icon for the environment never turns green.
Auto Scaling is another underlying Amazon infrastructure service that EB makes use of. Through the Environment configuration dialog you can control which CloudWatch events trigger the auto-scaling functionality. You can set the minimum number of instances to avoid thrashing your environment, and the maximum number of instances to help control costs. Each scaling event, up or down, can add or remove multiple EC2 instances at a time, and this is also configurable. Finally, you can control the time between scaling events, to avoid adding too many instances for short bursts of heavy traffic, for example.
So imagine you have an existing Elastic Beanstalk environment up and running as your customer-facing production environment, and now you have a new version of the application that has been through QA, performance testing and a round of beta testing and is finally ready to go live. Traditionally you might have taken your production system down, put up a maintenance page, deployed the new version to the production environment, started everything up again and hoped it all worked, because in reality you have never tested the new version in this environment. Many things could go wrong: there could be subtle differences between your staging and production environments, somebody could grab the wrong version of the code to deploy, and the list goes on. So if you have a stable, tested staging environment, why not just make that your new production environment? EB supports this idea with what it calls URL swapping. Because you are working at the PaaS layer and not dealing with low-level resources, a simple DNS change makes the production URL point at your staging environment and vice versa. Elastic Beanstalk does all of the work for you, with no downtime of your production URL.
Let's take a look at all of the options we have for managing and controlling our Elastic Beanstalk environments.
OK, so we have seen that Elastic Beanstalk brings a lot of benefits to the development process, particularly in the area of provisioning and managing runtime environments. But if you have to change your whole development process or your application architecture, are those benefits really worth that much? Let's take a look at how working with Elastic Beanstalk might change the way you write your applications.
First of all, because Elastic Beanstalk currently supplies only Tomcat environments, you are restricted to applications packaged as Java WAR files. So if you are one of those developers who loves EJBs, Elastic Beanstalk is not currently for you. If your application does things the Servlet specification says you shouldn't, Elastic Beanstalk also might not be for you. For example, if you write a lot of files to the file system as a permanent storage mechanism, the default Elastic Beanstalk setup will not work for you: you have no control over when a server is terminated or started, so you will lose your file-system-based data. While Elastic Beanstalk is not technically a clustered environment as many current Java application servers would define it, it is still a good idea to code as if it were; for example, don't assume the application context is global across your entire environment. Generally speaking, if you have written your WAR file in a standards-compliant way and have been careful not to make too many assumptions about your runtime environment, your application should deploy onto an Elastic Beanstalk environment with no code changes. The one area that might require minor code changes is how EB provides environment-specific settings, for example the JDBC connection string your application should use. This can be easily abstracted away, particularly if you are already using a tool like Spring, so that you don't accidentally end up locked in to one vendor.
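A minimal sketch of that abstraction (assuming Spring 3-style Java config and Commons DBCP; the property name matches the container parameter convention mentioned earlier):

    import javax.sql.DataSource;
    import org.apache.commons.dbcp.BasicDataSource;
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class DataSourceConfig {

        // Resolved from the JVM system properties that Elastic Beanstalk
        // sets from the environment configuration; a local Tomcat can set
        // the same property with a -D flag, so the code stays vendor-neutral.
        @Value("#{systemProperties['JDBC_CONNECTION_STRING']}")
        private String jdbcUrl;

        @Bean
        public DataSource dataSource() {
            // Driver class and credentials omitted for brevity.
            BasicDataSource ds = new BasicDataSource();
            ds.setUrl(jdbcUrl);
            return ds;
        }
    }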
We have seen how to manage Elastic Beanstalk resources through the web-based management console, but that is probably a little slow and cumbersome for real developers. The good news is that other options are available, the biggest being the Eclipse plugin Amazon provides, which lets you manage not only your Elastic Beanstalk applications but also some of the other Amazon services. From within Eclipse you can create brand new applications and push new versions of existing applications. However, there is some overhead in pushing new versions out to the EB environment, so you should still plan on doing most of your work locally. That is very practical, since Elastic Beanstalk does not require any special code or environment setup, so all of your normal tools will still work. One thing that has come up in the community forums is that the Eclipse interface makes it easy to accidentally delete your EB environment when you intended to delete a local Tomcat instance instead, so keep an eye out for that.
Let's take a first look at getting set up and creating our first EB application in Eclipse.
OK, now that we have created an Elastic Beanstalk application, let's make sure we can test it locally just like we always have with our other applications.
OK, so we created the application and have tested it locally; let's now push that application out to our Elastic Beanstalk environment.
If you are not an Eclipse person, and I am not, you might be looking for other options. Amazon provides an extensive command line interface to Elastic Beanstalk and, as with many of their other products, you can often do more with the command line than with the web management console. In addition, there is a pretty good Maven plugin available now, written by one of the community members, that seems to be getting positive feedback in the forums. Once you have integration with a tool like Maven, you can include Elastic Beanstalk in your whole development process. This is especially good for things like continuous integration: it is very easy to start, stop and reset environments and to upload and deploy new code versions for running tests.
Unless you are a lucky engineer who only works on brand new projects, you probably have existing projects that you will need to maintain for a while longer and would like to move onto Elastic Beanstalk. Let's take a look at some of the issues you might face.
From the point of view of migrating to Elastic Beanstalk, there are really four categories of applications, and depending on which category your application falls into, you may have more or less trouble making the migration happen. Category A is the easiest: a WAR file that does not need access to any external resources like a database or the file system. Category A applications can be deployed as is. Category B applications are the next easiest: as long as the connection information for the other AWS services, like your AWS credentials, can be passed in through the EB console, these applications may require only slight code changes, if any at all. Category C applications are similar to Category B in that they access external services, but in this case the services are hosted outside the Amazon environment. As with Category B, as long as the information needed to connect to these services can be passed in through the Elastic Beanstalk console, the application may require only slight code changes, if any. Category D applications are the hardest to migrate. These applications are often implemented in a way that potentially violates the Servlet specification, such as reading and writing the local file system for persistent storage. Alternatively, they may rely on certain binaries or resources being available on the host machine that are not part of the standard Elastic Beanstalk server image. It is fair to say that out of the box, Elastic Beanstalk does not support Category D applications; however, if you are prepared to get your hands a little dirty, there is an officially supported way to solve this problem.
In Amazon EC2 speak, an Amazon Machine Image (AMI) is the basis for a virtual machine; think of it as a snapshot of a virtual machine. Amazon publishes four different AMIs related to Elastic Beanstalk: combinations of 32-bit and 64-bit Linux with Tomcat 6 or Tomcat 7. Depending on which container you pick when configuring your environment, you are selecting one of these AMIs as the basis for all of the EC2 instances in your environment. However, Amazon also supports providing your own custom AMI for Elastic Beanstalk to use. There is a trick to this: you cannot just use any old AMI. Elastic Beanstalk needs specific things installed and configured on your EC2 instances so it can monitor instance health and perform scaling activities; an arbitrary existing AMI will not have these Elastic Beanstalk-specific pieces and will not work. Instead, take a snapshot of one of the four official AMIs Amazon has published and use it as a starting point; from there you can add any additional software you need. Then, when configuring your environment, pass in the unique ID of your custom AMI instead of using one of the officially published ones.
So how much is all of this cloud coolness going to cost?
The good news is that Elastic Beanstalk itself is completely free. However, you do pay for the underlying infrastructure services that Elastic Beanstalk makes use of. You will have EC2 charges for all of the instances you spin up and shut down, S3 storage charges for each version of the application you have uploaded, and charges for the Elastic Load Balancer that sits in front of your EC2 instances. There will also be Elastic Block Storage charges for backing your live EC2 instances, as well as for any custom AMIs you create to use with Elastic Beanstalk. And finally, of course, there are charges for the outbound traffic from your application to the internet.
This table is available on the Elastic Beanstalk website. As you can see, there are actually a lot of places where you can incur charges, but most of these charges are very small, and some will barely increase at all even if you have a hugely successful website; for example, the storage charges for your WAR file in S3 won't increase just because your application is successful. If you compare these charges to buying a lot of physical hardware, or even licenses for tools like VMware to run your own virtualization infrastructure, the pricing in my experience is very competitive.
There are many ways to achieve some or all of the functionality provided by Elastic Beanstalk with tools from other vendors; let's take a quick look.
Obviously, some of the benefits that Elastic Beanstalk provides can also be achieved through plain virtualization tools, such as being able to replicate environment settings easily. But remember, you are still going to need to provision and manage the operating systems and application servers on those virtual machines. Also, can you automatically scale your environment to meet increased demand, or scale it back when demand is low? At some point you have to have physical hardware to run your virtual machines on, and as a result you have a finite amount of resources, which may turn out not to be enough if your application takes off.
There are certainly other Platform-as-a-Service vendors for Java environments: CloudBees, Heroku, Red Hat's OpenShift and Google App Engine are all examples, and there are many more. Each vendor has its own quirks about how to go about things, or about what a Platform-as-a-Service solution does or doesn't provide. Many of these other vendors actually run on top of Amazon's infrastructure anyway, so keep that in mind when looking at alternatives. Elastic Beanstalk does provide low-level system access if you need it, which most other vendors do not, so if that is a must-have for you, Elastic Beanstalk is probably the way to go. However, Elastic Beanstalk currently only supports Java web apps on Tomcat, so if you are looking for a solution that supports other technologies, like a full JavaEE stack or Ruby, some of the other vendors might be a better selection, at least for now.
So, just a quick recap. Elastic Beanstalk falls into the Platform-as-a-Service category and is built on top of the infrastructure services Amazon already provides. For the moment you get a choice of Tomcat 6 or Tomcat 7, and a 32-bit or 64-bit operating system if that is important to you. Because it is simply a managed Tomcat environment, most well-written web applications can be moved over to Elastic Beanstalk with little to no code modification, which is an important differentiator when looking at some of the other solutions in this space. There is already good tooling out there, including the Eclipse plugin, the Maven plugin and the command line API provided by Amazon. If you really need to, you can get your hands dirty: access your EC2 instances via SSH and modify the actual EC2 instances Elastic Beanstalk uses if you need additional features. Keep in mind, though, that this really is a last resort; your life will be a lot simpler if you can work with the out-of-the-box features. And last but not least, hopefully I have shown that Elastic Beanstalk can solve many of the common problems developers have with managing runtime environments during the development process.