This document provides information about Amazon EC2 instance types. It begins by explaining the naming convention for EC2 instances, which includes the family name, generation number, and size. It then discusses the different sizes and types of instances available, including general purpose, compute optimized, memory optimized, accelerated computing, and storage optimized instances. Specific examples are given for each type. The document also covers pricing options like on-demand instances, reserved instances, and spot instances. Dedicated instance and host options are explained. Finally, it discusses best practices for tagging instances to help manage resources.
The document discusses cost optimization when using AWS. It begins with an overview of total cost of ownership (TCO) and how AWS addresses some of the issues that lead to higher TCO with on-premises infrastructure, such as overprovisioning. It then discusses various methods for optimizing costs on AWS, including right-sizing instances, using reserved instances and spot instances, enabling auto-scaling, and matching data to the appropriate storage classes. The presentation emphasizes measuring and monitoring costs and designing infrastructure with costs in mind from the beginning.
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key features, and the concept of instance generations.
This document provides an overview of Amazon Web Services storage options, including scalable object storage with Amazon S3, inexpensive archive storage with Amazon Glacier, persistent block storage with Amazon EBS, and a shared file system with Amazon EFS. It discusses the growth of data production across industries and how AWS storage services provide scalable, cost-effective solutions. Key features and use cases are described for each storage service.
The document discusses cloud cost optimization strategies. It identifies key pillars for cost optimization including right-sizing resources, leveraging different pricing models, using appropriate storage classes, measuring usage, and designing architectures for cost efficiency. The optimization process involves monitoring usage and costs, identifying unnecessary resources, and establishing a tagging strategy. Key recommendations include turning off idle instances, deleting unused volumes, stopping paid services when not in use, using consolidated billing for discounts, and automating processes. Latest trends discussed are 1ms billing granularity for Lambda and independent provisioning of performance and capacity for EBS volumes.
AWS provides a range of compute services: Amazon EC2, Amazon ECS, AWS Lambda, and AWS Elastic Beanstalk. Together they let you build everything from web applications and mobile backends to data processing applications.
In this session, we will provide an introductory overview of these services and highlight suitable use cases. We will discuss which service to choose to best get your applications up and running on AWS.
In this popular session, discover how Amazon EBS can take your application deployments on Amazon EC2 to the next level. Learn about Amazon EBS features and benefits, how to identify applications that are appropriate for use with Amazon EBS, best practices, and details about its performance and volume types. The target audience is storage administrators, application developers, application owners, and anyone who wants to understand how to optimize performance for Amazon EC2 using the power of Amazon EBS.
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business.
AWS Lambda is Amazon's serverless computing platform that allows you to run code without provisioning or managing servers. Code is run in response to events and AWS automatically manages the computing resources. Key advantages are only paying for the compute time used and not having to manage servers. Lambda supports Node.js, Python, Java, and C# and functions can be triggered by events from services like S3, DynamoDB, and API Gateway. Functions are configured and coded within the Lambda management console. Pricing is based on number of requests and compute time used, with the first million requests and 400,000 GB-seconds of compute time being free each month.
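As a concrete illustration of the event-driven model described above, here is a minimal sketch of a Lambda handler responding to an S3 event. The event shape follows the S3 notification format; the bucket and object names are hypothetical, and the local invocation at the end simply simulates what Lambda would do.

```python
# Minimal sketch of an AWS Lambda handler triggered by an S3 event.
# The event structure follows the S3 notification format; names are illustrative.
import json

def handler(event, context):
    # Each record describes one S3 object that triggered the function.
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(results)}

# Local invocation with a fake S3 event (context is unused here).
fake_event = {"Records": [{"s3": {"bucket": {"name": "my-bucket"},
                                  "object": {"key": "photo.jpg"}}}]}
print(handler(fake_event, None))
```

In a real deployment you would not call the handler yourself; S3, DynamoDB, or API Gateway invokes it for you, which is what makes the pay-per-invocation pricing model possible.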
This document discusses cost optimization strategies on AWS. It provides examples of cost savings achieved by companies that migrated applications to AWS including a 14 million dollar annual savings for GE. It outlines approaches for architecting efficiently for cost, optimizing usage costs over time, and taking advantage of AWS pricing benefits like reserved instances, spot instances, and different storage options. The document emphasizes optimizing through proactive monitoring and billing tools, leveraging the various EC2 pricing plans, and combining options for further savings.
Amazon Relational Database Service (RDS) provides a managed relational database in the cloud. It supports several database engines including Amazon Aurora, MariaDB, Microsoft SQL Server, MySQL, Oracle, and PostgreSQL. Key features of RDS include automated backups, manual snapshots, multi-AZ deployment for high availability, read replicas for scaling reads, and encryption options. DynamoDB is AWS's key-value and document database that delivers single-digit millisecond performance at any scale. It is a fully managed NoSQL database and supports both document and key-value data models. Redshift is a data warehouse service and is used for analytics workloads requiring fast queries against large datasets.
Sponsored Session | Adopting AWS Direct Connect with KINX
Siwoo Nam, Manager, KINX
AWS Direct Connect is a service that establishes a private connection between AWS and your on-premises environment, offering benefits such as consistent network performance, cost savings, and increased bandwidth throughput. KINX, an internet infrastructure company that has worked with the AWS Seoul Region since its opening in 2016, shares key know-how for companies looking to adopt AWS Direct Connect. This session covers how to connect to AWS Direct Connect easily, real-world use cases built on AWS Direct Connect network configurations, and how manufacturing companies can connect their branch offices in China over AWS Direct Connect.
This document provides information about Amazon S3, Amazon EBS, and storage classes in AWS. It discusses key concepts of S3 including objects, buckets, and keys. It describes the different S3 storage classes like STANDARD, STANDARD_IA, GLACIER and their use cases. The document also covers S3 features like access control, versioning, lifecycle management and managing access. Finally, it provides an overview of Amazon EBS volumes, volume types, snapshots and EBS optimized instances.
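The lifecycle management described above can be made concrete with a small sketch. The dict below follows the shape accepted by boto3's `put_bucket_lifecycle_configuration`; the rule ID, prefix, and day thresholds are illustrative assumptions, not recommendations.

```python
# Sketch of an S3 lifecycle rule that moves objects through storage classes:
# STANDARD -> STANDARD_IA after 30 days -> GLACIER after 90 days,
# then deletes them after a year. Names and thresholds are hypothetical.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# With boto3 this would be applied roughly like:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-log-bucket", LifecycleConfiguration=lifecycle_config)
```

The point of the rule is cost: data that starts in STANDARD drifts automatically into cheaper tiers as it ages, with no application changes.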
AWS Cloud Design Patterns (a.k.a. CDP) are generally repeatable solutions to commonly occurring problems in cloud architecting. In this session, we introduce CDP and explain how you can apply CDPs in practical scenarios such as photo sharing, e-commerce, and web site campaigns.
DAT302: Deep Dive on Amazon Relational Database Service (RDS), Amazon Web Services
Amazon RDS enables customers to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon RDS gives you six database engines to choose from: Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB. In this session, we take a closer look at the capabilities of the RDS service and review the latest features available. We do a deep dive into how RDS works and the best practices to achieve optimal performance, flexibility, and cost savings for your databases.
Learn about the new AWS Database Migration Service, which helps you migrate databases with minimal downtime from on-premises and Amazon EC2 environments to Amazon RDS, Amazon Redshift, Amazon Aurora and EC2 databases. We discuss homogeneous (e.g. Oracle-to-Oracle, PostgreSQL-to-PostgreSQL, etc.) and heterogeneous (e.g. Oracle to Aurora, SQL Server to MariaDB) database migrations. We also talk about the new AWS Schema Conversion Tool that saves you development time when migrating your Oracle and SQL Server database schemas, including PL/SQL and T-SQL procedural code, to their MySQL, MariaDB and Aurora equivalents.
This presentation provides an overview of Amazon Elastic Block Store (EBS) and key performance concepts. EBS provides persistent block level storage volumes for use with EC2 instances. It discusses the different volume types (standard and provisioned IOPS), factors that impact performance like block size and queue depth, and best practices for architecting storage for performance and availability. The presentation also provides examples of how enterprises and applications use EBS and guidelines for minimum, production and large-scale usage.
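The interaction between block size and throughput mentioned above comes down to simple arithmetic: throughput is roughly IOPS times I/O size. The sketch below uses illustrative figures, not actual EBS volume limits.

```python
# Throughput is roughly IOPS x I/O size, so the same IOPS budget delivers
# very different MiB/s depending on block size. Figures are illustrative.
def throughput_mib_s(iops, block_size_kib):
    return iops * block_size_kib / 1024.0

# A volume sustaining 4,000 IOPS:
small_io = throughput_mib_s(4000, 16)    # 16 KiB blocks  -> 62.5 MiB/s
large_io = throughput_mib_s(4000, 256)   # 256 KiB blocks -> 1000.0 MiB/s
print(small_io, large_io)
```

This is why the presentation treats block size and queue depth as first-class performance factors: an application issuing many small I/Os can exhaust its IOPS budget while barely touching the volume's throughput ceiling.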
Multi-VPC Architecture Patterns with AWS Transit Gateway - Donghwan Kang, Solutions Architect, AWS :: AWS Summit ..., Amazon Web Services Korea
Multi-VPC Architecture Patterns with AWS Transit Gateway
Donghwan Kang, Solutions Architect, AWS
This session introduces services and architecture patterns for effectively consolidating, managing, and operating the VPCs that multiply along with your organization and service structure. We examine architecture patterns built on Transit Gateway (TGW), which goes beyond the limits of peering to provide unrestricted interconnection between VPCs; Multi-Account VPC (MAVPC), for sharing VPCs across the various accounts in an organization; and Resource Access Manager (RAM), for securely sharing AWS resources.
AWS S3 | Tutorial For Beginners | AWS S3 Bucket Tutorial | AWS Tutorial For B..., Simplilearn
This AWS S3 presentation will help you understand what cloud storage is, the types of storage, life before Amazon S3, what S3 (Amazon Simple Storage Service) is, the benefits of S3, objects and buckets, and how Amazon S3 works, along with an explanation of the features of AWS S3. Amazon S3 is a storage service for the Internet. It is a simple storage service that offers software developers a highly scalable, reliable, and low-latency data storage infrastructure at a relatively low cost. Amazon S3 provides a simple web service interface that can be used to store and retrieve any amount of data. Using this, developers can build applications that make use of Internet storage with ease. Amazon S3 is designed to be highly flexible and scalable. Now, let's dive into this presentation and understand what Amazon S3 actually is.
Below topics are explained in this AWS S3 presentation:
1. What is Cloud storage?
2. Types of storage
3. Before Amazon S3
4. What is S3
5. Benefits of S3
6. Objects and buckets
7. How does Amazon S3 work
8. Features of S3
This AWS certification training is designed to help you gain in-depth understanding of Amazon Web Services (AWS) architectural principles and services. You will learn how cloud computing is redefining the rules of IT architecture and how to design, plan, and scale AWS Cloud implementations with best practices recommended by Amazon. The AWS Cloud platform powers hundreds of thousands of businesses in 190 countries, and AWS certified solution architects take home about $126,000 per year.
This AWS certification course will help you learn the key concepts, latest trends, and best practices for working with the AWS architecture, and become an industry-ready AWS Certified Solutions Architect, qualified for a position as a high-quality AWS professional.
The course begins with an overview of the AWS platform before diving into its individual elements: IAM, VPC, EC2, EBS, ELB, CDN, S3, EIP, KMS, Route 53, RDS, Glacier, Snowball, CloudFront, DynamoDB, Redshift, Auto Scaling, CloudWatch, ElastiCache, CloudTrail, and Security. Those who complete the course will be able to:
1. Formulate solution plans and provide guidance on AWS architectural best practices
2. Design and deploy scalable, highly available, and fault tolerant systems on AWS
3. Identify the lift and shift of an existing on-premises application to AWS
4. Decipher the ingress and egress of data to and from AWS
5. Select the appropriate AWS service based on data, compute, database, or security requirements
6. Estimate AWS costs and identify cost control mechanisms
This AWS course is recommended for professionals who want to pursue a career in Cloud computing or develop Cloud applications with AWS. You’ll become an asset to any organization, helping leverage best practices around advanced cloud-based solutions and migrate existing workloads to the cloud.
Learn more at: https://www.simplilearn.com/
The document discusses various backup and archival strategies using AWS services like Amazon S3, EBS, Glacier, and Snowball. It provides examples of using S3 lifecycle policies to transition data between storage tiers, taking EBS snapshots for EC2 instance backups, and using Snowball for large-scale data transfers to the cloud. Backup and archival solutions can provide durability, scalability, cost savings, and reduce risks compared to on-premises options.
This document outlines an agenda for an AWS Cost Management workshop. The agenda includes introductions and sessions on AWS Cost Explorer, AWS Budgets, AWS Reservations, and AWS Cost & Usage Reports. It provides overviews of AWS cost management products and highlights recent features including budget redesigns, forecasting enhancements, and reserved instance management updates.
by Robbie Wright, Head of Amazon S3 & Amazon Glacier Product Marketing, AWS
Learn from AWS on how we've designed S3 and Glacier to be durable, available, and massively scalable. Hear how customers are using these services to enhance the accessibility and usability of their data. We will also dive into the benefits of object storage, its applications, and some best practices to follow.
Introduction to AWS VPC, Guidelines, and Best PracticesGary Silverman
I crafted this presentation for the AWS Chicago Meetup. This deck covers the rationale, building blocks, guidelines, and several best practices for Amazon Web Services Virtual Private Cloud. I classify it as somewhere between a 101- and 201-level presentation.
If you like the presentation, I would appreciate you clicking the Like button.
The document summarizes an AWS user group presentation by Shaimaa Esmaeil on AWS101. The presentation introduced cloud computing concepts, AWS global infrastructure and services, and demonstrated EC2 and S3. It discussed on-premises vs cloud, cloud models (IaaS, PaaS, SaaS), AWS regions and availability zones. It provided overviews of EC2 instances, AMIs, types, EBS, security groups and S3 buckets and objects. Useful training and practice exam resources were also shared.
For more training on AWS, visit: https://www.qa.com/amazon
AWS Loft | London - Amazon Virtual Private Cloud by Andrew Kane, Solution Architect
April 18, 2016
AWS Webcast - Webinar Series for State and Local Government #2: Discover the ..., Amazon Web Services
The document provides an overview and agenda for a training on Amazon Web Services (AWS). It discusses setting up an AWS account, an overview of key AWS services like Amazon EC2, S3, and others. It also includes demos of setting up an AWS account, using EC2 to launch virtual servers, and uploading and downloading objects from S3 storage. The training aims to help participants get started with AWS and understand its global infrastructure and capabilities.
This document provides best practices for deploying Microsoft SQL Server on Amazon EC2. It discusses using multiple Amazon EBS volumes for tempdb and data files to improve performance. It also covers high availability options like AlwaysOn Availability Groups across Availability Zones and failover cluster instances. The document recommends configuring security groups and network access control lists for security in a VPC.
AWS Webcast - AWS Webinar Series for Education #2 - Getting Started with AWS, Amazon Web Services
This webinar will cover the basics of getting started with AWS. After a brief overview, this session will dive into core AWS services with live demonstrations of how to set up and utilize compute, storage, and other services. The focus will be on the ability to clone the environments that the largest customers are running, highlighting AWS's versatility and ease of use as a cloud platform.
Basic ppt on cloud computing on Amazon Web, RahulBhole12
Amazon Web Services (AWS) is a collection of cloud computing services offered by Amazon that includes computing power, database storage, content delivery and other functionality. It provides a platform for customers to build and host their applications and services through its infrastructure that Amazon manages and maintains. Some key AWS services include Elastic Compute Cloud (EC2) for virtual servers, Simple Storage Service (S3) for object storage, Elastic Block Store (EBS) for virtual disk storage, Relational Database Service (RDS) for database hosting, and Elastic MapReduce for big data processing.
Let’s get started. Join this session to continue your journey through the core AWS services with live demonstrations of how to set up and use the services.
The document provides information about Amazon EC2 instances, including:
- EC2 instances are virtual computing environments that run in the AWS cloud. They are launched using Amazon Machine Images which contain the operating system and software.
- Instance types determine the hardware specifications of an instance and there are different types optimized for compute, memory, storage or accelerated computing.
- Security groups act as virtual firewalls that control inbound and outbound traffic using rules.
- Instances have private IP addresses for communication within a VPC and may be assigned public IP addresses for internet access.
The iot academy_awstraining_part1_aws_introductionThe IOT Academy
Amazon Web Services (AWS) is a cloud computing platform offering that provides computing resources and services on demand. AWS offers infrastructure services like EC2 (Elastic Compute Cloud) for virtual servers, S3 (Simple Storage Service) for cloud storage, and EBS (Elastic Block Store) for virtual disk volumes. It also provides platform services like databases, analytics, and developer tools. AWS has data centers globally and charges customers based on pay-as-you-go usage of resources without long term commitments.
1) Amazon EC2 provides scalable compute capacity in the cloud via virtual machine instances. Instances are launched from templates called AMIs and are categorized into different types based on their compute, memory, and storage capabilities.
2) EC2 offers benefits like elasticity, full control and configuration of instances, a wide variety of options for operating systems and software, high reliability through rapid provisioning of replacement instances, and manageability via AWS management consoles and APIs.
3) Key EC2 concepts include AMIs, instance types, EBS for persistent storage, security groups for access control, and billing based on hourly or per-second usage of instances and storage.
Amazon EC2 forms the backbone compute platform for hundreds of thousands of AWS customers, but how do you go beyond starting an instance and manually configuring it? In this webinar we will take you on a journey starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Amazon EC2 forms the backbone of the compute platform for hundreds of thousands of AWS customers, but understanding how to fully utilize EC2 and related services can be challenging.
In this webinar, we will take you on a journey, starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Learning Objectives:
Understand how to use Amazon EC2 beyond a simple single instance use case
Learn about instance bootstrapping, AMIs and Elastic IPs
Discover how to create an Elastic Load Balancer and integrate it with Auto Scaling
Learn how to create Auto Scaling configurations and the tools you need to drive Auto Scaling policies
Find out how to create an Amazon RDS database and how to test failover between Availability Zones
Who Should Attend:
Existing Amazon EC2 users, Developers, Engineers and Solutions Architects
Best Practices for Managing Hadoop Framework Based Workloads (on Amazon EMR) ...Amazon Web Services
Learning Objectives:
- Learn how to use Amazon EMR for easy, fast, and cost-effective processing of vast amounts of data across dynamically scalable Amazon EC2 instances.
- Learn how using EC2 Spot can significantly reduce the cost of running your clusters.
- Learn how Amazon EMR Instance Fleets can make it easier to quickly obtain and maintain your desired capacity for your clusters.
Amazon is an American multinational technology company that focuses on e-commerce, cloud computing, digital streaming, and artificial intelligence. It started as an online bookstore in 1995 and has since expanded to sell a wide variety of products and services to customers worldwide. In 2006, Amazon launched Amazon Web Services (AWS), which offers cloud computing services and has become one of the largest providers of these services. AWS provides access to computing resources and services through application programming interfaces, allowing customers to take advantage of elastic and scalable resources without having to manage their own hardware infrastructure.
Let’s get started. Join this session to continue your journey through the core AWS services with live demonstrations of how to set up and use the services.
This document provides an overview of Amazon Web Services including EC2, S3, and EMR. It discusses regions and availability zones in EC2, how to set up VPCs, different EC2 instance types, AMIs, key pairs, and the differences between EBS and instance store. It also covers S3 concepts like buckets, objects, storage classes, and access controls. Finally, it briefly introduces EMR and how it provides a managed Hadoop framework on EC2 instances with integration to S3 for storage. The document includes demos of working with EC2 instances and EBS volumes, S3 buckets, and creating an EMR cluster.
Amazon EC2 provides a broad selection of instance types to deliver high performance for a diverse mix of applications. In this session, we overview the drivers of system performance and discuss in depth how Amazon EC2 instances deliver system performance while also providing elasticity and complete control over your infrastructure. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Amazon EC2 forms the backbone compute platform for hundreds of thousands of AWS customers, but how do you go beyond starting an instance and manually configuring it? This presentation will take you on a journey starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Access a recorded version of the webinar based on this presentation on YouTube here: http://youtu.be/jLVPqoV4YjU
You can find the rest of the Masterclass webinar series for 2015 here: http://aws.amazon.com/campaigns/emea/masterclass/
If you are interested in learning about how you apply variety of different AWS services to specific challenges, please check out the Journey Through the Cloud series, which you can find here: http://aws.amazon.com/campaigns/emea/journey/
Training for AWS Solutions Architect at http://zekelabs.com/courses/amazon-web-services-training-bangalore/.This slide describes about features of EC2, EC2 Options, family type, storage, EBS Volumes, EC2 Instance Store, Security Groups, Volumes and Snapshots, Amazon Machine Image (AMI), Elastic load balancer, Classic load balancer, Application load balancer, Network load balancer, AWS CLI and EC2 Metadata
___________________________________________________
zekeLabs is a Technology training platform. We provide instructor led corporate training and classroom training on Industry relevant Cutting Edge Technologies like Big Data, Machine Learning, Natural Language Processing, Artificial Intelligence, Data Science, Amazon Web Services, DevOps, Cloud Computing and Frameworks like Django,Spring, Ruby on Rails, Angular 2 and many more to Professionals.
Reach out to us at www.zekelabs.com or call us at +91 8095465880 or drop a mail at info@zekelabs.com
This document discusses running BSD operating systems on Amazon Web Services (AWS). It provides an overview of AWS infrastructure including regions, availability zones, and edge locations. It then covers AWS EC2 instances, virtual machines, and operating system options. The document demonstrates how to build BSD AMIs, benchmarks the 'buildworld' process on different instance types, and discusses tools for baking your own AMIs like Packer. It concludes with benchmark results and takeaways about running DevOps processes for AMIs and security.
For more training on AWS, visit: https://www.qa.com/amazon
AWS Loft | London - Amazon EC2:Masterclass by Ian Massingham, Chief Evangelist EMEA, April 18, 2016
The document provides instructions for launching an M-Pin Core service instance on Amazon EC2. It describes choosing an Amazon Machine Image, instance type, storage options, and configuring security groups. The steps also cover accessing the M-Pin Core trial demo and configuring the instance host and port. Once launched, the M-Pin Core service can be accessed in a browser to create identities and pins for strong authentication testing.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
2. EC2 Instances – What’s in a Name?
m5.large
m is the family name
5 is the generation number
large is the size of the instance
Examples: t2.large, c5.xlarge, p3.2xlarge
4. EC2 Instances – Types
Choosing the correct type is very important for:
• Efficient utilization of your instances
• Reducing unneeded cost
5. EC2 Instances – Types
• General Purpose – 7 available selections
• Compute Optimized – 3 available selections
• Memory Optimized – 7 available selections
• Accelerated Computing – 4 available selections
• Storage Optimized – 4 available selections
6. EC2 – General Purpose Example
Good for burstable workloads like websites and web applications

Model       vCPU  CPU Credits/hour  Mem (GiB)  Storage
t3.nano     2     6                 0.5        EBS-Only
t3.micro    2     12                1          EBS-Only
t3.small    2     24                2          EBS-Only
t3.medium   2     24                4          EBS-Only
t3.large    2     36                8          EBS-Only
t3.xlarge   4     96                16         EBS-Only
t3.2xlarge  8     192               32         EBS-Only
7. EC2 – Compute Optimized Example
Optimized for compute-intensive workloads

Model        vCPU  Mem (GiB)  Storage   EBS Bandwidth (Mbps)
c5.large     2     4          EBS-Only  Up to 2,250
c5.xlarge    4     8          EBS-Only  Up to 2,250
c5.2xlarge   8     16         EBS-Only  Up to 2,250
c5.4xlarge   16    32         EBS-Only  2,250
c5.9xlarge   36    72         EBS-Only  4,500
c5.18xlarge  72    144        EBS-Only  9,000
8. EC2 – Memory Optimized Example
Memory-heavy applications, or when you need more RAM than CPU

Model        vCPU  Mem (GiB)  Storage   Dedicated EBS Bandwidth (Mbps)  Network Performance (Gbps)
r5.large     2     16         EBS-Only  Up to 3,500                     Up to 10
r5.xlarge    4     32         EBS-Only  Up to 3,500                     Up to 10
r5.2xlarge   8     64         EBS-Only  Up to 3,500                     Up to 10
r5.4xlarge   16    128        EBS-Only  3,500                           Up to 10
r5.12xlarge  48    384        EBS-Only  7,000                           10
r5.24xlarge  96    768        EBS-Only  14,000                          25
9. EC2 – Accelerated Computing Example
Performant GPU-based instances, commonly used for machine/deep learning

Model          GPUs  vCPU  Mem (GiB)  GPU Mem (GiB)  GPU P2P
p3.2xlarge     1     8     61         16             -
p3.8xlarge     4     32    244        64             NVLink
p3.16xlarge    8     64    488        128            NVLink
p3dn.24xlarge  8     96    768        256            NVLink
10. EC2 – Storage Optimized Example
Up to 16 TB of HDD-based local storage with high disk throughput

Model        vCPU  Mem (GiB)  Network Performance  Instance Storage (GB)
h1.2xlarge   8     32         Up to 10 Gigabit     1 x 2,000 HDD
h1.4xlarge   16    64         Up to 10 Gigabit     2 x 2,000 HDD
h1.8xlarge   32    128        10 Gigabit           4 x 2,000 HDD
h1.16xlarge  64    256        25 Gigabit           8 x 2,000 HDD
11. Intel® Xeon CPUs and EC2 Instances
All current EC2 instance types include:
• Intel AES-NI: Reduces the performance hit due to encryption
• Intel AVX (AVX2, AVX-512): Improves floating-point performance. Only available on HVM deployments.
12. Intel® Xeon CPUs and EC2 Instances
Some EC2 instance types include:
• Intel Turbo Boost: Runs cores faster than the base clock speed when needed
• Intel TSX: Uses multiple threads or a single thread depending on need
• P state and C state control: Fine-tune the performance and sleep state of each core
13. Intel® Xeon Scalable Processors
Latest generation of Intel Xeon processors. Up to:
• 28 cores per CPU
• 6 memory channels
• 48 PCIe lanes of bandwidth/throughput
• 100 Gbps network bandwidth (c5n.18xlarge)

Intel AVX-512:
• Twice the floating-point performance of AVX2
• 512-bit instructions (vs 256-bit for AVX/AVX2)
16. On-Demand Instances
• Pay for compute capacity per second (Amazon Linux and Ubuntu) or by the hour (all other OSes)
• No long-term commitments
• No upfront payments
• Increase or decrease your compute capacity depending on the demands of your application

Solves the need for immediate compute capacity
17. Reserved Instances
Can provide a significant discount for your architectures.
• Pre-pay for capacity
• Standard RIs, Convertible RIs, Scheduled RIs
• Three upfront payment options
• Can be shared between multiple accounts (within a billing family)

Provides the ability to reserve capacity ahead of time, reducing cost
18. Spot Instances
• Purchase unused Amazon EC2 capacity
• Prices controlled by AWS based on supply and demand
• Termination notice provided 2 minutes prior to termination
• Spot Blocks: Launch Spot Instances with a duration lasting 1 to 6 hours

Can provide the steepest discounts, as long as your workloads can withstand starting and stopping
20. Amazon EC2 Dedicated Instances
Dedicated Instances are physically isolated from other AWS accounts.
Helps meet requirements for regulatory compliance or software license use.
21. Amazon EC2 Dedicated Hosts
A Dedicated Host is a full physical server with EC2 instance capacity fully dedicated to your use.
Helps meet strict requirements for regulatory compliance or software license use.
Host ID: h-039725dyhe980010
22. Amazon EC2 Tenancy

Tenancy             Only your AWS account on the hardware?  Description
Default             No                                      Your instance runs on shared hardware.
Dedicated Instance  Yes                                     Runs on a non-specific piece of hardware.
Dedicated Host      Yes                                     Runs on a specific piece of hardware of your choosing, over which you receive greater control.
23. Keeping Track of Your Instances
Assign metadata tags to your AWS resources to help you manage, search, and filter them.
24. Tagging Best Practices
• Use a standardized, case-sensitive format for tags
• Implement automated tools to help manage resource tags
• Favor using too many tags rather than too few
• Remember, it’s easy to modify tags
• Examples: App Version, ENV, DNS Name, App Stack Identifier

Helps you to understand what your resources are doing and their cost impact.
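A standardized tagging policy is easy to enforce mechanically. A minimal sketch, assuming a hypothetical set of required keys modeled on the examples above (these are not AWS-defined tags):

```python
# Hypothetical tagging policy: the required keys below are examples,
# not an AWS-defined list. Tag keys are case-sensitive, as in EC2.
REQUIRED_TAG_KEYS = {"AppVersion", "ENV", "DNSName", "AppStackIdentifier"}

def missing_tags(resource_tags):
    """Return the required tag keys absent from a resource's tags,
    given the tags as a {key: value} dict."""
    return REQUIRED_TAG_KEYS - resource_tags.keys()
```

An automated sweep could run this against every instance's tags and flag (or auto-tag) the gaps; because keys are case-sensitive, a tag named "env" would not satisfy a policy requiring "ENV".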
26. Knowledge Check 1
What is an AMI?
1. An AMI is an object that stores data about the instance, such as local hostname, instance ID, or public IP address.
2. It provides block-level storage that will disappear on instance shutdown.
3. AMIs are used to create new EC2 instances and contain a template for the root volume.
4. A type of storage bucket for Amazon S3.
27. Knowledge Check 1: Answer
What is an AMI?
Correct answer: 3. AMIs are used to create new EC2 instances and contain a template for the root volume.
28. Knowledge Check 2
If you wanted to select the host on which an instance would run, which option should you use?
1. Default
2. Dedicated Instance
3. Dedicated Host
29. Knowledge Check 2: Answer
If you wanted to select the host on which an instance would run, which option should you use?
Correct answer: 3. Dedicated Host
30. Knowledge Check 3
What is Amazon EBS?
1. Object storage solution that can scale to incredible sizes to meet demand and storage requirements.
2. Block storage device that can connect to multiple instances at the same time.
3. File storage system that can connect to multiple instances at the same time.
4. Block storage device that connects to one instance at a time. Can be backed up to Amazon S3.
31. Knowledge Check 3: Answer
What is Amazon EBS?
Correct answer: 4. Block storage device that connects to one instance at a time. Can be backed up to Amazon S3.
When looking at an instance type, you will see that the model name has a few parts. As an example, take the M type.
M is the family name, followed by a number; here, that number is 5, the generation number of the type. So an M5 instance is the 5th generation of the M family. In general, instances of a higher generation are more powerful and provide better value for the price.
The next part of the name is the size of the instance. When comparing sizes, look at the coefficient portion of the size category.
For example, an m5.2xlarge is twice as big as an m5.xlarge, which in turn is twice as big as an m5.large.
Later on in the chart there is an m5.12xlarge; this instance is 12 times as powerful as the m5.xlarge.
It is also important to note that network bandwidth is tied to the size of your EC2 instance. If you are performing a very network-intensive task, you might need to increase your instance size to meet those needs.
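The naming convention described above is regular enough to parse mechanically. A minimal sketch (the helper names and the regex are illustrative, not any official AWS API):

```python
import re

def parse_instance_type(name):
    """Split an EC2 instance type such as 'm5.2xlarge' into family,
    generation, variant suffix (e.g. the 'n' in c5n), and size."""
    m = re.fullmatch(r"([a-z]+)(\d+)([a-z]*)\.([0-9a-z]+)", name)
    if not m:
        raise ValueError(f"unrecognized instance type: {name}")
    family, generation, variant, size = m.groups()
    return {"family": family, "generation": int(generation),
            "variant": variant, "size": size}

def xlarge_multiple(size):
    """Size coefficient relative to xlarge: '12xlarge' -> 12, 'xlarge' -> 1.
    Returns None for sub-xlarge sizes (nano through large)."""
    m = re.fullmatch(r"(\d*)xlarge", size)
    return int(m.group(1) or 1) if m else None
```

So `parse_instance_type("m5.2xlarge")` yields family `m`, generation `5`, size `2xlarge`, and `xlarge_multiple("12xlarge")` returns 12, mirroring the coefficient comparison above.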
Choosing the correct instance type is very important for reducing unneeded cost and increasing utilization of an instance.
Each instance family has its own positives that need to be addressed when deciding how you are going to architect your solution.
Let’s take a look at all of the instance families and see what their recommended workloads are.
T2 instances are burstable performance instances that provide a baseline level of CPU performance with the ability to burst above the baseline.
Use cases for this type of instance include websites and web applications, development environments, build servers, code repositories, microservices, test and staging environments, and line-of-business applications.
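The CPU-credit figures in the t3 table above translate directly into a sustainable baseline. A quick sketch of the arithmetic, assuming the standard definition that one CPU credit equals one vCPU running at 100% for one minute:

```python
def baseline_utilization_pct(credits_per_hour, vcpus):
    """Baseline CPU utilization (%) a burstable (T-family) instance can
    sustain indefinitely. Earning R credits/hour across V vCPUs funds
    R credit-minutes out of the 60 * V vCPU-minutes in each hour."""
    return credits_per_hour / (60 * vcpus) * 100

# From the t3 table: t3.micro earns 12 credits/hour on 2 vCPUs,
# so its sustainable baseline is 10% per vCPU-hour budget.
```

Running above the baseline spends banked credits; running below it accrues them, which is why these instances suit bursty web workloads.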
C5 instances are optimized for compute-intensive workloads and deliver cost-effective high performance with a low price-to-compute ratio.
Use cases include high-performance web servers, scientific modelling, batch processing, distributed analytics, high-performance computing (HPC), machine/deep learning inference, ad serving, highly scalable multiplayer gaming, and video encoding.
Consider using the Elastic Fabric Adapter for your HPC workloads: https://aws.amazon.com/about-aws/whats-new/2018/11/introducing-elastic-fabric-adapter/
Elastic Fabric Adapter, or EFA, is a network adapter for Amazon EC2 instances that delivers the performance of on-premises HPC clusters with the elasticity and scalability of AWS. You can run HPC applications that require high levels of inter-instance communications, such as computational fluid dynamics, weather modeling, and reservoir simulation. In addition, HPC applications use popular HPC technologies, such as Message Passing Interface (MPI), which can scale to thousands of CPU cores. EFA supports industry-standard libfabric APIs, so applications that use a supported MPI library can be migrated to AWS with little or no modification.
(Note: EFA is available as an optional Amazon EC2 networking feature that you can enable on C5n.9xl, C5n.18xl, and P3dn.24xl instances. Additional instance types will be supported in the coming months.)
R5 instances are optimized for memory-intensive applications.
Use cases include high-performance databases, data mining and analysis, in-memory databases, distributed web scale in-memory caches, applications performing real-time processing of unstructured big data, Hadoop/Spark clusters, and other enterprise applications.
P3 instances are intended for general-purpose GPU compute applications.
Use cases include machine learning, deep learning, high-performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, and drug discovery.
H1 instances feature up to 16 TB of HDD-based local storage, deliver high disk throughput, and a balance of compute and memory.
Use cases include Amazon EMR-based workloads, distributed file systems such as HDFS and MapR-FS, network file systems, log or data processing applications such as Apache Kafka, and big data workload clusters.
All current EC2 instance types that use Intel processors include Intel's Advanced Encryption Standard New Instructions (AES-NI), which reduces the performance hit your processor takes when you enable encryption.
All instance types also include some form of Intel Advanced Vector Extension (AVX), which is Intel's instructions custom-built for floating-point intensive workloads. AVX2 provides twice the floating point performance of AVX, and AVX-512, available only on the new Intel Xeon Scalable Processor family of CPUs, doubles the performance of AVX2.
Some instance types also include Intel Turbo Boost, Intel TSX, and P State and C State control.
Intel Turbo Boost intelligently boosts the clock speed of cores based on need.
Intel Transactional Synchronization Extensions (TSX): Provides workload optimized performance specific to the applications, multi-threaded when needed and single threaded when needed.
P state and C state control allows you to tune the performance and sleep state of each core to your own needs.
To find out which instance types currently support these options, see the AWS instance types page: https://aws.amazon.com/ec2/instance-types/
The latest generation of Intel Xeon processors is the Intel Xeon Scalable Processor family. This group provides a substantial performance improvement over the prior generation, with up to 28 cores delivering enhanced per-core performance, and significant increases in memory bandwidth (6 memory channels) and I/O bandwidth and throughput (48 PCIe lanes). Your most data-hungry, latency-sensitive applications, such as in-memory databases and high-performance computing, will see notable improvements enabled by denser compute and faster access to large data volumes.
This family also includes the latest version of Intel's AVX instructions, which double the floating point performance of processors using AVX2.
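On Linux, you can confirm which of these instruction-set extensions an instance actually exposes by reading /proc/cpuinfo. A small sketch (the sample string is a trimmed, hypothetical excerpt of that file):

```python
def cpu_flags(cpuinfo_text):
    """Parse the feature-flag list from Linux /proc/cpuinfo text.
    On a real instance, pass open('/proc/cpuinfo').read()."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Hypothetical excerpt: a Xeon Scalable (C5-era) CPU advertising the
# aes, avx, avx2, and avx512f flags discussed above.
sample = "model name\t: Intel(R) Xeon(R) Platinum\nflags\t\t: fpu aes avx avx2 avx512f"
assert "avx512f" in cpu_flags(sample)
```

Checking for `avx512f` before selecting a vectorized code path is a common pattern when the same AMI runs on mixed instance generations.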
As part of the Free Tier from AWS, new AWS customers can get started with Amazon EC2 t2.micro instances, S3 bucket capacity, and many other AWS service offerings for free for up to one year after sign-up. What’s available in the free tier varies from service to service. Please visit https://aws.amazon.com/free/ for details.
Amazon EC2 usage of Amazon Linux and Ubuntu instances launched in On-Demand, Reserved, or Spot form is billed in one-second increments, with a minimum of 60 seconds. All other operating systems are billed in one-hour increments, charged at the start of each hour whether you use the full hour or not. Note that Reserved Instances are launched as, and are indistinguishable from, On-Demand Instances until the bill is processed.
For more information about how AWS pricing works, see https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf
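The billing increments described above can be sketched as a small calculator. The rules follow the preceding paragraph; the rates used in the comments are hypothetical, not price-list values:

```python
import math

def on_demand_charge(run_seconds, hourly_rate, per_second_billing=True):
    """Sketch of EC2 on-demand billing rules (rates are hypothetical).
    Amazon Linux / Ubuntu: per-second billing with a 60-second minimum.
    Other operating systems: each started hour is billed in full."""
    if per_second_billing:
        return hourly_rate * max(run_seconds, 60) / 3600
    return hourly_rate * math.ceil(run_seconds / 3600)

# 45 s of Amazon Linux at a hypothetical $0.096/hr bills as 60 s;
# 61 min of a per-hour OS at the same rate bills as 2 full hours.
```

The gap between the two branches is why short-lived batch jobs are noticeably cheaper on the per-second-billed operating systems.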
Reserved Instances (RIs) are a great tool to help reduce cost in your architecture. If you know what the baseline level of usage is going to be for your EC2 instances, an RI can provide significant discounts.
You can set up an RI in multiple ways:
Standard RIs: Provide the most significant discount (up to 75% off the On-Demand price) and are best suited for ready state usage
Convertible RIs: Provide a discount (up to 54% off the On-Demand price) and allow you to change the attributes of the RI, as long as the change results in the creation of RIs of equal or greater value
Scheduled RIs: These RIs launch in the time window of your choice, allowing you to match your capacity needs.
Term: AWS offers Standard RIs for 1-year or 3-year terms. Reserved Instance Marketplace sellers also offer RIs with shorter terms. AWS offers Convertible RIs for 1-year or 3-year terms.
Payment option: You can choose between three payment options: All Upfront, Partial Upfront, and No Upfront. If you choose the Partial or No Upfront payment option, the remaining balance will be due in monthly increments over the term.
For more information, see https://docs.aws.amazon.com/aws-technical-content/latest/cost-optimization-reservation-models/introduction.html
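To see how the payment options above affect what you actually pay per hour, it helps to spread the upfront and monthly payments over the full term. The sketch below uses purely hypothetical prices (real rates vary by instance type, Region, and term; check the EC2 pricing pages):

```python
HOURS_PER_YEAR = 8760  # 365 days * 24 hours

def effective_hourly(upfront: float, monthly: float, term_years: int) -> float:
    """Effective hourly cost of an RI over its full term:
    total payments divided by total hours in the term."""
    total_cost = upfront + monthly * 12 * term_years
    return total_cost / (HOURS_PER_YEAR * term_years)

# Hypothetical prices for illustration only.
on_demand = 0.10                                   # assumed On-Demand $/hour
all_up = effective_hourly(1200.0, 0.0, 3)          # All Upfront, 3-year term
partial = effective_hourly(600.0, 20.0, 3)         # Partial Upfront, 3-year term

print(f"On-Demand:        ${on_demand:.4f}/hr")
print(f"All Upfront RI:   ${all_up:.4f}/hr ({1 - all_up / on_demand:.0%} savings)")
print(f"Partial Upfront:  ${partial:.4f}/hr ({1 - partial / on_demand:.0%} savings)")
```

With these assumed numbers, the All Upfront option works out cheaper per hour than Partial Upfront, which matches the general pattern that paying more upfront yields a larger discount.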
With Amazon EC2 Spot Instances, you no longer have to bid under the current pricing model; you simply pay the Spot price that is in effect for the current hour for the instances that you launch. You can request Spot capacity just like you would request On-Demand capacity, without spending time analyzing market prices or setting a maximum bid price.
In addition to these dedicated options, you might want to consider AWS License Manager for your license requirements:
AWS License Manager makes it easier to manage licenses from a variety of software vendors (such as Microsoft, SAP, and Oracle) across AWS and on-premises servers. It lets administrators create customized licensing rules that are applied when an EC2 instance is launched, and these rules can limit licensing violations, such as using more licenses than an agreement allows or reassigning licenses to different servers on a short-term basis.
Admins gain control and visibility of all their licenses with the AWS License Manager dashboard.
https://aws.amazon.com/about-aws/whats-new/2018/11/announcing-aws-license-manager/
Dedicated Instances are Amazon EC2 instances that run in a VPC on hardware that's dedicated to a single customer. Your Dedicated Instances are physically isolated at the host hardware level from instances that belong to other AWS accounts. Dedicated Instance pricing has two components:
An hourly per instance usage fee
A dedicated per-region fee (note that you pay this once per hour, regardless of how many Dedicated Instances you're running)
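The two pricing components above can be combined into a simple total-cost calculation. Because the per-Region fee is paid once per hour no matter how many Dedicated Instances are running, it is charged on hours, not on instance-hours. All rates below are hypothetical placeholders:

```python
def dedicated_cost(n_instances: int, per_instance_hourly: float,
                   region_fee_hourly: float, hours: float) -> float:
    """Total Dedicated Instance cost for a Region over a period.

    usage  -- the hourly per-instance fee, paid per instance-hour
    region -- the dedicated per-Region fee, paid once per hour
              regardless of how many Dedicated Instances run
    """
    usage = n_instances * per_instance_hourly * hours
    region = region_fee_hourly * hours
    return usage + region

# Hypothetical rates: 10 instances at $0.12/hr plus a $2.00/hr
# Region fee, over 24 hours.
print(f"${dedicated_cost(10, 0.12, 2.00, 24):.2f}")  # $76.80
```

Note how the Region fee (here $48.00 for the day) is the same whether you run one Dedicated Instance or a hundred, so its per-instance impact shrinks as your dedicated fleet grows.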
A Dedicated Host is a physical EC2 server with instance capacity fully dedicated for your use. Dedicated Hosts can help you reduce costs by allowing you to use your existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server (subject to your license terms), and can also help you meet compliance requirements. Dedicated Hosts can be purchased On-Demand (hourly). Reservations can provide up to a 70% discount compared to the On-Demand price.
Dedicated Host benefits:
Save money on licensing costs: Dedicated Hosts can enable you to save money by using your own per-socket or per-core software licenses in Amazon EC2.
Help meet compliance and regulatory requirements: Dedicated Hosts allow you to place your instances in a VPC on a specific, physical server. This enables you to deploy instances using configurations that help address corporate compliance and regulatory requirements.
For more information about Dedicated Hosts, see https://aws.amazon.com/ec2/dedicated-hosts/
After you launch an instance, there are some limitations to changing its tenancy.
You cannot change the tenancy of an instance from default to dedicated or host after you've launched it.
You cannot change the tenancy of an instance from dedicated or host to default after you've launched it.
You can change the tenancy of an instance from dedicated to host, or from host to dedicated, after you've launched it.
For more information, see Changing the Tenancy of an Instance.
AWS allows customers to assign metadata to their AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources.
Although there are no inherent types of tags, they enable customers to categorize resources by purpose, owner, environment, or other criteria. This webpage describes commonly used tagging categories and strategies to help AWS customers implement a consistent and effective tagging strategy. The following sections assume basic knowledge of AWS resources, tagging, detailed billing, and IAM.
For more information about AWS tagging strategies, see https://aws.amazon.com/answers/account-management/aws-tagging-strategies/.
Always use a standardized, case-sensitive format for tags, and implement it consistently across all resource types.
Consider tag dimensions that support the ability to manage resource access control, cost tracking, automation, and organization.
Implement automated tools to help manage resource tags. The Resource Groups Tagging API enables programmatic control of tags, making it easier to automatically manage, search, and filter tags and resources. It also simplifies backups of tag data across all supported services with a single API call per AWS Region.
Err on the side of using too many tags rather than too few tags.
Remember that it is easy to modify tags to accommodate changing business requirements, but make sure to consider the ramifications of future changes, especially in relation to tag-based access control, automation, or upstream billing reports.