This slide deck covers the basics of cloud computing with AWS, a popular IaaS provider. Each AWS component is explained with a real-world example, such as how Netflix uses AWS.
This AWS Tutorial (Amazon AWS Blog Series: https://goo.gl/qQwZLz) will give you an introduction to AWS and its domains. This AWS tutorial is ideal for those who want to become an AWS Certified Solutions Architect.
Below are the topics covered in this tutorial:
1. What is Cloud?
2. What is AWS?
3. Different Domains in AWS
4. AWS Pricing
5. Migrate Your Application to AWS Infrastructure
6. Use case
#awstraining #cloudcomputing #awstutorial
1. AWS (Amazon Web Services) is a cloud computing platform that provides scalable computing, storage, database, and application services.
2. AWS offers advantages like eliminating the need to purchase and maintain physical hardware, ability to scale instantly, and pay only for resources used.
3. Key AWS services include compute, storage, databases, networking, and security services like EC2, S3, RDS, VPC, and IAM.
4. AWS has a global infrastructure of data centers across 26 regions for fault tolerance and low latency access worldwide.
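The pay-only-for-what-you-use model mentioned in point 2 can be sketched as simple arithmetic: compute hours times an hourly rate, plus storage consumed times a per-GB-month rate. The rates below are made-up placeholders for illustration, not real AWS prices.

```python
# Illustrative sketch of AWS's pay-as-you-go billing model.
# All rates are placeholders, NOT real AWS prices.

ILLUSTRATIVE_RATES = {
    "t3.micro": 0.0104,   # USD per instance-hour (placeholder)
    "m5.large": 0.096,    # USD per instance-hour (placeholder)
}
S3_RATE_PER_GB_MONTH = 0.023  # USD per GB-month (placeholder)

def estimate_monthly_cost(instance_type: str, hours_run: float, s3_gb: float) -> float:
    """Return an estimated monthly bill: compute hours plus storage."""
    compute = ILLUSTRATIVE_RATES[instance_type] * hours_run
    storage = S3_RATE_PER_GB_MONTH * s3_gb
    return round(compute + storage, 2)

# A t3.micro running 24x7 for a 30-day month (720 hours) with 100 GB in S3:
print(estimate_monthly_cost("t3.micro", 720, 100))  # → 9.79
```

The point of the sketch is the shape of the bill, not the numbers: there is no upfront hardware cost, and the total scales linearly with actual usage.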
Intended for customers who have (or will have) thousands of instances on AWS, this session is about reducing the complexity of managing costs for these large fleets so they run efficiently. Attendees will learn about common roadblocks that prevent large customers from cost optimizing, tools they can use to efficiently remove those roadblocks, and techniques to monitor their rate of cost optimization. The session will include a case study that will talk in detail about the millions of dollars saved using these techniques. Customers will learn about a range of templates they can use to quickly implement these techniques, and also partners who can help them implement these templates.
This document provides an introduction to Amazon Web Services (AWS). It discusses the history and evolution of AWS over 10+ years, from enabling sellers on Amazon to building a scalable deployment environment. It outlines AWS's mission to provide services that allow businesses and developers to build scalable applications using web services. The document then provides an overview of AWS's global infrastructure and the broad range of computing, storage, database, analytics and other services it offers. It also highlights examples of how various organizations are using AWS.
Amazon Web Services, better known simply as AWS, has been continually expanding its services to support virtually any cloud workload, and it now offers more than 40 services.
For more details - http://www.i2k2.com/services/amazon-web-services/aws/
Amazon AWS
What is EC2?
EC2 zones
How to create an instance on EC2?
SSH access to EC2
Public vs. internal vs. Elastic IP
EC2 Security Group
EC2 demo app (Ruby)
What is an S3 bucket?
S3 demo app (Ruby)
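One item in the outline above, the EC2 security group, is essentially a stateful allow-list over inbound traffic: anything not explicitly permitted is denied. A minimal sketch of that evaluation logic, with illustrative rule fields (the dictionary keys are my own naming, not an AWS API):

```python
# Minimal sketch of how an EC2 security group evaluates inbound traffic:
# a set of allow rules; anything not explicitly allowed is denied.
import ipaddress

def allows(rules, protocol, port, source_ip):
    """Return True if any rule permits this (protocol, port, source)."""
    for rule in rules:
        if rule["protocol"] != protocol:
            continue
        if not (rule["from_port"] <= port <= rule["to_port"]):
            continue
        if ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["cidr"]):
            return True
    return False  # security groups default-deny inbound traffic

web_sg = [
    {"protocol": "tcp", "from_port": 443, "to_port": 443, "cidr": "0.0.0.0/0"},
    {"protocol": "tcp", "from_port": 22,  "to_port": 22,  "cidr": "10.0.0.0/16"},
]

print(allows(web_sg, "tcp", 443, "203.0.113.5"))  # HTTPS from anywhere: True
print(allows(web_sg, "tcp", 22, "203.0.113.5"))   # SSH only from the VPC: False
```

Real security groups are also stateful (return traffic is allowed automatically), which this sketch deliberately omits.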
Amazon Web Services (AWS) is a comprehensive cloud computing platform that provides infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). AWS offers global compute, storage, database, analytics, application, and deployment services to help organizations increase agility and lower costs. Key advantages of AWS include cost efficiency, reliability with 24/7 access and redundancy, effectively unlimited storage, easy backup and recovery, and easy access to information from anywhere via the internet. AWS training in Bangalore teaches skills like using EC2, S3, load balancers, and VPC to deploy and manage applications in the cloud. With Bangalore's large IT industry, demand for AWS skills continues to grow.
The document discusses how to protect web applications using AWS Shield and AWS WAF. It provides an overview of common attack vectors and how AWS services like Shield, WAF, CloudFront, and ELB can help mitigate risks. Specific techniques are described for using WAF to address the OWASP Top 10 vulnerabilities through rules, and how to implement strategies like blacklisting, whitelisting, and enforcing request hygiene. Monitoring attacks and using managed rule sets from security vendors are also covered.
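The blacklisting, whitelisting, and request-hygiene strategies described above can be sketched as a simple rule pipeline. This is an illustration of the idea only: real AWS WAF rules are configured as web ACLs in the service, not written as application code, and the injection signature below is deliberately naive.

```python
# Hedged sketch of WAF-style request filtering: an IP blacklist plus
# crude "request hygiene" and OWASP-style injection checks.
import re

BLACKLIST = {"198.51.100.7"}
# Naive signature for an SQL-injection probe; illustration only.
SQLI_PATTERN = re.compile(r"('|--|\bUNION\b|\bSELECT\b)", re.IGNORECASE)

def evaluate(request):
    if request["source_ip"] in BLACKLIST:
        return "BLOCK"                     # blacklisting
    if len(request["uri"]) > 1024:
        return "BLOCK"                     # request-hygiene size limit
    if SQLI_PATTERN.search(request["query"]):
        return "BLOCK"                     # OWASP-style injection rule
    return "ALLOW"

print(evaluate({"source_ip": "203.0.113.9", "uri": "/home", "query": "page=1"}))
print(evaluate({"source_ip": "203.0.113.9", "uri": "/s", "query": "id=1 UNION SELECT *"}))
```

The first request passes every check and is allowed; the second trips the injection rule and is blocked. Managed rule sets from security vendors, as the deck notes, replace hand-rolled patterns like this one.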
[AWS Dev Day] App Modernization | Serverless Containers with AWS Fargate - Samsung Electronics Developer Portal Case Study - 정영준 (Amazon Web Services Korea)
The Samsung Electronics developer portal is a platform that lets developers build applications for Samsung's application ecosystem, such as SmartThings Cloud and Bixby, using developer tools. How much easier would deployment and management become if this platform were built as containers and teams could focus solely on the application logic packaged inside them? Through Samsung's real-world case, this session explores the advantages of a container environment using Fargate.
Cloud computing allows companies to outsource their infrastructure needs to large cloud providers like Amazon Web Services (AWS). This reduces costs and provides scalability. AWS offers services like S3 for storage, EC2 for virtual servers, SQS for messaging, and SimpleDB for databases. Companies pay for only the resources they use, allowing them to scale up or down as needed. However, companies must ensure their applications and data are secure when using cloud services.
Running Microsoft SharePoint on AWS - Smartronix and AWS Webinar (Amazon Web Services)
Miles Ward, Solution Architect, AWS
Robert Groat, Chief Technology Officer, Smartronix
They discuss how you can run Microsoft enterprise applications like SharePoint on the AWS cloud, covering architecture and the Recovery.gov deployment.
This document provides an overview of architecting applications for the AWS cloud. It discusses key AWS cloud computing attributes like scalability, on-demand provisioning, and efficiency of experts. It also outlines best practices like designing for failure, loose coupling, dynamism, and security. Specific AWS services are mapped to common application needs like compute, storage, content delivery, databases, and more. Overall the document aims to educate readers on how to leverage AWS architectural principles and services.
Amazon EC2 forms the backbone compute platform for hundreds of thousands of AWS customers, but how do you go beyond starting an instance and manually configuring it? This presentation will take you on a journey starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
Access a recorded version of the webinar based on this presentation on YouTube here: http://youtu.be/jLVPqoV4YjU
You can find the rest of the Masterclass webinar series for 2015 here: http://aws.amazon.com/campaigns/emea/masterclass/
If you are interested in learning about how you apply variety of different AWS services to specific challenges, please check out the Journey Through the Cloud series, which you can find here: http://aws.amazon.com/campaigns/emea/journey/
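The dynamic-policy idea behind Auto Scaling in the webinar above can be sketched as a target-tracking calculation: choose an instance count that brings the average utilization back toward a target. The formula below mirrors the proportional rule desired = ceil(current * metric / target); real Auto Scaling adds cooldowns, warm-up, and alarm plumbing that this sketch omits.

```python
# Rough sketch of a target-tracking scaling decision: size the fleet so
# average CPU moves back toward the target, clamped to min/max bounds.
import math

def desired_capacity(current_instances, avg_cpu_pct, target_cpu_pct,
                     min_size=1, max_size=20):
    desired = math.ceil(current_instances * avg_cpu_pct / target_cpu_pct)
    return max(min_size, min(max_size, desired))

print(desired_capacity(4, 90, 50))  # overloaded fleet scales out: 8
print(desired_capacity(8, 20, 50))  # idle fleet scales in: 4
```

Note how the same rule handles both directions: capacity grows when utilization is above target and shrinks when it is below, which is exactly how costs are matched to demand.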
Amazon Web Services (AWS) is a subsidiary of Amazon.com that provides on-demand cloud computing platforms operated from server farms located across 16 geographical regions worldwide. AWS allows organizations to access shared computing and storage resources over the internet rather than building and maintaining their own infrastructure. Some benefits of AWS include lower costs, easy management, portability, and no direct coupling between hardware and software. Large companies like Netflix, Adobe, and General Electric utilize AWS for its scalable and reliable cloud services.
What is Cloud Computing | Cloud Computing Tutorial | AWS Tutorial | AWS Train... (Edureka!)
This "What is Cloud Computing" tutorial will give you an introduction to the cloud computing world. Further, we will discuss a use case and understand the cloud computing benefits. Towards the end, we shall see how AWS is a leader in the Cloud industry by analyzing the market demand for all the cloud players. This tutorial is ideal for those who want to become an AWS Certified Solutions Architect.
Amazon AWS Tutorial Blog Series: https://goo.gl/qQwZLz
Below are the topics covered in this tutorial:
1. Why Cloud Computing?
2. What is Cloud Computing?
3. Cloud Service Models
4. Cloud Advantages
5. Cloud Use Case
6. Various Cloud Providers
7. Market Demand for AWS
#cloudcomputing #awstraining #awstutorial #awscertification
Cloud computing involves delivering computing services over the Internet. Instead of running programs locally, users access software and storage that reside on remote servers in the "cloud." The concept originated in the 1950s, but Amazon launched the first major public cloud in 2006. Cloud computing has three main components: clients that access the cloud, distributed servers that host applications and data, and data centers that house those servers. Clients come in several types, and clouds vary by deployment model and service model; together these enable scalability, reliability, and efficiency for applications accessed over the Internet, like email, social media, and search engines.
In this session we will talk through deployment scenarios and design considerations, and introduce AWS Directory Service. AWS Directory Service is a managed service that allows you to connect your AWS resources with an existing on-premises Microsoft Active Directory or to set up a new, stand-alone directory in the AWS cloud.
Cloud computing allows users to access computer resources like data storage and computing power over the internet rather than maintaining those resources locally. There are different service models of cloud computing including Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Cloud computing also has various deployment models such as public clouds, private clouds, hybrid clouds, and community clouds that offer cloud services to different user groups. Migrating to the cloud can provide businesses with mobility, flexibility, and reduced costs compared to maintaining local computing resources.
This document provides an introduction to Amazon Web Services (AWS), including:
- AWS is a cloud provider that offers on-demand servers and services that can easily scale. It powers websites like Amazon and Netflix.
- AWS has a global infrastructure spanning 69 availability zones across 22 regions worldwide, with plans to expand to 3 more regions.
- AWS regions contain multiple, isolated availability zones to protect against disasters. Services are scoped to regions except for a few like S3.
- AWS offers a variety of computing, networking, database, storage and security services according to different responsibility models.
Training for AWS Solutions Architect at http://zekelabs.com/courses/amazon-web-services-training-bangalore/. This slide deck describes Amazon CloudWatch key concepts: workflow, dashboards, metrics, the CloudWatch agent, alarms, events, and logs.
___________________________________________________
zekeLabs is a technology training platform. We provide instructor-led corporate and classroom training on industry-relevant cutting-edge technologies like Big Data, Machine Learning, Natural Language Processing, Artificial Intelligence, Data Science, Amazon Web Services, DevOps, and Cloud Computing, and on frameworks like Django, Spring, Ruby on Rails, Angular 2, and many more.
Reach out to us at www.zekelabs.com, call us at +91 8095465880, or drop a mail at info@zekelabs.com.
AWS Elastic Beanstalk is a service that allows developers to quickly deploy and manage applications in the AWS cloud without worrying about the underlying infrastructure. It provides an easy way to launch applications developed in Java or other languages and have them automatically scaled across Amazon EC2 instances. Key features include automated provisioning and deployment, easy management of settings, built-in monitoring, and troubleshooting tools. Developers retain full control over their AWS resources while taking advantage of Elastic Beanstalk's management capabilities.
- Azure Service Bus is a multi-tenant cloud messaging service that provides a fully managed enterprise integration message broker.
- It includes queues for point-to-point messaging and topics with subscriptions for publish-subscribe messaging patterns.
- Messages in Service Bus can contain user-defined properties and a body, and subscriptions can apply filters on properties and actions like routing.
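The topic/subscription pattern with property filters described in the bullets above can be sketched with a small in-memory model: each subscription carries a filter over message properties and only receives matching messages. This mimics the semantics only; the real service is consumed through its SDK, and the class and field names here are my own.

```python
# In-memory sketch of publish-subscribe with per-subscription filters,
# in the spirit of Service Bus topics and subscriptions.

class Topic:
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name, predicate):
        """Register a subscription whose filter is a predicate over properties."""
        self.subscriptions[name] = {"filter": predicate, "queue": []}

    def publish(self, properties, body):
        """Deliver a copy of the message to every subscription whose filter matches."""
        message = {"properties": properties, "body": body}
        for sub in self.subscriptions.values():
            if sub["filter"](properties):
                sub["queue"].append(message)

orders = Topic()
orders.subscribe("all-orders", lambda p: True)
orders.subscribe("eu-orders", lambda p: p.get("region") == "eu")

orders.publish({"region": "eu"}, "order-1")
orders.publish({"region": "us"}, "order-2")

print(len(orders.subscriptions["all-orders"]["queue"]))  # 2
print(len(orders.subscriptions["eu-orders"]["queue"]))   # 1
```

The unfiltered subscription sees every message while the filtered one sees only the EU order, which is the routing behavior the summary describes.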
Training for AWS Solutions Architect at http://zekelabs.com/courses/amazon-web-services-training-bangalore/. This slide deck describes the features of AWS Storage Gateway and its types: file gateway, volume gateway, and tape gateway.
Amazon RDS allows you to launch an optimally configured, secure and highly available database with just a few clicks. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you to focus on your applications and business.
The document provides an overview of Apache Spark and Hadoop ecosystem tools on Amazon EMR including Spark, Hive on Tez, and Presto. It discusses building data lakes with Amazon EMR and S3, running jobs and security options, and customer use cases. The demo shows Zeppelin and Hue interfaces. Examples are given of Netflix using Presto on EMR with a 25PB dataset and FINRA saving 60% costs by moving to HBase on EMR.
AWS Summit London 2014 | Scaling on AWS for the First 10 Million Users (200) - Amazon Web Services
This mid-level technical session will provide an overview of the techniques that you can use to build high-scalability applications on AWS. Take a journey from 1 user to 10 million users and understand how your application's architecture can evolve and which AWS services can help as you increase the number of users that you serve.
Cloud computing gives you a number of advantages, such as being able to scale your application on demand. As a new business looking to use the cloud, you inevitably ask yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We will show you how to best combine different AWS services, make smarter decisions for architecting your application, and best practices for scaling your infrastructure in the cloud.
AWS Summit 2014 Melbourne - Breakout 5
Cloud computing gives you a number of advantages, such as being able to scale your application on demand. As a new business looking to use the cloud, you inevitably ask yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We will show you how to best combine different AWS services, make smarter decisions for architecting your application, and best practices for scaling your infrastructure in the cloud.
Presenter: Craig Dickson, Solutions Architect, Amazon Web Services
Cloud Computing with Amazon Web Services.
AWS Cloud Solutions - Websites, Archiving, Data Lakes and Analytics, Serverless Computing, Internet of Things and more.
Containers in AWS - Amazon Elastic Container Service, Fargate, and EKS
Big Data and the Data lake implementation in AWS
Machine Learning with Amazon SageMaker - Build, train, and deploy machine learning models at scale.
AWS Identity and Access Management (IAM) - Securely manage access to AWS services and resources.
AWS Pricing - How does AWS pricing work?
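The IAM item in the list above centers on policy documents: JSON objects of Allow/Deny statements over actions and resources. The sketch below builds one in Python; the bucket name is a placeholder, and real policies should be authored and validated in IAM itself.

```python
# Hedged example of the IAM policy-document shape: a version string plus
# a list of statements, each granting (or denying) actions on resources.
# The bucket name is a placeholder for illustration.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",      # placeholder bucket
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

print(json.dumps(policy, indent=2))
```

Attached to a user or role, a statement like this grants read-only access to one bucket and nothing else, which is the least-privilege style IAM is designed around.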
AWS Summit Auckland 2014 | Scaling on AWS for the First 10 Million Users - Amazon Web Services
You have attended AWS training and gathered all the relevant information about AWS services, but how do you now show the value of the AWS Cloud to your business? This session will run through how to build a business case for the cloud, including TCO and cost comparisons.
AWS Summit Sydney 2014 | Scaling on AWS for the First 10 Million Users - Amazon Web Services
Cloud computing gives you a number of advantages, such as being able to scale your application on demand. As a new business looking to use the cloud, you inevitably ask yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We will show you how to best combine different AWS services, make smarter decisions for architecting your application, and best practices for scaling your infrastructure in the cloud.
This document provides an overview of best practices for scaling infrastructure on AWS from 1 user to 10 million users. It discusses starting with a single EC2 instance, then expanding horizontally by adding more instances and vertically by increasing instance sizes. As users grow from 1,000 to 500,000, the document recommends separating databases from web servers, using read replicas, caching with ElastiCache, and auto scaling. From 500,000 to 1 million users, it suggests moving to a service-oriented architecture and leveraging other AWS services. Scaling from 5 to 10 million users may require database sharding or moving some functions to NoSQL databases.
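The ElastiCache step in the scaling path above is the cache-aside pattern: check the cache first, fall back to the database on a miss, then populate the cache so later readers are served cheaply. A minimal sketch with a dict standing in for both tiers (names illustrative):

```python
# Sketch of the cache-aside pattern: cache hit avoids the database;
# a miss reads from the database and fills the cache for next time.

cache = {}
database = {"user:1": {"name": "Ada"}}
db_reads = 0

def get_user(key):
    global db_reads
    if key in cache:                # cache hit: no database load
        return cache[key]
    db_reads += 1                   # cache miss: one database (replica) query
    value = database[key]
    cache[key] = value              # populate for subsequent readers
    return value

get_user("user:1")
get_user("user:1")
print(db_reads)  # 1: the second call was served from the cache
```

This is why caching and read replicas pair well at this stage: the cache absorbs repeated reads, and the replicas absorb the misses, keeping the primary database free for writes.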
This document discusses best practices for scaling infrastructure on AWS to support over 10 million users. It begins by recommending using multiple AWS regions and availability zones for redundancy. It then walks through scaling a simple single-instance application to be horizontally and vertically scaled across multiple instances, database read replicas, caching, and content delivery. Key services discussed include EC2, RDS, DynamoDB, ElastiCache, S3, CloudFront, Route 53, and Auto Scaling. Automating management using tools and separating concerns like static assets are also recommended.
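The sharding step mentioned for the largest fleets routes each key to one of N database shards, typically by hashing. A deterministic modulo-hash sketch (real systems often prefer consistent hashing so that resharding moves fewer keys):

```python
# Sketch of hash-based database sharding: map each user id to one of
# NUM_SHARDS shards deterministically.
import hashlib

NUM_SHARDS = 4

def shard_for(user_id: str) -> int:
    """Map a user id to a shard index in [0, NUM_SHARDS)."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The same user always lands on the same shard:
print(shard_for("user-42") == shard_for("user-42"))  # True
print(0 <= shard_for("user-42") < NUM_SHARDS)        # True
```

Every read and write for a given user then goes to that user's shard, so write throughput scales roughly with the number of shards, at the cost of cross-shard queries becoming harder.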
What Is Cloud Computing? | Cloud Computing For Beginners | Cloud Computing Tr... | Simplilearn
This Cloud Computing presentation will help you understand what Cloud Computing is, the benefits of Cloud Computing, the types of Cloud Computing, and who uses Cloud Computing. In simple words, Cloud Computing is the use of a network of remote servers hosted on the internet to store, manage, and process data rather than a local server. With the increased importance of Cloud Computing, qualified Cloud solutions architects and engineers are in great demand. Organizations have moved to cloud platforms for better scalability, mobility, and security. Cloud solutions architects are among the highest paid professionals in the IT industry. With the cloud market set to grow more than ever before, the need for IT staff with the appropriate technical and business skills has never been greater. This video will introduce you to Cloud Computing by explaining what it is and how you benefit from this technology.
Below topics are explained in this Cloud Computing presentation:
1. Before Cloud Computing
2. What is Cloud Computing?
3. Benefits of Cloud Computing
4. Types of Cloud Computing
5. Who uses Cloud Computing?
Simplilearn’s Cloud Architect Master’s Program will build your Amazon Web Services (AWS) and Microsoft Azure cloud expertise from the ground up. You’ll learn to master the architectural principles and services of two of the top cloud platforms, design and deploy highly scalable, fault-tolerant applications and develop skills to transform yourself into an AWS and Azure cloud architect.
Why become a Cloud Architect?
With the increasing focus on cloud computing and infrastructure over the last several years, cloud architects are in great demand worldwide. Many organizations have moved to cloud platforms for better scalability, mobility, and security, and cloud solutions architects are among the highest paid professionals in the IT industry.
According to a study by Goldman Sachs, cloud computing is one of the top three initiatives planned by IT executives as they make cloud infrastructure an integral part of their organizations. According to Forbes, enterprise IT architects with cloud computing expertise are earning a median salary of $137,957.
Learn more at: https://www.simplilearn.com
The document discusses architecting highly available applications on AWS. It begins with an overview of AWS services and best practices for scalability. It then walks through scaling an application from 1 user to over 1 million users, starting with a single EC2 instance and gradually introducing services like Auto Scaling, load balancing, database read replicas, caching, and separating components. The document emphasizes loose coupling of services, automation, and monitoring to allow scalability.
This document provides guidance on scaling infrastructure on AWS for handling large numbers of users, from 1 user to over 10 million users. It discusses starting simply with a single EC2 instance and database, then expanding horizontally and vertically by adding more instances, separating tiers, using auto-scaling, and implementing a service-oriented architecture. As the number of users grows from thousands to millions, it recommends techniques like database read replicas, DynamoDB, ElastiCache, SQS/SNS, and database sharding or federation. Monitoring, metrics, and outsourcing management are also emphasized as critical pieces for large-scale applications.
Cloud computing gives you a number of advantages, such as being able to scale your application on demand. As a new business looking to use the cloud, you inevitably ask yourself, "Where do I start?" Join us in this session to understand best practices for scaling your resources from zero to millions of users. We will show you how to best combine different AWS services, make smarter decisions for architecting your application, and best practices for scaling your infrastructure in the cloud.
Kalibrr is a startup that provides an online talent assessment platform. They launched their minimum viable product (MVP) on AWS in March 2013, seeing user growth from 0 to 25,000 in two months. AWS allowed Kalibrr to scale easily and provided reliability with no downtime. Kalibrr uses EC2 instances to host their web servers, SES for email, S3 for content storage, ELB for load balancing, and Route 53 for DNS management. AWS's scalability, ease of use, and reliability helped Kalibrr launch their MVP successfully and support further growth.
Why Scale Matters and How the Cloud is Really Different (at Scale) | Amazon Web Services
This document discusses how various companies scale their services and applications on AWS to handle large user loads and data volumes. It provides examples of Animoto handling over 1 billion files saved per day and Airbnb having over 9 million guests. It then outlines an approach for scaling an application from 1 user to millions by starting with EC2 instances, adding services like S3, DynamoDB, ElastiCache and auto-scaling groups. The document emphasizes using AWS managed services to avoid re-inventing solutions for tasks like queuing, storage and databases.
Learn about the patterns and techniques a business should be using in building their infrastructure on Amazon Web Services to be able to handle rapid growth and success in the early days. From leveraging highly scalable AWS services, to architecting best patterns, there are a number of smart choices you can make early on to help you overcome some typical infrastructure issues.
Presenter: Chris Munns,Solutions Architect, Amazon Web Services
This document discusses the evolution of a Colombian company called Cachivaches from its founding in 1987 to the present day. It started as a family business with 3 lines of business and has grown to over 200 employees with multiple stores. In 2014-2015 it created an e-commerce website, but in 2016 the website collapsed for a day during Halloween due to high traffic. In 2017 it migrated its infrastructure to AWS EC2 and RDS. In 2018 it implemented auto scaling on AWS to handle traffic spikes without limits.
Introduction to Amazon Web Services for Developers | Ciklum Ukraine
Introduction to Amazon Web Services for developers
About presenter
Roman Gomolko with 11 years of experience in development including 4 years of day-to-day work with Amazon Web Services.
Disclaimer
Cloud hosting has been a buzzword for a while, and in this talk I would like to give an introduction to Amazon Web Services (AWS).
We will talk about basic building blocks of AWS like EC2, ELB, ASG, S3, CloudFront, RDS, IAM, VPC and other scary or funny abbreviations.
Then we will discuss how to migrate existing applications to AWS. This topic includes:
• how to design infrastructure and services to use when migrating
• how to choose proper instance types
• how to estimate infrastructure cost
• how the migration will affect the performance of the application
Then we will give an overview of services provided by AWS that you could possibly apply in your current or future applications:
• SQS
• DynamoDB
• Kinesis
• CloudSearch
• CodeDeploy
• CloudFormation
And if we survive, we will talk a little about how to design cloud applications. That is mainly about general principles.
My talk is mostly targeted at decision makers and decision pushers of small and medium-sized companies which are considering "going cloud" or are already moving in this direction. Everyone interested in gaining knowledge in these areas is welcome as well.
We will spend around 2–3 hours together, and you will be able to pitch in any questions until we stray too far from the original plan.
Escalando hasta sus primeros 10 millones de usuarios discusses best practices for architecting applications on AWS to scale from 1 user to over 1 million users. It recommends starting with basic AWS services like EC2, Route 53, and S3. It then discusses strategies for adding databases, load balancing, caching, and automation as user traffic grows. Key recommendations include leveraging managed AWS services, separating concerns into independent components, optimizing for loose coupling, and using auto scaling to dynamically scale resources with demand.
Yow Conference Dec 2013 Netflix Workshop Slides with Notes | Adrian Cockcroft
This document provides an overview and agenda for a workshop on patterns for continuous delivery, high availability, DevOps and cloud native development using NetflixOSS open source tools and frameworks. The presenter introduces himself and his background. The content covers Netflix's architecture evolution from monolithic to microservices, how Netflix scales on AWS, and principles and outcomes that enable cloud native development. The workshop then dives into specific NetflixOSS projects like Eureka, Cassandra, Zuul and Hystrix that help with service discovery, data storage, routing and availability. Tools for deployment, configuration, cost analysis and developer productivity are also discussed.
1. AMAZON WEB SERVICES
N.Jagadish Kumar
Assistant Professor
Velammal Institute of technology
Chennai, India
Referred from Udemy AWS introduction course
2.
3.
4. When you think about the cloud
• Think of it this way:
• It is just a computer somewhere else, and in some way you are utilizing its storage or processing power, obviously over an internet connection.
5. Now in Reality
• It is not just a single computer.
• It is a datacenter, where you are utilizing rows and rows of server computers.
6. So companies like
• iCloud, Dropbox, Microsoft Azure, and Amazon Web Services
• store your files, your pictures, and your documents on a server computer which sits in one of these racks, in one of these datacenters, which are now all over the world.
17. Now here you are on a home computer
• You may use iCloud or Dropbox to store videos, pictures, or some personal files that you have.
18. For an individual, the cloud enables backup and sharing
• Backup: You use iCloud or Dropbox as an additional backup for the pictures you have from vacations, or documents, music, or videos you want to save. On your home computer, on your home hard drive, there is always the risk that the hard drive will fail, so you use the cloud for backups so there is always another copy.
19. ….
• Sharing: The cloud is also great for sharing. That does not necessarily mean sharing your files or your pictures with other people, but sharing them across devices, so you can access the same files on your mobile device, or from your work computer if you are at work. It means that your files will be available anywhere you go.
20.
21. Two pieces of Cloud terminology
• High Availability
• Fault Tolerant
22. High Availability
• If you put a file up into the cloud, you can access it from any type of device or any type of computer, as long as it has an internet connection.
• That makes the file highly available.
• You can access it from anywhere.
23. Fault Tolerant
• There are several different ways you can use the term fault tolerant here.
• If you have a file only on your home computer and your home hard drive fails, then it is gone.
• The system that was in place did not account for that fault, the fault being the failure of your hard drive.
• But if the file is up in the cloud and it is backed up on multiple servers, then that file can become corrupt, or the cloud server it is currently stored on can fail, and there will always be another copy for you to access.
• So if there is a fault in the system, you will still always have the ability to retrieve that file.
• The terms high availability and fault tolerance really go hand in hand, meaning that your files are always available across multiple devices.
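The replication idea above can be sketched in a few lines of Python: a toy store that copies each file onto several simulated "servers", so that losing one server loses nothing. The class and method names here (`ReplicatedStore`, `put`, `fail`, `get`) are invented purely for illustration; they are not an AWS API.

```python
# A toy illustration of fault tolerance: replicate each file across
# several simulated "servers" (plain dicts), so one failure loses nothing.

class ReplicatedStore:
    def __init__(self, num_servers=3):
        # Each "server" is just a dict mapping filename -> contents.
        self.servers = [dict() for _ in range(num_servers)]

    def put(self, name, data):
        # Write every file to every server (full replication).
        for server in self.servers:
            server[name] = data

    def fail(self, index):
        # Simulate a server crash by wiping its contents.
        self.servers[index].clear()

    def get(self, name):
        # Read from the first surviving server that still holds the file.
        for server in self.servers:
            if name in server:
                return server[name]
        raise KeyError(name)

store = ReplicatedStore(num_servers=3)
store.put("vacation.jpg", b"...photo bytes...")
store.fail(0)                      # one server fails
print(store.get("vacation.jpg"))   # the file is still retrievable
```

Real cloud storage is far more sophisticated (partial replication, erasure coding, cross-datacenter copies), but the principle is the same: a single fault never removes the last copy.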
24. Common enterprise uses of cloud services
• What happens if a company does not use cloud services?
• Let us discuss a scenario.
25. …..
• A software company in:
• 2016 has 1,000 users, which need 3 servers to power the software for those users.
• 2017: the company estimates 5,000 users because of its growth, so it is going to add 3 more servers to its on-premise data center.
• Now the problems are: space to put these servers in the on-premise data center, the money invested to buy these servers, and the time taken to install operating systems, configure them, and test them, all of which takes a lot of time.
• Let us assume their estimates were right. In 2017 their user base is about 5,000 users, with all 6 servers running in their on-premise data center.
26. Now in 2018 they estimate 20,000 users
• Again they want 12 more servers. They have to consider space, money, time, and all the other factors of setting up an on-premise datacenter, like before.
• But this time their estimate fails: there are only 7,000 users.
27. …..
• So now a whole segment of the servers that they just purchased for 2018 is not being used.
• It was a tremendous waste of resources and a tremendous waste of money on something that is not being used, and now they would have to sell the servers or just let them sit there until the user base increases.
• This is a problem with on-premise data centers, and cloud services are needed to solve this issue.
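The waste in this scenario is easy to quantify. The sketch below takes the server counts from the slides (6 existing + 12 purchased = 18 servers for a 20,000-user forecast) and derives a per-server capacity from that forecast; that derived capacity is an assumption for illustration, not a figure from the slides.

```python
import math

SERVERS_OWNED_2018 = 18   # 6 existing + 12 newly purchased (from the scenario)
USERS_FORECAST = 20_000   # the 2018 estimate
USERS_ACTUAL = 7_000      # what actually happened

# Assumed capacity: the forecast implied each server handles ~1,111 users.
USERS_PER_SERVER = USERS_FORECAST / SERVERS_OWNED_2018

def servers_needed(users):
    # Round up: even a partially loaded server must exist.
    return math.ceil(users / USERS_PER_SERVER)

needed = servers_needed(USERS_ACTUAL)
idle = SERVERS_OWNED_2018 - needed
print(f"needed: {needed} servers, idle: {idle} servers")
# With these numbers, 7 servers are needed and 11 sit idle:
# more than half the 2018 purchase is wasted capital.
```

A pay-per-use cloud provider would simply never have billed for the 11 idle machines.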
29. ….
• When a company uses a cloud service provider like AWS…
• In 2016 the company has 1,000 users with 2 servers.
• As the user base increases to 4,000 users, the company does not need to worry about additional server implementation.
• AWS will automatically and instantly allocate additional servers as the user base increases.
30. …
• Another advantage of cloud service providers:
• If the user base drops from 4,000 users to 3,000 users, as in the example scenario, the cloud service provider will simply disconnect the servers and will not charge the company for them.
• Using a cloud service provider, we are leasing hardware on an on-demand basis only.
31. Two more pieces of Cloud terminology
• Scalability: As the user base grows, we have the ability to quickly and easily add more servers. You can scale up on demand.
• Elasticity: If you can grow, you can also shrink. As needed, you can reduce the usage; if the user base drops from 4,000 to 3,000, you can reduce the server usage.
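Scalability and elasticity together mean the fleet tracks demand in both directions. A minimal sketch of that rule (the 1,000-users-per-server capacity is an assumed figure; AWS expresses the real version as Auto Scaling policies):

```python
import math

USERS_PER_SERVER = 1_000  # assumed capacity of a single server

def servers_for(users):
    # Enough servers to cover demand, never fewer than one.
    return max(1, math.ceil(users / USERS_PER_SERVER))

print(servers_for(4_000))  # scale up to 4 servers at peak
print(servers_for(3_000))  # elasticity: shrink back to 3 when demand drops
```

The same function handles both directions: growth adds servers (scalability), and decline releases them (elasticity), so you only ever pay for what the current user base requires.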
32. Flow of AWS Architecture
Popularly used by Facebook & Netflix
33. VPC
• Inside AWS you have services, networking architecture, etc. Now let us discuss the VPC (Virtual Private Cloud).
34. What Is a VPC?
• To understand the VPC, let us discuss an analogy.
• We will take Facebook for our conceptual understanding.
35. Facebook
• On Facebook you have your homepage, your friends' homepages, and your family's homepages.
• Your homepage is your own private section of Facebook, in which you can put things like photos and videos that you want to share with other people.
• You can also have a level of security, like giving access only to certain friends, and choosing which friend lists can see certain posts, photos, and videos that you share.
37. Now swap Facebook with AWS
• In AWS you will have my VPC, your VPC, your friend's VPC, etc.
• Like your homepage on Facebook, in your VPC you can put your own things, like Amazon EC2 and Amazon RDS resources, your files, etc.
• Just like on Facebook, you can put a level of security on your VPC.
• You can restrict access and either allow people to use your database and your EC2 servers, or keep them out.
38. VPC
• Your VPC is your private section of AWS, where you can place your AWS resources and allow or restrict access to them.
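The Facebook analogy can be sketched as a tiny access-control check: resources live inside the VPC, and only principals on an allowlist may reach them. Everything here (`Vpc`, the resource and principal names) is invented for illustration; a real VPC enforces this with security groups, network ACLs, and route tables rather than an in-memory allowlist.

```python
class Vpc:
    """A toy model of a VPC as a private box of resources with an allowlist."""

    def __init__(self, owner):
        self.owner = owner
        self.resources = {}     # name -> resource (e.g. an EC2 server, a database)
        self.allowed = {owner}  # principals granted access; the owner always is

    def add_resource(self, name, resource):
        self.resources[name] = resource

    def grant(self, principal):
        # Like sharing a Facebook post with a chosen friend list.
        self.allowed.add(principal)

    def access(self, principal, name):
        # Anyone not on the allowlist is kept out entirely.
        if principal not in self.allowed:
            raise PermissionError(f"{principal} is not allowed into {self.owner}'s VPC")
        return self.resources[name]

vpc = Vpc(owner="me")
vpc.add_resource("ec2-web", "web server")
vpc.grant("friend")
print(vpc.access("friend", "ec2-web"))   # allowed after the grant
# vpc.access("stranger", "ec2-web") would raise PermissionError
```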
40. How companies like NETFLIX utilize these AWS resources
• Netflix is the world's no. 1 provider of streaming video content.
• Netflix uses many AWS services, but we will discuss 3 of them: EC2, RDS, and S3.
41. Amazon EC2
• Amazon EC2 is virtually equivalent to a computer.
• But it is not a basic computer; it is a server, or instance.
43. www.netflix.com: an Amazon EC2 instance is currently serving as the web hosting server. The EC2 instance contains all the files and code required to serve the web page (NETFLIX).
44. EC2
• Think of EC2 as a virtual computer (like a computer with its own RAM, HDD, OS, network, and everything else your computer has) that you can use for whatever you like.
• It is most commonly used as a web hosting server.
• EC2 stands for Elastic Compute Cloud; each virtual server is called an instance.
45. How many instances do you need to run?
• That is totally up to you. For a simple website you are fine with one instance, even for hundreds or thousands of simultaneous requests.
• But if you need something computationally intensive, you can add more instances (more EC2s) automatically or on demand and scale your site dramatically.
46. NETFLIX: the best example
• No matter how many visitors hit their site, 100 or a million simultaneously, it is still fast because the load is balanced across Amazon EC2 instances.
• If you were running your own in-house server and your site went viral, you would be totally down under the heavy traffic within a few minutes.
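Load balancing is what spreads those visitors across the EC2 fleet. A minimal round-robin sketch (the instance names are invented; a real AWS deployment would use Elastic Load Balancing in front of the instances rather than hand-rolled code like this):

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: hand each incoming request to the next instance in turn."""

    def __init__(self, instances):
        # itertools.cycle loops over the instance list endlessly.
        self._cycle = itertools.cycle(instances)

    def route(self, request):
        instance = next(self._cycle)
        return f"{request} -> {instance}"

lb = RoundRobinBalancer(["ec2-a", "ec2-b", "ec2-c"])
for i in range(5):
    print(lb.route(f"request-{i}"))
# Requests spread evenly: ec2-a, ec2-b, ec2-c, then back to ec2-a, ec2-b.
```

Because no single instance sees all the traffic, adding instances raises total capacity almost linearly, which is why the site stays fast whether 100 or a million visitors arrive.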
48. When you are entering your login credentials (or creating your own account) on NETFLIX, Amazon RDS comes into the picture.
49. Amazon RDS
• RDS is a database service provided by Amazon Web Services. It holds all the customer account information and the inventory catalog that holds the list of programs and shows.
50. When too many users access the NETFLIX web hosting server, the heavy traffic will crash the server and no one can access NETFLIX. So there is no elasticity or scalability if the server is an on-premise server.
51. • If the NETFLIX user base doubles or triples as time goes on, AWS constantly keeps up by adding new EC2 instances inside the VPC, so that all of them can talk to the database.
• So everybody can log in, access their account, and be served the inventory catalog.
• This is the example of scalability.
52. ….
• If the instances are no longer in use, you can remove them, and you are not going to pay for them. This is called elasticity.
• As new users keep coming, new instances are available to serve them. This is called being highly available.
• If an instance fails due to some technical reason, the users of that instance are redirected to another instance that continues serving them. This is called fault tolerance.
55. …..
• Amazon S3 is a large, practically unlimited storage bucket.
• The limit is unbelievably high; no individual company could offer this storage capacity on its own.
• S3 is the perfect place for documents, movies, music, applications, and anything else you like; you can store it in S3 for as long as you like.
• Again, you have high availability for anything you store in S3 as a backup.
56. ……
• Services like Dropbox actually use S3 at the back end for storage.
• So if you use Dropbox to store your files, it actually uses Amazon S3.
57. What happens when you actually click play on NETFLIX to start streaming?
58. Again, Amazon EC2 comes into the picture
• When somebody hits play on a NETFLIX video, the Netflix application has to go to S3 to find that particular television show or movie.
• Then Amazon EC2 encodes or transcodes that particular video or movie so that it is ready to be sent across the internet down to the user.
• So that it can be viewed on their device.
59. ….
• Encoding and transcoding are very processor intensive, so they require something like Amazon EC2 to accomplish the task.
60. …
• So Amazon EC2 is good for any type of "processing" activity.
• Whether it is web hosting, encoding or transcoding, graphically intensive work, or mathematical equations: anything that needs general processing, EC2 is what you are looking for.