AWS Step Functions is a serverless orchestration service that defines applications as a series of steps. Each step can trigger a Lambda function or another AWS service, and Step Functions executes the steps in the order defined by the application's workflow. It provides features like error handling, parallel processing, and integration with other AWS services such as Lambda and Glue. Express Workflows are ideal for high-volume event-processing workloads like streaming data and IoT, while Standard Workflows are better suited to long-running, auditable processes.
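A workflow like this is expressed in Amazon States Language (ASL), a JSON document passed to Step Functions when the state machine is created. A minimal two-step sketch, built as a Python dict for readability; the state names and Lambda ARNs are hypothetical placeholders:

```python
import json

# A minimal Amazon States Language definition with a retry policy on the
# first step. State names and the Lambda ARNs are hypothetical.
definition = {
    "Comment": "Two-step workflow with retry on the first step",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
            "Retry": [
                {"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 3, "IntervalSeconds": 2}
            ],
            "Next": "NotifyCustomer",
        },
        "NotifyCustomer": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:notify-customer",
            "End": True,
        },
    },
}

# This JSON string is what you would pass as the state machine definition.
definition_json = json.dumps(definition)
```

The `Retry` block is where the error-handling features mentioned above live: Step Functions re-runs the failed task up to `MaxAttempts` times before the execution fails.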
AWS Step Functions is a serverless orchestration service that converts an application's workflow into a series of steps by combining AWS Lambda functions and other AWS services. It executes each step in an order defined by the business logic of the application and provides sequencing and error handling while removing operational overhead. Amazon EventBridge is a serverless event bus service that connects applications with data from multiple sources. It helps to build event-driven architectures and delivers real-time data streams from sources to different targets without writing custom code.
Amazon EventBridge is a serverless event bus service that connects applications with data from multiple sources. It helps to build loosely coupled and distributed event-driven architectures by connecting applications and delivering events without custom code. EventBridge receives events and matches them to rules attached to an event bus. It delivers a stream of real-time data from various sources and routes it to different targets like EC2 instances, ECS tasks, and CodeBuild projects.
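The rule-matching idea can be illustrated with a small sketch. This is a deliberate simplification, not the actual EventBridge matching engine (which also supports prefix, numeric, and anything-but operators); pattern leaves are lists of acceptable values, as in real EventBridge event patterns:

```python
def pattern_matches(pattern, event):
    """Simplified EventBridge-style matching: every key in the pattern must
    exist in the event; leaf values (lists in the pattern) must contain the
    event's value; nested dicts are matched recursively."""
    for key, expected in pattern.items():
        if key not in event:
            return False
        value = event[key]
        if isinstance(expected, dict):
            if not (isinstance(value, dict) and pattern_matches(expected, value)):
                return False
        else:  # leaf: a list of acceptable values
            if value not in expected:
                return False
    return True

# A rule that matches EC2 state-change events for two states.
rule = {"source": ["aws.ec2"], "detail": {"state": ["running", "stopped"]}}
event = {"source": "aws.ec2", "detail": {"state": "running"}, "region": "us-east-1"}
```

Keys present in the event but absent from the pattern (like `region` here) are ignored, which is why rules stay loosely coupled to event producers.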
The document summarizes announcements from AWS re:Invent about new and updated AWS services. It describes new EC2 instance types, updates to compute, database, developer tools, machine learning, IoT, marketplace, networking, security, and storage services. Key announcements include new EC2 Graviton processor instances, AWS Step Functions integration, DynamoDB transactions, Amazon Timestream, AWS Global Accelerator, AWS Security Hub, and Amazon S3 storage class updates. The event included sessions on these topics along with networking and pizza.
In this presentation, André Faria, CEO at Bluesoft, gave his team an introduction to the AWS ecosystem and covered the announcements AWS made at AWS re:Invent 2017 in Las Vegas.
1. IAM manages identities and access control for AWS resources by controlling authentication and authorization. It uses users, groups, roles, and access policies.
2. EC2 allows users to launch virtual servers and configure security, networking, and storage. Elastic Block Store provides block-level storage volumes for applications. Elastic Load Balancing distributes traffic across targets. Auto Scaling automatically adjusts capacity based on performance.
3. Database services include RDS for relational databases, DynamoDB for NoSQL, S3 for object storage, and Aurora which is compatible with MySQL and PostgreSQL.
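The IAM model described in item 1 boils down to JSON policy documents attached to users, groups, or roles. A minimal identity-based policy sketch; the bucket name is a hypothetical placeholder:

```python
import json

# A minimal IAM policy granting read-only access to one S3 bucket.
# "example-bucket" is a hypothetical placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",       # for ListBucket
                "arn:aws:s3:::example-bucket/*",     # for GetObject
            ],
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
```

Note the two resource ARNs: bucket-level actions like `s3:ListBucket` apply to the bucket ARN, while object-level actions like `s3:GetObject` apply to the `/*` object ARN.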
The document discusses Amazon Web Services (AWS), which provides cloud computing services including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It describes key AWS services such as Amazon EC2 for virtual servers, S3 for object storage, EBS for block storage volumes, RDS for SQL databases, and CloudFront for content delivery. It also covers AWS features like scalability, security, and tools for monitoring and messaging.
Data & AI Platforms — Open Source Vs Managed Services (AWS vs Azure vs GCP) by Ankit Rathi
While designing and building Data & AI platforms, you need to evaluate the available options: whether your platform will run on-premise, use one or more cloud services, or take a hybrid approach.
In any case, you need to identify and evaluate various tools and services for your ingestion, storage, processing/analysis, and serving layers.
In this post, I have mapped open-source and popular managed cloud services to make our evaluation process a bit easier.
The document provides an overview of Amazon Web Services (AWS) and its computing services. It describes Amazon Elastic Compute Cloud (EC2) which allows users to launch virtual servers called instances in AWS data centers. It provides flexibility, cost effectiveness, scalability, security and reliability. EC2 reduces time to obtain servers and allows users to pay only for what they use.
Amazon Web Services (AWS) began offering IT infrastructure services to businesses in the form of web services -- now commonly known as cloud computing. One of the key benefits of cloud computing is the opportunity to replace up-front capital infrastructure expenses with low variable costs that scale with your business. With the Cloud, businesses no longer need to plan for and procure servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.
AWS re:Invent 2016: IoT Visualizations and Analytics (IOT306) by Amazon Web Services
In this workshop, we focus on visualizations of IoT data using the ELK stack (Amazon Elasticsearch Service, Logstash, and Kibana) or Amazon Kinesis. We dive into how these visualizations can give you new capabilities and understanding when interacting with your device data, through the context they provide on the world around them.
AWS Elastic Beanstalk is a service that allows developers to deploy and manage applications in the AWS cloud without worrying about the underlying infrastructure. It provides preconfigured hosting environments for web applications built using popular programming languages and frameworks. Developers can upload their code and Elastic Beanstalk automatically handles tasks like capacity provisioning, load balancing, auto-scaling and application health monitoring. It supports both web and background worker environments.
In this session, you will learn the best practices in identifying, assessing, selecting, and migrating your first workload to AWS. The next logical step is a large-scale "All in" migration approach to enable enterprises to become truly DevOps and Cloud First organizations. We will present the building blocks and programs for such large migrations, including the AWS Migration Readiness Assessment and the Migration Acceleration Program.
Speaker: Ekta Parashar
Enterprise Solution Architect, Amazon India
Building Data Lakes and Analytics on AWS; Patterns and Best Practices - BDA30... by Amazon Web Services
In this session, we show you how to understand what data you have, how to drive insights, and how to make predictions using purpose-built AWS services. Learn about the common pitfalls of building data lakes, and discover how to successfully drive analytics and insights from your data. Also learn how services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon Athena, Amazon EMR, Amazon Kinesis, and Amazon ML services work together to build a successful data lake for various roles, including data scientists and business users.
Survey of International and Thai Cloud Providers and Cloud Software Projects by t b
The document discusses key cloud computing providers including Amazon Web Services (AWS) and Google App Engine. It provides an overview of the various services offered by AWS such as Amazon EC2, S3, VPC, CloudFront, Route 53, RDS, and others. It also discusses Google App Engine and provides examples of applications running on each platform. Overall, the document is an introduction to major public cloud providers and their offerings.
This document discusses monitoring tools for Amazon AWS cloud computing environments. It compares the open source Hyperic HQ tool, which requires installing agents on EC2 instances, to Amazon CloudWatch, which has no installation requirements. While Hyperic HQ enables more comprehensive and automated monitoring, CloudWatch integrates better with AWS and has lower costs for usage-based pricing. The author concludes that using both tools provides the best monitoring solution currently for complex AWS production systems.
Amazon Web Services provides a set of cloud computing services including Amazon EC2 for computing power, Amazon S3 for object storage, and Amazon EBS for block-level storage. The document discusses these services as well as Amazon VPC which allows users to provision a virtual private cloud within AWS. It provides flexibility to customize the network configuration and control the virtual networking environment.
Building Data Lakes and Analytics on AWS; Patterns and Best Practices - BDA30... by Amazon Web Services
This document provides a summary of a presentation on building data lakes and analytics on AWS. It discusses:
- The challenges of big data including volume, velocity, variety and veracity.
- How an AWS data lake can address these challenges by quickly ingesting and storing any type of data while providing insights, security and the ability to run the right analytics tools without data movement.
- Key components of a data lake on AWS including storage, data catalog, analytics, machine learning capabilities, and tools for real-time and traditional data movement.
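One concrete instance of the storage component above is Hive-style partitioning of S3 object keys, which lets catalog-aware engines such as Athena and Glue prune partitions instead of scanning the whole lake. A minimal sketch; the table and file names are hypothetical:

```python
from datetime import date

def partitioned_key(table, day, filename):
    """Build a Hive-style partitioned S3 key (year=/month=/day=) so that
    date-filtered queries only read the matching partitions."""
    return (
        f"{table}/year={day.year}/month={day.month:02d}/"
        f"day={day.day:02d}/{filename}"
    )

key = partitioned_key("events", date(2019, 7, 4), "part-0000.parquet")
```

A query with `WHERE year = 2019 AND month = 7` then touches only the `year=2019/month=07/` prefixes, which is where most of the cost and latency savings in a data lake come from.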
This document provides an overview of architecting applications for the AWS cloud. It discusses key AWS cloud computing attributes like scalability, on-demand provisioning, and efficiency of experts. It also outlines best practices like designing for failure, loose coupling, dynamism, and security. Specific AWS services are mapped to common application needs like compute, storage, content delivery, databases, and more. Overall the document aims to educate readers on how to leverage AWS architectural principles and services.
Serverless Big Data Architectures: Serverless Data Analytics by Kristana Kane
Serverless architectures are evolving to support big data analytics workflows. The document outlines serverless services for ingesting, storing, processing, and visualizing data. It describes how AWS Lambda, DynamoDB, S3, Kinesis, Athena, Glue, and other serverless services can be used without provisioning or managing servers. Serverless design patterns are presented for real-time analytics, interactive queries, and ETL workflows. A demo is promised to illustrate serverless big data architectures.
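As a small illustration of the Lambda-plus-Kinesis pattern described above, here is a minimal handler sketch. The event shape follows the format Lambda receives from a Kinesis trigger (records arrive base64-encoded); the record contents are made up:

```python
import base64
import json

def handler(event, context):
    """Minimal Lambda handler for a Kinesis stream trigger: Kinesis delivers
    each record payload base64-encoded, so decode before processing."""
    results = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        results.append(json.loads(payload))
    return {"processed": len(results), "items": results}

# A sample event in the shape Lambda receives from a Kinesis trigger.
sample_event = {
    "Records": [
        {"kinesis": {"data": base64.b64encode(json.dumps({"id": 1}).encode()).decode()}}
    ]
}
```

In the serverless design patterns mentioned above, a handler like this would typically write its results on to S3, DynamoDB, or another downstream service rather than just returning them.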
This document provides an overview of Amazon Elastic Compute Cloud (EC2), a cloud computing service that allows users to launch server instances in Amazon's data centers. EC2 provides templates called Amazon Machine Images (AMIs) that contain pre-configured software. Users can launch instances of AMIs to replicate configurations across multiple servers. EC2 instances can be deployed and terminated on demand, while physical servers require regular maintenance. EC2 offers scalable, on-demand resources that users pay for based on usage, unlike physical servers which incur costs whether used or not. The document also briefly discusses other Amazon cloud services like S3, DynamoDB, and Elastic Beanstalk.
This document provides an overview of architecting applications for the Amazon Web Services (AWS) cloud platform. It discusses key cloud computing attributes like abstract resources, on-demand provisioning, scalability, and lack of upfront costs. It then describes various AWS services for compute, storage, messaging, payments, distribution, analytics and more. It provides examples of how to design applications to be scalable and fault-tolerant on AWS. Finally, it discusses best practices for migrating existing web applications to take advantage of AWS capabilities.
AWS provides a global infrastructure with 11 regions and 52 edge locations to host computing, storage, database, analytics, and application services. It offers virtual servers (EC2), load balancing, virtual desktops, and auto-scaling for compute. Storage options include S3 object storage, EBS block storage, and Glacier archival storage. Relational databases are covered by RDS, and NoSQL by DynamoDB. Analytics services include Redshift for data warehousing, Kinesis for real-time processing, and EMR for big data. Application services include SQS for messaging, SWF for workflows, SNS for notifications, and SES for email. Management tools include IAM for security and CloudWatch for monitoring.
AWS provides a broad platform of managed services to help you build, secure, and seamlessly scale end-to-end Big Data applications quickly and easily. Want to get ramped up on how to use Amazon's big data web services and learn when to use which service? Want to write your first big data application on AWS? Join us in this session as we discuss reference architectures, design patterns, and best practices for pulling together various AWS services to meet your big data challenges.
The document provides descriptions of various AWS services, including compute services like EC2 and Lambda; storage services like S3, Glacier, and EBS; database services like RDS and DynamoDB; security and encryption services like IAM, WAF, and Certificate Manager; developer tools like CodeCommit, CodeDeploy, and CodePipeline; and mobile services like Cognito, Mobile Analytics, and Device Farm. In total, over 30 AWS services are described ranging from infrastructure building blocks to higher-level services.
AWS Webcast - Best Practices in Architecting for the Cloud by Amazon Web Services
Join us to get a better understanding of architecting scalable, reliable applications for the cloud. You'll learn about monitoring, alarming, automatic scaling, load balancing, replication, and more, direct from AWS Senior Evangelist Jeff Barr.
Generative AI Use cases applications solutions and implementation.pdf by mahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
Applications of artificial Intelligence in Mechanical Engineering.pdf by Atif Razi
Historically, mechanical engineering has relied heavily on human expertise and empirical methods to solve complex problems. With the introduction of computer-aided design (CAD) and finite element analysis (FEA), the field took its first steps towards digitization. These tools allowed engineers to simulate and analyze mechanical systems with greater accuracy and efficiency. However, the sheer volume of data generated by modern engineering systems and the increasing complexity of these systems have necessitated more advanced analytical tools, paving the way for AI.
AI offers the capability to process vast amounts of data, identify patterns, and make predictions with a level of speed and accuracy unattainable by traditional methods. This has profound implications for mechanical engineering, enabling more efficient design processes, predictive maintenance strategies, and optimized manufacturing operations. AI-driven tools can learn from historical data, adapt to new information, and continuously improve their performance, making them invaluable in tackling the multifaceted challenges of modern mechanical engineering.
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for... by PIMR BHOPAL
A Variable Frequency Drive (VFD) is an electronic device used to control the speed and torque of an electric motor by varying the frequency and voltage of its power supply. VFDs are widely used in industrial applications for motor control, providing significant energy savings and precise motor operation.
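The frequency-to-speed relationship a VFD exploits is the standard synchronous speed formula N = 120 f / p, where f is the supply frequency in Hz and p is the number of motor poles. A small sketch:

```python
def synchronous_speed_rpm(frequency_hz, poles):
    """Synchronous speed of an AC motor: N = 120 * f / p.
    A VFD changes motor speed by varying the supply frequency f."""
    return 120.0 * frequency_hz / poles

# A 4-pole motor on a 60 Hz supply runs at 1800 rpm (synchronous);
# halving the drive output frequency to 30 Hz halves the speed.
full = synchronous_speed_rpm(60, 4)   # 1800.0 rpm
half = synchronous_speed_rpm(30, 4)   # 900.0 rpm
```

Actual rotor speed on an induction motor is slightly below this synchronous value because of slip, but the proportionality to frequency is what makes VFD speed control work.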
Rainfall intensity duration frequency curve statistical analysis and modeling... by bijceesjournal
Using 41 years of data (1981−2020) from Patna, India, the study's goal is to analyze the trends of how often it rains on a weekly, seasonal, and annual basis. First, the historical rainfall data set for Patna over that period was evaluated for its quality by statistically analyzing rainfall using the intensity-duration-frequency (IDF) curve and relationship. Changes in the hydrologic cycle as a result of increased greenhouse gas emissions are expected to induce variations in the intensity, duration, and frequency of precipitation events. One strategy to lessen vulnerability is to quantify probable changes and adapt to them. Techniques such as log-normal, normal, and Gumbel (EV-I) are used. Distributions were created with durations of 1, 2, 3, 6, and 24 h and return periods of 2, 5, 10, 25, and 100 years. Mathematical correlations were also established between rainfall and recurrence interval.
Findings: Based on findings, the Gumbel approach produced the highest intensity values, whereas the other approaches produced values that were close to each other. The data indicates that 461.9 mm of rain fell during the monsoon season’s 301st week. However, it was found that the 29th week had the greatest average rainfall, 92.6 mm. With 952.6 mm on average, the monsoon season saw the highest rainfall. Calculations revealed that the yearly rainfall averaged 1171.1 mm. Using Weibull’s method, the study was subsequently expanded to examine rainfall distribution at different recurrence intervals of 2, 5, 10, and 25 years. Rainfall and recurrence interval mathematical correlations were also developed. Further regression analysis revealed that short wave irrigation, wind direction, wind speed, pressure, relative humidity, and temperature all had a substantial influence on rainfall.
Originality and value: The results of the rainfall IDF curves can provide useful information to policymakers in making appropriate decisions in managing and minimizing floods in the study area.
Use PyCharm for remote debugging of WSL on a Windows machine — shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Null Bangalore | Pentester's Approach to AWS IAM — Divyanshu
#Abstract:
- Learn real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. We proceed with a brief discussion of IAM, followed by typical misconfigurations and their potential exploits, to reinforce IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles using a hands-on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenarios Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
 - PassRole allows a user to pass a specific IAM role to an AWS service (e.g., EC2), typically used for service access delegation. We then exploit a PassRole misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole and AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
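The least-privilege scenario above boils down to an IAM policy that allows only the S3 actions the user actually needs on one bucket. A minimal sketch; the bucket name is a hypothetical placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LeastPrivilegeS3Access",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-pentest-bucket/*"
    }
  ]
}
```

Any action not listed (for example `s3:DeleteObject`) is implicitly denied, which is what validating "least privilege" in step 3 checks for.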
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... — IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art DeepLabv3+ architecture with a ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including a global accuracy of 99.286%, a class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of the proposed model. These findings underscore the model's competence in precise brain tumor localization and its potential to enhance medical image analysis and healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, with emphasis on addressing false positives and resource efficiency.
1. Are you ready for the AWS Certified Cloud Practitioner exam? Self-assess yourself with “Whizlabs FREE TEST”
AWS Certified Cloud Practitioner WhizCard
Quick Bytes for you before the exam!
The information provided in WhizCards is for educational purposes only, created in our effort to help aspirants prepare for the AWS Certified Cloud Practitioner certification exam. Though references have been taken from AWS documentation, it is not intended as a substitute for the official docs. The document can be reused, reproduced, and printed in any form; ensure that appropriate sources are credited and required permissions are received.
5. What is Amazon Athena?
Amazon Athena is an interactive serverless service used to analyze data directly in Amazon Simple Storage Service (S3) using standard SQL ad-hoc queries.
Functions of Athena:
⮚ Using Athena, ad-hoc queries can be executed using ANSI SQL without actually loading the data into Athena.
⮚ It can be integrated with Amazon QuickSight for data visualization and helps generate reports with business intelligence tools.
⮚ It executes multiple queries in parallel, so there is no need to worry about compute resources.
⮚ It helps analyze different kinds of data (unstructured, semi-structured, and structured) stored in Amazon S3.
⮚ It can be integrated with the AWS Glue Data Catalog to store metadata in Amazon S3 and offers the data discovery features of AWS Glue.
⮚ It helps connect SQL clients with a JDBC or an ODBC driver.
⮚ It supports various standard data formats, such as CSV, JSON, ORC, Avro, and Parquet.
Pricing Details:
⮚ Charges are applied based on the amount of data scanned by each query, plus standard S3 rates for storage, requests, and data transfer.
⮚ Canceled queries are also charged based on the amount of data scanned.
⮚ No charges are applied for Data Definition Language (DDL) statements.
⮚ Costs can be reduced if data is compressed, partitioned, or converted into a columnar format.
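An ad-hoc query of the kind described above might look like the following standard SQL, assuming a hypothetical `access_logs` table already registered in the Glue Data Catalog and partitioned by year and month:

```sql
-- Athena bills by bytes scanned; selecting few columns and filtering
-- on partition keys (year, month) keeps the scanned volume small.
SELECT status_code, COUNT(*) AS requests
FROM access_logs
WHERE year = '2024' AND month = '01'
GROUP BY status_code
ORDER BY requests DESC;
```

Storing the table in a columnar format such as Parquet reduces the scanned bytes further, which is exactly the cost lever the pricing notes mention.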
6. What is Amazon ES?
Amazon Elasticsearch Service (Amazon ES) is a managed service that allows users to deploy, manage, and scale Elasticsearch clusters in the AWS Cloud. Amazon ES provides direct access to the Elasticsearch APIs.
Elasticsearch is a free and open-source search engine for all types of data like textual, numerical, geospatial, structured, and unstructured.
✔ Amazon ES with Kibana (visualization) and Logstash (log ingestion) provides an enhanced search experience for applications and websites to find relevant data quickly.
✔ Amazon ES launches the Elasticsearch cluster's resources, detects failed Elasticsearch nodes, and replaces them.
✔ The Elasticsearch cluster can be scaled with a few clicks in the console.
Amazon ES can be integrated with the following services:
Amazon CloudTrail
Amazon CloudWatch
AWS IAM
Amazon Kinesis
AWS Lambda
Amazon S3
Amazon DynamoDB
Pricing Details: ● Charges are applied for each hour of use of EC2 instances and storage volumes attached to the instances.
● Amazon ES does not charge for data transfer between Availability Zones.
7. What is Amazon EMR?
Amazon EMR (Elastic MapReduce) is a service used to process and analyze large amounts of data in the cloud using Apache Hive, Hadoop, Apache Flink, Spark, etc.
✔ The main component of EMR is a cluster that collects Amazon EC2 instances (also known as nodes in EMR).
✔ It decouples the compute and storage layers by scaling independently and storing cluster data on Amazon S3.
✔ It also controls network access for the instances by configuring instance firewall settings.
✔ It offers basic functionalities for maintaining clusters, such as monitoring, replacing failed instances, and bug fixes.
✔ It analyzes machine learning workloads using Apache Spark MLlib and TensorFlow, clickstream workloads using Apache Spark and Apache Hive, and real-time streaming workloads from Amazon Kinesis using Apache Flink.
✔ It provides more than one compute instance or container to process the workloads and can be executed on the following AWS services: Amazon EC2, Amazon EKS, AWS Outposts.
Amazon EMR storage layers
Amazon EMR can be accessed in the following ways:
AWS Command Line Interface (AWS CLI)
EMR Console
Software Development Kit (SDK)
Web Service API
8. What are Amazon Kinesis Data Streams?
Amazon Kinesis Data Streams (KDS) is a scalable real-time data streaming service. It captures gigabytes of data from sources like website clickstreams, event streams (database and location-tracking), and social media feeds.
Amazon Kinesis is a service used to collect, process, and analyze real-time streaming data. It can be an alternative to Apache Kafka.
❑ The Kinesis family consists of Kinesis Data Streams, Kinesis Data Analytics, Kinesis Data Firehose, and Kinesis Video Streams.
❑ Real-time data can be fetched from Producers via the Kinesis Streams API, the Kinesis Producer Library (KPL), and the Kinesis Agent.
❑ It allows building custom applications known as Kinesis Data Streams applications (Consumers), which read data from a data stream as data records.
❑ Data Streams are divided into Shards/Partitions, whose data retention is 1 day (by default) and can be extended to 7 days.
❑ Each shard provides a capacity of 1 MB per second of input data and 2 MB per second of output data.
9. What is Amazon Kinesis Data Firehose?
Amazon Kinesis Data Firehose is a serverless service used to capture, transform, and load streaming data into data stores and analytics services.
❖ It synchronously replicates data across three AZs while delivering it to the destinations.
❖ It allows real-time analysis with existing business intelligence tools and helps transform, batch, compress, and encrypt the data before delivering it.
❖ It creates a Kinesis Data Firehose delivery stream to send data. Each delivery stream keeps data records for one day.
❖ It has a minimum latency of 60 seconds, or a minimum of 32 MB of data transfer at a time.
❖ Kinesis Data Streams and CloudWatch events can be considered as the source(s) to Kinesis Data Firehose.
It delivers streaming data to the following services:
Amazon S3
Amazon Redshift
Amazon Elasticsearch Service
Splunk
10. What is Amazon MSK?
Amazon MSK (Managed Streaming for Apache Kafka) is a managed cluster service used to build and execute Apache Kafka applications for processing streaming data.
It helps to populate machine learning applications, analytical applications, and data lakes, and to stream changes to and from databases using Apache Kafka APIs.
✔ It easily configures applications by removing all the manual configuration tasks.
The steps which Amazon MSK manages are:
❖ Replacing servers during failures
❖ Handling server patches and upgrades with no downtime
❖ Maintenance of Apache Kafka clusters
❖ Maintenance of Apache ZooKeeper
❖ Multi-AZ replication for Apache Kafka clusters
❖ Planning scaling events
It provides multiple kinds of security for Apache Kafka clusters, including:
Encryption at rest
AWS IAM for API authorization
Apache Kafka Access Control Lists (ACLs)
Amazon MSK integrates with:
AWS Glue: to execute Apache Spark jobs on an Amazon MSK cluster
Lambda functions
Amazon Kinesis Data Analytics: to execute Apache Flink jobs on an Amazon MSK cluster
11. What is Amazon Redshift?
Amazon Redshift is a fast, petabyte-scale, SQL-based data warehouse service used to analyze data easily.
Functions of Redshift:
It supports Online Analytical Processing (OLAP) type DB workloads and analyzes them using standard SQL and existing Business Intelligence (BI) tools (AWS QuickSight or Tableau).
It is used for executing complex analytic queries on semi-structured and structured data using query optimization, columnar storage, and Massively Parallel Processing (MPP).
Redshift Spectrum helps to directly query the objects (files) on S3 without actually loading them.
It has the capability to automatically copy snapshots (automated or manual) of a cluster to another AWS Region.
Pricing Details:
⮚ It offers on-demand pricing that charges by the hour with no commitments and no upfront costs.
⮚ Charges are applied based on the type and number of nodes used in the Redshift cluster.
⮚ Charges are based on the number of bytes scanned by Redshift Spectrum, rounded up to a 10 MB minimum per query.
12. What is AWS Glue?
AWS Glue is a serverless ETL (extract, transform, and load) service used to categorize data and move it between various data stores and streams.
Properties of AWS Glue:
It has a central metadata repository known as the AWS Glue Data Catalog, and it automatically generates Python or Scala code.
It processes semi-structured data using a simple 'dynamic frame' in the ETL scripts, similar to an Apache Spark data frame, that organizes data into rows and columns.
It allows organizations to work together and perform data integration tasks, like extraction, normalization, combining, loading, and running ETL workloads.
It supports data integration, preparing and combining data for analytics, machine learning, and other application development.
It helps execute ETL jobs in the Apache Spark environment by discovering data and storing the associated metadata in the AWS Glue Data Catalog.
AWS Glue and Spark can be used together by converting between dynamic frames and Spark data frames to perform all kinds of analysis.
AWS Glue works with the following services:
● Redshift - for data warehouses
● S3 - for data lakes
● RDS or EC2 instances - for data stores
13. What is AWS Lake Formation?
AWS Lake Formation is a cloud service that is used to create, manage, and secure data lakes. It automates the complex manual steps required to create data lakes.
A data lake is a secure repository that stores all the data in its original form and is used for analysis.
⮚ Lake Formation is pointed at the data sources, then crawls the sources and moves the data into the new Amazon S3 data lake.
⮚ It integrates with AWS Identity and Access Management (IAM) to provide fine-grained access to the data stored in data lakes using a simple grant/revoke process.
AWS Lake Formation integrates with:
Amazon CloudTrail
Amazon CloudWatch
Amazon EMR
AWS Glue: both use the same Data Catalog
AWS Key Management Service
Amazon Redshift Spectrum
Amazon Athena: Athena users can query those AWS Glue catalogs on which they have Lake Formation permissions.
Pricing Details: ● Charges are applied based on the service integrations (AWS Glue, Amazon S3, Amazon EMR, Amazon Redshift) at a standard rate.
15. What is AWS Step Functions?
AWS Step Functions is a serverless orchestration service that converts an application's workflow into a series of steps by combining AWS Lambda functions and other AWS services.
⮚ AWS Step Functions is modeled on state machines and tasks. Each step in a workflow is a state; the output of one step becomes the input to the next, resulting in functions orchestration.
⮚ It helps to execute each step in an order defined by the business logic of the application.
⮚ It provides built-in functionality like sequencing, error handling, and timeout handling, removing significant operational overhead from the team.
⮚ It can control other AWS services, like AWS Lambda (to perform tasks), machine learning model processing, AWS Glue (to create extract, transform, and load (ETL) workflows), and automated workflows that require human approval.
⮚ It provides multiple automation features like routine deployments, upgrades, installations, migrations, patch management, infrastructure selection, and data synchronization.
Functions Orchestration using AWS Step Functions
There are two workflow types:
Standard Workflows
● Execute exactly once per workflow execution, for up to one year.
● Ideal for long-running and auditable workflows.
Express Workflows
● Execute at-least-once per workflow execution, for up to five minutes.
● Ideal for high-processing workloads, such as streaming data processing and IoT data ingestion.
Executions are the instances where a workflow runs to perform tasks.
Dynamic Parallelism using AWS Step Functions
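The "each step is a state, output feeds the next input" model is written in Amazon States Language. A minimal sketch with two Lambda task states and a retry; the function ARNs are hypothetical placeholders:

```json
{
  "Comment": "Minimal two-step workflow; function ARNs are placeholders",
  "StartAt": "ProcessOrder",
  "States": {
    "ProcessOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
      "Retry": [{ "ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2 }],
      "Next": "NotifyUser"
    },
    "NotifyUser": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:NotifyUser",
      "End": true
    }
  }
}
```

The `Retry` block is where the built-in error handling mentioned above lives, so the functions themselves need no orchestration logic.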
16. What is Amazon EventBridge?
Amazon EventBridge is a serverless event bus service that connects applications with data from multiple sources.
Functions of Amazon EventBridge:
It helps to build loosely coupled and distributed event-driven architectures.
It connects applications and delivers the events without the need to write custom code.
The EventBridge schema registry stores a collection of event structures (schemas) and allows users to download code bindings for those schemas in the IDE, representing events as objects in the code.
An event bus is an entity that receives events; rules attached to that event bus match the events received.
It delivers a stream of real-time data from SaaS applications or other AWS services and routes that data to different targets such as Amazon EC2 instances, Amazon ECS tasks, AWS CodeBuild projects, etc.
It sets up routing rules that determine the targets, to build application architectures that react according to the data sources.
Amazon EventBridge integrates with the following services:
AWS CloudFormation
AWS CloudTrail
AWS Kinesis Data Streams
AWS Config
AWS Lambda
AWS Identity and Access Management (IAM)
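A rule attached to an event bus matches events with an event pattern. A sketch that matches EC2 state-change events for terminated instances (the source and detail values here are illustrative):

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["terminated"]
  }
}
```

Only events whose fields match every entry in the pattern are routed to the rule's targets, which is how the routing rules described above select what each target receives.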
17. What is Amazon SNS?
Amazon Simple Notification Service (Amazon SNS) is a serverless notification service that offers message delivery from publishers to subscribers.
✔ It creates asynchronous communication between publishers and subscribers by sending messages to a 'topic.'
✔ It supports application-to-application subscribers, which include Amazon SQS and other AWS services, and application-to-person subscribers, which include mobile SMS, email, etc.
SNS helps to publish messages to many subscriber endpoints:
AWS Lambda functions
Amazon SQS queues
Amazon Kinesis Data Firehose
Mobile push
Email
SMS
• The producer sends one message to one SNS topic.
• Multiple receivers (subscribers) listen for the notification of messages.
• All the subscribers will receive all the messages.
Example: 1 message, 1 topic, 10 subscribers — a single message will be notified to 10 different subscribers.
18. What is Amazon Simple Queue Service (SQS)?
Amazon Simple Queue Service (SQS) is a serverless service used to decouple (loosely couple) serverless applications and components.
❑ The queue represents a temporary repository between the producer and consumer of messages.
❑ It can scale from 1 to 10,000 messages per second.
❑ The default retention period of messages is four days and can be extended to fourteen days.
❑ SQS messages get automatically deleted after being consumed by the consumers.
❑ SQS messages have a maximum size of 256 KB.
A Delay Queue allows users to postpone/delay the delivery of messages to a queue for a specific number of seconds. Messages can be delayed from 0 seconds (default) up to 15 minutes (maximum).
A Dead-Letter Queue is a queue for those messages that are not consumed successfully. It is used to handle message failure.
There are two SQS queue types:
Standard Queue -
❖ Unlimited number of transactions per second.
❖ Messages may be delivered in any order.
❖ Messages can be delivered twice or multiple times.
FIFO Queue -
❖ 300 messages per second.
❖ Supports batches of 10 messages per operation, resulting in 3,000 messages per second.
❖ Messages get consumed only once.
Visibility Timeout is the amount of time during which SQS prevents other consumers from receiving (polling) and processing a message.
Default visibility timeout - 30 seconds
Minimum visibility timeout - 0 seconds
Maximum visibility timeout - 12 hours
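The delay, retention, visibility-timeout, and dead-letter settings above map onto queue attributes. A sketch of the attribute set for a queue at its defaults, with a dead-letter queue attached (the DLQ ARN is a placeholder):

```json
{
  "DelaySeconds": "0",
  "MessageRetentionPeriod": "345600",
  "VisibilityTimeout": "30",
  "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:my-dlq\",\"maxReceiveCount\":\"5\"}"
}
```

Here 345600 seconds is the four-day default retention, and `maxReceiveCount` moves a message to the dead-letter queue after five failed receives.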
19. What is AWS AppSync?
AWS AppSync is a serverless service used to build GraphQL APIs with real-time data synchronization and offline programming features.
GraphQL is a data language built to allow apps to fetch data from servers.
The different data sources supported by AppSync are:
RDS databases
Amazon DynamoDB tables
Third-party HTTP endpoints
Amazon Elasticsearch
AWS Lambda functions
⮚ It replaces the functionality of Cognito Sync by providing offline data synchronization.
⮚ It improves performance by providing data caches, provides subscriptions to support real-time updates, and provides client-side data stores to keep offline clients in sync.
⮚ It offers advantages such as an enhanced coding style and seamless integration with modern tools and frameworks on iOS and Android.
⮚ The AppSync interface provides a live GraphQL API feature that allows users to test and iterate on GraphQL schemas and data sources quickly.
⮚ Along with AppSync, AWS provides an Amplify Framework that helps build mobile and web applications using GraphQL APIs.
Queries: for fetching data from the API
Mutations: for changing data via the API
Subscriptions: connections for streaming data from the API
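The three operation types listed above (queries, mutations, subscriptions) appear in a GraphQL schema roughly as follows; the `Todo` type is a hypothetical example, not part of any AppSync default:

```graphql
type Todo {
  id: ID!
  title: String!
  done: Boolean!
}

type Query {
  getTodo(id: ID!): Todo        # fetch data from the API
}

type Mutation {
  addTodo(title: String!): Todo # change data via the API
}

type Subscription {
  onAddTodo: Todo               # stream data from the API
}
```

In AppSync each of these fields is wired to a resolver backed by one of the data sources listed above (DynamoDB, Lambda, etc.).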
20. What is Amazon Simple Workflow Service?
Amazon Simple Workflow Service (Amazon SWF) is used to coordinate work amongst distributed application components.
A task is a logical representation of work performed by a component of the application.
Tasks are performed by implementing workers and execute either on Amazon EC2 or on on-premises servers (which means it is not a serverless service).
Amazon SWF stores tasks and assigns them to workers during execution.
It controls task implementation and coordination, such as tracking and maintaining state using the API.
It helps to create distributed asynchronous applications and supports sequential and parallel processing.
It is best suited for workflows that require human intervention.
Amazon SWF is now a less-used service; AWS Step Functions is usually the better option.
22. What is AWS Cost Explorer?
AWS Cost Explorer is a UI tool that enables users to analyze costs and usage with a graph, the Cost Explorer cost and usage reports, and the Cost Explorer RI report. It can be accessed from the Billing and Cost Management console.
The default reports provided by Cost Explorer are:
Cost and Usage Reports
Reserved Instance Reports
✔ The first time the user signs up for Cost Explorer, it directs them through the console's main parts.
✔ It prepares the data regarding costs and usage and displays up to 12 months of historical data (might be less if less used) and current-month data, and then calculates the forecast data for the next 12 months.
23. What is AWS Budgets?
AWS Budgets enables the customer to set custom budgets to track cost and usage, from the simplest to the most complex use cases.
AWS Budgets can be used to set reservation utilization or coverage targets, allowing you to get alerts by email or SNS notification when the metrics reach the threshold.
The Reservation Alerts feature is provided for:
Amazon EC2
Amazon RDS
Amazon Redshift
Amazon ElastiCache
Amazon Elasticsearch
❑ AWS Budgets can be accessed from the AWS Management Console's service links and within the AWS Billing Console.
❑ The Budgets API or CLI (command-line interface) can also be used to create, edit, delete, and view up to 20,000 budgets per payer account.
Budgets can be created on a monthly, quarterly, or annual basis for AWS resource usage or AWS costs.
Types of Budgets:
• Cost budgets
• Usage budgets
• RI utilization budgets
• RI coverage budgets
• Savings Plans utilization budgets
• Savings Plans coverage budgets
Users can set up five alerts for each budget. The most important are:
i. Alerts when current monthly costs exceed the budgeted amount.
ii. Alerts when current monthly costs exceed 80% of the budgeted amount.
iii. Alerts when forecasted monthly costs exceed the budgeted amount.
24. What is AWS Cost and Usage Report?
AWS Cost & Usage Report is a service that allows users to access the detailed set of AWS cost and usage data available, including metadata about AWS resources, pricing, Reserved Instances, and Savings Plans.
❑ AWS Cost & Usage Report is a part of AWS Cost Explorer.
AWS Cost and Usage Report functions:
❑ It sends report files to your Amazon S3 bucket.
❑ It updates reports up to three times a day.
✔ For viewing, reports can be downloaded from the Amazon S3 console; for analyzing, the report can be queried with Amazon Athena or uploaded into Amazon Redshift or Amazon QuickSight.
✔ Users with IAM permissions or IAM roles can access and view the reports.
✔ If a member account in an organization owns or creates a Cost and Usage Report, it can have access only to billing data for the time it has been a member of the organization.
✔ If the master account of an AWS Organization wants to block member accounts from setting up a Cost and Usage Report, a Service Control Policy (SCP) can be used.
25. What is Reserved Instance Reporting?
Reserved Instance Reporting is a service used to summarize Reserved Instance (RI) usage over a period of time.
RI coverage reports: The RI coverage report is used to visualize RI coverage and monitor against an RI coverage threshold. Along with AWS Cost Explorer, it increases cost savings as compared to On-Demand instance prices.
RI utilization reports: The RI utilization report is used to visualize daily RI utilization.
RI utilization reports can be visualized by exporting to both PDF and CSV formats.
Target utilization (threshold utilization) in RI utilization reports is represented by the dotted line in the chart, with different colored statuses:
❖ Red bar - RIs with no hours used.
❖ Yellow bar - under the utilization target.
❖ Green bar - reached the utilization target.
❖ Gray bar - instances not using reservations.
RI Utilization Report
RI Coverage Report
27. What is Amazon EC2?
Amazon Elastic Compute Cloud (Amazon EC2) is a service that provides secure and scalable compute capacity in the AWS cloud. It falls under the category of Infrastructure as a Service (IaaS).
It provides pre-configured templates that package the operating system and other software for the instances. These templates are called Amazon Machine Images (AMIs).
It helps to log in to the instances using key pairs, in which AWS manages the public key and the user holds the private key.
It enables users to write scripts under the option 'User data,' used at the instance's launch.
It provides different compute platforms and instance types based on price, CPU, operating system, storage, and networking; each instance type consists of one or more instance sizes, e.g., t2.micro, t4g.nano, m4.large, r5a.large, etc.
It also provides firewall-like security by specifying IP ranges, types, protocols (TCP), and port ranges (22, 25, 443) using security groups.
It provides temporary storage volumes known as instance store volumes, which are deleted if the instance is stopped, hibernated, or terminated. It also offers non-temporary or persistent volumes known as Amazon EBS volumes.
It offers a choice of three IP address types: public IP address (changes when the instance is stopped or restarted), private IP address (retained even if the instance is stopped), and Elastic IP address (a static public IP address).
It provides different types of instances based on the pricing models:
On-Demand Instances
✔ Useful for short-term needs and unpredictable workloads.
✔ No advance payment, no prior commitment.
Spot Instances
✔ No advance payment, no prior commitment.
✔ Useful for cost-sensitive compute workloads.
Reserved Instances
✔ Useful for long-running workloads and predictable usage.
✔ Offer a choice of No Upfront, Partial Upfront, or All Upfront payment.
Dedicated Instances
✔ Instances run on hardware dedicated to a single customer.
✔ Other customers cannot share the hardware.
Dedicated Hosts
✔ A whole physical server with EC2 instance capacity allocated to an organization.
28. What is Amazon EC2 Auto Scaling?
Amazon EC2 Auto Scaling is a region-specific service used to maintain application availability and enables users to automatically add or remove EC2 instances according to the compute workloads.
❖ The Auto Scaling group is a collection of a minimum number of EC2 instances used for high availability.
❖ It enables users to use Amazon EC2 Auto Scaling features such as fault tolerance, health checks, scaling policies, and cost management.
❖ The scaling of the Auto Scaling group depends on the size of the desired capacity. It is not necessary to keep DesiredCapacity and MaxSize equal.
E.g.,
DesiredCapacity: '2' - there will be 2 EC2 instances in total
MinSize: '1'
MaxSize: '2'
❖ EC2 Auto Scaling supports automatic horizontal scaling (increasing or decreasing the number of EC2 instances) rather than vertical scaling (changing the size of EC2 instances, e.g., small, medium, large).
❖ It scales across multiple Availability Zones within the same AWS region.
Launch Configuration vs. Launch Template:
A launch configuration is a configuration file used by an Auto Scaling group to launch EC2 instances. A launch template is similar to a launch configuration, with extra features as below:
● A launch configuration launches either Spot or On-Demand instances; a launch template launches both Spot and On-Demand instances.
● A launch configuration specifies a single instance type; a launch template specifies multiple instance types.
● An Auto Scaling group specifies one launch configuration at a time, but can specify multiple launch templates.
29. Amazon EC2 Auto Scaling
The ways to scale Auto Scaling groups are as follows:
Manual Scaling
Update the desired capacity of the Auto Scaling group manually.
Scheduled Scaling
This scaling policy adds or removes instances based on predictable traffic patterns of the application.
Example: scale out every Tuesday or scale in every Saturday.
Dynamic Scaling
⮚ Target tracking scaling policy: adds or removes instances to keep the scaling metric close to the specified target value.
⮚ Simple scaling policy: adds or removes instances when the scaling metric value exceeds a threshold value.
⮚ Step scaling policy: adds or removes instances based on step adjustments (lower and upper bounds of the metric value).
Amazon EC2 Auto Scaling using a CloudWatch Alarm
⮚ The cooldown period is the time after a scaling activity during which the Auto Scaling group doesn't launch or terminate any additional instances, so the previous scaling activity can take effect first.
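The capacity bounds and a target tracking policy described above can be sketched in a CloudFormation template. This is a minimal illustration, not a complete stack: the resource names are made up, and the launch template is assumed to be defined elsewhere.

```yaml
Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '1'
      MaxSize: '4'
      DesiredCapacity: '2'
      AvailabilityZones: !GetAZs ''      # span the AZs of the current region
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate          # assumed defined elsewhere
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
  CPUTargetPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref WebServerGroup
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ASGAverageCPUUtilization
        TargetValue: 50.0                # keep average CPU near 50%
```

With a target tracking policy like this, the group adds or removes instances on its own; only MinSize/MaxSize bound the result.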
30. AWS Batch is a fully managed and regional
batch processing service that allows
developers, scientists, and engineers to
execute large amounts of batch computing
workloads on AWS.
What is AWS Batch?
AWS Batch
⮚ Jobs are submitted to a particular job queue and scheduled onto a computing environment.
⮚ A job is a work unit such as a shell script, a Linux
executable, or a Docker container image.
⮚ AWS Batch can be integrated with AWS data stores
like Amazon S3 or Amazon DynamoDB to retrieve and
write data securely.
It provisions the right amount of compute and memory and can efficiently execute hundreds of thousands of batch computing workloads across AWS compute services such as:
services such as:
1. AWS Fargate
2. Amazon EC2
3. Spot Instances
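A job's container image, resources, and command are declared in a job definition. The following is a minimal CloudFormation sketch under stated assumptions: the resource name, image, and command are illustrative, and the classic Vcpus/Memory fields are used for brevity.

```yaml
Resources:
  HelloJobDefinition:
    Type: AWS::Batch::JobDefinition
    Properties:
      Type: container
      ContainerProperties:
        Image: public.ecr.aws/amazonlinux/amazonlinux:latest
        Vcpus: 1                               # 1 vCPU per job
        Memory: 512                            # MiB of memory per job
        Command: ["echo", "hello from AWS Batch"]
```

Jobs submitted against this definition would then be placed on a job queue and scheduled onto a compute environment.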
31. What is AWS Elastic
Beanstalk?
AWS Elastic
Beanstalk
⮚ It falls under the category of Platform as a Service (PaaS).
⮚ It is also described as a developer-centric view of deploying an application on AWS: the developer's only responsibility is to write the code, and Elastic Beanstalk handles the deployment and the infrastructure.
⮚ An Elastic Beanstalk application comprises components including environments, versions, platforms, and environment configurations.
It provides multiple deployment policies such as:
All at once
Rolling
Rolling with an additional batch
Immutable
Traffic splitting
AWS CloudFormation vs. AWS Elastic Beanstalk
AWS CloudFormation:
✔ Deploys infrastructure using YAML/JSON template files.
✔ Can deploy Elastic Beanstalk environments.
AWS Elastic Beanstalk:
✔ Deploys applications on EC2.
✔ Cannot deploy CloudFormation templates.
⮚ The Elastic Beanstalk console offers users deployment and management tasks such as changing the size of Amazon EC2 instances, monitoring (metrics, events), and environment status.
⮚ It supports web applications coded in popular languages and frameworks such as Java, .NET, Node.js, PHP, Ruby, Python, Go, and Docker.
⮚ It uses Elastic Load Balancing and Auto Scaling to automatically scale the application based on its specific needs.
AWS Elastic Beanstalk is a service used to quickly deploy, scale, and manage applications in the AWS Cloud with automatic infrastructure management.
The workflow of Elastic Beanstalk
32. Amazon EC2 vs. AWS Lambda
Amazon EC2:
✔ Instances are termed virtual servers in the AWS cloud.
✔ Limited by instance type (RAM and CPU).
✔ Runs continuously.
✔ Scaling compute resources is manual.
AWS Lambda:
✔ Termed virtual functions.
✔ Limited by execution time (maximum 15 minutes).
✔ Runs on demand.
✔ Scaling is automated.
AWS Lambda is a serverless computing service
that allows users to run code as functions
without provisioning or managing servers.
What is AWS Lambda?
AWS Lambda
Charges are applied based on the number of requests for the functions and the time taken to execute the code.
It helps to run code on highly available compute infrastructure and performs administrative tasks like server maintenance, logging, capacity provisioning, automatic scaling, and code monitoring.
Using AWS Lambda, one can build serverless applications composed of Lambda functions triggered by events, which can be automatically deployed using AWS CodePipeline and AWS CodeBuild.
Lambda functions support the following languages:
Java
PowerShell
Node.js
Ruby
Python
Go
C#
Pricing details:
✔ The memory allocated to an AWS Lambda function ranges from 128 MB (minimum) to 3,008 MB (maximum). Additional memory can be requested in increments of 64 MB between 128 MB and 3,008 MB.
✔ The default execution timeout for AWS Lambda is 3 seconds, and the maximum is 15 minutes (900 seconds).
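The memory and timeout settings above map directly to Lambda function properties. A minimal CloudFormation sketch follows; the function name and inline code are illustrative, and the IAM execution role is assumed to be defined elsewhere.

```yaml
Resources:
  HelloFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      MemorySize: 256        # within the 128 MB - 3,008 MB range above
      Timeout: 10            # seconds; default is 3, maximum is 900
      Role: !GetAtt HelloFunctionRole.Arn   # execution role assumed defined elsewhere
      Code:
        ZipFile: |
          def handler(event, context):
              return {"statusCode": 200, "body": "hello"}
```

Billing then follows the request count and the execution duration, as described above.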
AWS Lambda
33. AWS Serverless Application Repository is a managed repository used by developers and organizations to search, assemble, publish, deploy, and store serverless architectures.
What is AWS Serverless Application Repository?
AWS Serverless Application Repository
Publishing Applications:
Upload and publish applications
to be used by other developers.
Deploying Applications:
Search for applications with their
required files and deploy them.
⮚ It helps share reusable serverless application architectures and compose new serverless architectures using the AWS Serverless Application Model (SAM) template.
⮚ It uses pre-built applications in serverless deployments, eliminating the need to re-build and publish code to AWS.
⮚ It discovers and offers best practices for serverless architectures to provide consistency within organizations, and provides permissions to share applications with specific AWS accounts.
⮚ It integrates with AWS Lambda, allowing developers of all levels to work with serverless computing by using reusable architectures.
There are two ways to work with the AWS Serverless Application Repository: publishing applications and deploying applications.
The AWS Serverless Application Repository can be accessed in the following ways:
AWS Management Console
AWS SAM command-line interface (AWS SAM CLI)
AWS SDKs
35. Amazon Elastic Container Registry (ECR) is a managed service that allows users to store, manage, share, and deploy container images and other artifacts.
What is Amazon Elastic Container Registry?
Amazon Elastic Container Registry
⮚ It stores both the containers that users create and any container software bought through AWS Marketplace.
⮚ It is integrated with the following services:
Amazon Elastic Container Service (ECS)
Amazon Elastic Kubernetes Service (EKS)
AWS Lambda
Docker CLI
AWS Fargate for easy deployments
AWS Identity and Access Management (IAM) enables resource-level control of each repository within ECR.
Amazon Elastic Container Registry (ECR) supports public and private container image repositories. It allows sharing container applications privately within the organization or publicly for anyone to download.
Images are encrypted at rest using Amazon S3 server-side encryption or using customer keys managed by AWS Key Management Service (KMS).
Amazon Elastic Container Registry (ECR) is integrated with continuous integration, continuous delivery, and third-party developer tools.
Image scanning identifies vulnerabilities in container images; scanning images as they are pushed helps ensure that only scanned images reach the repository.
Amazon ECR example
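The scan-on-push and encryption-at-rest behavior described above can be sketched as an ECR repository in CloudFormation (the repository name is illustrative):

```yaml
Resources:
  AppRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: my-app              # illustrative name
      ImageScanningConfiguration:
        ScanOnPush: true                  # scan images for vulnerabilities on push
      EncryptionConfiguration:
        EncryptionType: KMS               # encrypt images at rest with AWS KMS
```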
36. Amazon Elastic Container Service is a regional, Docker-supported service that allows users to manage and scale containers on a cluster.
What is Amazon Elastic Container Service?
Amazon Elastic Container Service
⮚ An ECS cluster is a combination of tasks or services executed on EC2 instances or AWS Fargate.
⮚ It offers to scale ECS clusters using Auto Scaling based on CPU usage and other Auto Scaling rules.
⮚ Using an Application Load Balancer, ECS enables dynamic port mapping and path-based routing.
⮚ It provides Multi-AZ features for ECS clusters.
Two main use cases of Amazon ECS are:
Microservices - built by an architectural method that decouples complex applications into smaller, independent services.
Batch Jobs - short-lived packages that can be executed using containers.
Amazon ECS with Application Load Balancer
37. Amazon Elastic Kubernetes Service (Amazon EKS) is a service that enables users to manage Kubernetes applications in the AWS cloud or on-premises.
What is Amazon Elastic Kubernetes Service?
Amazon Elastic Kubernetes Service (EKS)
Using Amazon EKS, Kubernetes clusters and applications can be managed across hybrid environments without altering the code.
Amazon EKS
The EKS cluster consists of two components:
❑ Amazon EKS control plane
❑ Amazon EKS nodes
⮚ The Amazon EKS control plane consists of nodes that run the Kubernetes software, such as etcd and the Kubernetes API server.
⮚ To ensure high availability, Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones.
⮚ It automatically replaces unhealthy control plane instances and provides automated upgrades and patches for new control planes.
⮚ Users can execute batch workloads on the EKS cluster using the Kubernetes Jobs API across AWS compute services such as Amazon EC2, Fargate, and Spot Instances.
⮚ The two methods for creating a new Kubernetes cluster with nodes in Amazon EKS are:
o eksctl - a command-line utility that includes kubectl for creating/managing Kubernetes clusters on Amazon EKS.
o AWS Management Console and AWS CLI
Amazon Elastic Kubernetes Service is integrated with many AWS services for unique capabilities:
❖ Images - Amazon ECR for container images
❖ Load distribution - AWS ELB (Elastic Load Balancing)
❖ Authentication - AWS IAM
❖ Isolation - Amazon VPC
38. AWS Fargate is a serverless compute service used for containers by Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS).
What is AWS Fargate?
AWS Fargate
In the AWS Management Console, ECS clusters containing Fargate and EC2 tasks are displayed separately.
It executes each Amazon ECS task or Amazon EKS pod in its own kernel as an isolated computing environment, which improves security.
It packages the application in containers by just specifying the CPU and memory requirements with IAM policies. A Fargate task does not share its underlying kernel, memory resources, CPU resources, or elastic network interface (ENI) with another task.
It automatically scales the compute environment to match the resource requirements of the container.
AWS Fargate
Difference between EC2 instances and AWS Fargate
Security groups for pods in EKS cannot be used when pods are running on Fargate.
Storage types supported for Fargate tasks:
Amazon EFS volumes for persistent storage
Ephemeral storage for non-persistent storage
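The "just specify CPU and memory" model above corresponds to a Fargate task definition. A minimal CloudFormation sketch follows; the family name, container name, and image are illustrative.

```yaml
Resources:
  WebTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: web-app                      # illustrative name
      RequiresCompatibilities: [FARGATE]
      NetworkMode: awsvpc                  # required for Fargate; each task gets its own ENI
      Cpu: '256'                           # 0.25 vCPU for the whole task
      Memory: '512'                        # 512 MB for the whole task
      ContainerDefinitions:
        - Name: web
          Image: public.ecr.aws/nginx/nginx:latest
          PortMappings:
            - ContainerPort: 80
```

The `awsvpc` network mode is what gives each task the dedicated elastic network interface mentioned above.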
40. Amazon Aurora is a MySQL- and PostgreSQL-compatible, fully managed relational database engine built to enhance the performance and availability of traditional enterprise databases.
What is Aurora?
Amazon Aurora
⮚ It is a part of the fully managed Amazon Relational Database Service (Amazon RDS).
Features include:
CLI commands and API operations for patching
RDS Management Console
Database setup
Backup
Failure detection and repair
Recovery
Performance: up to 5x greater than MySQL on RDS and up to 3x greater than PostgreSQL on RDS.
⮚ Amazon Aurora replicates 2 copies of data in each Availability Zone (minimum of 3 AZs), for a total of 6 copies per region.
Data replication: 2 types
⮚ Aurora Replica (in-region): can provide up to 15 read replicas.
⮚ MySQL Read Replica (cross-region): can provide up to 5 read replicas.
Amazon Aurora cross-region read replicas help improve disaster recovery and provide fast reads in regions closer to the application users.
41. Amazon DocumentDB is a fully managed NoSQL document database service that is compatible with MongoDB workloads in AWS.
What is Amazon DocumentDB?
Amazon DocumentDB
It helps to scale storage and compute independently.
It provides automatic failover, either to one of up to 15 replicas created in other Availability Zones or to a new instance if no replicas have been provisioned.
It provides 99.99% availability by copying the cluster's data across three different Availability Zones.
It provides backup capability and point-in-time recovery for the cluster, with a backup retention period of up to 35 days.
It is best suited for TTL and time-series workloads and supports ACID transactions across one or more documents.
❖ It is a non-relational database service and supports document data structures.
❖ Using DocumentDB with Amazon CloudWatch helps monitor the health and performance of the instances in a cluster.
❖ It works by building clusters that consist of 0-16 database instances (1 primary and up to 15 read replicas) and a cluster storage volume.
42. Amazon DynamoDB is a serverless NoSQL database service that provides fast and predictable performance with single-digit millisecond latency.
What is Amazon DynamoDB?
Amazon DynamoDB example
It is a multi-region cloud service that supports key-value and document data structures.
It provides high availability and data durability by replicating data synchronously on solid-state disks (SSDs) across 3 AZs in a region.
Amazon DynamoDB Accelerator (DAX) is a highly available in-memory cache service that serves data from DynamoDB tables. DAX is not used for strongly consistent reads or write-intensive workloads.
It provides a push-button scaling feature, signifying that the DB can scale without any downtime.
It helps to store session state and supports ACID transactions for business-critical applications.
It provides on-demand backup capability of tables for long-term retention and enables point-in-time recovery from accidental write or delete operations.
It supports cross-region replication using DynamoDB Global Tables. Global Tables help deploy a multi-region database and provide automatic multi-master replication to AWS regions.
Amazon DynamoDB
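A table with on-demand capacity and point-in-time recovery, as described above, can be sketched in CloudFormation (the table and attribute names are illustrative):

```yaml
Resources:
  SessionTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: user-sessions              # illustrative name
      BillingMode: PAY_PER_REQUEST          # on-demand capacity mode
      AttributeDefinitions:
        - AttributeName: SessionId
          AttributeType: S                  # string partition key
      KeySchema:
        - AttributeName: SessionId
          KeyType: HASH
      PointInTimeRecoverySpecification:
        PointInTimeRecoveryEnabled: true    # recover from accidental writes/deletes
```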
44. Amazon Keyspaces (for Apache Cassandra) is a serverless service used to manage Apache Cassandra databases in AWS.
What is Amazon Keyspaces?
Amazon Keyspaces
Functions of Keyspaces:
It eliminates the developers' operational burden, such as scaling, patching, updates, server maintenance, and provisioning.
It offers high availability and durability by maintaining three copies of data in multiple Availability Zones.
It helps run existing Cassandra workloads on AWS without making any changes to the Cassandra application code.
It implements the Apache Cassandra Query Language (CQL) API, so CQL and Cassandra drivers can be used as with Apache Cassandra.
It helps build applications that can serve thousands of requests with single-digit-millisecond response latency.
It continuously backs up hundreds of terabytes of table data and provides point-in-time recovery over the preceding 35 days.
It provides the following throughput capacity modes for reads and writes:
On-demand: charges are applied for the reads and writes performed.
Provisioned: charges are minimized by specifying the number of reads and writes per second in advance.
Using Amazon Keyspaces, tables can be scaled automatically, and read/write costs can be optimized by choosing either on-demand or provisioned capacity mode.
45. Amazon Neptune is a graph database service used as a web service to build and run applications that require connected datasets.
What is Amazon Neptune?
Amazon Neptune
Functions of Amazon Neptune:
It provides fault-tolerant storage by replicating two copies of data in each of three Availability Zones.
It provides continuous backup to Amazon S3 and point-in-time recovery from storage failures.
It is highly available across three AZs and automatically fails over to one of up to 15 low-latency read replicas.
It automatically scales storage capacity and provides encryption at rest and in transit.
⮚ It offers a choice of graph models and query languages:
⮚ Property Graph (PG) model with the Apache TinkerPop Gremlin graph traversal language.
⮚ W3C-standard Resource Description Framework (RDF) model with the SPARQL query language.
⮚ The graph database engine helps store billions of connections and provides millisecond latency for querying them.
46. Amazon Relational Database Service (Amazon RDS) is a service used to build and operate relational databases in the AWS Cloud.
What is Amazon RDS?
Amazon RDS
⮚ It is best suited for structured data and Online Transaction Processing (OLTP) database workloads, such as those using InnoDB.
⮚ AWS KMS provides encryption at rest for RDS instances, DB snapshots, DB instance storage, and read replicas. An existing unencrypted database cannot be encrypted in place.
⮚ Amazon RDS only scales up for compute and storage, with no option for decreasing allocated storage.
⮚ It provides Multi-AZ and Read Replicas features for high availability, disaster recovery, and scaling.
• Multi-AZ deployments - synchronous replication
• Read Replicas - asynchronous replication
✔ If there is a need for an unsupported RDS database engine, the DB can be deployed on EC2 instances.
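A Multi-AZ, encrypted RDS instance as described above can be sketched in CloudFormation. The resource name and credentials are illustrative placeholders; in practice the password would come from AWS Secrets Manager rather than the template.

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      DBInstanceClass: db.t3.micro
      AllocatedStorage: '20'            # GB; can be increased later, but not decreased
      MultiAZ: true                     # synchronous standby replica in another AZ
      StorageEncrypted: true            # encryption at rest via AWS KMS
      MasterUsername: admin
      MasterUserPassword: ChangeMe12345 # placeholder; use Secrets Manager in practice
```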
Amazon RDS
It supports the following database engines:
PostgreSQL
SQL Server
MariaDB
Amazon Aurora
Oracle
MySQL
RDS supports creating read replicas of read replicas, and a read replica can also serve as a standby DB like Multi-AZ.
The read replicas feature is not available for SQL Server.
When a database is instead run on EC2, the following tasks need to be taken care of manually:
Encryption and security, updates and backups, disaster recovery.
48. AWS CodeBuild is a continuous integration service in the cloud used to compile source code, run tests, and build packages for deployment.
What is AWS CodeBuild?
AWS CodeBuild
❑ The AWS Code Services family consists of AWS CodeBuild, AWS CodeCommit, AWS CodeDeploy, and AWS CodePipeline, which together provide complete and automated continuous integration and delivery (CI/CD).
❑ It provides prepackaged and customized build environments for many programming languages and tools.
❑ It scales automatically to process multiple separate builds concurrently.
❑ It can be used as a build or test stage of a pipeline in AWS CodePipeline.
❑ It requires a VPC ID, VPC subnet IDs, and VPC security group IDs to access resources in a VPC to perform a build or test.
❑ Charges are applied based on the amount of time taken by AWS CodeBuild to complete the build.
❑ The following ways are used to run CodeBuild:
AWS CodePipeline console
AWS CodeBuild console
AWS Command Line Interface (AWS CLI)
AWS SDKs
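CodeBuild reads its build steps from a buildspec.yml file in the source repository. This is a minimal sketch; the runtime, commands, and output directory are illustrative and depend on the project.

```yaml
# buildspec.yml - a minimal sketch of a CodeBuild specification
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 14                # runtime choice is illustrative
  build:
    commands:
      - npm install             # install dependencies
      - npm test                # run the test suite
artifacts:
  files:
    - '**/*'
  base-directory: dist          # assumes the build writes output to dist/
```

The artifacts section defines the package CodeBuild hands to the next pipeline stage.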
AWS CodeBuild
49. AWS CodeCommit is a managed source control service used to store and manage private Git repositories in the AWS cloud.
What is AWS CodeCommit?
AWS CodeCommit
Functions of AWS CodeCommit:
It provides high availability, durability, and redundancy.
It eliminates the need to back up and scale the source control servers.
It works with existing Git-based repositories, tools, and commands, in addition to AWS CLI commands and APIs.
CodeCommit repositories support pull requests, version differencing, merge requests between branches, and notifications through emails about any code changes.
AWS CodeCommit
Compared to Amazon S3 versioning of individual files, AWS CodeCommit supports tracking batched changes across multiple files.
It provides encryption at rest and in transit for the files in the repositories.
50. AWS CodeDeploy is a service that helps automate application deployments to a variety of compute services such as Amazon EC2, AWS Fargate, Amazon ECS, and on-premises instances.
What is AWS CodeDeploy?
AWS CodeDeploy
❑ It can fetch the content for deployment from Amazon S3 buckets, Bitbucket, or GitHub repositories.
❑ It can deploy different types of application content, such as code, Lambda functions, configuration files, scripts, and even multimedia files.
❑ It can scale with the infrastructure to deploy on multiple instances across development, test, and production environments.
❑ It can integrate with existing continuous delivery workflows such as AWS CodePipeline, GitHub, and Jenkins.
AWS CodeDeploy
It provides the following deployment types to choose from:
In-place deployment:
● All the instances in the deployment group are stopped, updated with the new revision, and started again after the deployment is complete.
● Useful for the EC2/On-premises compute platform.
Blue/green deployment:
● The instances in the deployment group of the original environment are replaced by a new set of instances in the replacement environment.
● Using an Elastic Load Balancer, traffic gets rerouted from the original environment to the replacement environment, and instances of the original environment are terminated after the deployment is complete.
● Useful for the EC2/On-premises, AWS Lambda, and Amazon ECS compute platforms.
AWS CodeDeploy
51. AWS X-Ray is a service that provides visual analysis of, and traces requests through, microservices-based applications.
What is AWS X-Ray?
AWS X-Ray
✔ It provides end-to-end information about requests, responses, and calls made to other AWS resources by following a request through the application's underlying components, which may consist of multiple microservices.
✔ It creates a service graph by using trace data from the AWS
resources.
⮚ The graph shows the information about front-end and
backend services calls to process requests and continue
the flow of data.
⮚ The graph helps to troubleshoot issues and improve the
performance of the applications.
It works with the following AWS services:
AWS Elastic Load Balancing
Amazon EC2 (applications deployed on instances)
Amazon ECS (Elastic Container Service)
AWS Elastic Beanstalk
Amazon API Gateway
AWS Lambda
The X-Ray SDKs are available for the following languages:
⮚ Go
⮚ Java
⮚ Node.js
⮚ Python
⮚ Ruby
⮚ .NET
52. Amazon WorkSpaces is a managed service used to provision virtual Windows or Linux desktops for users across the globe.
What is Amazon WorkSpaces?
Amazon WorkSpaces
It offers a choice of the PCoIP protocol (port 4172) or the WorkSpaces Streaming Protocol (WSP, port 4195) based on requirements such as the type of device used for WorkSpaces, the operating system, and network conditions.
Amazon WorkSpaces Application Manager (Amazon WAM) helps manage the applications on Windows WorkSpaces.
It helps eliminate the management of on-premises VDI (Virtual Desktop Infrastructure).
Multi-factor authentication (MFA) and AWS Key Management Service (AWS KMS) are used for account and data security.
Each WorkSpace is connected to a virtual private cloud (VPC) with two elastic network interfaces (ENIs) and AWS Directory Service.
❖ Amazon WorkSpaces can be accessed with the following client applications for specific devices:
✔ Android devices, iPads
✔ Windows, macOS, and Ubuntu Linux computers
✔ Chromebooks
✔ Teradici zero client devices - supported only with PCoIP
❖ For Amazon WorkSpaces, billing takes place either monthly or
hourly.
54. Amazon API Gateway is a service that maintains and secures APIs at any scale. It is categorized as a serverless service of AWS.
What is Amazon API Gateway?
Amazon API Gateway
Amazon API Gateway:
✔ Acts as a front door for real-world applications to access data and business logic from back-end services, such as code running on AWS Lambda or any web application.
✔ Handles the processing of hundreds of thousands of concurrent API calls, including authorization, access control, different environments (dev, test, production), and API version management.
✔ Helps create web APIs that route HTTP requests to Lambda functions.
Example:
When a request is sent through a browser or HTTP client to the public endpoint, the API Gateway API accepts the request and forwards it to the Lambda function. The function calls the Lambda API to get the required data and returns it to the API.
AWS Lambda + API Gateway = No need to manage infrastructure
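The Lambda + API Gateway pattern above can be sketched with an AWS SAM template, where an Api event wires an HTTP path to a function. The function name, path, and inline code are illustrative.

```yaml
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.9
      Handler: index.handler
      InlineCode: |
        def handler(event, context):
            return {"statusCode": 200, "body": "hello"}
      Events:
        HelloGet:
          Type: Api                 # SAM creates the API Gateway REST API
          Properties:
            Path: /hello
            Method: get
```

Deploying this stack yields a public endpoint where GET /hello invokes the function, with no servers to manage.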
API Gateway consists of:
Resources
Methods
Stages
Integrations
Amazon API Gateway integrates:
Outside of a VPC with: AWS Lambda, EC2 endpoints, any AWS service
Inside of a VPC with: AWS Lambda, EC2 endpoints, load balancers
integrates
56. Amazon GameLift is a fully managed service used to deploy, manage, and scale dedicated servers in the cloud for multiplayer games.
What is Amazon GameLift?
Amazon GameLift
✔ It helps host custom game servers on Amazon Linux or Windows Server operating systems and manages scaling, security, storage, and performance tracking.
✔ Along with FlexMatch, Amazon GameLift helps match players and connect up to 200 single-team or multi-team players in a single game session on the server.
✔ GameLift Queues help place new game sessions across multiple regions and provide players with sorted lists of available game sessions.
✔ It also provides ready-to-go or real-time game server solutions to deploy game servers with minimal configuration settings and custom logic required for the game and players.
✔ GameLift FleetIQ offers Spot Instances in Amazon EC2 for cloud-based game hosting with automatic scaling capabilities.
⮚ It provides a low-latency player experience and reduces the manual tasks of deploying and managing game servers.
⮚ The three main components of game infrastructure that fit with Amazon GameLift are game servers, game services, and gateway services.
58. AWS IoT Analytics is a managed
service used to analyze massive
data from IoT devices.
What is AWS IoT Analytics?
AWS IoT Analytics
It extracts data by running SQL queries using an in-built
SQL query engine.
It provides non-overlapping windows to perform analysis
and time-series data storage to store processed and raw
data.
It scales automatically and can analyze petabytes of
data from multiple devices.
❑ IoT data is often unstructured or less structured (such as temperature, motion, or sound readings) and can contain false readings that need to be cleaned before analysis.
❑ It collects and processes data by filtering messages and applying mathematical transformations to device data.
AWS IoT Analytics flow
59. AWS IoT Core is a cloud service that enables users to connect IoT devices (wireless devices, sensors, and smart appliances) to the AWS cloud without managing servers.
What is AWS IoT Core?
AWS IoT Core
❑ It provides secure, bi-directional communication with all the devices, even when they aren't connected.
❑ It consists of a device gateway and a message broker that help connect and process messages and route those messages to other devices or AWS endpoints.
❑ It helps developers operate wireless LoRaWAN (low-power, long-range Wide Area Network) devices.
❑ It helps create a persistent Device Shadow (a virtual version of a device) so that other applications or devices can interact with it.
⮚ It supports devices and clients that use the following protocols:
MQTT (Message Queuing Telemetry Transport) - publish and subscribe messages
HTTPS protocol - publish messages
MQTT over WSS protocol - publish and subscribe messages
AWS IoT Core
It integrates with Amazon services like Amazon CloudWatch, AWS CloudTrail, Amazon S3, Amazon DynamoDB, AWS Lambda, Amazon Kinesis, Amazon SageMaker, and Amazon QuickSight to build IoT applications.
60. AWS IoT Device Defender is a fully
managed service that enables users to
audit the configuration associated with
the devices to mitigate security risks.
What is AWS IoT Device
Defender?
AWS IoT Device Defender
A configuration is a set of rules or
controls to secure communication.
It offers in-built mitigation actions
to reduce the impact of security
issues.
It handles IoT security in the cloud.
It alerts using Amazon CloudWatch, AWS IoT Console, and Amazon
SNS if there are any anomalies in the IoT configuration.
61. AWS IoT Device Management is a cloud service used to manage, track, and monitor thousands of IoT devices in a fleet throughout their lifecycle.
What is AWS IoT Device Management?
AWS IoT Device Management
Examples of device fleets are:
Cameras
Appliances
Machines
Operational systems, vehicles, etc.
❖ It registers connected devices individually or in groups and manages permissions for device security.
❖ It helps users organize and manage devices into groups according to business and security requirements.
❖ It groups multiple sensors within a single unit or in a fleet for communication.
❖ It offers near real-time search capability to quickly find any IoT device across the fleet by using attributes such as device ID, type, and state.
❖ Fleet Hub is a fully managed web application used to interact with device fleets from anywhere, securely.
❖ It reduces the effort of managing large and multiple IoT device deployments and offers scaling capabilities for the connected device fleets.
62. AWS IoT Events is a monitoring service that allows users to monitor and respond to device fleets' events in IoT applications.
What is AWS IoT Events?
AWS IoT Events
It builds event monitoring
applications in the AWS Cloud that
can be accessed through the AWS
IoT Events console.
It detects events from IoT sensors
such as temperature, motor voltage,
motion detectors, humidity.
It helps to create event logic using
conditional statements and trigger
alerts when an event occurs.
AWS IoT Events accepts data from many IoT sources like sensor devices,
AWS IoT Core, and AWS IoT Analytics.
63. AWS IoT Greengrass is a cloud service that groups, deploys, and manages software for all devices at once and enables edge devices to communicate securely.
What is AWS IoT Greengrass?
AWS IoT Greengrass
It synchronizes data on the device using the following AWS services:
Amazon Kinesis
Amazon Simple Storage Service (Amazon S3)
AWS IoT Core
AWS IoT Analytics
❖ It is used on multiple IoT devices in homes, vehicles, factories, and businesses.
❖ It provides a pub/sub message manager that buffers messages to preserve them for the cloud.
❖ The Greengrass Core is a device runtime that enables communication between local devices and AWS IoT Core.
❖ Devices with IoT Greengrass can process data streams without being online.
❖ It provides different programming languages, open-source software, and development environments to develop and test IoT applications on specific hardware.
❖ It provides encryption and authentication of device data for cloud communications.
❖ It provides AWS Lambda functions and Docker containers as environments for code execution.
64. FreeRTOS is an open-source operating system for microcontrollers that enables devices to be easily connected, managed, programmed, deployed, and scaled.
What is FreeRTOS?
FreeRTOS
It helps securely connect small
devices to AWS IoT Core or the
devices running AWS IoT Greengrass.
The microcontroller is a kind of
processor available in many devices
like industrial automation,
automobiles, sensors, appliances.
It acts as a multitasking scheduler and
provides multiple memory allocation
options, semaphore, task
notifications, message queues, and
message buffers.
66. Amazon SageMaker is a cloud service that allows developers to prepare, build, train, deploy, and manage machine learning models.
What is Amazon SageMaker?
Amazon SageMaker
Amazon SageMaker notebook instances are created using Jupyter notebooks to write code to train and validate models.
Amazon SageMaker is billed by the second, based on the amount of time required to build, train, and deploy machine learning models.
It scales to the petabyte level to train models and manages all the underlying infrastructure.
❖ It provides a secure and scalable environment to deploy a model using SageMaker Studio or the SageMaker console.
❖ It has pre-installed machine learning algorithms optimized to deliver up to 10x performance.
67. Amazon Polly is a cloud service
used to convert text into speech.
What is Amazon Polly?
Amazon Polly
It supports many different languages and Neural Text-to-Speech (NTTS) voices to create speech-enabled applications.
It requires no setup costs, only pay
for the text converted.
It offers caching and replays of
Amazon Polly’s generated speech in a
format like MP3.
68. Amazon Transcribe is a service used to convert audio (speech) to text using a deep learning process known as automatic speech recognition (ASR).
What is Amazon Transcribe?
Amazon Transcribe
Amazon Transcribe Medical is used to convert medical speech to text for clinical documentation.
It is best suited for customer service calls, live broadcasts, and media subtitling.
It automatically produces text quality similar to manual transcription. For Amazon Transcribe, charges are applied based on the seconds of speech converted per month.
69. AWS Deep Learning AMIs are
customized machine images that
provide tools to improve and scale deep
learning in the cloud.
What is AWS Deep Learning AMIs?
AWS Deep Learning AMIs
It uses EC2 machines pre-installed with deep learning
tools such as:
❖ TensorFlow
❖ Apache MXNet
❖ Keras, etc.
to train modern and custom AI models.
It provides two kinds of AMIs:
Conda AMI:
✔ It uses Anaconda environments.
✔ Frameworks get installed separately
using Conda packages.
Base AMI:
✔ No frameworks installed.
✔ It is used for private deep
learning engine repositories.
71. Amazon CloudWatch is a service that
monitors AWS and on-premises resources
based on multiple metrics.
What is Amazon CloudWatch?
Amazon CloudWatch
Amazon CloudWatch monitors AWS resources such as
Amazon RDS DB instances, Amazon EC2 instances,
Amazon DynamoDB tables, and any log files generated
by the applications.
⮚ Collects and correlates monitoring data in logs, metrics, and
events from AWS resources, applications, and services that
run on AWS and on-premises servers.
⮚ Offers dashboards and creates graphs to visualize cloud
resources.
⮚ Visualizes logs to address issues and improve performance
by performing queries.
⮚ Alarms can be created using CloudWatch Alarms to monitor
metrics and send notifications.
⮚ The CloudWatch Agent or API can be used to monitor hybrid cloud
architectures.
⮚ CloudWatch Container Insights and Lambda Insights both
provide dashboards to summarize the performance and errors
for a selected time window.
Amazon CloudWatch is used alongside the following applications:
❖ Amazon Simple Notification Service(Amazon SNS)
❖ Amazon EC2 Auto Scaling
❖ AWS CloudTrail
❖ AWS Identity and Access Management (IAM)
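The alarm behavior described above can be illustrated with a small sketch. This is a hypothetical simplification (not the CloudWatch implementation or API): an alarm enters ALARM state when a metric breaches a threshold for a given number of consecutive evaluation periods.

```python
# Hypothetical sketch of CloudWatch-style alarm evaluation: an alarm
# fires when the metric breaches the threshold for the last N
# consecutive evaluation periods.

def evaluate_alarm(datapoints, threshold, evaluation_periods):
    """Return 'ALARM' if the last `evaluation_periods` datapoints
    all exceed `threshold`, else 'OK'."""
    if len(datapoints) < evaluation_periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-evaluation_periods:]
    if all(value > threshold for value in recent):
        return "ALARM"
    return "OK"

# Example: CPU utilization samples; alarm on > 80% for 3 periods.
cpu = [55, 62, 85, 90, 88]
print(evaluate_alarm(cpu, threshold=80, evaluation_periods=3))  # ALARM
```

Real CloudWatch alarms add more states and missing-data handling, but the consecutive-breach idea is the core of it.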
72. AWS CloudFormation is a service that collects
AWS and third-party resources and manages
them throughout their lifecycles by launching
them together as a stack.
What is AWS CloudFormation?
AWS CloudFormation
Example: CloudFormation template for creating EC2 instance
EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: 1234xyz
    KeyName: aws-keypair
    InstanceType: t2.micro
    SecurityGroups:
      - !Ref EC2SecurityGroup
    BlockDeviceMappings:
      - DeviceName: /dev/sda1
        Ebs:
          VolumeSize: 50
Template:
❑ A template is used to create, update, and delete an
entire stack as a single unit without managing
resources individually.
❑ CloudFormation provides the capability to reuse the
template to set the resources easily and repeatedly.
Stacks:
❑ Stacks can be created using the AWS CloudFormation console
and the AWS Command Line Interface (CLI).
❑ Nested stacks are stacks created within another stack by using
the ‘AWS::CloudFormation::Stack’ resource attribute.
❑ The main stack is termed the parent stack, and the stacks belonging
to it are termed child stacks, which can be referenced by
using the ‘!Ref’ variable.
AWS does not charge for using AWS CloudFormation itself;
charges apply only for the AWS resources the templates create.
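CloudFormation templates can also be written in JSON, and a template is just structured data, so it can be generated programmatically. Below is a rough sketch (not an official AWS example) of the same EC2 template expressed as a Python dict and dumped as JSON; `!Ref` in YAML is shorthand for the `{"Ref": ...}` intrinsic function, and the ImageId/KeyName values are the slide's placeholders, not real identifiers.

```python
import json

# The YAML template above, expressed as the equivalent JSON structure.
# "!Ref EC2SecurityGroup" in YAML becomes {"Ref": "EC2SecurityGroup"}.
template = {
    "Resources": {
        "EC2Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "1234xyz",        # placeholder from the example
                "KeyName": "aws-keypair",    # placeholder from the example
                "InstanceType": "t2.micro",
                "SecurityGroups": [{"Ref": "EC2SecurityGroup"}],
                "BlockDeviceMappings": [
                    {"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 50}}
                ],
            },
        }
    }
}

print(json.dumps(template, indent=2))
```

Generating templates this way makes the "reuse the template" point concrete: the same structure can be emitted repeatedly with different parameter values.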
73. What is AWS CloudTrail?
AWS CloudTrail
AWS CloudTrail is a service that is
enabled when the AWS account is created
and is used to support compliance and
auditing of the AWS account.
{"Records": [{
  "eventVersion": "1.0",
  "userIdentity": {
    "type": "IAMUser",
    "principalId": "PR_ID",
    "arn": "arn:aws:iam::210123456789:user/Rohit",
    "accountId": "210123456789",
    "accessKeyId": "KEY_ID",
    "userName": "Rohit"
  },
  "eventTime": "2021-01-24T21:18:50Z",
  "eventSource": "iam.amazonaws.com",
  "eventName": "CreateUser",
  "awsRegion": "ap-south-2",
  "sourceIPAddress": "176.1.0.1",
  "userAgent": "aws-cli/1.3.2 Python/2.7.5 Windows/7",
  "requestParameters": {"userName": "Nayan"},
  "responseElements": {"user": {
    "createDate": "Jan 24, 2021 9:18:50 PM",
    "userName": "Nayan",
    "arn": "arn:aws:iam::128x:user/Nayan",
    "path": "/",
    "userId": "12xyz"
  }}
}]}
✔ It offers to view, analyze, and respond to activity across the AWS
infrastructure.
✔ It records actions performed by an IAM user, role, or AWS service as events.
✔ CloudTrail events can be downloaded as JSON or CSV files.
✔ CloudWatch monitors and manages the activity of AWS services and
resources, reporting on their health and performance, whereas
CloudTrail records logs of all actions performed inside the AWS
environment.
✔ IAM log file -
The example shows that the IAM user Rohit used the AWS CLI
to call the CreateUser action to create the new user Nayan.
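Because CloudTrail events are plain JSON, they can be inspected with any JSON tooling. As a small sketch, the snippet below parses a trimmed version of the record shown above with Python's standard json module and pulls out who did what, and from where:

```python
import json

# Trimmed CloudTrail record (same fields as the example above).
event_json = """
{"Records": [{
  "eventTime": "2021-01-24T21:18:50Z",
  "eventSource": "iam.amazonaws.com",
  "eventName": "CreateUser",
  "awsRegion": "ap-south-2",
  "sourceIPAddress": "176.1.0.1",
  "userIdentity": {"type": "IAMUser", "userName": "Rohit"},
  "requestParameters": {"userName": "Nayan"}
}]}
"""

for record in json.loads(event_json)["Records"]:
    actor = record["userIdentity"]["userName"]    # who made the call
    action = record["eventName"]                  # which API action
    target = record["requestParameters"]["userName"]
    print(f"{actor} called {action} for {target} "
          f"from {record['sourceIPAddress']}")
```

This is the same kind of extraction a log-analysis pipeline (e.g. feeding CloudWatch or Athena) performs on CloudTrail output.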
74. AWS Config is a service that allows users to
assess the configuration quality of the
resources in the AWS account.
What is AWS Config?
AWS Config
Functions of AWS Config:
It offers a dashboard to view compliance status for an account
across regions.
It uses Config rules to evaluate configuration settings of the AWS
resources.
It helps to monitor configuration changes performed over a specific
period using the AWS Config console and AWS CLI, and generates
notifications about changes.
It captures the history of configurations and tracks relationships of
resources before making changes.
Using AWS CloudTrail, AWS Config helps to identify and troubleshoot
issues by capturing API calls as events.
AWS Config in action
75. AWS License Manager is a service used to
centralize the usage of software licenses
across the environment.
What is AWS License Manager?
AWS License Manager
AWS License Manager’s managed entitlements provide built-in
controls to software vendors (ISVs) and administrators so that
they can assign licenses to approved users and workloads.
AWS Systems Manager can manage licenses on physical or virtual
servers hosted outside of AWS using AWS License Manager.
It allows administrators to specify Dedicated Host management
preferences for allocation and capacity utilization.
AWS Organizations and AWS License Manager together allow cross-
account discovery of computing resources in the organization.
❖ It supports the Bring-Your-Own-License (BYOL) feature, which
means that users can bring their existing licenses for
third-party workloads (Microsoft Windows Server, SQL
Server) to AWS.
❖ It enables administrators to create customized licensing
rules that help prevent licensing violations (using more
licenses than the agreement allows).
❖ It provides a dashboard that gives administrators visibility
into all the licenses.
76. AWS Management Console is a web console
with multiple wizards and services used to
manage Amazon Web Services.
What is AWS Management Console?
AWS Management Console
⮚ It is the first screen visible when a user signs in. It provides access
to other service consoles and a user interface for exploring AWS.
⮚ AWS Management Console provides a Services option on the navigation bar that
allows choosing services from the Recently visited list or the All services list.
⮚ A GUI console is available as an app for Android and iOS for a better experience.
⮚ There is a Search box on the navigation bar to search for AWS services by
entering all or part of the name of the service.
⮚ On the navigation bar,
there is an option to
select a Region.
AWS Regions
AWS Services Console
77. AWS Organizations is a service that allows
users to manage multiple AWS accounts
grouped into a single organization.
What are AWS Organizations?
AWS Organizations
It includes account management and consolidated billing
capabilities to meet the business’s budgetary and
security needs.
❑ It easily shares critical common
resources across the accounts.
❑ It organizes accounts into
organizational units (OUs), which are
groups of accounts that serve
specified applications.
Service Control Policies
Service Control Policies (SCPs) can be created
to provide governance boundaries for the
OUs. SCPs ensure that users in the accounts
only perform actions that meet security
requirements.
The master account is responsible for paying the
charges of all resources used by the accounts
in the organization.
78. AWS Personal Health Dashboard is a tool
that provides alerts and remediation
measures to diagnose and resolve issues
related to AWS resources and infrastructure.
What is AWS Personal Health Dashboard?
AWS Personal Health Dashboard
AWS offers two dashboards:
AWS Service Health
Dashboard
AWS Service Health Dashboard
provides access to the current status
and a complete health check of all
regions’ services.
Personal Health Dashboard
The Personal Health Dashboard
provides notification of any
service interruptions that may
affect the AWS account’s
resources.
AWS Personal Health Dashboard
⮚ AWS Personal Health Dashboard integrates with Amazon
CloudWatch Events to create custom rules and specify targets
such as AWS Lambda functions to enable remediation actions.
The Personal Health Dashboard has three
categories:
Open issues - shows issues of the last seven days.
Scheduled changes - shows any upcoming
changes.
Other notifications.
79. AWS Systems Manager (SSM) is a service that
allows users to centralize or group
operational data using multiple services and
automate operations across AWS
infrastructure.
What is AWS Systems Manager?
AWS Systems Manager
✔ It simplifies maintenance and identifies issues in the resources
that may impact the applications.
✔ It displays the operational data, system and application
configurations, software installations, and other details on a
single dashboard known as AWS Systems Manager Explorer.
✔ It manages secrets and configuration data and separates them
from code using a centralized store known as Parameter Store.
✔ It helps to communicate with the Systems Manager Agent
installed on AWS servers and in on-premises environments.
Agents are installed to manage resources on servers using
different operating systems.
It helps to automate repetitive operations and management
tasks using predefined playbooks.
It connects with Jira Service Desk and ServiceNow to allow ITSM
platform users to manage AWS resources.
It helps to manage servers without actually logging into the
server, using a web console known as Session Manager.
Systems Manager Distributor helps to distribute software packages
on hosts along with versioning.
81. AWS Database Migration Service is a cloud
service used to migrate relational databases
from on-premises, Amazon EC2, or Amazon
RDS to AWS securely.
What is AWS Database
Migration Service?
AWS Database Migration Service
AWS DMS does not stop the running application while
performing the migration of databases, resulting in
minimal downtime.
AWS Database
Migration Service
Homogeneous migration Heterogeneous migration
AWS DMS supports the following data sources and targets engines
for migration:
❑ Sources: Oracle, Microsoft SQL Server, PostgreSQL, Db2 LUW,
SAP, MySQL, MariaDB, MongoDB, and Amazon Aurora.
❑ Targets: Oracle, Microsoft SQL Server, PostgreSQL, SAP ASE,
MySQL, Amazon Redshift, Amazon S3, and Amazon DynamoDB.
❑ It performs all the management steps required during the
migration, such as monitoring, scaling, error handling, network
connectivity, replicating during failure, and software patching.
❑ AWS DMS with the AWS Schema Conversion Tool (AWS SCT) helps
to perform heterogeneous migrations.
82. AWS Server Migration Service is a service
that easily migrates on-premises servers to
the AWS Cloud.
What is AWS Server
Migration Service?
AWS Server Migration Service
Functions of AWS Server Migration Service:
It helps to track the progress of the servers of an application.
With the feature of incremental replication, it allows scalable
testing of migrated servers. These replications minimize the
application downtime that affects the business.
It accomplishes migration by replicating on-premises server VMs as
Amazon Machine Images (AMIs) ready for deployment on Amazon
EC2.
It uses service-linked roles to allow permissions to call other AWS
services on the customer’s behalf.
AWS Server Migration Service integrates with Amazon CloudWatch
Events, AWS CloudTrail, and AWS Identity and Access Management.
Limitations of Server Migration Service:
❑ Fifty concurrent VM migrations per account, unless the
customer requests a limit increase.
❑ Ninety days of service usage per VM, starting from the
initial replication of a VM. An ongoing replication terminates
after 90 days unless the customer requests a limit increase.
❑ Fifty concurrent application migrations per account.
84. Amazon Virtual Private Cloud is a service that
allows users to create a virtual dedicated
network for resources.
What is AmazonVPC?
Amazon VPC
❑ It includes many components such as Internet gateways, VPN
tools, CIDR blocks, subnets, route tables, VPC endpoints, NAT
instances, bastion servers, peering connections, and others.
❑ It spans multiple Availability Zones (AZs) within a
region.
❑ The first four IP addresses and the last IP address are reserved per
subnet.
❑ It creates a public subnet for web servers that use internet
access and a private subnet for backend systems, such as
databases or application servers.
❑ It can monitor resources using Amazon CloudWatch and Auto
Scaling Groups.
Private subnet - A subnet that does not have internet access is
termed a private subnet.
Public subnet - A subnet that has internet access is termed a public
subnet.
VPN-only subnet - A subnet that does not have internet access but
has access to the virtual private gateway for a VPN connection is
termed a VPN-only subnet.
❖ Every EC2 instance is launched within a default VPC with the same security and control as a normal Amazon VPC. The default VPC has no private
subnet.
❖ It uses Security Groups and NACLs (Network Access Control Lists) for multi-layer security.
❖ Security Groups (stateful) provide instance-level security, whereas NACLs (stateless) provide subnet-level security.
❖ VPC sharing is a component that allows subnets to be shared with other AWS accounts within the same AWS Organization.
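The "first four and last IP address reserved per subnet" rule can be made concrete with Python's standard ipaddress module. The subnet below (10.0.0.0/24) is an arbitrary example, not from the slides:

```python
import ipaddress

# The five addresses AWS reserves in every VPC subnet: the network
# address, the next three (VPC router, DNS server, future use), and
# the broadcast address. Everything else is usable by instances.
subnet = ipaddress.ip_network("10.0.0.0/24")

reserved = [
    subnet.network_address,      # 10.0.0.0   - network address
    subnet.network_address + 1,  # 10.0.0.1   - VPC router
    subnet.network_address + 2,  # 10.0.0.2   - DNS server
    subnet.network_address + 3,  # 10.0.0.3   - reserved for future use
    subnet.broadcast_address,    # 10.0.0.255 - broadcast (unused in a VPC)
]
usable = subnet.num_addresses - len(reserved)
print(usable)  # 251
```

So a /24 subnet yields 251 usable addresses (256 minus the 5 reserved), not the 254 a traditional on-premises /24 would give.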
85. Amazon CloudFront is a content delivery network
(CDN) service that securely delivers any kind of
data to customers worldwide with low latency
and high transfer speeds.
What is Amazon CloudFront?
Amazon CloudFront
Amazon CloudFront Access Controls:
Signed URLs:
● Use this to restrict access to individual files.
Signed Cookies:
● Use this to provide access to multiple restricted files.
● Use this if the user does not want to change current URLs.
Geo Restriction:
● Use this to restrict access to the data based on the geographic location of the
website viewers.
Origin Access Identity (OAI):
● Outside access is restricted using signed URLs and signed cookies, but
someone could still access objects directly via the Amazon S3 URL, bypassing the CloudFront
signed URLs and signed cookies. OAI is used to restrict that.
● Use OAI as a special CloudFront user and associate it with your CloudFront
distribution to secure Amazon S3 content.
CloudFront Signed URL:
○ It allows access to a path, no matter what is
the origin
○ It can be filtered by IP, path, date, expiration
○ It leverages caching features
S3 Pre-Signed URL:
o It issues a request as the
person who pre-signed the
URL.
⮚ It makes use of Edge locations (worldwide network of data centers) to deliver
the content faster.
⮚ When content is not cached at an edge location, it retrieves the data from an origin such as an Amazon S3
bucket, a MediaPackage channel, or an HTTP server.
CloudFront is integrated with AWS services such as:
Amazon EC2
Amazon S3
Elastic Load Balancing
Amazon Route 53
AWS Essential Media Services
CloudFront provides some security features such as:
❖ Field-level encryption with HTTPS - Data remains encrypted
throughout starting from the upload of sensitive data.
❖ AWS Shield Standard - Against DDoS attacks.
❖ AWS Shield Standard + AWS WAF + Amazon Route 53 - Against
more sophisticated attacks than standard DDoS.
86. Route 53 is a managed DNS (Domain Name System)
service where DNS is a collection of rules and records
intended to help clients/users understand how to
reach any server by its domain name.
What is Route 53?
Amazon Route 53
Amazon Route 53
⮚ Route 53 hosted zone is a collection of records for a specified
domain that can be managed together.
⮚ There are two types of zones:
Private Hosted Zone - Determines how traffic is routed within VPC.
Public Hosted Zone - Determines how traffic is routed on the Internet.
AAAA: hostname to IPv6
A: hostname to IPv4
CNAME: hostname to hostname
Alias: hostname to AWS resource
The most common records supported in Route 53 are:
Route 53 CNAME vs. Route 53 Alias:
CNAME:
It points a hostname to any other hostname
(app.mything.com -> abc.anything.com).
It works only for non-root domains
(abcxyz.maindomain.com).
It charges for CNAME queries.
It points to any DNS record that is hosted anywhere.
Alias:
It points a hostname to an AWS resource
(app.mything.com -> abc.amazonaws.com).
It works for the root domain and non-root domains
(maindomain.com).
It doesn’t charge for Alias queries.
It points to an ELB, CloudFront distribution,
Elastic Beanstalk environment, S3 bucket as a
static website, or another record in the same
hosted zone.
Route 53 Routing Policies:
Simple:
❖ It is used when there is a need to redirect traffic to a single resource.
❖ It does not support health checks.
Weighted:
❖ It is similar to Simple, but you can specify a weight associated with each resource.
❖ It supports health checks.
Failover:
❖ If the primary resource is down (based on health checks), it will route to a secondary
destination.
❖ It supports health checks.
Geo-location:
❖ It routes traffic based on the geographic location of the users.
Geo-proximity:
❖ It routes traffic based on the location of resources to the closest region within a
geographic area.
Latency based:
❖ It routes traffic to the destination that has the least latency.
Multi-value answer:
❖ It distributes DNS responses across multiple IP addresses.
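The Weighted policy above can be sketched in a few lines: each record receives a share of traffic proportional to its weight divided by the total weight. This is a conceptual model, not the Route 53 implementation; the record names and the `r` parameter (standing in for a random draw in [0, 1)) are illustrative:

```python
# Conceptual sketch of weighted routing: pick the record whose
# cumulative weight range contains the random point r * total.
def choose_record(records, r):
    """records: list of (name, weight); r: a draw in [0, 1)."""
    total = sum(weight for _, weight in records)
    point = r * total
    cumulative = 0
    for name, weight in records:
        cumulative += weight
        if point < cumulative:
            return name
    return records[-1][0]  # guard against r == 1.0 edge case

# 70% of traffic to "blue", 30% to "green" (hypothetical hostnames).
records = [("blue.example.com", 70), ("green.example.com", 30)]
print(choose_record(records, 0.5))  # falls in blue's 70% share
print(choose_record(records, 0.8))  # falls in green's 30% share
```

The same cumulative-weight idea underlies canary and blue/green traffic shifting with weighted DNS records.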
87. AWS Direct Connect is a cloud service that
helps to establish a dedicated connection
from an on-premises network to one or more
VPCs and other services in the same region.
What is AWS Direct Connect?
AWS Direct Connect
✔ With the help of industry-standard 802.1Q virtual LANs (VLANs), the dedicated
connection can be partitioned into multiple virtual interfaces.
✔ Virtual interfaces can be reconfigured at any time to meet changing
needs.
Private virtual
interface:
It helps to connect to an
Amazon VPC using
private IP addresses.
Public virtual
interface:
It helps to connect to
AWS services located
in any AWS region
(except China) from
your on-premises data
center using public IP
addresses.
Pricing details:
⮚ Port hours - charges are determined by capacity and
connection type.
⮚ Outbound data transfer.
AWS Direct Connect
88. AWS PrivateLink is a network service used
to connect to AWS services hosted by
other AWS accounts (referred to as
endpoint services) or AWS Marketplace.
What is PrivateLink?
AWS PrivateLink
⮚ It is used for scenarios where the source VPC acts as a
service provider, and the destination VPC acts as a service
consumer.
⮚ So, service consumers use an interface endpoint to
access the services running in the service provider.
⮚ It provides security by not exposing traffic to the public internet,
reducing the exposure to threats such as brute force and
DDoS attacks.
Types of VPC Endpoints:
Interface Endpoints - serve as an entry point
for traffic destined to an
AWS service or a VPC
endpoint service.
Gateway Endpoints - a gateway in the
route table that routes
traffic only to Amazon S3
and DynamoDB.
89. AWS Transit Gateway is a network hub used
to interconnect multiple VPCs. It can be used
to attach all hybrid connectivity by
controlling your organization's entire AWS
routing configuration in one place.
What is AWS Transit Gateway?
AWS Transit Gateway
Transit Gateway vs. VPC Peering:
Transit Gateway:
It has an hourly charge per
attachment in addition to the data
transfer fees.
Multicast traffic can be routed
between VPC attachments to a
Transit Gateway.
It provides a maximum bandwidth
(burst) of 50 Gbps per Availability
Zone per VPC connection.
The security groups feature does not
currently work with Transit
Gateway.
VPC Peering:
It does not charge for data transfer.
Multicast traffic cannot be routed
over peering connections.
It imposes no aggregate bandwidth
limit.
The security groups feature works with
intra-Region VPC peering.
There can be more than one Transit Gateway per region, but they
cannot be peered within a single region. It supports attaching
Amazon VPCs with IPv6 CIDRs.
It helps to solve the problem of complex VPC peering
connections.
Transit Gateway reduces the complexity of
maintaining VPN connections with hundreds of VPCs,
which becomes very useful for large enterprises.
AWS Transit Gateway
90. Elastic Load Balancing is a managed service that allows
traffic to get distributed across EC2 instances,
containers, and virtual appliances as target groups.
What is Elastic Load Balancing?
Elastic Load Balancing (ELB)
Elastic Load Balancing
Elastic Load Balancer types are as follows:
Classic Load Balancer:
▪ Oldest and less recommended load balancer.
▪ Routes TCP, HTTP, or HTTPS traffic at layer 4 and layer 7.
▪ They are used for existing EC2-Classic instances.
Application Load Balancer:
▪ Routes HTTP and HTTPS traffic at layer 7.
▪ Offers path-based routing, host-based routing, query-string,
parameter-based routing, and source IP address-based routing.
Network Load Balancer:
▪ Routes TCP, UDP, and TLS traffic at layer 4.
▪ Suitable for high-performance and low latency applications.
Gateway Load Balancer:
▪ Suitable for third-party networking appliances.
▪ It simplifies tasks to manage, scale, and deploy virtual appliances.
▪ ELB integrates with many AWS services used throughout an
application.
▪ It is tightly integrated with Amazon EC2, Amazon ECS/EKS.
▪ ELB integrates with Amazon VPC and AWS WAF to offer extra
security features to the applications.
▪ It helps monitor the servers’ health and performance in real-time
using Amazon CloudWatch metrics and request tracing.
▪ ELB can be placed based on the following aspects:
▪ Internet-facing ELB:
o Load Balancers have public IPs.
▪ Internal only ELB:
o Load Balancers have private IPs.
▪ ELB offers the functionality of Sticky sessions. It is a process to
route requests to the same target from the same client.
Classic Load
Balancer
(Internet-Facing)
Classic Load Balancer (Internal)
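The round-robin distribution and the sticky-session behavior described above can be sketched together. This is a toy model under stated assumptions (in-memory state, instance IDs and client IDs are made up), not how ELB is implemented:

```python
import itertools

# Toy load balancer: plain round-robin over targets, plus sticky
# sessions that pin a client to the first target it landed on.
targets = ["i-aaa", "i-bbb", "i-ccc"]   # hypothetical instance IDs
rotation = itertools.cycle(targets)     # endless round-robin iterator
sticky_table = {}                       # client id -> pinned target

def route(client_id, sticky=False):
    """Return the target that should serve this client's request."""
    if sticky and client_id in sticky_table:
        return sticky_table[client_id]  # same target as before
    target = next(rotation)             # next target in rotation
    if sticky:
        sticky_table[client_id] = target
    return target

print([route(c) for c in ["c1", "c2", "c3"]])  # spread across targets
print(route("c4", sticky=True), route("c4", sticky=True))  # pinned pair
```

Real ELBs implement stickiness with cookies rather than a server-side table, and remove unhealthy targets from the rotation, but the routing idea is the same.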
91. AWS Cloud Map is a service that keeps track
of application components and their health status
and allows dynamic scaling and
responsiveness for the application.
What is AWS Cloud Map?
AWS Cloud Map
✔ It uses a discovery API to return URLs and IP addresses.
✔ It helps to discover services with the help of the AWS SDK, API calls,
or DNS queries.
✔ AWS Cloud Map only returns healthy instances if health checking
is specified while creating the service.
✔ When a new resource is added, a service instance is created by
calling the RegisterInstance API action. The service instance helps
locate the resource using DNS or the AWS Cloud Map
DiscoverInstances API action.
✔ It decreases time consumption as it frees the user from
managing resource names and their locations manually within the
application code.
✔ It strongly integrates with Amazon Elastic Container Service.
✔ AWS Cloud Map provides a registry for the application services
defined by namespaces and frees developers from storing,
tracking, and updating resource names and location information within
the application code.
AWS Cloud Map
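The register-then-discover flow above can be modeled with a small in-memory registry. This is a conceptual sketch, not the Cloud Map API (the function names only echo the RegisterInstance/DiscoverInstances actions; the service name and addresses are made up):

```python
# Toy service registry: services register instances with a health
# flag; discovery returns only the instances currently healthy.
registry = {}

def register_instance(service, instance_id, address, healthy=True):
    """Record an instance under a service name (echoes RegisterInstance)."""
    registry.setdefault(service, {})[instance_id] = {
        "address": address,
        "healthy": healthy,
    }

def discover_instances(service):
    """Return addresses of healthy instances (echoes DiscoverInstances)."""
    return [
        meta["address"]
        for meta in registry.get(service, {}).values()
        if meta["healthy"]
    ]

register_instance("payments", "i-1", "10.0.1.10")
register_instance("payments", "i-2", "10.0.1.11", healthy=False)
print(discover_instances("payments"))  # only the healthy 10.0.1.10
```

This mirrors the key property in the notes: discovery filters by health, so callers never have to hardcode resource locations in application code.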
93. AWS Identity and Access Management is a free
service used to define permissions and manage
users to access multi-account AWS services.
What is AWS IAM?
AWS Identity and Access Management (IAM)
AWS Identity and
Access Management
AWS Identity and Access Management allows:
❖ users to analyze access and provide MFA (Multi-factor
authentication) to protect the AWS environment.
❖ managing IAM users, IAM roles, and federated users.
IAM Policies
Policies are documents written in JSON (key-value pairs) used
to define permissions.
IAM Users
A user can be a person or a service.
IAM Roles
IAM users or AWS services can assume a role to obtain
temporary security credentials to make AWS API calls.
IAM Groups
Groups are collections of users, and policies are attached to
them. They are used to assign permissions to users.
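Since IAM policies are JSON documents of key-value pairs, a minimal one is easy to show. The bucket name below is a hypothetical example; the Version string and Statement structure follow the standard IAM policy grammar:

```python
import json

# A minimal IAM policy document: allow read-only access to one
# (hypothetical) S3 bucket and the objects inside it.
policy = {
    "Version": "2012-10-17",   # the current IAM policy language version
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",     # the bucket itself
                "arn:aws:s3:::example-bucket/*",   # objects in the bucket
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Attached to a user, group, or role, a document like this is exactly what the "Policies are documents written in JSON" bullet refers to.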
94. Amazon Cognito is a service used for
authentication, authorization, and user
management for web or mobile applications.
What is Amazon Cognito?
Amazon Cognito
⮚ Amazon Cognito allows customers to sign in through social identity
providers such as Google, Facebook, and Amazon, and through
enterprise identity providers such as Microsoft Active Directory via
SAML.
The two main components of Amazon Cognito are as follows:
User pools are user
repositories (where user profile
details are kept) that provide
sign-up and sign-in options for
your app users.
Identity pools are user
repositories of an account, which
provide temporary and limited-
permission AWS credentials to the
users so that they can access other
AWS resources without re-entering
their credentials.
⮚ Amazon Cognito User Pools is a standards-based Identity
Provider and supports OAuth 2.0, SAML 2.0, and OpenID
Connect. Amazon Cognito identity pools are useful for both
authenticated and unauthenticated identities.
⮚ Amazon Cognito allows user pools and identity pools to be
used separately or together.
95. AWS Certificate Manager is a service that
allows a user to protect AWS applications by
storing, renewing, and deploying public and
private SSL/TLS X.509 certificates.
What is AWS Certificate Manager?
AWS Certificate Manager
⮚ HTTPS transactions require X.509 server certificates that bind the
public key in the certificate to provide authenticity.
⮚ The certificates are signed by a certificate authority (CA) and
contain the server’s name, the validity period, the public key, the
signature algorithm, and more.
⮚ It centrally manages the certificate lifecycle and helps to
automate certificate renewals.
⮚ SSL/TLS certificates provide data-in-transit security and authenticate
the identity of sites and connections between browsers and
applications.
⮚ The certificates created by AWS Certificate Manager for use with
ACM-integrated services are free.
⮚ With AWS Certificate Manager Private Certificate Authority,
monthly charges are applied for the private CA operation and the
private certificates issued.
The types of SSL certificates are:
Extended Validation Certificates (EV SSL)
The most expensive SSL certificate type.
Organization Validated Certificates (OV SSL)
Validates a business’s credibility.
Domain Validated Certificates (DV SSL)
Provides minimal encryption.
Wildcard SSL Certificate
Secures the base domain and subdomains.
Multi-Domain SSL Certificate (MDC)
Secures up to hundreds of domains and subdomains.
Unified Communications Certificate (UCC)
A single certificate secures multiple domain names.
AWS Certificate Manager (ACM)
Useful for customers who need a secureand public web
presence.
ACM Private CA
Useful for customers that are intended for private usewithin an
organization.
Ways to deploy managed X.509 certificates:
96. AWS Directory Service, also known as AWS
Managed Microsoft Active Directory (AD),
enables multiple ways to use Microsoft Active
Directory (AD) with other AWS services.
What is AWS Directory Service?
AWS Directory Service
AWS Directory Service provides the following directory types
to choose from:
Simple AD
AD Connector
Amazon Cognito
⮚ Using AWS Managed Microsoft AD, it becomes easy to migrate AD-
dependent applications and Windows workloads to AWS.
⮚ A trust relationship can be created between AWS Managed
Microsoft AD and existing on-premises Microsoft Active Directory
using single sign-on (SSO).
Amazon Cognito
● It is a user directory type that provides sign-up and sign-in for
the application using Amazon Cognito User Pools.
Simple AD
● It is an inexpensive Active Directory-compatible service
driven by Samba 4.
● It can be used when there is a need for fewer than 5,000
users.
● It does not support Multi-factor authentication (MFA).
AD Connector
● It is like a gateway used for redirecting directory requests to the
on-premises Active Directory.
● For this, there must be an existing AD, and the VPC must be
connected to the on-premises network via VPN or Direct
Connect.
● It supports multi-factor authentication (MFA) via existing
RADIUS-based MFA infrastructure.
97. AWS Key Management Service is a managed
service that creates, stores, and manages
encryption keys.
What is AWS Key Management Service?
AWS Key Management Service
AWS Key
Management Service
Customer Managed CMKs:
The CMKs created, managed, and used by users are termed
customer managed CMKs and support cryptographic operations.
AWS Managed CMKs:
The CMKs created, managed, and used by AWS services on the
user’s behalf are termed AWS managed CMKs.
❑ Provides data security at rest using encryption keys
and provides access control for encryption,
decryption, and re-encryption.
❑ Offers SDKs for different languages to add digital
signature capability in the application code.
❑ Allows rotation of master keys once a year while retaining
previous versions of keys.
Encryption using AWS KMS
98. AWS Resource Access Manager (RAM) is a
service that allows resources to be shared
through AWS Organizations or across AWS
accounts.
What is AWS Resource
Access Manager?
AWS Resource Access Manager
⮚ The resource sharing feature of AWS RAM reduces customers’ need
to create duplicate resources in each of their accounts.
⮚ It controls the consumption of shared resources using existing
policies and permissions.
⮚ It can be integrated with Amazon CloudWatch and AWS CloudTrail to
provide detailed visibility into shared resources and accounts.
⮚ Access control policies in AWS IAM and Service Control Policies in
AWS Organizations provide security and governance controls to AWS
Resource Access Manager (RAM).
AWS Resource Access Manager
99. AWS Secrets Manager is a service that prevents
secret credentials from being hardcoded in the
source code.
What is AWS Secrets Manager?
AWS Secrets Manager
AWS Secrets Manager:
❑ Ensures in-transit encryption of the secret between
AWS and the system retrieving the secret.
❑ Rotates credentials for AWS services using a
Lambda function that instructs Secrets Manager to
interact with the service or database.
❑ Stores the encrypted secret value in the SecretString or
SecretBinary field.
❑ Uses open-source client components to cache secrets
and updates them when there is a need for rotation.
Secrets Manager can be accessed in the following ways:
▪ AWS Management Console
▪ AWS Command Line Tools
▪ AWS SDKs
▪ HTTPS Query API
Secret rotation is supported with the below databases:
▪ MySQL, PostgreSQL, Oracle, MariaDB, Microsoft SQL Server on
Amazon RDS
▪ Amazon Aurora on Amazon RDS
▪ Amazon DocumentDB
▪ Amazon Redshift
❑ It provides security and compliance facilities by rotating
secrets safely without the need for code deployment.
❑ It integrates with AWS CloudTrail and Amazon CloudWatch to log
and monitor services for centralized auditing.
❑ It integrates with AWS Config and facilitates tracking of
changes in Secrets Manager.
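The client-side caching behavior mentioned above can be sketched as follows. This is a hypothetical model, not the official Secrets Manager caching client: the secret store is a plain dict standing in for the service, and the secret name/values are made up.

```python
import time

# Toy secret cache: fetch once, serve from cache until a TTL expires,
# then refresh - which is how a cached client picks up a rotation.
SECRET_STORE = {"db-password": "s3cr3t-v1"}  # stand-in for Secrets Manager

class SecretCache:
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._cache = {}  # name -> (value, fetched_at)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(name)
        if entry and now - entry[1] < self.ttl:
            return entry[0]                 # cache hit, no fetch
        value = SECRET_STORE[name]          # simulated remote fetch
        self._cache[name] = (value, now)
        return value

cache = SecretCache(ttl_seconds=60)
print(cache.get("db-password", now=0))     # fetched: s3cr3t-v1
SECRET_STORE["db-password"] = "s3cr3t-v2"  # rotation happens upstream
print(cache.get("db-password", now=30))    # still cached: s3cr3t-v1
print(cache.get("db-password", now=90))    # TTL expired: s3cr3t-v2
```

The takeaway matches the bullet above: callers keep reading from the cache, and a rotation propagates automatically once the cached entry expires, with no code deployment.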
100. AWS Security Hub is a service that offers
security aspects to protect the environment
using industry-standard best practices.
What is AWS Security Hub?
AWS Security Hub
⮚ AWS Security Hub supports the Payment Card Industry Data
Security Standard (PCI DSS) and the Center for Internet
Security (CIS) AWS Foundations Benchmark with a set of
security configuration best practices for AWS.
AWS Security Hub can quickly be enabled (or disabled)
through:
⮚ AWS Management Console
⮚ AWS CLI
⮚ Infrastructure-as-Code tools such as Terraform
AWS Security Hub provides an option to aggregate, organize, and
prioritize the security alerts or findings from multiple AWS
services.
It automatically checks the compliance status using the CIS AWS
Foundations Benchmark.
It uses integrated dashboards to show the current security and
compliance status.
It collects findings or alerts from multiple AWS accounts, then
analyzes security trends and identifies the highest priority
security issues.
The security alerts or findings can be investigated using Amazon
Detective or Amazon CloudWatch Events rules.
It collects data from AWS services across accounts and reduces the
need for time-consuming data conversion efforts.
Charges are applied only for the current Region, not for all Regions
in which Security Hub is enabled.
102. Amazon S3 is a simple service used to provide key-based object
storage across multiple availability zones (AZs) in a specific region.
What is AmazonSimple Storage Service?
Amazon Simple Storage Service (S3)
Amazon S3 offers the following security controls:
User-based security
▪ IAM policies
Resource-based security
▪ Bucket Policies
▪ Bucket Access Control List (ACL)
▪ Object Access Control List (ACL)
❑ S3 has a global namespace, but buckets are region-specific.
❑ It can also be used for static website hosting.
❑ It provides 99.999999999% (11 9's) of content durability.
❑ S3 offers strong read-after-write consistency for any object.
❑ Objects (files) are stored in a region-specific container known as Bucket.
❑ Stored objects can range in size from 0 bytes to 5 TB.
▪ It provides a ‘Multipart Upload’ feature that uploads objects in parts,
recommended for objects of 100 MB or larger.
▪ It offers an optional ‘Versioning’ feature to retain multiple versions of
objects; for replication, versioning must be enabled on both the source and
destination buckets.
▪ Amazon S3 Transfer Acceleration allows fast and secure transfer of objects
over long distances with minimum latency using Amazon CloudFront’s
Edge Locations.
▪ Amazon S3 uses access control lists (ACL) to control access to the objects
and buckets.
▪ Amazon S3 provides Cross-Account access to the objects and buckets by
assuming a role with specified privileges.
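The multipart-upload feature mentioned above works within published service limits: parts of 5 MiB to 5 GiB (the last part may be smaller) and at most 10,000 parts per upload. A back-of-the-envelope sketch of part sizing — the helper function is hypothetical, but the limits are from the S3 documentation:

```python
import math

# Part sizing for S3 multipart uploads. The limits below are the
# documented S3 constraints; part_count itself is an illustrative
# helper, not part of any AWS SDK.

MIN_PART = 5 * 1024**2      # 5 MiB minimum part size (except the last part)
MAX_PARTS = 10_000          # maximum parts per multipart upload

def part_count(object_size, part_size=100 * 1024**2):
    """Number of parts needed to upload object_size bytes."""
    if part_size < MIN_PART:
        raise ValueError("part size below the 5 MiB minimum")
    parts = math.ceil(object_size / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts; increase the part size")
    return parts

# A maximum-size 5 TiB object in 1 GiB parts:
print(part_count(5 * 1024**4, part_size=1024**3))  # 5120
```

Note that even the largest allowed object fits comfortably under the 10,000-part ceiling at a 1 GiB part size, which is why parts can be sized for retry granularity and parallelism rather than for the limit itself.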
Amazon S3 provides the following storage classes to balance storage cost
against access frequency and retrieval time:
❑ S3 Standard - for frequently accessed data.
❑ S3 Intelligent-Tiering - automatically moves data to the most cost-effective
access tier.
❑ S3 Standard-IA - for infrequently accessed data that still needs immediate retrieval.
❑ S3 One Zone-IA - infrequently accessed data stored in a single AZ at lower cost.
❑ S3 Glacier - low-cost, long-term archive; retrieval takes minutes to hours.
❑ S3 Glacier Deep Archive - lowest-cost, long-term retention; retrieval within hours.
Amazon S3 offers to choose from the following ways to replicate objects:
▪ Cross-Region Replication - used to replicate objects in different AWS Regions.
▪ Same Region Replication - used to replicate objects in the same AWS Region.
Amazon S3
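The storage classes above trade cost against access frequency and retrieval speed. A toy chooser to make that trade-off concrete — the string values are the real `StorageClass` constants from the S3 API, but the decision thresholds are purely illustrative, not an AWS recommendation:

```python
# Illustrative storage-class selection. Return values match the
# S3 API's StorageClass constants; the thresholds are made up
# for demonstration and are not an AWS sizing guideline.

def choose_storage_class(monthly_accesses, max_retrieval_hours):
    """Pick a storage class from a rough access profile."""
    if monthly_accesses >= 1:
        return "STANDARD"            # frequent access
    if max_retrieval_hours < 1:
        return "STANDARD_IA"         # infrequent, but needed immediately
    if max_retrieval_hours <= 12:
        return "GLACIER"             # archive, retrieval in minutes to hours
    return "DEEP_ARCHIVE"            # long-term retention, slowest retrieval

print(choose_storage_class(10, 0))    # STANDARD
print(choose_storage_class(0, 48))    # DEEP_ARCHIVE
```

In practice, S3 Intelligent-Tiering or lifecycle rules automate exactly this kind of decision, moving objects between tiers as their access pattern changes.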
103. Amazon Elastic Block Store is a service that
provides block-level storage volumes to
store persistent data.
What is Amazon Elastic
Block Store?
Amazon Elastic Block Store
Amazon EBS volumes can be attached to and detached from an EC2
instance, and reattached to other EC2 instances.
Amazon EBS easily scales up to petabytes of data storage.
Amazon EBS offers point-in-time snapshots for volumes to migrate
to other AZs or regions.
By default, a non-root EBS volume is not affected
when the instance is terminated.
Amazon EBS volumes are best suited for database servers with heavy
read/write activity and for throughput-intensive workloads with
continuous reads and writes.
Amazon EBS uses the AWS KMS service with the AES-256 algorithm to
support encryption.
EBS snapshots are region-specific and are stored incrementally in
Amazon S3.
❖ Multiple EBS volumes can be attached to a single EC2 instance in
the same Availability Zone.
❖ A single EBS volume cannot normally be attached to multiple EC2
instances.
❖ Amazon EBS Multi-Attach is a feature used to attach a single
Provisioned IOPS SSD (io1 or io2) volume to multiple instances in
the same Availability Zone.
❖ EBS volumes persist independently of the instance they are attached
to, which means the data is not erased even if the instance
terminates.
❖ By default, the root EBS volume is deleted when the
instance is terminated.
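The incremental nature of EBS snapshots mentioned above can be modeled simply: after a full first snapshot, each later snapshot stores only the blocks that changed. This is a conceptual sketch — block contents here are plain strings standing in for volume data, not how EBS represents blocks internally:

```python
# Toy model of incremental EBS snapshots: only blocks that differ
# from the previous snapshot are stored. Block indices map to
# block contents (strings here, purely for illustration).

def incremental_snapshot(previous, current):
    """Return only the blocks that are new or changed since `previous`."""
    return {idx: data for idx, data in current.items()
            if previous.get(idx) != data}

snap1 = {0: "aaaa", 1: "bbbb", 2: "cccc"}                    # full first snapshot
volume_now = {0: "aaaa", 1: "BBBB", 2: "cccc", 3: "dddd"}    # block 1 changed, block 3 added

delta = incremental_snapshot(snap1, volume_now)
print(delta)  # {1: 'BBBB', 3: 'dddd'}
```

Storing only the delta is what keeps successive snapshots cheap; restoring a volume conceptually replays the full snapshot plus every subsequent delta.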