Training for AWS Solutions Architect at http://zekelabs.com/courses/amazon-web-services-training-bangalore/. This slide deck covers the features of EC2: EC2 options, instance family types, storage, EBS volumes, the EC2 instance store, security groups, volumes and snapshots, Amazon Machine Images (AMI), Elastic Load Balancing (Classic, Application, and Network Load Balancers), the AWS CLI, and EC2 instance metadata.
___________________________________________________
zekeLabs is a technology training platform. We provide instructor-led corporate and classroom training on industry-relevant, cutting-edge technologies such as Big Data, Machine Learning, Natural Language Processing, Artificial Intelligence, Data Science, Amazon Web Services, DevOps, and Cloud Computing, as well as frameworks such as Django, Spring, Ruby on Rails, and Angular 2, for professionals.
Reach out to us at www.zekelabs.com, call us at +91 8095465880, or drop a mail to info@zekelabs.com.
Amazon Elastic Compute Cloud (Amazon EC2) provides a broad selection of instance types to accommodate a diverse mix of workloads. In this technical session, we provide an overview of the Amazon EC2 instance platform, key platform features, and the concept of instance generations. We dive into the current generation design choices of the different instance families, including the General Purpose, Compute Optimized, Storage Optimized, Memory Optimized, and GPU instance families. We also detail best practices and share performance tips for getting the most out of your Amazon EC2 instances.
Amazon Elastic Compute Cloud (Amazon EC2) provides resizable compute capacity in the cloud and makes web scale computing easier for customers. Amazon EC2 provides a wide variety of compute instances suited to every imaginable use case, from static websites to high performance supercomputing on-demand, available via highly flexible pricing options. Amazon EC2 works with Amazon Elastic Block Store (Amazon EBS) and Auto Scaling to make it easy for you to get the performance and availability you need for your applications. This session will introduce the key features and different instance types offered by Amazon EC2, demonstrate how you can get started and provide guidance on choosing the right types of instance and purchasing options.
Amazon EC2 provides a broad selection of instance types to accommodate a diverse mix of workloads. In this session, we provide an overview of the Amazon EC2 instance platform, key features, and the concept of instance generations.
Amazon EC2 forms the backbone compute platform for hundreds of thousands of AWS customers, but how do you go beyond starting an instance and manually configuring it? This presentation will take you on a journey starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies.
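The dynamic-policy idea mentioned above can be sketched in a few lines. The following is a simplified, illustrative model of a target-tracking scaling decision; the real Auto Scaling service also applies cooldowns, instance warm-up, and multiple metric data points, and all names and numbers here are hypothetical:

```python
import math

def desired_capacity(current_capacity: int, metric_value: float,
                     target_value: float, min_cap: int = 1, max_cap: int = 10) -> int:
    """Approximate the target-tracking calculation: scale capacity in
    proportion to how far the metric sits from its target value."""
    raw = current_capacity * (metric_value / target_value)
    # Clamp to the Auto Scaling group's configured bounds.
    return max(min_cap, min(max_cap, math.ceil(raw)))

# A fleet of 4 instances averaging 75% CPU against a 50% target scales out to 6.
print(desired_capacity(4, 75.0, 50.0))  # 6
# The same fleet at 20% CPU scales in to 2.
print(desired_capacity(4, 20.0, 50.0))  # 2
```

This is the intuition behind matching capacity (and therefore cost) to demand: the group grows and shrinks with the metric rather than being sized for peak load.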
Access a recorded version of the webinar based on this presentation on YouTube here: http://youtu.be/jLVPqoV4YjU
You can find the rest of the Masterclass webinar series for 2015 here: http://aws.amazon.com/campaigns/emea/masterclass/
If you are interested in learning how to apply a variety of different AWS services to specific challenges, please check out the Journey Through the Cloud series, which you can find here: http://aws.amazon.com/campaigns/emea/journey/
Training for AWS Solutions Architect at http://zekelabs.com/courses/amazon-web-services-training-bangalore/. This slide deck covers the features of Amazon Simple Storage Service (S3): S3 buckets, S3 static website hosting, cross-region replication, storage classes and their comparison, Glacier, Transfer Acceleration, lifecycle management, and security and encryption.
Introduction to AWS VPC, Guidelines, and Best Practices, by Gary Silverman
I crafted this presentation for the AWS Chicago Meetup. This deck covers the rationale, building blocks, guidelines, and several best practices for Amazon Web Services Virtual Private Cloud. I classify it as somewhere between a 101- and 201-level presentation.
If you like the presentation, I would appreciate you clicking the Like button.
AWS S3 | Tutorial For Beginners | AWS S3 Bucket Tutorial | AWS Tutorial For B..., by Simplilearn
This AWS S3 presentation will help you understand what cloud storage is, the types of storage, life before Amazon S3, what Amazon S3 (Simple Storage Service) is, its benefits, objects and buckets, and how Amazon S3 works, along with an explanation of its features. Amazon S3 is a storage service for the Internet: a simple storage service that offers software developers highly scalable, reliable, low-latency data storage infrastructure at relatively low cost. Amazon S3 provides a simple web service interface that can be used to store and retrieve any amount of data, so developers can easily build applications that use Internet storage. Amazon S3 is designed to be highly flexible and scalable. Now, let's dive into this presentation and understand what Amazon S3 actually is.
Below topics are explained in this AWS S3 presentation:
1. What is Cloud storage?
2. Types of storage
3. Before Amazon S3
4. What is S3?
5. Benefits of S3
6. Objects and buckets
7. How does Amazon S3 work?
8. Features of S3
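The object-and-bucket model in points 6 and 7 can be illustrated with a toy in-memory sketch. This is not the real S3 API, just a minimal model showing that a bucket is essentially a flat key-to-bytes mapping in which "folders" are merely key prefixes:

```python
class Bucket:
    """Toy model of an S3 bucket: a flat mapping from object key to bytes.
    There is no real directory tree; prefixes like 'photos/' only look like folders."""

    def __init__(self, name: str):
        self.name = name
        self._objects: dict[str, bytes] = {}

    def put_object(self, key: str, body: bytes) -> None:
        self._objects[key] = body

    def get_object(self, key: str) -> bytes:
        return self._objects[key]

    def list_objects(self, prefix: str = "") -> list[str]:
        # Listing by prefix is how S3 simulates folder browsing.
        return sorted(k for k in self._objects if k.startswith(prefix))

b = Bucket("my-demo-bucket")  # hypothetical bucket name
b.put_object("photos/cat.jpg", b"\xff\xd8...")
b.put_object("photos/dog.jpg", b"\xff\xd8...")
b.put_object("readme.txt", b"hello")
print(b.list_objects("photos/"))  # ['photos/cat.jpg', 'photos/dog.jpg']
```

The real service layers versioning, replication, storage classes, and access control on top of this same key-value core.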
This AWS certification training is designed to help you gain in-depth understanding of Amazon Web Services (AWS) architectural principles and services. You will learn how cloud computing is redefining the rules of IT architecture and how to design, plan, and scale AWS Cloud implementations with best practices recommended by Amazon. The AWS Cloud platform powers hundreds of thousands of businesses in 190 countries, and AWS certified solution architects take home about $126,000 per year.
This AWS certification course will help you learn the key concepts, latest trends, and best practices for working with the AWS architecture, and become an industry-ready AWS Certified Solutions Architect qualified for a position as a high-quality AWS professional.
The course begins with an overview of the AWS platform before diving into its individual elements: IAM, VPC, EC2, EBS, ELB, CDN, S3, EIP, KMS, Route 53, RDS, Glacier, Snowball, CloudFront, DynamoDB, Redshift, Auto Scaling, CloudWatch, ElastiCache, CloudTrail, and Security. Those who complete the course will be able to:
1. Formulate solution plans and provide guidance on AWS architectural best practices
2. Design and deploy scalable, highly available, and fault tolerant systems on AWS
3. Identify the lift and shift of an existing on-premises application to AWS
4. Decipher the ingress and egress of data to and from AWS
5. Select the appropriate AWS service based on data, compute, database, or security requirements
6. Estimate AWS costs and identify cost control mechanisms
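Outcome 6, cost estimation, often reduces to simple arithmetic over hourly rates. A minimal sketch, assuming a hypothetical on-demand rate (real prices vary by region, instance type, and purchasing option, so always check the current price list):

```python
def monthly_ec2_cost(hourly_rate_usd: float, instance_count: int,
                     hours_per_month: float = 730.0) -> float:
    """On-demand cost estimate: rate x instances x hours.
    730 hours is the conventional average length of a month."""
    return hourly_rate_usd * instance_count * hours_per_month

# Hypothetical $0.10/hour instance rate, 3 instances running all month:
print(round(monthly_ec2_cost(0.10, 3), 2))  # 219.0
```

Cost-control mechanisms such as Reserved Instances or Auto Scaling effectively lower the rate or the hours in this product.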
This AWS course is recommended for professionals who want to pursue a career in Cloud computing or develop Cloud applications with AWS. You’ll become an asset to any organization, helping leverage best practices around advanced cloud-based solutions and migrate existing workloads to the cloud.
Learn more at: https://www.simplilearn.com/
With AWS, you can choose the right storage service for the right use case. This session shows the range of AWS choices, from object storage to block storage, that is available to you. We include specifics about real-world deployments from customers who are using Amazon S3, Amazon EBS, Amazon Glacier, and AWS Storage Gateway.
Amazon EC2 changes the economics of computing and provides you with complete control of your computing resources. It is designed to make web-scale cloud computing easier for developers. In this session, we will take you on a journey, starting with the basics of key management and security groups and ending with an explanation of Auto Scaling and how you can use it to match capacity and costs to demand using dynamic policies. We will also discuss tools and best practices that will help you build failure resilient applications that take advantage of the scale and robustness of AWS regions.
Amazon Web Services (AWS) provides on-demand computing resources and services in the cloud, with pay-as-you-go pricing. This session provides an overview and describes how using AWS resources instead of your own is like purchasing electricity from a power company instead of running your own generator. Using AWS resources provides many of the same benefits as a public utility: Capacity exactly matches your need, you pay only for what you use, economies of scale result in lower costs, and the service is provided by a vendor experienced in running large-scale networks. A high-level overview of AWS infrastructure (such as AWS Regions and Availability Zones) and AWS services is provided as part of this session.
Speaker: Tom Whateley, Solutions Architect and Stephanie Zieno, Account Manager, Amazon Web Services
AWS is an Internet-based computing service in which large groups of remote servers are networked to allow centralized data storage and online access to computing services and resources.
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.
YouTube Link: https://youtu.be/9HsEMyKrlnw
**AWS Certification Training: https://www.edureka.co/cloudcomputing **
This "AWS S3 Tutorial for Beginners" PPT by Edureka will help you understand one of the most popular storage services, Amazon S3, and related concepts in detail. Following are the offerings of this PPT:
1. AWS Storage Services
2. What is AWS S3?
3. Buckets & Objects
4. Versioning & Cross Region Replication
5. Transfer Acceleration
6. S3 Demo and Use Case
Follow us to never miss an update in the future.
YouTube: https://www.youtube.com/user/edurekaIN
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Castbox: https://castbox.fm/networks/505?country=in
In this session we will explore the world’s first cloud-scale file system and its targeted use cases. Session attendees will learn about EFS’s benefits, how to identify applications that are appropriate for use with EFS, and details about its performance and security models. The target audience is file system administrators, application developers, and application owners that operate or build file-based applications.
by Apurv Awasthi, Sr. Technical Product Manager, AWS
This session introduces the concepts of AWS Identity and Access Management (IAM) and walks through the tools and strategies you can use to control access to your AWS environment. We describe IAM users, groups, and roles and how to use them. We demonstrate how to create IAM users and roles and grant them various types of permissions to access AWS APIs and resources. We also cover the concept of trust relationships and how you can use them to delegate access to your AWS resources. This session also covers IAM best practices that can help improve your security posture: how to manage IAM users and roles and their security credentials, and how to securely manage your AWS access keys. Using common use cases, we demonstrate how to choose between IAM users and IAM roles. Finally, we explore how to set permissions to grant least-privilege access control in one or more of your AWS accounts. Level 100
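The least-privilege evaluation described above can be approximated in a short sketch. This is a heavily simplified model of IAM's evaluation logic (an explicit Deny always wins, then any matching Allow grants access, otherwise access is implicitly denied); real IAM also supports wildcards, conditions, and resource-based policies, and the ARNs below are hypothetical:

```python
def is_allowed(policies: list[dict], action: str, resource: str) -> bool:
    """Simplified IAM evaluation: explicit Deny > Allow > implicit deny."""
    def matches(stmt: dict) -> bool:
        return action in stmt["Action"] and resource in stmt["Resource"]

    # Any matching explicit Deny overrides everything else.
    if any(s["Effect"] == "Deny" and matches(s) for s in policies):
        return False
    # Otherwise, at least one matching Allow is required.
    return any(s["Effect"] == "Allow" and matches(s) for s in policies)

policies = [
    {"Effect": "Allow", "Action": ["s3:GetObject"],
     "Resource": ["arn:aws:s3:::my-bucket/report.csv"]},
    {"Effect": "Deny", "Action": ["s3:DeleteObject"],
     "Resource": ["arn:aws:s3:::my-bucket/report.csv"]},
]
print(is_allowed(policies, "s3:GetObject", "arn:aws:s3:::my-bucket/report.csv"))     # True
print(is_allowed(policies, "s3:DeleteObject", "arn:aws:s3:::my-bucket/report.csv"))  # False
```

Note that an action no statement mentions is denied by default, which is exactly the posture least-privilege design relies on.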
AWS Webcast - Achieving consistent high performance with Postgres on Amazon W..., by Amazon Web Services
Postgres is a popular relational database and the backend of a number of high-traffic applications. Join AWS and PalominoDB, the company that helped the Obama for America campaign optimize its database infrastructure on AWS, to learn how you can run high-throughput, I/O-intensive Postgres clusters on the Amazon EBS storage platform. We will go over best practices, including performance, durability, and optimization, related to deploying Postgres on AWS.
You will hear about the best practices learned and applied during the Obama for America campaign.
In this webinar, you will learn about:
- Amazon Elastic Block Store (EBS)
- Why Provisioned IOPS volumes fit the needs of I/O-intensive applications
- Best practices for deploying Postgres on AWS
- How to leverage Provisioned IOPS volumes for Postgres
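The relationship between Provisioned IOPS and throughput comes down to I/O rate times I/O size. A rough, illustrative calculation (actual EBS volumes also enforce separate per-volume throughput caps, so treat this as an upper-bound sketch with hypothetical numbers):

```python
def ebs_throughput_mib_s(iops: int, io_size_kib: int) -> float:
    """Rough throughput estimate in MiB/s: IOPS x I/O size.
    1 MiB = 1024 KiB, hence the divisor."""
    return iops * io_size_kib / 1024

# A hypothetical 10,000-IOPS volume serving 16 KiB I/Os (a common
# database page-sized request) moves at most about 156 MiB/s:
print(ebs_throughput_mib_s(10_000, 16))  # 156.25
```

This is why sizing Postgres storage means thinking about both the IOPS you provision and the typical I/O size your workload issues.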
Introduction to Amazon Web Services for developers, by Ciklum Ukraine
About the presenter
Roman Gomolko has 11 years of experience in development, including 4 years of day-to-day work with Amazon Web Services.
Disclaimer
Cloud hosting has been a buzzword for a while, and in my talk I would like to give an introduction to Amazon Web Services (AWS).
We will talk about the basic building blocks of AWS, like EC2, ELB, ASG, S3, CloudFront, RDS, IAM, VPC, and other scary or funny abbreviations.
Then we will discuss how to migrate existing applications to AWS. This topic includes:
• how to design the infrastructure and choose which services to use when migrating
• how to choose the proper instance types
• how to estimate the infrastructure cost
• how migration will affect the performance of the application
Then we will give an overview of other services provided by AWS that you could apply in your current or future applications:
• SQS
• DynamoDB
• Kinesis
• CloudSearch
• CodeDeploy
• CloudFormation
And if we survive, we will talk a little about how to design cloud applications; that is mainly about general principles.
My talk is mostly targeted at decision makers and decision pushers at small and medium-sized companies that are considering "going cloud" or are already moving in this direction. Everyone interested in gaining knowledge in these areas is welcome as well.
We will spend around 2–3 hours together, and you will be able to pitch in questions until we stray too far from the original plan.
Running Oracle EBS in the cloud (OAUG Collaborate 18 edition), by Andrejs Prokopjevs
This presentation is based on real-life experience migrating an Oracle E-Business Suite R12.1 production system to Amazon AWS, plus additional proof-of-concept work upgrading various client systems to R12.2 and migrating them to the main cloud vendor platforms on the market. We are going to cover various areas:
- Certification basics. An overview of supported configurations.
- How to architect. Basic recommendations based on migration and 2+ years of production runtime experience. We will mainly cover the Amazon AWS use case.
- An outline of advanced configurations.
- R12.2 and the features and nuances that come with it.
- Microsoft Azure and Oracle Cloud review. A quick comparison of the main alternative platforms.
- Cloud deployment automation and the most common scenario: auto-scaling.
This is a topic clients frequently ask about: many are looking into cloud migration options and how to optimize costs compared to on-premises hardware hosting. And many still underestimate the complexity of making the Oracle EBS stack suitable for cloud deployment.
Let’s get started. Join this session to continue your journey through the core AWS services with live demonstrations of how to set up and use the services.
AWS Webcast - AWS Webinar Series for Education #2 - Getting Started with AWS, by Amazon Web Services
This webinar will cover the basics of getting started with AWS. After a brief overview, this session will dive into core AWS services with live demonstrations of how to set up and utilize compute, storage, and other services. The focus will be on ease of use and the ability to clone the environments that the largest customers are running, highlighting AWS's versatility and ease of use as a cloud platform.
DCEU 18: Use Cases and Practical Solutions for Docker Container Storage on Sw..., by Docker, Inc.
Mark Church - Product Manager, Docker
Don Stewart - Solutions Architect, Docker
Persistent storage has quickly advanced from something considered incompatible with containers to a mature set of solutions and patterns that have been thoroughly adopted by the industry. We'll define the persistence characteristics of different use cases and map these to some of the many solutions that exist for container storage. From this talk you'll learn about the storage options available to users on Swarm, Kubernetes, on-premises, and cloud, and how they work and compare to each other. You'll also learn how to characterize different persistent application requirements and the solutions best suited for them.
AWS Webcast - Webinar Series for State and Local Government #2: Discover the ..., by Amazon Web Services
This webinar will cover the basics of getting started with AWS. After a brief overview, this session will dive into live demonstrations of core AWS services, showing how to set up and utilize compute (EC2), storage (S3), and other services. The focus will be on how to get started with AWS, including creating user accounts, setting up multiple EC2 virtual machine instances, setting up an email alert for changes in EC2 usage, and uploading data to S3 and making it available via the Internet.
Containerizing your application is only the first step toward modernizing it. Building a cloud-native application requires other tools: a container orchestration platform, a service mesh, logging and alert monitoring, and visualization tools.
Real cloud-native platforms need to be equipped with the necessary tool stack: Kubernetes, Istio, Prometheus, Grafana, and Kiali.
In this webinar, we will cover building a cloud-native platform from zero.
Takeaways from the webinar:
- What and Why of a cloud-native application
- Steps to build a cloud-native platform from scratch and its challenges
- A high-level overview of Istio, Prometheus, Grafana, and Kiali
- Integrating your cloud-native application with Istio, Prometheus, Grafana, and Kiali
- Live demo: deploy, monitor, and control a full-fledged microservice-based application
Design Patterns for Pods and Containers in Kubernetes - Webinar by zekeLabs Technologies
The combination of Docker and Kubernetes is quickly becoming the de facto standard for building microservices. Whether you are a developer or an architect, you need to know how to bundle your application into containers and pods. Docker and Kubernetes provide a lot of good features out of the box. To leverage these features effectively, you need to know how to use them, what the commonly used pod design patterns are, and what the best practices are.
In this webinar, we will explore such questions and their answers, along with appropriate examples. Some of those questions:
1. When and how to build multi-container pods?
2. What are some of the well-adopted design patterns for pods?
3. What are some multi-pod design patterns?
4. How to use Lifecycle hooks, Init Containers and Health probes?
Github repo - https://github.com/ashishrpandey/pod-design-pattern-webinar
Information Technology is nothing but a reflection of the needs of Business.
Before Industry 4.0, as IT professionals we were just 'coding' or 'decoding' the trend of Business. Any change in the Business scenario would shake the IT sector but the reverse was not true.
But now, after the Industry 4.0, due to High-Speed Internet boom, omniChannel presence of consumer needs, market consolidation, and above all - consumer psyche, the business service providers cannot wait for long to see their product in the market.
This is where there is a call for Process Change - from Waterfall to Agile.
WHAT THIS WEBINAR IS ALL ABOUT:
1. Discuss the macroscopic view of Business & Technology and how they beautifully merge together
2. How Agile is becoming more relevant to the current trend
3. What preparatory works are needed to get into an Agile perspective
4. The Agile StoryBoard - a walkthrough of concepts and terminologies
5. Do's and Don'ts of 'Team Agile'
6. Next Steps
Building machine learning muscle in your team & transitioning to make them do machine learning at scale. We also discuss about Spark & other relevant technologies.
Agenda
1. The changing landscape of IT Infrastructure
2. Containers - An introduction
3. Container management systems
4. Kubernetes
5. Containers and DevOps
6. Future of Infrastructure Mgmt
About the talk
In this talk, you will get a review of the components & the benefits of Container technologies - Docker & Kubernetes. The talk focuses on making the solution platform-independent. It gives an insight into Docker and Kubernetes for consistent and reliable Deployment. We talk about how the containers fit and improve your DevOps ecosystem and how to get started with containerization. Learn new deployment approach to effectively use your infrastructure resources to minimize the overall cost.
The slides talk about Docker and container terminologies but will also be able to see the big picture of where & how it fits into your current project/domain.
Topics that are covered:
1. What is Docker Technology?
2. Why Docker/Containers are important for your company?
3. What are its various features and use cases?
4. How to get started with Docker containers.
5. Case studies from various domains
What is Serverless?
How it evolved?
What are its features?
What are the tradeoffs?
Should I use serverless?
How is it different from the container as a service?
Our subject matter expert answered these in a technology conference hosted by one of our esteemed client that works in the domain of Marketing Data Analytics.
Terraform is an Infrastructure Automation tools. This can work equally good for on-premises, public cloud, private cloud, hybrid-cloud and multi-cloud infrastructure.
Visit us for more at www.zekeLabs.com
Terraform is an Infrastructure Automation tools. This can work equally good for on-premises, public cloud, private cloud, hybrid-cloud and multi-cloud infrastructure.
Visit us for more at www.zekeLabs.com
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
2. Amazon Web Services
L: 03 | EC2 - Elastic Compute Cloud
Visit : www.zekeLabs.com for more details.
3. EC2 : Elastic Compute Cloud
● Elastic Compute Cloud provides resizable compute capacity in the cloud.
● A virtual machine in the cloud.
4. What is Amazon EC2
● Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud.
● Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.
● You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
● Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
● Pay only for the capacity you actually use.
● Choose Linux or Windows.
● Choose across Regions and Availability Zones for reliability.
5. Features of Amazon EC2
● Virtual computing environments, known as instances
● Preconfigured templates for your instances, known as Amazon Machine Images (AMIs), that package the bits you need for your server (including the operating system and additional software)
● Various configurations of CPU, memory, storage, and networking capacity for your instances, known as instance types
● Secure login information for your instances using key pairs (AWS stores the public key, and you store the private key in a secure place)
● Storage volumes for temporary data that's deleted when you stop or terminate your instance, known as instance store volumes
● Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS), known as Amazon EBS volumes
6. Features of Amazon EC2
● Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known as Regions and Availability Zones
● A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances, using security groups
● Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses
● Metadata, known as tags, that you can create and assign to your Amazon EC2 resources
● Virtual networks you can create that are logically isolated from the rest of the AWS cloud, and that you can optionally connect to your own network, known as virtual private clouds (VPCs)
7. Overview
● An Amazon EBS-backed instance has an EBS volume as its root volume. You can either specify the Availability Zone in which your instance runs, or let Amazon EC2 select an Availability Zone for you.
● When you launch your instance, you secure it by specifying a key pair and security group.
● When you connect to your instance, you must specify the private key of the key pair that you specified when launching your instance.
9. EC2 Options
● On Demand: Pay a fixed rate by the hour with no commitment.
- For users who want low cost and flexibility without any upfront payment or long-term commitment.
- Applications with short-term, spiky, or unpredictable workloads.
- Ideal for startups.
● Reserved: Capacity reservation based on baselining, giving a significant discount on the hourly charge for an instance. 1-year or 3-year terms.
- Applications with steady, predictable usage.
- Applications requiring reserved capacity.
● Spot: Bid the price you are willing to pay for instance capacity; greater savings for applications with flexible start and end times.
- Applications with flexible start and end times.
- Very low-cost compute; no charge for the hour in which AWS terminates the instance.
● Dedicated Hosts: Physical EC2 servers dedicated to your use. Useful for server-bound licenses or regulatory requirements. On-demand pricing, cheaper if reserved.
10. EC2 : Different EC2 Family Types
● General Purpose
● Compute Optimized
● Memory Optimized
● Storage Optimized
● Accelerated Computing (GPU)
11. Storage
● File Storage
○ Elastic File System (EFS)
● Block Storage
○ Elastic Block Store (EBS)
● Object Storage
○ Simple Storage Service (S3)
○ Glacier
12. Elastic Block Store - EBS Volumes
● Storage volumes that can be attached to Amazon EC2 instances.
● File systems and databases can be run on them.
● Automatically replicated within its Availability Zone.
● Note: one EBS volume cannot be mounted to multiple EC2 instances; use EFS in such cases.
13. Elastic Block Store vs EC2 Instance Store
● Amazon EBS
○ Data stored on an Amazon EBS volume can persist independently of the life of the instance.
○ Storage is persistent.
● Amazon EC2 instance store
○ Data stored on a local instance store persists only as long as the instance is alive.
○ Storage is ephemeral.
14. EBS - Volume Types
● General Purpose SSD (GP2)
- Balance of price and performance.
- Ratio of 3 IOPS per GB, with up to 10,000 IOPS and the ability to burst up to 3,000 IOPS for volumes under 1 TiB.
● Provisioned IOPS SSD (IO1)
- For I/O-intensive applications like large relational or NoSQL databases.
- Used if the requirement is more than 10,000 IOPS; can provision up to 20,000 IOPS per volume.
● Throughput Optimized HDD (ST1): magnetic disks, for sequential data that is frequently accessed.
- Big data, data warehouses, log processing, etc.
- Cannot be the boot volume.
● Cold HDD (SC1)
- Lowest-cost storage for infrequently accessed workloads.
- File servers.
- Cannot be the boot volume.
● Magnetic (Standard)
- Bootable and used for infrequently accessed data.
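The GP2 numbers above can be turned into a quick back-of-the-envelope calculator. A minimal bash sketch: the 3 IOPS/GB ratio and the 10,000 IOPS cap come from the slide; the 100 IOPS floor for very small volumes is an assumption based on AWS's published gp2 behaviour.

```shell
# Baseline IOPS for a gp2 volume: 3 IOPS per GiB,
# floored at 100 IOPS (assumed, for small volumes) and capped at 10,000 IOPS.
gp2_baseline_iops() {
  local size_gib=$1
  local iops=$(( size_gib * 3 ))
  (( iops < 100 )) && iops=100
  (( iops > 10000 )) && iops=10000
  echo "$iops"
}

gp2_baseline_iops 8      # small volume: hits the floor of 100
gp2_baseline_iops 500    # 500 * 3 = 1500
gp2_baseline_iops 5000   # capped at 10000
```

This is why IO1 exists: once your requirement exceeds the gp2 cap, you provision IOPS directly instead of deriving them from volume size.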
16. EC2 - Important Points
● IOPS: Input/Output Operations Per Second, the performance measure for volumes.
● The root volume is not encrypted by default; use a 3rd-party tool (e.g. BitLocker) to encrypt the root volume.
● Additional volumes can be encrypted at creation.
● Security Groups act as virtual firewalls.
● Termination Protection is turned off by default.
● On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated.
17. Launch an EC2 Instance via the Web Console
● Determine the AWS Region in which you want to launch the Amazon EC2 instance.
● Launch an Amazon EC2 instance from a preconfigured Amazon Machine Image (AMI).
● Choose an instance type based on memory, storage, CPU, and network requirements.
● Configure networking, IP address, security groups, tags, and key pairs.
19. EC2 Security Group Basics
● A Security Group is like a virtual firewall.
● Rules are Ingress (inbound) and Egress (outbound).
● Changes to a Security Group's configuration take effect immediately.
● It is our first line of defence.
20. Security Groups
● By default everything on AWS is private; all inbound traffic is blocked by default.
● If we do not allow a particular protocol, no one will be able to access our instance using that protocol.
● Any rule edit on a security group has immediate effect.
● Return traffic for allowed inbound connections is automatically allowed outbound (stateful).
● You can't deny traffic using a rule; by default everything is denied.
● You can allow the source to be the security group itself.
● There can be multiple security groups on an EC2 instance.
● You cannot block a specific IP address using a security group; use a network access control list (NACL) instead.
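The allow-only model above maps directly onto the AWS CLI. A hedged sketch (the group ID is a placeholder; `authorize-security-group-ingress` adds an allow rule and `revoke-security-group-ingress` removes it, since there is no deny rule to write):

```shell
# Allow inbound HTTP (TCP 80) from anywhere; takes effect immediately.
# sg-0123456789abcdef0 is a placeholder security group ID.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 \
    --cidr 0.0.0.0/0

# There is no "deny" rule: to stop the traffic, revoke the allow rule.
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 \
    --cidr 0.0.0.0/0
```

These commands require configured AWS credentials and an existing security group, so treat them as a template rather than a runnable script.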
21. Lab on Security Group
22. Security Groups Lab
● Log in to the EC2 server.
● Install the Apache server: yum install httpd -y
● Turn on the server: service httpd status => service httpd start => chkconfig httpd on
● Go to the root directory of the web server: cd /var/www/html
● Create an HTML page using vi or nano: index.html
● Try accessing it with different variations of security groups.
● All inbound is denied by default and outbound is open to the world.
● Security groups are STATEFUL.
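The lab steps above can be consolidated into one script, run as root on the instance (a sketch assuming Amazon Linux with yum and SysV-style service management, as the slide's commands imply; the page content is illustrative):

```shell
#!/bin/bash
# Security Groups lab: install and start Apache, then serve a test page.
yum install httpd -y                               # install the Apache web server
service httpd start                                # start it now
chkconfig httpd on                                 # start it on every boot
echo "<h1>Hello from EC2</h1>" > /var/www/html/index.html
service httpd status                               # confirm it is running
```

With the page in place, toggle the inbound HTTP rule on the instance's security group and re-test from a browser; because rule changes are immediate and the group is stateful, allowed requests and their responses flow as soon as the rule exists.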
24. Volumes vs Snapshots
● A volume exists on EBS; it's more or less a virtual hard disk.
● Snapshots exist on S3.
● A snapshot of a volume can be taken and stored on S3.
● Snapshots are point-in-time copies of volumes.
● Snapshots are incremental backups; only changed blocks are moved to S3.
● The first snapshot takes time.
● Snapshots exclude data held in the cache by applications and the OS.
● You can track the status of your EBS snapshots through CloudWatch Events.
25. Lab on Snapshots & Volume
26. Lab on Snapshots & Volume
● Create a volume and attach it to the EC2 instance.
● lsblk : check the volumes and the mount points.
● file -s /dev/xvdf : check whether the volume already has a file system.
● mkfs -t ext4 /dev/xvdf : create an ext4 file system on the volume.
● mkdir /fileserver : create a mount point.
● mount /dev/xvdf /fileserver : mount the volume.
● umount /dev/xvdf : unmount the volume.
● Detach the volume.
● Create the snapshot.
● Create a volume from the snapshot; mount and unmount again.
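The command sequence above can be run end to end as follows (a sketch to be run as root on the instance; /dev/xvdf is the device name a newly attached volume typically gets on Amazon Linux, but verify with lsblk first):

```shell
#!/bin/bash
# After attaching the new EBS volume to the instance:
lsblk                         # list block devices and mount points
file -s /dev/xvdf             # "data" output means no file system yet
mkfs -t ext4 /dev/xvdf        # create an ext4 file system on the volume
mkdir /fileserver             # create a mount point
mount /dev/xvdf /fileserver   # mount the volume
df -h /fileserver             # confirm the mount
umount /dev/xvdf              # unmount before detaching the volume
```

After detaching, snapshot the volume from the console or CLI, create a new volume from that snapshot, and repeat the mount/unmount steps to verify the data survived.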
27. Volumes and Snapshot Security
● Snapshots of encrypted volumes are encrypted automatically.
● Unencrypted snapshots can be shared with other AWS accounts or can even be made public.
● To create a snapshot of an Amazon EBS volume that serves as a root device, the instance should be stopped before taking the snapshot.
● Amazon EBS encryption uses AWS Key Management Service (AWS KMS) master keys when creating encrypted volumes and any snapshots created from your encrypted volumes.
30. Amazon Machine Image
● An Amazon Machine Image (AMI) provides the information required to launch an instance, which is a virtual server in the cloud.
● An AMI includes the following:
○ A template for the root volume for the instance (for example, an operating system, an application server, and applications)
○ Launch permissions that control which AWS accounts can use the AMI to launch instances
○ A block device mapping that specifies the volumes to attach to the instance when it's launched
● Select the AMI based on the following:
○ Region
○ Operating system
○ Launch permissions
○ Architecture (32-bit or 64-bit)
○ Storage for the root device
32. EBS Root Volumes & Instance Store Volumes
● Instance store (ephemeral storage): instances can't be stopped; lesser durability.
● Data is lost if the underlying host fails.
● EBS-backed volumes: instances can be stopped; snapshots can be taken and volumes can be reattached.
● Both instance types can be rebooted.
34. Elastic Load Balancers
● Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses.
● A load balancer accepts incoming traffic from clients and routes requests to its registered EC2 instances in one or more Availability Zones.
● The load balancer also monitors the health of its registered instances and ensures that it routes traffic only to healthy instances.
● When the load balancer detects an unhealthy instance, it stops routing traffic to that instance, and then resumes routing traffic to that instance when it detects that the instance is healthy again.
● You configure your load balancer to accept incoming traffic by specifying one or more listeners. A listener is a process that checks for connection requests.
● It is configured with a protocol and port number for connections from clients to the load balancer, and a protocol and port number for connections from the load balancer to the instances.
36. Elastic Load Balancer Types
● 3 types of load balancers:
○ Classic Load Balancer
○ Application Load Balancer
○ Network Load Balancer
37. Classic Load Balancer
● The AWS Classic Load Balancer (CLB) operates at Layer 4 (Transport Layer) of the OSI model. This means the load balancer routes traffic between clients and backend servers based on IP address and TCP port.
● For example, an ELB at a given IP address receives a request from a client on TCP port 80 (HTTP). It will then route that request, based on the rules previously configured when setting up the load balancer, to a specified port on one of a pool of backend servers. In this example, the port on which the load balancer routes to the target server will often be port 80 (HTTP) or 443 (HTTPS).
● The backend destination server will then fulfill the client request and send the requested data back to the ELB, which will then forward the backend server reply to the client. From the client's perspective, this request will appear to have been entirely fulfilled by the ELB. The client will have no knowledge of the backend server or servers fulfilling client requests.
38. Application Load Balancers
● AWS Application Load Balancer (ALB) operates at Layer 7 (Application Layer) of the OSI model. At Layer 7, the ELB has the ability to inspect application-level content, not just IP and port. This lets it route based on more complex rules than with the Classic Load Balancer.
● In another example, an ELB at a given IP will receive a request from the client on port 443 (HTTPS). The Application Load Balancer will process the request, not only by receiving port, but also by looking at the destination URL.
● Multiple services can share a single load balancer using path-based routing. In the example given here, the client could request any of the following URLs:
○ http://www.example.com/blog
○ http://www.example.com/video
● The Application Load Balancer will be aware of each of these URLs based on patterns set up when configuring the load balancer, and can route to different clusters of servers depending on application need.
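Path-based routing like the /blog and /video example above is expressed as listener rules. A hedged sketch using the `aws elbv2 create-rule` command (all ARNs are placeholders for your own listener and target groups; each path pattern forwards to its own target group, and lower priority numbers are evaluated first):

```shell
# Route /blog* to the blog target group (ARNs below are placeholders).
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/LISTENER_ID \
    --priority 10 \
    --conditions Field=path-pattern,Values='/blog*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/blog/TG_ID

# Route /video* to the video target group.
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/LISTENER_ID \
    --priority 20 \
    --conditions Field=path-pattern,Values='/video*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:region:account:targetgroup/video/TG_ID
```

Requests that match neither pattern fall through to the listener's default action, so a catch-all target group is still needed.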
39. Network Load Balancers
● Network Load Balancer has been designed to handle sudden and volatile traffic patterns, making it ideal for load balancing TCP traffic. It is capable of handling millions of requests per second while maintaining low latencies, and doesn't have to be "pre-warmed" before traffic arrives.
● Best use cases for Network Load Balancer:
○ When you need to seamlessly support spiky or high-volume inbound TCP requests.
○ When you need to support a static or Elastic IP address.
43. AWS CLI
● Configure the CLI: aws configure
● After configuring, get help for any service: aws <service> help
● Roles: more secure than storing the Access Key ID and Secret Access Key on the EC2 server.
● Role permissions can be changed later, but a role can only be attached to an EC2 instance during launch.
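Once `aws configure` has stored credentials (or an attached role supplies them), any service can be driven from the shell. A short sketch of typical first commands; the region shown is an example, and all of these require valid credentials:

```shell
aws configure                                    # prompts for access key, secret key, default region, output format
aws ec2 help                                     # per-service help, as described above
aws ec2 describe-instances --region us-east-1    # list EC2 instances in a region
aws s3 ls                                        # list the account's S3 buckets
```

On an instance with a role attached, the same commands work with no stored keys at all, which is the security benefit the slide refers to.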
45. EC2 Metadata
● Instance metadata is data about your instance that you can use to configure or manage the running instance.
● To retrieve the metadata from within the instance:
curl http://169.254.169.254/latest/meta-data
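A few useful endpoints under that metadata root (these paths are part of the standard instance metadata tree; the calls only answer from inside a running instance, since 169.254.169.254 is a link-local address):

```shell
# Run from within the instance.
curl http://169.254.169.254/latest/meta-data/             # list the available keys
curl http://169.254.169.254/latest/meta-data/instance-id  # this instance's ID
curl http://169.254.169.254/latest/meta-data/public-ipv4  # public IP, if one is assigned
curl http://169.254.169.254/latest/meta-data/ami-id       # AMI the instance was launched from
```

Bootstrap scripts commonly use these endpoints to let an instance discover its own identity without any stored configuration.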
47. Auto Scaling
● Contains a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of instance scaling and management.
● For example, if a single application operates across multiple instances, you might want to increase the number of instances in that group to improve the performance of the application, or decrease the number of instances to reduce costs when demand is low.
● Auto Scaling groups are used to scale the number of instances automatically based on criteria that you specify, or to maintain a fixed number of instances even if an instance becomes unhealthy.
49. Auto Scaling
● Manages Amazon EC2 capacity automatically.
● Maintains the right number of instances for your application.
● Operates a healthy group of instances, and scales it according to your needs.
● Launch Configurations: reusable configurations (templates) of instances for Auto Scaling. Custom AMIs, or AMIs created from already running instances, can also be used.
● A launch configuration itself is immutable once created, but the launch configuration attached to an Auto Scaling group can be swapped at any time.
● Auto Scaling Group: specify how many instances you want to run in it. Your group will maintain this number of instances, and replace any that become unhealthy or impaired.
● You can optionally configure your group to adjust its capacity according to demand, in response to Amazon CloudWatch metrics.
51. Placement Groups
● A logical grouping of instances within a single Availability Zone; spanning multiple AZs is not possible.
● For applications that need low latency; speeds up to 10 Gbps can be achieved.
● Recommended for applications needing low network latency, high network throughput, or both.
● Suitable for Hadoop clustering, Cassandra nodes, etc.
● The placement group name must be unique within the AWS account.
● Only certain types of instances can be launched in a placement group (optimized: memory, GPU, storage).
● Homogeneous instances are recommended, and placement groups can't be merged.
● Existing instances can't be moved into a placement group (possible only through AMIs).
52. THANK YOU
Let us know how we can help your organization upskill its employees to stay updated in the ever-evolving IT industry.
Get in touch:
www.zekeLabs.com | +91-8095465880 | info@zekeLabs.com
Editor's Notes
● Advanced settings (user data script): #!/bin/bash followed by yum update -y
● For Mac users: Apps > Utilities > Terminal, then: ssh ec2-user@public-ip -i keypair.pem