1. Server load balancers distribute traffic across multiple servers to improve availability and scalability. They use algorithms like round-robin to distribute client requests.
2. High availability aims for near 100% uptime by eliminating single points of failure and allowing for graceful degradation. Availability levels like 99.9% mean less than 9 hours of downtime per year.
3. Popular open source load balancers include Nginx, HAProxy, and Envoy. They support load balancing at the TCP/HTTP layers and features like health checks, SSL termination, and session persistence.
99.999% Available OpenStack Cloud - A Builder's Guide (Danny Al-Gaaf)
High availability is an important and frequently discussed topic for clouds at the infrastructure level. There are several concepts for providing an HA-ready OpenStack, and software-defined storage such as Ceph is itself highly available, with no single point of failure.
But what about HA if you bring OpenStack and Ceph together? How do they work together and what are the impacts on the availability of your OpenStack cloud infrastructure from the tenant or application point of view?
How does the design of your classic highly available data center, e.g. with two fire compartments, power backup, and redundant power and network lines, impact your cluster setup? There are many different potential failure scenarios. What do they mean for building and managing failure zones, especially for technologies like Ceph, which must be able to form a quorum to keep running?
This talk will cover:
- Failure scenarios and their impact on OpenStack and Ceph availability
- Which components of the cloud need a quorum
- How to set up the infrastructure to ensure a quorum
- How the different quorum devices work together, and whether they guarantee the HA of your cloud
- Pitfalls and solutions
SANDcamp 2014 - A Perfect Launch, Every Time (Jon Peck)
How do you ensure that your Drupal site launch goes off without a hitch? Launches are tough on a new developer. Everyone remembers the lump in their throats around launch time; the rush to finish content, make final theme tweaks, adjust for sudden browser weirdness. As momentum picks up, the odd change request always appears, databases are slingshot hither and yon, while everyone scrambles to resolve merge conflicts like a Tokyo train at rush hour.
We emerge scarred but smarter, intent on making the next launch less painful. But with different teams launching different sites, it can be hard to establish an iterative process. Especially as new work accumulates in the backlog, we reap what we sow in technical debt from rushed launches, quick & dirty choices made under the gun, and unimplemented ideas from retrospectives.
Pantheon, however, has the same Customer Success team launching several enterprise sites per week, while simultaneously assisting hundreds of self-serve customers when they need a hand. Because we need to work effectively, we have developed the tools and processes to ensure:
- Great site performance on day one
- Fewer problems over the long run
- Clear expectations from informed stakeholders
The session will cover other key areas:
- Preparing for launch for the PM, Stakeholder, Developer & Sys Admin
- Auditing the site for land mines, carnivorous acid pool islands, and deadweight
- Load testing to obliterate surprises with actionable results
This session is platform agnostic; whether you use PaaS, shared hosting, or wield your own hardware, PMs, developers, and clients will leave with new tools in their belt to launch with less agita. We will share some of our challenges and how we overcame them, and hopefully hear from you about how you overcame yours!
Kraken is a P2P Docker image distribution system. It's loosely based on the BitTorrent protocol, fully compatible with the Docker registry API, and supports pluggable storage backends like S3, HDFS, etc. It successfully solved the scaling problems we saw under different scenarios, and greatly sped up container deployment.
Boosting I/O Performance with KVM io_uring (ShapeBlue)
Storage performance is becoming much more important. KVM io_uring attempts to bring the I/O performance of a virtual machine to almost the same level as bare metal. Apache CloudStack has supported io_uring since version 4.16. Wido will show the difference in performance that io_uring brings to the table.
Wido den Hollander is the CTO of CLouDinfra, an infrastructure company offering total webhosting solutions. CLDIN provides datacenter, IP and virtualization services for the companies within TWS. Wido den Hollander is a PMC member of the Apache CloudStack Project and a Ceph expert. He started with CloudStack 9 years ago; what attracted his attention was the simplicity of CloudStack and the fact that it is an open-source solution. Over the years Wido became a contributor, a PMC member, and served as VP of the project for a year. He is one of our most active members, and puts a lot of effort into keeping the project active and transforming it into a turnkey solution for cloud builders.
-----------------------------------------
The CloudStack European User Group 2022 took place on 7th April. The day saw a virtual get-together for the European CloudStack community, hosting 265 attendees from 25 countries. The event hosted 10 sessions from leading CloudStack experts, users and skilful engineers from the open-source world, including technical talks, user stories, and presentations of new features and integrations.
------------------------------------------
About CloudStack: https://cloudstack.apache.org/
Improving Kafka at-least-once performance at Uber (Ying Zheng)
At Uber, we are seeing an increasing demand for Kafka at-least-once delivery (acks=all). So far, we have been running a dedicated at-least-once Kafka cluster with special settings. With a very low workload, the dedicated at-least-once cluster has been working well for more than a year. When we tried to allow at-least-once producing on the regular Kafka clusters, producing performance was the main concern. We spent some effort on this issue in recent months, and managed to reduce at-least-once producer latency by about 80% with code changes and configuration tuning. When acks=0, these improvements also help increase Kafka throughput and reduce Kafka end-to-end latency.
OpenStackTage Cologne - OpenStack at 99.999% availability with Ceph (Danny Al-Gaaf)
High availability is an important and frequently discussed topic for clouds at the infrastructure level. There are several concepts for providing an HA-ready OpenStack, and software-defined storage such as Ceph is itself highly available, with no single point of failure.
But what about HA if you bring OpenStack and Ceph together? What are the dependencies between them and how do they influence the availability of your cloud instances from the tenant or application point of view?
How does the design of your classic highly available data center, e.g. with two fire compartments, power backup, and redundant power and network lines, impact your cluster setup? There are many different potential failure scenarios. What do they mean for building and managing failure zones, especially for technologies like Ceph, which must be able to form a quorum to keep running?
Presentation from DockerCon EU '17 about how Aurea achieved over 50% cost reduction using Docker and about two major technical obstacles we had when dockerizing legacy applications.
Joined by Rick Nelson, Technical Solutions Architect at NGINX, Server Density takes you through the do's and don'ts of monitoring NGINX: critical and non-critical metrics to monitor, important alerts to configure, and the best monitoring tools available.
The monolith to cloud-native, microservices evolution has driven a shift from monitoring to observability. OpenTelemetry, a merger of the OpenTracing and OpenCensus projects, is enabling Observability 2.0. This talk covers the latest concepts in observability and then demonstrates how to configure and deploy various OpenTelemetry components to effectively meet your SLOs.
Logging at OVHcloud:
Logs Data Platform is OVHcloud's platform for centralized log collection, analysis and management. Its goal is to meet the challenge of indexing more than 4 trillion logs for a company like OVHcloud. This presentation describes the overall architecture of Logs Data Platform, built around its core components Elasticsearch and Graylog, and covers the scalability, availability, performance and evolvability issues that are the daily work of the Observability team at OVHcloud.
Mixing performance, configurability, density, and security at scale has, historically, been hard with PHP. Early approaches have involved CGIs, suhosin, or multiple Apache instances. Then came PHP-FPM. At Pantheon, we've taken PHP-FPM, integrated it with cgroups, namespaces, and systemd socket activation. We use it to deliver all of our goals at unheard-of densities: thousands and thousands of isolated pools per box.
Watch how it's configured and see PHP-FPM pools start in real time to serve different Drupal sites as requests come into a server.
All of our tools for this are open-source and usable on your own virtual machines and hardware.
A short introductory talk given as part of the April 2018 Kong meetup "Introducing Kubernetes Ingress Controller for Kong".
This talk covers the new features and improvements made to Kong from 2017 to 2018, including the groundwork conducted by Kong Inc. and open source contributors that allowed for the development of the Kong Ingress Controller for Kubernetes.
The Kong Ingress Controller for Kubernetes was then announced during the meetup:
https://github.com/Kong/kubernetes-ingress-controller
Multi-cluster Kubernetes Networking - Patterns, Projects and Guidelines (Sanjeev Rampal)
Talk presented at Kubernetes Community Day, New York, May 2024.
Technical summary of Multi-Cluster Kubernetes Networking architectures, with a focus on four key topics:
1) Key patterns for Multi-cluster architectures
2) Architectural comparison of several OSS/ CNCF projects to address these patterns
3) Evolution trends for the APIs of these projects
4) Some design recommendations & guidelines for adopting/ deploying these solutions.
2. Introduction
● More users, more resources needed
○ CPU, RAM, HDD …
● Scale Up & Scale Out
○ One powerful server to service more users; or
○ Multiple servers to service more users
● Pros & Cons ?
● C10K / C100K Problem
3. Introduction
● High Availability
○ A characteristic of a system which aims to ensure an agreed level of operational performance, usually uptime, for a higher-than-normal period.
● Availability (per year)
○ 99%: 3.65 days (2 nines)
○ 99.9%: 8.77 hours (3 nines)
○ 99.99%: 52.60 minutes (4 nines)
○ 99.999%: 5.26 minutes (5 nines)
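The downtime budgets above follow directly from the availability percentage; a quick sketch (using a 365.25-day year, which matches the figures on this slide):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes in an average year

def downtime_minutes(availability_percent: float) -> float:
    """Allowed downtime per year, in minutes, at the given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for level in (99.0, 99.9, 99.99, 99.999):
    print(f"{level}% -> {downtime_minutes(level):.2f} minutes/year")
```

Each extra nine cuts the allowed downtime by a factor of ten, which is why 5 nines leaves barely five minutes per year for every failure, upgrade and operator mistake combined.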
4. High Availability
● Principles
○ Elimination of single points of failure.
○ Reliable crossover.
■ Reliable configuration / topology change
○ Detection of failures as they occur.
● Graceful Degradation
○ The ability of a computer, machine, electronic system or network to maintain limited functionality even when a large portion of it has been destroyed or rendered inoperative.
(Source: Single point of failure - Wikipedia)
5. Load Balancing
● Client Side
○ e.g. DNS round-robin
○ Pros & Cons
● Server Side
○ Server Load Balancer
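Client-side DNS round-robin can be simulated with a static record set; a minimal sketch (the addresses are RFC 5737 documentation addresses, not real servers):

```python
from itertools import cycle

# Hypothetical A records returned for one hostname; a real resolver
# rotates the order of the returned records on successive queries.
A_RECORDS = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
_rotation = cycle(A_RECORDS)

def resolve() -> str:
    """Simulated round-robin DNS answer: hand out the next address."""
    return next(_rotation)

picks = [resolve() for _ in range(6)]  # clients spread across all three IPs
```

The main con this illustrates: clients and resolvers cache records, so a dead server keeps receiving its share of traffic until TTLs expire; there is no health checking at this layer.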
6. Server Load Balancer (1)
● Provide “Scale-Out” and HA features
● Share load among all backend nodes with some algorithm
○ Static Algorithms: do not take into account the state of the system for the distribution of tasks.
○ Dynamic Algorithms: take the current state of the system (e.g. active connections) into account.
7. Server Load Balancer (2)
● Layer 4 or Layer 7
○ Layer 4 Switch
● Distribution Algorithms
○ Round-robin
○ Random
○ Ratio
○ Hash Table
○ Least-connections
○ Persistence
■ Session-ID (e.g. HTTP Cookie)
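The contrast between a static and a dynamic policy from the list above can be sketched in a few lines (backend names are hypothetical): round-robin ignores server state entirely, while least-connections consults it on every pick.

```python
import itertools

class RoundRobin:
    """Static policy: ignores server state and simply cycles through the pool."""
    def __init__(self, servers):
        self._it = itertools.cycle(servers)

    def pick(self) -> str:
        return next(self._it)

class LeastConnections:
    """Dynamic policy: picks the server with the fewest active connections."""
    @staticmethod
    def pick(active: dict) -> str:
        # `active` maps server name -> current connection count
        return min(active, key=active.get)

pool = ["app1", "app2", "app3"]
rr = RoundRobin(pool)
rr_order = [rr.pick() for _ in range(4)]        # wraps around after app3

busy = {"app1": 12, "app2": 3, "app3": 7}
lc_choice = LeastConnections.pick(busy)          # the least-loaded backend
```

Round-robin is cheap and predictable but assumes requests cost roughly the same; least-connections adapts to uneven request durations at the price of tracking per-server state.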
8. Server Load Balancer (3)
● Persistence (Stickiness)
○ "The Server" in OLG
○ How to handle information that must be kept across the multiple requests in a user's session.
● Session ID?
○ Cookie
○ IP Address
○ TCP Connection
● Pros & Cons ?
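One way to get stickiness without cookies is to hash the client's IP address, as in the list above; a minimal sketch (backend names are hypothetical):

```python
import hashlib

SERVERS = ["app1", "app2", "app3"]  # hypothetical backend pool

def sticky_server(client_ip: str) -> str:
    """Map a client IP to a fixed backend by hashing it, so every request
    from that client lands on the same server (no cookie required)."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

choice = sticky_server("203.0.113.7")
repeat = sticky_server("203.0.113.7")  # same client, same backend
```

The con is visible in the modulo: adding or removing a server remaps most clients and loses their sessions, which is why cookie-based persistence or consistent hashing is often preferred.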
9. Server Load Balancer (4)
● SSL offloading (SSL/TLS termination)
○ Pros?
● Problems of Server Load Balancer
○ SPoF
○ Capacity Limit
○ Latency
10. HW & SW of Server Load Balancer
● Nginx
● Ingress in K8S
● PF in FreeBSD
● haproxy
● Envoy Proxy
● F5 BIG-IP
● A10
● on Cloud
○ AWS ELB (Elastic Load Balancer)
○ Google CLB (Cloud Load Balancer)
11. Global Server Load Balancer (GSLB)
● Globally balancing traffic to the nearest node.
● Pros
○ (Speed of light)
● Cons ?
● Technology
○ GeoDNS
■ resolves names to different IP addresses based on the client's location
○ Anycast
■ use BGP
■ Google DNS 8.8.8.8
21. Haproxy Configuration
● backend
○ balance
■ roundrobin, leastconn, hdr(param)
○ mode
○ http-request
○ server
■ check
■ fall
■ rise
■ inter
■ cookie
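A minimal sketch of a backend section wiring together the directives listed above (names, addresses and timings are hypothetical, not from the talk):

```haproxy
backend web_servers
    mode http
    balance roundrobin
    cookie SRV insert indirect nocache
    http-request set-header X-Forwarded-For %[src]
    # check/inter/fall/rise: health-check every 2s; mark a server down
    # after 3 failed checks and back up after 2 successful ones.
    server app1 192.0.2.10:8080 check inter 2s fall 3 rise 2 cookie app1
    server app2 192.0.2.11:8080 check inter 2s fall 3 rise 2 cookie app2
```

The `cookie` keywords tie the persistence and load-balancing slides together: HAProxy inserts an SRV cookie naming the chosen server, so subsequent requests stick to it.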
22. Haproxy - run
● /etc/rc.conf.local
○ haproxy_enable="YES"
● /usr/local/etc/rc.d/haproxy start
● Question: how to setup a backup node for haproxy?
24. Envoy
● https://www.envoyproxy.io
● Developed by Lyft (a ride-sharing company like Uber) and open-sourced in 2017
○ Apache License 2.0
● Features
○ Dynamic APIs for configuration
○ Service Discovery
○ gRPC / MongoDB / HTTP support
● MicroService
25. Envoy - Installation
● Broken on FreeBSD right now (requires BoringSSL)
○ You can install it on Linux instead
● https://www.getenvoy.io
○ Debian: https://www.getenvoy.io/install/envoy/debian/
○ Ubuntu: https://www.getenvoy.io/install/envoy/ubuntu/
○ CentOS: https://www.getenvoy.io/install/envoy/centos/