Red Hat GFS
Global File System
• Red Hat GFS is a cluster file system that allows a
cluster of nodes to simultaneously access a
block device that is shared among the nodes.
• GFS employs distributed metadata and multiple
journals for optimal operation in a cluster.
• To maintain file system integrity, GFS uses a lock
manager to coordinate I/O.
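As a rough sketch (the cluster name, logical volume, journal count, and mount point below are hypothetical), creating and mounting a GFS file system looks roughly like this, with the DLM lock protocol coordinating access and one journal per node:

  # Create a GFS file system with the DLM lock protocol, a
  # cluster-wide lock table name, and 3 journals (one per node).
  gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 3 /dev/vg01/lv01

  # Mount it on each node as if it were local storage.
  mount -t gfs /dev/vg01/lv01 /mnt/gfs01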
Benefits of Red Hat GFS
• Simplify your data infrastructure
–Install and patch applications once for the
entire cluster.
–Eliminate the need for redundant copies of
application data (duplication).
–Enable concurrent read/write access to data
by many clients.
–Simplify backup and disaster recovery (only
one file system to back up or recover).
Benefits of Red Hat GFS
• Maximize the use of storage resources;
minimize storage administration costs.
– Manage storage as a whole instead of by partition.
– Decrease overall storage needs by eliminating the
need for data replications.
• Scale the cluster seamlessly by adding servers
or storage on the fly.
– No more partitioning storage through complicated
techniques.
– Add servers to the cluster on the fly by mounting
them to the common file system.
Red Hat GFS
• Nodes that run Red Hat GFS are configured and
managed with Red Hat Cluster Suite
configuration and management tools.
• Volume management is managed through
CLVM (Cluster Logical Volume Manager).
Red Hat GFS
• Red Hat GFS provides data sharing among GFS
nodes in a Red Hat cluster.
• GFS provides a single, consistent view of the file-
system name space across the GFS nodes in a Red
Hat cluster.
• GFS allows applications to install and run without
much knowledge of the underlying storage
infrastructure.
• GFS provides features that are typically required in
enterprise environments, such as quotas, multiple
journals, and multipath support.
Red Hat GFS
• You can deploy GFS in a variety of
configurations to suit your needs for
performance, scalability, and economy.
• For superior performance and scalability, you
can deploy GFS in a cluster that is connected
directly to a SAN.
• For more economical needs, you can deploy
GFS in a cluster that is connected to a LAN with
servers that use GNBD (Global Network Block
Device) or to iSCSI (Internet Small Computer
System Interface) devices.
Red Hat GFS - Superior Performance and
Scalability
• You can obtain the highest shared-file performance
when applications access storage directly.
• The GFS SAN configuration provides superior file
performance for shared files and file systems.
• Linux applications run directly on cluster nodes
using GFS. Without file protocols or storage servers
to slow data access, performance is similar to that
of individual Linux servers with directly connected
storage.
• GFS supports over 300 GFS nodes.
Red Hat GFS - Superior Performance and
Scalability
GFS with a SAN
Red Hat GFS - Performance, Scalability,
Moderate Price
• Multiple Linux client applications on a LAN can
share the same SAN-based data.
• SAN block storage is presented to network clients
as block storage devices by GNBD servers.
• From the perspective of a client application,
storage is accessed as if it were directly attached to
the server in which the application is running.
Stored data is actually on the SAN.
• Storage devices and data can be equally shared by
network client applications.
• File locking and sharing functions are handled by
GFS for each network client.
Red Hat GFS - Performance, Scalability,
Moderate Price
GFS and GNBD with a SAN
Red Hat GFS - Economy and Performance
• Linux client applications can also take
advantage of an existing Ethernet topology to
gain shared access to all block storage devices.
• Client data files and file systems can be shared
with GFS on each client.
• Application failover can be fully automated with
Red Hat Cluster Suite.
Red Hat GFS - Economy and Performance
GFS and GNBD with Directly Connected Storage
Cluster Logical Volume Manager
Cluster Logical Volume Manager
• The Cluster Logical Volume Manager (CLVM)
provides a cluster-wide version of LVM2.
• CLVM provides the same capabilities as LVM2
on a single node, but makes the volumes
available to all nodes in a Red Hat cluster.
• The key component in CLVM is clvmd. clvmd is a
daemon that provides clustering extensions to
the standard LVM2 tool set and allows LVM2
commands to manage shared storage.
• clvmd runs in each cluster node and distributes
LVM metadata updates in a cluster.
CLVM Overview
CLVM Configuration
You can configure CLVM using the same commands as
LVM2, using the LVM graphical user interface, or using
the storage configuration function of the Conga cluster
configuration graphical user interface.
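For illustration only (the device, volume group, and volume names are hypothetical), the command-line route uses the standard LVM2 tools against a clustered volume group, with clvmd running on every node:

  # Enable built-in clustered locking in /etc/lvm/lvm.conf
  # (locking_type = 3) and make sure clvmd is running on each node.
  service clvmd start

  # Initialize the shared device and create a clustered volume group.
  pvcreate /dev/sdb1
  vgcreate -c y vg01 /dev/sdb1

  # Create a logical volume that is visible to all cluster nodes.
  lvcreate -L 100G -n lv01 vg01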
CLVM Configuration
LVM Graphical User Interface
CLVM Configuration
Conga LVM Graphical User Interface
CLVM Configuration
Basic concept of creating logical volumes
Global Network Block Device
Global Network Block Device
• Global Network Block Device (GNBD) provides
block-device access to Red Hat GFS over TCP/IP.
• GNBD is similar in concept to NBD; however,
GNBD is GFS-specific and tuned solely for use
with GFS.
• GNBD is useful when more robust technologies
— Fibre Channel or single-initiator SCSI — are
not necessary or are cost-prohibitive.
Global Network Block Device
• GNBD consists of two major components: a GNBD
client and a GNBD server.
• A GNBD client runs in a node with GFS and imports
a block device exported by a GNBD server.
• A GNBD server runs in another node and exports
block-level storage from its local storage (either
directly attached storage or SAN storage).
• Multiple GNBD clients can access a device exported
by a GNBD server, thus making GNBD suitable for
use by a group of nodes running GFS.
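A minimal sketch of the two sides (the export name, device, and server hostname are hypothetical):

  # On the GNBD server: export a local block device under a name.
  gnbd_export -e gfs_disk -d /dev/sdb1

  # On each GNBD client (GFS node): import the devices exported by
  # that server; they appear as block devices under /dev/gnbd/.
  gnbd_import -i gnbdserver1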
Global Network Block Device
Linux Virtual Server
Linux Virtual Server
• Linux Virtual Server (LVS) is a set of integrated
software components for balancing the IP load
across a set of real servers. LVS runs on a pair of
equally configured computers: one that is an active
LVS router and one that is a backup LVS router. The
active LVS router serves two roles:
– To balance the load across the real servers.
– To check the integrity of the services on each real
server.
• The backup LVS router monitors the active LVS
router and takes over from it in case the active LVS
router fails.
LVS components and their interrelationship
LVS components and their interrelationship
The pulse daemon runs on both the active and passive LVS routers.
On the backup LVS router, pulse sends a heartbeat to the public
interface of the active router to make sure the active LVS router is
properly functioning.
On the active LVS router, pulse starts the lvs daemon and responds
to heartbeat queries from the backup LVS router.
Once started, the lvs daemon calls the ipvsadm utility to configure
and maintain the IPVS (IP Virtual Server) routing table in the kernel
and starts a nanny process for each configured virtual server on
each real server.
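The entries the lvs daemon maintains through ipvsadm correspond to commands such as the following (the virtual and real IP addresses and the scheduler are hypothetical examples):

  # Define a virtual service on the virtual IP, using weighted
  # least-connections scheduling.
  ipvsadm -A -t 192.168.0.100:80 -s wlc

  # Add two real servers to that virtual service (NAT forwarding).
  ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.1:80 -m
  ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.2:80 -m

  # List the kernel IPVS routing table.
  ipvsadm -L -n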
LVS components and their interrelationship
Each nanny process checks the state of one configured service
on one real server, and tells the lvs daemon if the service on
that real server is malfunctioning.
If a malfunction is detected, the lvs daemon instructs ipvsadm
to remove that real server from the IPVS routing table.
If the backup LVS router does not receive a response from the
active LVS router, it initiates failover by calling send_arp to
reassign all virtual IP addresses to the NIC hardware addresses
(MAC addresses) of the backup LVS router, and sends a command
to the active LVS router via both the public and private network
interfaces to shut down its lvs daemon.
LVS components and their interrelationship
Because there is no built-in component in LVS to share the
data among real servers, you have two basic options:
• Synchronize the data across the real servers.
• Add a third layer to the topology for shared data access.
The first option is preferred for servers that do not allow large
numbers of users to upload or change data on the real
servers.
If the real servers allow large numbers of users to modify
data, such as an e-commerce website, adding a third layer is
preferable.
Two-Tier LVS Topology
Three-Tier LVS Topology
Routing Methods
Routing Methods - NAT Routing
Routing Methods - Direct Routing
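The routing method is chosen per real server when it is added to a virtual service. As an illustration (addresses hypothetical), NAT routing has the LVS router rewrite and forward packets, while direct routing lets the real server answer the client directly:

  # NAT routing: packets are masqueraded through the LVS router (-m).
  ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.1:80 -m

  # Direct routing: the real server replies straight to the client (-g).
  ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.1:80 -g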
Persistence and Firewall Marks
Overview
In certain situations, it may be desirable for a client
to reconnect repeatedly to the same real server,
rather than have an LVS load-balancing algorithm
send that request to the best available server.
Examples of such situations include multi-screen
web forms, cookies, SSL, and FTP connections. In
those cases, a client may not work properly unless
the transactions are being handled by the same
server to retain context.
Persistence
When a client connects to a service, LVS remembers the last
connection for a specified period of time. If that same client
IP address connects again within that period, it is sent to the
same server it connected to previously — bypassing the
load-balancing mechanisms.
Persistence also allows you to specify a subnet mask to apply
to the client IP address test as a tool for controlling what
addresses have a higher level of persistence, thereby
grouping connections to that subnet.
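Persistence is set per virtual service; for example (the address, timeout, and netmask are hypothetical values):

  # Send repeat connections from the same client IP to the same real
  # server for up to 300 seconds; the optional netmask (-M) groups a
  # whole subnet under one persistence entry.
  ipvsadm -A -t 192.168.0.100:443 -s wlc -p 300 -M 255.255.255.0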
However, persistence is not the most efficient way to deal with
the problem of grouping together connections destined for
different ports.
Firewall Marks
Firewall marks are an easy and efficient way to group
ports used for a protocol or group of related protocols.
For example, if LVS is deployed to run an e-commerce
site, firewall marks can be used to bundle HTTP
connections on port 80 and secure HTTPS connections
on port 443. By assigning the same firewall mark to
the virtual server for each protocol, state information
for the transaction can be preserved because the LVS
router forwards all requests to the same real server
after a connection is opened.
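A sketch of the idea (the virtual IP and the mark value 80 are hypothetical): the same mark is stamped on both ports with iptables, and the virtual service is then keyed on the mark rather than on an address/port pair:

  # Mark HTTP and HTTPS traffic destined for the virtual IP with the
  # same firewall mark.
  iptables -t mangle -A PREROUTING -d 192.168.0.100/32 -p tcp \
    --dport 80 -j MARK --set-mark 80
  iptables -t mangle -A PREROUTING -d 192.168.0.100/32 -p tcp \
    --dport 443 -j MARK --set-mark 80

  # Create one persistent virtual service keyed on the mark, so both
  # ports go to the same real server for a given client.
  ipvsadm -A -f 80 -s wlc -p 300
  ipvsadm -a -f 80 -r 10.0.0.1 -g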