Hardcore Linux Tutorial: challenge yourself with advanced projects for power users

Linux Virtual Server

Building load-balancing clusters with LVS is as good at keeping out the cold as chopping logs on a winter’s day, maintains Dr Chris Brown.

Our expert: Dr Chris Brown. A Unix user for over 25 years, his own Interactive Digital Learning company provides Linux training, content and consultancy. He also specialises in computer-based classroom training delivery systems.

This month’s Hardcore tutorial shows you how to provide a load-balanced cluster of web servers, with scalable performance that far outstrips the capability of an individual server. We’ll be using the Red Hat cluster software, but the core functionality described here is based on the Linux Virtual Server (LVS) kernel module and is not specific to Red Hat.

If you want to follow along with the tutorial for real, you’ll need at least four computers, as shown in Figure 1. I realise this might put it out of the reach of some readers as a practical project, but hopefully it can remain as a thought experiment, like that thing Schrödinger did with his cat. If you’ve got some old machines lying about the place, why not refer to this month’s cover feature to find out how to get them up and running again – then network them?

Figure 1: A simple load-balancing cluster.

Machine 1 is simply a client used for testing – it could be anything that has a web browser. In my case, I commandeered my wife’s machine, which runs Vista. Machine 2 is where the real action is. This machine must be running Linux, and should preferably have two network interfaces, as shown in the figure. (It’s possible to construct the cluster using just a single network, but this arrangement wouldn’t be used on a production system, and it needs a bit of IP-level trickery to make it work.)

This machine performs load balancing and routing into the cluster, and is known as the “director” in some LVS documentation. In my case, machine 2 was running CentOS5 (essentially equivalent to RHEL5). Machines 3 and 4 are the backend servers; in principle you could use anything that offers a web server. In my case I used a couple of laptops, one running SUSE 10.3 and one running Ubuntu 7.04. In reality you’d probably want more than two backend servers, but two is enough to prove the point.

You might well be wondering why I’m running a different operating system on each of the four computers. It’s a fair question, and one I often ask myself! Unfortunately, it’s simply a fact of life for those of us who make our living by messing around with Linux.

Figure 2: Configuring a virtual server with Piranha.

Last month: Use ‘heartbeat’ to provide automatic failover to a backup server.

How a load-balancing cluster works
Client machines send service requests (for example an HTTP GET request for a web page) to the public IP address of the director (192.168.0.41 in the figure; as far as the clients are concerned, this is the IP address at which the service is provided).

The director chooses one of the backend servers to forward the request to. It rewrites the destination IP address of the packet to be that of the chosen backend server and forwards the packet out onto the internal network.

The chosen backend server processes the request and sends an HTTP response, which hopefully includes the requested page. This response is addressed to the client machine (192.168.0.2) but is initially sent back to the director (10.0.0.1), because the director is configured as the default gateway for the backend server. The director rewrites the source IP address of the packet to refer to its own public IP address and sends the packet back to the client.

All of this IP-level tomfoolery is carried out by the LVS module within the Linux kernel. The director is not simply acting as a reverse web proxy, and commands such as netstat -ant will NOT show any user-level process listening on port 80.

The address rewriting is a form of NAT (Network Address Translation) and allows the director to masquerade as the real server, and to hide the presence of the internal network that’s doing the real work.
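You can check both halves of that claim on a working director. The ipvsadm tool is introduced properly below; this is just an illustrative sanity check, run as root, and the exact output will vary with your version:

    # No user-space process is listening on port 80 on the director...
    netstat -ant | grep ':80 '       # shows no LISTEN entry for port 80
    # ...yet the kernel's virtual server table is handling port 80 traffic
    ipvsadm -L -n                    # lists 192.168.0.41:80 and its real servers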

Balancing the load
A key feature of LVS is load balancing – that is, making sure that each backend server receives roughly the same amount of work. LVS has several algorithms for doing this; we’ll mention four.

Round-robin scheduling is the simplest algorithm. The director just works its way around the backend servers in turn, starting again with the first one when it reaches the end of the list. This is good for testing because it’s easy to verify that all your backend servers are working, but it may not be best in a production environment.

Weighted round-robin is similar but lets you assign a weight to each backend server to indicate its relative speed. A server with a weight of two is assumed to be twice as powerful as a server with a weight of one and will receive twice as many requests.

Least-connection load balancing tracks the number of currently active connections to each backend server and forwards the request to the one with the fewest.

Weighted least-connection also takes into account the relative processing power (weight) of each server. It is a good choice for production clusters because it works well when service requests take widely varying amounts of time to process and/or when the backend servers in the cluster are not equally powerful.

The latter two algorithms are dynamic; that is, they take account of the current load on the machines in the cluster.
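Piranha will drive all of this for you later in the tutorial, but purely as an illustration, here is how those four schedulers map onto raw ipvsadm commands, run as root on the director. The addresses are the ones from Figure 1, and rr, wrr, lc and wlc are the standard ipvsadm names for the algorithms above:

    # Create a virtual HTTP service on the director's public address,
    # picking one of the four scheduling algorithms (rr shown here;
    # substitute wrr, lc or wlc as required)
    ipvsadm -A -t 192.168.0.41:80 -s rr
    # Add the two real (backend) servers in NAT/masquerading mode;
    # the -w weight only matters to the weighted schedulers
    ipvsadm -a -t 192.168.0.41:80 -r 10.0.0.2:80 -m -w 1
    ipvsadm -a -t 192.168.0.41:80 -r 10.0.0.3:80 -m -w 2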

Worried about bottlenecks?
Since all the traffic into and out of the cluster passes through the director, you might be wondering whether this machine will create a performance bottleneck. In fact this is unlikely to be the case. Remember, the director is performing only simple packet header manipulation inside the kernel, and it turns out that even a modest machine is capable of saturating a 100Mbps network. It’s much more likely that the network bandwidth into the site will be the bottleneck. For lots more information about using old PCs for tasks like networking, see this month’s cover feature starting on page 45.

The tools for the job
To make all this work, the kernel maintains a virtual server table which contains, among other things, the IP addresses of the backend servers. The command-line tool ipvsadm is used to maintain and inspect this table.

Assuming that your kernel is built to include the LVS module (and most of them are these days), ipvsadm is the only program you absolutely must have to make LVS work; however, there are some other toolsets around that make life easier. (The situation is similar to the packet-filtering machinery in the kernel: in theory we only need the command-line tool iptables to manage it and create a packet-filtering firewall; in practice most of us use higher-level tools – often graphical – to build our firewall rulesets.) For this tutorial we’ll use the clustering toolset from Red Hat, which includes a browser-based configuration tool called Piranha.
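Before moving on to that toolset, and just to show what ‘maintaining and inspecting’ the table means in practice, here are the bare ipvsadm operations on it (illustrative only; Piranha and friends will normally issue these for you):

    ipvsadm -L -n                  # list the virtual server table, numeric addresses
    ipvsadm -S > /tmp/lvs.rules    # save the current table to a file
    ipvsadm -C                     # clear the table completely
    ipvsadm -R < /tmp/lvs.rules    # restore the table from the saved file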
                                      Now, the configuration shown in Figure 1 has one major
                                  shortcoming – the director presents a single point of failure within
                                  the cluster. To overcome this, it’s possible to use a primary and
                                  backup server pair as the director, using a heartbeat message
                                  exchange to allow the backup to detect the failure of the primary
                                  and take over its resources.
                                      For this tutorial I’ve ignored this issue, partly because I
                                  discussed it in detail last month and partly because I would have
                                  needed a fifth machine to construct the cluster, and I only have
                                  four! It should be noted, however, that the clustering toolset
                                  from Red Hat does indeed include functionality to failover from a
                                  primary to a backup director.

Setting up the backend servers
If you want to actually build the cluster described in this tutorial, start with the backend servers. These can be any machines able to offer HTTP-based web service on port 80; in my case they were running Linux and Apache.

You will need to create some content in the DocumentRoot directory of each server, and for testing purposes it’s a good idea to make the content different on each machine so we can easily distinguish which one actually served the request. For example, on backend server 1, I created a file called proverb.html with the line “A stitch in time saves nine”. On server 2 the line was “Look before you leap”. Of course in reality you’d need to ensure that all the backend servers were ‘in sync’ and serving the same content – an issue we’ll return to later.

Give the machines static IP addresses appropriate for the internal network. In my case I used 10.0.0.2 and 10.0.0.3. Set the default route on these machines to be the private IP address of the director (10.0.0.1).
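As a rough sketch of what that amounts to at the shell (it assumes Apache’s default DocumentRoot of /var/www/html, and uses one-off ifconfig/route commands rather than your distro’s permanent network configuration), the setup of backend server 1 looks something like this, run as root:

    # Create some distinctive test content on backend server 1
    echo '<p>A stitch in time saves nine</p>' > /var/www/html/proverb.html
    # Give the internal interface its static address and use the director
    # (10.0.0.1) as the default gateway
    ifconfig eth0 10.0.0.2 netmask 255.255.255.0 up
    route add default gw 10.0.0.1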
“You’ll need at least four computers to follow along for real; otherwise see this as a thought experiment.”

Figure 3: Specifying the real (backend) servers.

Setting up the director
On the director machine, begin by downloading and installing the Red Hat clustering tools. In my case (remember that I’m running CentOS5 on this machine) I simply used the graphical install tool (pirut) to install the packages ipvsadm and Piranha from the CentOS repository. My next step was to run the command piranha-passwd to set a password for the Piranha configuration tool, and then to start the web configuration service:
 # /etc/init.d/piranha-gui start
This service listens on port 3636 and provides a browser-based interface for configuring the clustering toolset, so once it’s running I can point my browser at http://localhost:3636. I’ll need to log in, using the username piranha and the password I just set.

From here I’m presented with four main screens: Control/Monitoring, Global Settings, Redundancy and Virtual Servers (you can see the links to these in Figure 3). To begin, go to the Virtual Servers screen and add a new service.
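If you prefer the command line to pirut, the equivalent installation steps look roughly like this (a sketch; run as root, and note that the Piranha package in the CentOS repository is simply called piranha):

    # Install the LVS admin tool and the Piranha web GUI
    yum install ipvsadm piranha
    # Set the password for the web interface, then start it
    piranha-passwd
    service piranha-gui start      # same as /etc/init.d/piranha-gui start
    chkconfig piranha-gui on       # optional: have it start at every boot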



If it doesn’t work
Chances are good that your cluster won’t work the first time you try it. You have a couple of choices at this point. First, you could hurl one of the laptops across the room and curse Microsoft.

These are not, of course, productive responses, even if they do make you feel better. And Microsoft really shouldn’t be blamed since we’re not actually using any of its software. The second response is to take some deep, calming breaths, then conduct a few diagnostic tests:

A good first step would be to verify connectivity from the LVS machine, using good ol’ ping. First, make sure you can ping your test machine (192.168.0.2). Then check that you can ping both your backend servers (10.0.0.2 and 10.0.0.3). If this doesn’t work, run ifconfig -a and verify that eth0 has IP address 192.168.0.41 and eth1 has address 10.0.0.1. Also, run route -n to check the routing table – verify that you have routes out onto the 192.168.0.0 network and the 10.0.0.0 network via the appropriate interfaces.

If you can ping your backend servers, fire up the web browser on the LVS machine (assuming there is a browser installed there – you don’t really need one). Verify that you can reach the URLs http://10.0.0.2 and http://10.0.0.3, and see the web content served by these two machines. (If the pings are OK but the web servers aren’t accessible, make sure that the web servers are actually running on the backend servers, and that the backend servers don’t have firewall rules in place that prevent them from being accessed.)

You should also carefully check the LVS routing table displayed by the Control/Monitoring page of Piranha and verify that your backend servers show up there. If you’re still having no luck, verify that IP forwarding is enabled on the LVS machine. You can do this with the command:
 # cat /proc/sys/net/ipv4/ip_forward
If it reports ‘1’ that’s fine, but if it reports ‘0’, you’ll need to enable IP forwarding like this:
 # echo 1 > /proc/sys/net/ipv4/ip_forward

You might also verify that your kernel is actually configured to use LVS. If your distro provides a copy of the config file used to build the kernel (it will likely be /boot/config-something-or-other) use grep to look for the string CONFIG_IP_VS in this file, and verify that you see a line like CONFIG_IP_VS=m, among others. You might also try:
 # lsmod | grep vs
to verify that the ip_vs module is loaded. If you can’t find evidence for virtual server support in the kernel you may have to reconfigure and rebuild your kernel, an activity that is beyond the scope of this tutorial.

If all these tests look OK and things still don’t work, you may now hurl one of the laptops across the room. This won’t make your cluster work, but at least you’ll have a better idea of why it’s broken...
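If you end up repeating these checks, a throwaway script along the following lines gathers them in one place (purely illustrative; the addresses are hard-wired to match Figure 1, and it should be run as root on the director):

    #!/bin/sh
    # Quick sanity checks for the LVS cluster, run on the director
    for host in 192.168.0.2 10.0.0.2 10.0.0.3; do
        if ping -c 1 -W 2 $host > /dev/null; then
            echo "ping $host: OK"
        else
            echo "ping $host: FAILED"
        fi
    done
    echo "ip_forward = $(cat /proc/sys/net/ipv4/ip_forward)"   # should be 1
    lsmod | grep -q ip_vs && echo "ip_vs module: loaded" || echo "ip_vs module: NOT loaded"
    ipvsadm -L -n    # dump the kernel's virtual server table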




Figure 2 shows the form you’ll fill in. Among other things, you need to give the service a name, specify the port number and interface it will accept request packets on, and select the scheduling algorithm (for initial testing I chose ‘Round Robin’). Clicking the Real Server link at the top of this page takes you to the screen shown in Figure 3. Here you can specify the names, IP addresses and weights of your backend servers.

Behind the scenes, most of the configuration captured by Piranha is stored in the config file /etc/sysconfig/ha/lvs.cf. Other tools in the clustering toolset read this file; it’s plain text, and there’s no reason why you can’t just hand-edit it directly if you prefer. With this configuration in place, you should be good to go. Launch the clustering service from the command line with:
 # /etc/init.d/pulse start
(On a production system you’d have this service start automatically at boot time.)

Now go to the Piranha Control/Monitoring screen, shown in Figure 4. Look carefully at the LVS routing table. You should see an entry there for your virtual service (that’s the line starting TCP...) and below that, a line for each of your backend servers. You can obtain the same information from the command line with the command:
 # ipvsadm -L
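The output looks roughly like this (shown with -n added to keep the addresses numeric; this is an illustration, and the version banner and connection counters will of course differ on your system):

 # ipvsadm -L -n
 IP Virtual Server version 1.2.1 (size=4096)
 Prot LocalAddress:Port Scheduler Flags
   -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
 TCP  192.168.0.41:80 rr
   -> 10.0.0.2:80                  Masq    1      0          0
   -> 10.0.0.3:80                  Masq    1      0          0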

The periodic health check
Also on the Control/Monitoring screen there’s an LVS process table. Here you’ll see a couple of ‘nanny’ processes. These are responsible for verifying that the backend servers are still up and running, and there’s one nanny for each server. They work by periodically sending a simple HTTP GET request and verifying that there’s a reply. If you look carefully at the -s and -x options specified to nanny, you’ll see the send and expect strings that are used for this test. (You can customise these if you want, using the Virtual Servers > Monitoring Scripts page of Piranha.)

If you look at the Apache access logs on the backend servers, you’ll see these probe requests arriving every six seconds. If a nanny process detects that its server has stopped responding, it will invoke ipvsadm to remove that machine’s entry from the LVS routing table, so that the director will no longer forward any requests to that machine. The nanny will continue to probe the server, however, and should it come back up, the nanny will run ipvsadm again to reinstate the routing entry.

You can observe this behaviour by unplugging the network cable from a backend server (or simply stopping the httpd daemon) and examining the LVS routing table. You should see your dead server’s entry disappear. If you reconnect the network cable, or restart the server, the entry will re-appear. Be patient, though, it can take 20 seconds or so for these changes to show up.
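A convenient way to watch this happening (a sketch, not part of the Piranha toolset) is to keep the routing table on screen with watch while you stop and restart Apache on one of the backends:

    # On the director: redisplay the virtual server table every two seconds
    watch -n 2 ipvsadm -L -n
    # Meanwhile, on backend server 2 (10.0.0.3):
    #   service httpd stop    - its entry should vanish after a few failed probes
    #   service httpd start   - the entry should reappear within 20 seconds or so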
The Denouement
If all seems well, it’s time to go to the client machine (machine 1 in the first figure) and try to access the page in the browser. In this example, you’d browse to http://192.168.0.41/proverb.html, and you should see the proverb page served from one of your backend servers. If you force a page refresh, you’ll see the proverb page from the next server in the round-robin sequence. You should also be able to verify the round-robin behaviour by examining the Apache access logs of the individual backend servers. (If you look carefully at the log file entries, you’ll see that these accesses come from 192.168.0.2, whereas the probes from the nannies come from 10.0.0.1.) If all of this works, congratulations! You’ve just built your first load-balancing Linux cluster.
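If you’d rather test from a command line, a quick loop makes the round-robin behaviour obvious (curl is shown here purely as an illustration; any HTTP client will do):

    # Run from any machine that can reach the director's public address;
    # with round-robin scheduling the proverb should alternate on each request
    for i in 1 2 3 4; do
        curl -s http://192.168.0.41/proverb.html
    done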

Postscript
Let’s pause and consider what we’ve demonstrated in these two tutorials. We’ve seen how to build failover clustering solutions that add another nine onto the availability of your service (for example, to go from 99.9% to 99.99% availability); and we’ve seen how to build load-balancing clusters able to far outstrip the performance of an individual web server. In both cases we’ve used free, open source software running on a free, open source operating system. I sometimes wonder what, as he gets into bed at night with his mug of Horlicks, Steve Ballmer makes of it all. It sure impresses the hell out of me. LXF

Figure 4: The control and monitor screen of Piranha.

ERRATA: Figure 1 from LXF101
In LXF101’s Clustering tutorial, we managed to print the wrong figure. Apologies to any readers who struggled to make sense of the diagram in the context of the tutorial. Here’s the Figure 1 that should have appeared last month!




Next month: Distribute the workload of a ‘virtual’ server across multiple backend servers.

More Related Content

What's hot

Awrrpt 1 3004_3005
Awrrpt 1 3004_3005Awrrpt 1 3004_3005
Awrrpt 1 3004_3005Kam Chan
 
Virtualization Primer for Java Developers
Virtualization Primer for Java DevelopersVirtualization Primer for Java Developers
Virtualization Primer for Java DevelopersRichard McDougall
 
Building Business Continuity Solutions With Hyper V
Building Business Continuity Solutions With Hyper VBuilding Business Continuity Solutions With Hyper V
Building Business Continuity Solutions With Hyper Vrsnarayanan
 
Condroid WSN/DTN Gateway - Work Procedure
Condroid WSN/DTN Gateway - Work ProcedureCondroid WSN/DTN Gateway - Work Procedure
Condroid WSN/DTN Gateway - Work ProcedureLaili Aidi
 
Tudor Damian - Hyper-V 3.0 overview
Tudor Damian - Hyper-V 3.0 overviewTudor Damian - Hyper-V 3.0 overview
Tudor Damian - Hyper-V 3.0 overviewITCamp
 
Webinar: General Technical Overview of MongoDB
Webinar: General Technical Overview of MongoDBWebinar: General Technical Overview of MongoDB
Webinar: General Technical Overview of MongoDBMongoDB
 
Advance Java Programming( CM5I) 4. Networking Basics
Advance Java Programming( CM5I) 4. Networking BasicsAdvance Java Programming( CM5I) 4. Networking Basics
Advance Java Programming( CM5I) 4. Networking BasicsPayal Dungarwal
 
16 August 2012 - SWUG - Hyper-V in Windows 2012
16 August 2012 - SWUG - Hyper-V in Windows 201216 August 2012 - SWUG - Hyper-V in Windows 2012
16 August 2012 - SWUG - Hyper-V in Windows 2012Daniel Mar
 
Analysis of interrupt latencies in a real-time kernel
Analysis of interrupt latencies in a real-time kernelAnalysis of interrupt latencies in a real-time kernel
Analysis of interrupt latencies in a real-time kernelGabriele Modena
 
Prairie DevCon-What's New in Hyper-V in Windows Server "8" Beta - Part 2
Prairie DevCon-What's New in Hyper-V in Windows Server "8" Beta - Part 2Prairie DevCon-What's New in Hyper-V in Windows Server "8" Beta - Part 2
Prairie DevCon-What's New in Hyper-V in Windows Server "8" Beta - Part 2Damir Bersinic
 
Avnet & Rorke Data - Open Compute Summit '13
Avnet & Rorke Data - Open Compute Summit '13Avnet & Rorke Data - Open Compute Summit '13
Avnet & Rorke Data - Open Compute Summit '13DaWane Wanek
 
12th Japan CloudStack User Group Meetup MidoNet with scalable virtual router
12th Japan CloudStack User Group Meetup   MidoNet with scalable virtual router12th Japan CloudStack User Group Meetup   MidoNet with scalable virtual router
12th Japan CloudStack User Group Meetup MidoNet with scalable virtual routerTakeshi Nakajima
 
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 Preview
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 PreviewCloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 Preview
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 PreviewChip Childers
 
Testing real-time Linux. What to test and how
Testing real-time Linux. What to test and how Testing real-time Linux. What to test and how
Testing real-time Linux. What to test and how Chirag Jog
 
Digital Records Deployment Plan 2010 (OPVMC)
Digital Records Deployment Plan 2010 (OPVMC)Digital Records Deployment Plan 2010 (OPVMC)
Digital Records Deployment Plan 2010 (OPVMC)Dave Warnes
 
Mit big data_sep2012
Mit big data_sep2012Mit big data_sep2012
Mit big data_sep2012Ratul Ray
 

What's hot (19)

Awrrpt 1 3004_3005
Awrrpt 1 3004_3005Awrrpt 1 3004_3005
Awrrpt 1 3004_3005
 
Virtualization Primer for Java Developers
Virtualization Primer for Java DevelopersVirtualization Primer for Java Developers
Virtualization Primer for Java Developers
 
Building Business Continuity Solutions With Hyper V
Building Business Continuity Solutions With Hyper VBuilding Business Continuity Solutions With Hyper V
Building Business Continuity Solutions With Hyper V
 
Condroid WSN/DTN Gateway - Work Procedure
Condroid WSN/DTN Gateway - Work ProcedureCondroid WSN/DTN Gateway - Work Procedure
Condroid WSN/DTN Gateway - Work Procedure
 
Tudor Damian - Hyper-V 3.0 overview
Tudor Damian - Hyper-V 3.0 overviewTudor Damian - Hyper-V 3.0 overview
Tudor Damian - Hyper-V 3.0 overview
 
Recompile
RecompileRecompile
Recompile
 
Webinar: General Technical Overview of MongoDB
Webinar: General Technical Overview of MongoDBWebinar: General Technical Overview of MongoDB
Webinar: General Technical Overview of MongoDB
 
Advance Java Programming( CM5I) 4. Networking Basics
Advance Java Programming( CM5I) 4. Networking BasicsAdvance Java Programming( CM5I) 4. Networking Basics
Advance Java Programming( CM5I) 4. Networking Basics
 
16 August 2012 - SWUG - Hyper-V in Windows 2012
16 August 2012 - SWUG - Hyper-V in Windows 201216 August 2012 - SWUG - Hyper-V in Windows 2012
16 August 2012 - SWUG - Hyper-V in Windows 2012
 
Analysis of interrupt latencies in a real-time kernel
Analysis of interrupt latencies in a real-time kernelAnalysis of interrupt latencies in a real-time kernel
Analysis of interrupt latencies in a real-time kernel
 
Exch2007 sp1 win2008
Exch2007 sp1 win2008Exch2007 sp1 win2008
Exch2007 sp1 win2008
 
Prairie DevCon-What's New in Hyper-V in Windows Server "8" Beta - Part 2
Prairie DevCon-What's New in Hyper-V in Windows Server "8" Beta - Part 2Prairie DevCon-What's New in Hyper-V in Windows Server "8" Beta - Part 2
Prairie DevCon-What's New in Hyper-V in Windows Server "8" Beta - Part 2
 
淺談探索 Linux 系統設計之道
淺談探索 Linux 系統設計之道 淺談探索 Linux 系統設計之道
淺談探索 Linux 系統設計之道
 
Avnet & Rorke Data - Open Compute Summit '13
Avnet & Rorke Data - Open Compute Summit '13Avnet & Rorke Data - Open Compute Summit '13
Avnet & Rorke Data - Open Compute Summit '13
 
12th Japan CloudStack User Group Meetup MidoNet with scalable virtual router
12th Japan CloudStack User Group Meetup   MidoNet with scalable virtual router12th Japan CloudStack User Group Meetup   MidoNet with scalable virtual router
12th Japan CloudStack User Group Meetup MidoNet with scalable virtual router
 
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 Preview
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 PreviewCloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 Preview
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 Preview
 
Testing real-time Linux. What to test and how
Testing real-time Linux. What to test and how Testing real-time Linux. What to test and how
Testing real-time Linux. What to test and how
 
Digital Records Deployment Plan 2010 (OPVMC)
Digital Records Deployment Plan 2010 (OPVMC)Digital Records Deployment Plan 2010 (OPVMC)
Digital Records Deployment Plan 2010 (OPVMC)
 
Mit big data_sep2012
Mit big data_sep2012Mit big data_sep2012
Mit big data_sep2012
 

Similar to LXF #102 - Linux Virtual Server

Drupal Continuous Integration with Jenkins - The Basics
Drupal Continuous Integration with Jenkins - The BasicsDrupal Continuous Integration with Jenkins - The Basics
Drupal Continuous Integration with Jenkins - The BasicsJohn Smith
 
Telepresence - Fast Development Workflows for Kubernetes
Telepresence - Fast Development Workflows for KubernetesTelepresence - Fast Development Workflows for Kubernetes
Telepresence - Fast Development Workflows for KubernetesAmbassador Labs
 
Kubernetes Architecture with Components
 Kubernetes Architecture with Components Kubernetes Architecture with Components
Kubernetes Architecture with ComponentsAjeet Singh
 
Kubernetes for Docker Developers
Kubernetes for Docker DevelopersKubernetes for Docker Developers
Kubernetes for Docker DevelopersRed Hat Developers
 
Cloud computing virtualization
Cloud computing virtualizationCloud computing virtualization
Cloud computing virtualizationAyaz Shahid
 
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISOR
LOAD BALANCING OF APPLICATIONS  USING XEN HYPERVISORLOAD BALANCING OF APPLICATIONS  USING XEN HYPERVISOR
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISORVanika Kapoor
 
Gruntwork Executive Summary
Gruntwork Executive SummaryGruntwork Executive Summary
Gruntwork Executive SummaryYevgeniy Brikman
 
HA System-First presentation
HA System-First presentationHA System-First presentation
HA System-First presentationAvin Chan
 
Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1
Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1
Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1Etsuji Nakai
 
Austin Web Architecture
Austin Web ArchitectureAustin Web Architecture
Austin Web Architecturejoaquincasares
 
The Lies We Tell Our Code (#seascale 2015 04-22)
The Lies We Tell Our Code (#seascale 2015 04-22)The Lies We Tell Our Code (#seascale 2015 04-22)
The Lies We Tell Our Code (#seascale 2015 04-22)Casey Bisson
 
Sharing your-internet-connection-on-linux
Sharing your-internet-connection-on-linuxSharing your-internet-connection-on-linux
Sharing your-internet-connection-on-linuxjasembo
 
Consolidating database servers with Lenovo ThinkServer RD630
Consolidating database servers with Lenovo ThinkServer RD630Consolidating database servers with Lenovo ThinkServer RD630
Consolidating database servers with Lenovo ThinkServer RD630Principled Technologies
 

Similar to LXF #102 - Linux Virtual Server (20)

Drupal Continuous Integration with Jenkins - The Basics
Drupal Continuous Integration with Jenkins - The BasicsDrupal Continuous Integration with Jenkins - The Basics
Drupal Continuous Integration with Jenkins - The Basics
 
Telepresence - Fast Development Workflows for Kubernetes
Telepresence - Fast Development Workflows for KubernetesTelepresence - Fast Development Workflows for Kubernetes
Telepresence - Fast Development Workflows for Kubernetes
 
Server 2008 R2 Yeniliklər
Server 2008 R2 YeniliklərServer 2008 R2 Yeniliklər
Server 2008 R2 Yeniliklər
 
Kubernetes Architecture with Components
 Kubernetes Architecture with Components Kubernetes Architecture with Components
Kubernetes Architecture with Components
 
Kubernetes for Docker Developers
Kubernetes for Docker DevelopersKubernetes for Docker Developers
Kubernetes for Docker Developers
 
Howto Pxeboot
Howto PxebootHowto Pxeboot
Howto Pxeboot
 
10
1010
10
 
Cloud computing virtualization
Cloud computing virtualizationCloud computing virtualization
Cloud computing virtualization
 
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISOR
LOAD BALANCING OF APPLICATIONS  USING XEN HYPERVISORLOAD BALANCING OF APPLICATIONS  USING XEN HYPERVISOR
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISOR
 
Gruntwork Executive Summary
Gruntwork Executive SummaryGruntwork Executive Summary
Gruntwork Executive Summary
 
Rhel7 vs rhel6
Rhel7 vs rhel6Rhel7 vs rhel6
Rhel7 vs rhel6
 
HA System-First presentation
HA System-First presentationHA System-First presentation
HA System-First presentation
 
Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1
Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1
Architecture Overview: Kubernetes with Red Hat Enterprise Linux 7.1
 
Austin Web Architecture
Austin Web ArchitectureAustin Web Architecture
Austin Web Architecture
 
Windows Azure IaaS
Windows Azure IaaSWindows Azure IaaS
Windows Azure IaaS
 
The Lies We Tell Our Code (#seascale 2015 04-22)
The Lies We Tell Our Code (#seascale 2015 04-22)The Lies We Tell Our Code (#seascale 2015 04-22)
The Lies We Tell Our Code (#seascale 2015 04-22)
 
Creating your own datacenter
Creating your own datacenterCreating your own datacenter
Creating your own datacenter
 
A0950107
A0950107A0950107
A0950107
 
Sharing your-internet-connection-on-linux
Sharing your-internet-connection-on-linuxSharing your-internet-connection-on-linux
Sharing your-internet-connection-on-linux
 
Consolidating database servers with Lenovo ThinkServer RD630
Consolidating database servers with Lenovo ThinkServer RD630Consolidating database servers with Lenovo ThinkServer RD630
Consolidating database servers with Lenovo ThinkServer RD630
 

Recently uploaded

DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESmohitsingh558521
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteDianaGray10
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsPixlogix Infotech
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersRaghuram Pandurangan
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxLoriGlavin3
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 

Recently uploaded (20)

DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 
Take control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test SuiteTake control of your SAP testing with UiPath Test Suite
Take control of your SAP testing with UiPath Test Suite
 
The Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and ConsThe Ultimate Guide to Choosing WordPress Pros and Cons
The Ultimate Guide to Choosing WordPress Pros and Cons
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Generative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information DevelopersGenerative AI for Technical Writer or Information Developers
Generative AI for Technical Writer or Information Developers
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 

LXF #102 - Linux Virtual Server

  • 1. Hardcore Linux Challenge yourself Tutorial Hardcore Linux with advanced projects for power users Linux Virtual Building load-balancing clusters with LVS is as good at keeping out the cold as chopping logs on a winter’s day, maintains Dr Chris Brown. backend servers; in principle you could use anything that offers a web server. In my case I used a couple of laptops, one running SUSE 10.3 and one running Ubuntu 7.04. In reality you’d probably want more than two backend servers, but two is enough to prove the point. You might well be wondering why I’m running a different operating system on each of the four computers. It’s a fair question, and one I often ask myself! Unfortunately, it’s simply a fact of life for those of us who make our living by messing around with Linux. How a load-balancing cluster works Client machines send service requests (for example an HTTP GET request for a web page) to the public IP address of the director (192.168.0.41 on the figure; as far as the clients are concerned, this is the IP address at which the service is provided). The director chooses one of the backend servers to forward the request to. It rewrites the destination IP address of the packet to be that of the chosen backend server and forwards the packet out onto the internal network. The chosen backend server processes the request and sends an HTTP response, which hopefully includes the requested page. This response is addressed to the client machine (192.168.0.2) but is initially sent back to the director T his month’s Hardcore tutorial shows you how to provide (10.0.0.1), because the director is configured as the default a load-balanced cluster of web servers, with a scalable gateway for the backend server. The director rewrites the source performance that far outstrips the capability of an individual server. We’ll be using the Red Hat cluster software, but the core functionality described here is based on the Linux virtual server kernel module and is not specific to Red Hat. Our If you want to follow along with the tutorial for real, you’ll expert need at least four computers, as shown in Figure 1 (opposite Dr Chris Brown page). I realise this might put it out of the reach of some readers A Unix user for over as a practical project, but hopefully it can remain as a thought 25 years, his own Interactive Digital experiment, like that thing Schr dinger did with his cat. If you’ve Learning company got some old machines lying about the place, why not refer to this provides Linux month’s cover feature to find out how to get them up and running training, content and consultancy. again – then network them? He also specialises Machine 1 is simply a client used for testing – it could be in computer-based classroom training anything that has a web browser. In my case, I commandeered delivery systems. my wife’s machine, which runs Vista. Machine 2 is where the real action is. This machine must be running Linux, and should preferably have two network interfaces, as shown in the figure. (It’s possible to construct the cluster using just a single network, but this arrangement wouldn’t be used on a production system, and it needs a bit of IP-level trickery to make it work). This machine performs load balancing and routing into the cluster, and is known as the “director” in some LVS documentation. In my case, machine 2 was running CentOS5 (essentially equivalent to RHEL5). Machines 3 and 4 are the Figure 2: Configuring a virtual server with Piranha. 
Last month Use ‘heartbeat’ to provide automatic failover to a backup server. 98 Linux Format February 2008 LXF102.tut_adv Sec2:98 14/12/07 15:41:05
  • 2. Hardcore Linux Tutorial Server IP address of the packet to refer to its own public IP address and sends the packet back to the client. All of this IP-level tomfoolery is carried out by the LVS module within the Linux kernel. The director is not simply acting as a 1 reverse web proxy, and commands such as netstat -ant will NOT show any user-level process listening on port 80. The address re-writing is a form of NAT (Network Address Translation) and allows the director to masquerade as the real server, and to hide the presence of the internal network that’s doing the real work. Balancing the load A key feature of LVS is load balancing – that is, making sure that each backend server receives roughly the same amount of work. LVS has several algorithms for doing this; we’ll mention four. 2 Round-robin scheduling is the simplest algorithm. The director just works its way around the backend servers in turn, starting round with the first one again when it reaches the end of the list. This is good for testing because it’s easy to verify that all your backend servers are working, but it may not be best in a production environment. Weighted round-robin is similar but lets you assign a weight to each backend server to indicate the relative speed of each server. A server with a weight of two is assumed to be twice as powerful as a server with a weight of one and will receive twice as many requests. Least-connection load balancing tracks the number of currently active connections to each backend server and 3 4 forwards the request to the one with the least number. Weighted least connection method also takes into account the relative processing power (weight) of the server. The weighted least connection method is good for production clusters because it works well when service requests take widely varying amounts of time to process and/or when the backend servers in the cluster Figure 1: A simple load-balancing cluster. are not equally powerful. The latter two algorithms are dynamic; that is, they take account of the current load on the machines in the cluster. The tools for the job Worried about bottlenecks? To make all this work, the kernel maintains a virtual server table which contains, among other things, the IP addresses of the Since all the traffic into and out of the cluster passes through backend servers. The command-line tool ipvsadm is used to the director, you might be wondering if these machines will maintain and inspect this table. create a performance bottleneck. In fact this is unlikely to be the case. Remember, the director is performing only simple packet Assuming that your kernel is built to include the LVS module header manipulation inside the kernel, and it turns out that even (and most of them are these days) ipvsadm is the only program a modest machine is capable of saturating a 100 MBps network. you absolutely must have to make LVS work; however there are It’s much more likely that the network bandwidth into the site some other toolsets around that make life easier. (The situation will be the bottleneck. For lots more information about using old is similar to the packet-filtering machinery in the kernel. In theory PCs for tasks like networking, see this month’s cover feature we only need the command-line tool iptables to manage this and starting on page 45. create a packet-filtering firewall, in practice most of us use higher- If you missed last issue Call 0870 837 4773 or +44 1858 438795. February 2008 Linux Format 99 LXF102.tut_adv Sec2:99 14/12/07 15:41:07
For this tutorial we'll use the clustering toolset from Red Hat, which includes a browser-based configuration tool called Piranha.
Now, the configuration shown in Figure 1 has one major shortcoming – the director presents a single point of failure within the cluster. To overcome this, it's possible to use a primary and backup server pair as the director, using a heartbeat message exchange to allow the backup to detect the failure of the primary and take over its resources. For this tutorial I've ignored this issue, partly because I discussed it in detail last month and partly because I would have needed a fifth machine to construct the cluster, and I only have four! It should be noted, however, that the clustering toolset from Red Hat does indeed include functionality to fail over from a primary to a backup director.

"You'll need at least four computers to follow along for real; otherwise see this as a thought experiment"

Setting up the backend servers
If you want to actually build the cluster described in this tutorial, start with the backend servers. These can be any machines able to offer HTTP-based web service on port 80; in my case they were running Linux and Apache.
You will need to create some content in the DocumentRoot directory of each server, and for testing purposes it's a good idea to make the content different on each machine so we can easily distinguish which one actually served the request. For example, on backend server 1, I created a file called proverb.html containing the line "A stitch in time saves nine". On server 2 the line was "Look before you leap". Of course, in reality you'd need to ensure that all the backend servers were 'in sync' and serving the same content – an issue we'll return to later.
Give the machines static IP addresses appropriate for the internal network. In my case I used 10.0.0.2 and 10.0.0.3. Set the default route on these machines to be the private IP address of the director (10.0.0.1).
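By way of illustration (my own sketch rather than part of the original setup), the preparation of the first backend server might look like this, assuming the network interface is eth0 and Apache's DocumentRoot is /var/www/html – both assumptions that vary between distributions (SUSE, for instance, defaults to /srv/www/htdocs):
# ifconfig eth0 10.0.0.2 netmask 255.255.255.0 up
# route add default gw 10.0.0.1
# echo "A stitch in time saves nine" > /var/www/html/proverb.html
Repeat on the second backend with 10.0.0.3 and its own proverb. Note that ifconfig and route only change the running configuration; to make the settings survive a reboot, use your distribution's own network setup (YaST on SUSE, /etc/network/interfaces on Ubuntu).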
Setting up the director
On the director machine, begin by downloading and installing the Red Hat clustering tools. In my case (remember that I'm running CentOS 5 on this machine) I simply used the graphical install tool (pirut) to install the packages ipvsadm and Piranha from the CentOS repository. My next step was to run the command piranha-passwd to set a password for the Piranha configuration tool, then start the configuration service:
# /etc/init.d/piranha-gui start
This service listens on port 3636 and provides a browser-based interface for configuring the clustering toolset, so once it's running I can point my browser at http://localhost:3636. I'll need to log in, using the username piranha and the password I just set.
From here I'm presented with four main screens: Control/Monitoring, Global Settings, Redundancy and Virtual Servers (you can see the links to these in Figure 3). To begin, go to the Virtual Servers screen and add a new service.

Figure 3: Specifying the real (backend) servers.

If it doesn't work
Chances are good that your cluster won't work the first time you try it. You have a couple of choices at this point. First, you could hurl one of the laptops across the room and curse Microsoft. These are not, of course, productive responses, even if they do make you feel better. And Microsoft really shouldn't be blamed, since we're not actually using any of its software. The second response is to take some deep, calming breaths, then conduct a few diagnostic tests.
A good first step would be to verify connectivity from the LVS machine, using good ol' ping. First, make sure you can ping your test machine (192.168.0.2). Then check that you can ping both your backend servers (10.0.0.2 and 10.0.0.3). If this doesn't work, run ifconfig -a and verify that eth0 has IP address 192.168.0.41 and eth1 has address 10.0.0.1. Also, run route -n to check the routing table – verify that you have routes out onto the 192.168.0.0 network and the 10.0.0.0 network via the appropriate interfaces.
If you can ping your backend servers, fire up the web browser on the LVS machine (assuming there is a browser installed there – you don't really need one). Verify that you can reach the URLs http://10.0.0.2 and http://10.0.0.3, and see the web content served by these two machines. (If the pings are OK but the web servers aren't accessible, make sure that the web servers are actually running on the backend servers, and that the backend servers don't have firewall rules in place that prevent them from being accessed.)
You should also carefully check the LVS routing table displayed by the Control/Monitoring page of Piranha and verify that your backend servers show up there. If you're still having no luck, verify that IP forwarding is enabled on the LVS machine. You can do this with the command:
# cat /proc/sys/net/ipv4/ip_forward
If it reports '1' that's fine, but if it reports '0' you'll need to enable IP forwarding like this:
# echo 1 > /proc/sys/net/ipv4/ip_forward
You might also verify that your kernel is actually configured to use LVS. If your distro provides a copy of the config file used to build the kernel (it will likely be /boot/config-something-or-other), use grep to look for the string CONFIG_IP_VS in this file, and verify that you see a line like CONFIG_IP_VS=m, among others. You might also try:
# lsmod | grep vs
to verify that the ip_vs module is loaded. If you can't find evidence for virtual server support in the kernel, you may have to reconfigure and rebuild your kernel, an activity that is beyond the scope of this tutorial.
If all these tests look OK and things still don't work, you may now hurl one of the laptops across the room. This won't make your cluster work, but at least you'll have a better idea of why it's broken...
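Two of those fixes are worth making permanent on the director once you've found them. As a small sketch (again, an addition of mine rather than part of the Piranha toolset): keep IP forwarding enabled across reboots by adding the setting to /etc/sysctl.conf and reloading it, and load the ip_vs module explicitly rather than relying on ipvsadm to pull it in:
# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# sysctl -p
# modprobe ip_vs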
Figure 2 (page 98) shows the form you'll fill in for the new service. Among other things, you need to give the service a name, specify the port number and interface it will accept request packets on, and select the scheduling algorithm (for initial testing I chose 'Round Robin'). Clicking the Real Server link at the top of this page takes you to the screen shown in Figure 3. Here you can specify the names, IP addresses and weights of your backend servers.
Behind the scenes, most of the configuration captured by Piranha is stored in the config file /etc/sysconfig/ha/lvs.cf. Other tools in the clustering toolset read this file; it's plain text and there's no reason why you can't just hand-edit it directly if you prefer.
With this configuration in place, you should be good to go. Launch the clustering service from the command line with:
# /etc/init.d/pulse start
(On a production system you'd have this service start automatically at boot time.) Now go to the Piranha Control/Monitoring screen, shown in Figure 4 (below). Look carefully at the LVS routing table. You should see an entry there for your virtual service (that's the line starting TCP) and, below that, a line for each of your backend servers. You can obtain the same information from the command line with the command:
# ipvsadm -L

ERRATA: Figure 1 from LXF101
In LXF101's Clustering tutorial, we managed to print the wrong figure. Apologies to any readers who struggled to make sense of the diagram in the context of the tutorial. Here's the Figure 1 that should have appeared last month!

The periodic health check
Also on the Control/Monitoring screen there's an LVS process table. Here you'll see a couple of 'nanny' processes. These are responsible for verifying that the backend servers are still up and running, and there's one nanny for each server. They work by periodically sending a simple HTTP GET request and verifying that there's a reply. If you look carefully at the -s and -x options specified to nanny, you'll see the send and expect strings that are used for this test. (You can customise these if you want, using the Virtual Servers > Monitoring Scripts page of Piranha.)
If you look at the Apache access logs on the backend servers, you'll see these probe requests arriving every six seconds. If a nanny process detects that its server has stopped responding, it will invoke ipvsadm to remove that machine's entry from the LVS routing table, so that the director will no longer forward any requests to that machine. The nanny will continue to probe the server, however, and should it come back up, the nanny will run ipvsadm again to reinstate the routing entry.
You can observe this behaviour by unplugging the network cable from a backend server (or simply stopping the httpd daemon) and examining the LVS routing table. You should see your dead server's entry disappear. If you reconnect the network cable, or restart the server, the entry will reappear. Be patient, though: it can take 20 seconds or so for these changes to show up.

The Denouement
If all seems well, it's time to go to the client machine (machine 1 in the first figure) and try to access the page in the browser. In this example, you'd browse to http://192.168.0.41/proverb.html, and you should see the proverb page served from one of your backend servers. If you force a page refresh, you'll see the proverb page from the next server in the round-robin sequence.
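If you prefer to drive this test from the command line, here's a rough sketch (assuming curl is installed on the client; wget -qO- works just as well) that exercises both of the checks described above. On the client, fire off a handful of requests and watch the proverbs alternate:
# for i in 1 2 3 4 5 6; do curl -s http://192.168.0.41/proverb.html; done
On the director, keep the LVS routing table on screen:
# watch -n 2 ipvsadm -L -n
Then stop Apache on one backend (the init script may be called httpd or apache2, depending on the distribution) and watch its entry disappear within 20 seconds or so; start it again and the entry returns:
# /etc/init.d/httpd stop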
You should also be able to verify the round-robin behaviour by examining the Apache access logs of the individual backend servers. (If you look carefully at the log file entries, you'll see that these accesses come from 192.168.0.2, whereas the probes from the nannies come from 10.0.0.1.) If all of this works, congratulations! You've just built your first load-balancing Linux cluster.

Figure 4: The control and monitor screen of Piranha.

Postscript
Let's pause and consider what we've demonstrated in these two tutorials. We've seen how to build failover clustering solutions that add another nine onto the availability of your service (for example, to go from 99.9% to 99.99% availability); and we've seen how to build load-balancing clusters able to far outstrip the performance of an individual web server. In both cases we've used free, open source software running on a free, open source operating system. I sometimes wonder what, as he gets into bed at night with his mug of Horlicks, Steve Ballmer makes of it all. It sure impresses the hell out of me. LXF