This document discusses setting up a load balancing cluster with Linux virtual server (LVS). It describes using four computers - one as a client, one as the director to perform load balancing and routing, and two as backend servers. The director uses LVS and network address translation to distribute client requests to the backend servers to balance the load. It discusses four load balancing algorithms available in LVS, including round-robin, weighted round-robin, least-connection, and weighted least-connection.
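The weighted round-robin scheduler mentioned above can be illustrated with a small sketch. This is a simplified Python model (server names and weights are invented for the example), not the kernel's actual `wrr` implementation, which interleaves picks more evenly:

```python
from itertools import cycle

# Hypothetical backend pool: (server, weight). The names and weights are
# invented for illustration; they are not from the original document.
servers = [("real1", 3), ("real2", 1)]

def weighted_round_robin(pool):
    """Simplified model of weighted round-robin: each server appears in
    the rotation a number of times proportional to its weight."""
    expanded = [name for name, weight in pool for _ in range(weight)]
    return cycle(expanded)

rr = weighted_round_robin(servers)
first_eight = [next(rr) for _ in range(8)]
# real1 receives three requests for every one that real2 receives
```

Plain round-robin is the special case where all weights are equal; the least-connection variants replace the static rotation with a count of each server's currently open connections.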
This document provides instructions for setting up a basic Linux Virtual Server (LVS) using one of three forwarding methods: LVS-NAT, LVS-DR, or LVS-Tun. It discusses the minimum hardware and software requirements, including needing at least three machines - one client, one director, and one real server. It also provides examples for setting up an LVS using LVS-NAT and LVS-DR forwarding to provide the telnet and HTTP services. The document is intended to help users quickly set up a basic demonstration LVS without requiring an in-depth understanding of how LVS works under the hood.
A Survey of Performance Comparison between Virtual Machines and Containers - prashant desai
Since the onset of cloud computing and its inroads into infrastructure as a service, virtualization has become of peak importance in the field of abstraction and resource management. However, the additional layers of abstraction that virtualization provides come at a trade-off between performance and cost in a cloud environment where everything is billed on a pay-per-use basis. Containers, which are perceived to be the future of virtualization, were developed to address these issues. This survey paper scrutinizes the performance of a conventional virtual machine and contrasts it with containers. We cover the critical assessment of each parameter and its behavior when it is subjected to various stress tests. We discuss the implementations and their performance metrics to help draw conclusions on which one is ideal for a given set of needs. After assessing the results and discussing the limitations, we conclude with prospects for future research.
This document discusses virtualization technologies including server virtualization using Hyper-V, desktop virtualization, application virtualization, and presentation virtualization. It covers key features of Hyper-V like live migration, failover clustering, thin provisioning, and improvements in Windows Server 2008 R2. Management techniques for virtualized environments are also addressed.
This report summarizes the workload on the ERPSIT database with the following key details:
- The database has 2 instances and is hosted on a Linux server with 4 CPUs and 7.8GB of memory.
- Between snapshots 3004 and 3005, there was 60.1 minutes of activity with 174 sessions.
- The largest consumers of database time were SQL execute elapsed time at 94.7% and DB CPU time at 63.4%.
The document discusses different architectures for multiplayer online games, including fully centralized, fully decentralized, and hybrid architectures. It describes several hybrid architectures such as peer-to-peer with a central arbiter, mirrored servers, zoned servers, and supporting seamless game worlds. Generic gaming proxies are also introduced that can provide specialized services like timestamping and message ordering to prevent cheating.
This document discusses different architectures for networked games, including centralized, peer-to-peer with central arbiter, mirrored servers, zoned servers, and supporting seamless game worlds. It proposes algorithms for partitioning a game world grid among servers to balance load and minimize communication costs, such as sorting cells by load and assigning them sequentially, or merging adjacent clusters iteratively. The optimal partitioning problem is NP-complete, so the clustering approach provides a heuristic solution.
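The sort-and-assign heuristic described above can be sketched as follows. This is an illustrative Python toy (cell names and loads are invented) that greedily places each cell on the currently least-loaded server; it ignores the communication-cost term that the full clustering approach also optimizes:

```python
def partition(cell_loads, n_servers):
    """Greedy heuristic: sort cells by load (descending) and assign each
    to the server with the smallest total load so far."""
    assignments = {i: [] for i in range(n_servers)}
    totals = [0.0] * n_servers
    for cell, load in sorted(cell_loads.items(), key=lambda kv: -kv[1]):
        target = min(range(n_servers), key=lambda i: totals[i])
        assignments[target].append(cell)
        totals[target] += load
    return assignments, totals

# Invented example: five grid cells with per-cell load estimates.
cells = {"a": 5, "b": 4, "c": 3, "d": 2, "e": 1}
parts, totals = partition(cells, 2)
# The two servers end up with nearly equal total load (8 vs 7)
```

Because the exact partitioning problem is NP-complete, a greedy pass like this trades optimality for linear-time assignment; the iterative cluster-merging variant refines the result by also accounting for inter-server communication.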
Building a Two Node SLES 11 SP2 Linux Cluster with VMware - geekswing
Linux clustering is one of the most cost-effective ways to run redundant servers for your applications. In this document we show you how to build a two-node cluster using SUSE Linux Enterprise Server (SLES) on a virtual environment (ESXi 4) as your testing ground. You can find more info at our website:
http://geekswing.com/geek/building-a-two-node-sles11-sp2-linux-cluster-on-vmware/
This document provides an overview and summary of key concepts around virtualization that will be covered in more depth at a technical deep dive session, including:
- Virtualization capabilities for desktops/laptops and servers including workstation virtualization and server consolidation.
- How virtual machines work and the overhead associated with virtualization.
- Properties of virtualization like partitioning, isolation, and encapsulation.
- Benefits of server virtualization like consolidation, simpler management, and automated resource pooling.
- Comparison of "hosted" and vSphere virtualization architectures.
- Technologies used in virtualization like binary translation, hardware assistance from Intel VT/AMD-V.
- Ability to virtualize CPU intensive applications with
Hyper-V 3.0 provides several performance, scalability, and disaster recovery improvements over previous versions. New features include Hyper-V Replica for replication between sites, VHDX and data deduplication for improved storage, and live storage migration for non-disruptive workload mobility. Networking is enhanced with extensible virtual switching, QoS, and multi-tenant isolation using technologies like GRE encapsulation and address rewrite. Management is improved through integration with System Center and PowerShell automation.
Building Business Continuity Solutions With Hyper-V - rsnarayanan
This document provides an overview and agenda for a session on virtualization and high availability. It discusses types of high availability enabled by virtualization including cluster creation and making virtual machines highly available. It also covers demos of Windows Server 2008 cluster creation and configuring virtual machine high availability. Additional topics include stretch clusters, guest clustering best practices, Hyper-V and network load balancing, disaster recovery and virtualization, and new features in Windows Server 2008 R2 such as live migration.
This document provides information about networking basics including client-server computing, sockets, TCP, UDP, ports, proxies, and internet addressing. It discusses how client-server computing uses clients and servers, how sockets provide endpoints for communication, and how TCP and UDP are used for reliable and unreliable data transmission respectively. It also covers common port numbers, how proxies cache requests and act as intermediaries, and how internet addresses are represented in IPv4 and IPv6 formats. The document is intended as a teaching aid for a class on advanced Java programming and networking concepts.
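The client-server and socket concepts summarized above can be demonstrated with a minimal TCP echo exchange over the loopback interface. This sketch uses Python's standard `socket` module rather than the Java APIs the class presumably covers; the port is chosen dynamically by the OS:

```python
import socket
import threading

def echo_server(sock):
    """Accept one connection and echo back whatever it receives."""
    conn, _addr = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Server side: bind a TCP socket to an ephemeral port on loopback.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Client side: connect to the server's endpoint, send, and read the reply.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)
```

The same pattern with `SOCK_DGRAM` gives the unreliable UDP variant: no connection setup, and each `sendto` stands alone with no delivery guarantee.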
This document provides an overview of database availability groups (DAGs) in Exchange Server 2010. It defines what a DAG is and explains some of its key components and functions, including mailbox databases, copies, the active manager, continuous replication, log shipping, and failover processes. It also briefly reviews some clustering basics such as quorum, standard quorum, and majority node set quorum to provide context before delving into the specifics of DAGs.
Exchange 2007 SP1 provides additional features for the Exchange Management Console including enhanced LCR configuration and new toolbox tools. It also expands Outlook Web App capabilities such as personal distribution lists and S/MIME. Continuous replication is introduced to provide high availability and recovery options between Exchange servers. Exchange 2007 SP1 supports both Windows Server 2008 and 2003, though there are some differences in features and requirements between the server operating systems. The document provides guidance on installing SP1 and transitioning to Windows Server 2008.
Design and implementation of a reliable and cost-effective cloud computing in... - Francesco Taurino
This document summarizes the INFN Napoli experience in designing and implementing a reliable and cost-effective cloud computing infrastructure. Key aspects included using existing hardware, virtualization and clustering technologies to consolidate services and reduce costs. A network with redundant switches and storage servers using GlusterFS provided high availability. Custom tools were developed to simplify administration tasks like provisioning, migration, and load balancing of virtual machines. The solution provided an efficient and reliable private cloud with over one year of uninterrupted uptime.
Prairie DevCon - What's New in Hyper-V in Windows Server "8" Beta - Part 2 - Damir Bersinic
This is the second of a 2-part series delivered at Prairie DevCon in Calgary on March 15, 2012. The sessions provided a quick overview of the new features of Hyper-V in Windows Server "8" Beta and how these compare to VMware vSphere 5.
16 August 2012 - SWUG - Hyper-V in Windows 2012 - Daniel Mar
Windows Server 8 Beta includes new features that improve Hyper-V host scalability, virtual machine density, and high availability. Key enhancements include support for up to 320 logical processors, 4 TB of memory per host, 1,024 virtual CPUs and 1 TB of memory per virtual machine, and 1,024 active VMs per host. It also features improved live migration, storage migration, and failover clustering capabilities.
This presentation covers the working model about Process, Thread, system call, Memory operations, Binder IPC, and interactions with Android frameworks.
LXC – NextGen Virtualization for Cloud benefit realization (cloudexpo) - Boden Russell
This document summarizes a presentation on Linux containers (LXC) as an alternative to virtual machines for cloud computing and containerization. It finds that LXC provides significantly better performance than virtual machines in terms of provisioning time, CPU and memory usage, and load on the compute node. Specifically, when packing 15 active VMs/containers on a node, LXC uses 0.54% CPU on average compared to 7.64% for KVM, and 734 MB of memory total compared to 4,387 MB for KVM. When booting VMs/containers serially, the average boot time is 3.5 seconds for LXC versus 5.8 seconds for KVM, and CPU usage is lower overall for LXC.
Linux uses /proc/iomem as a "Rosetta Stone" to establish relationships between software and hardware. /proc/iomem maps physical memory addresses to devices, similar to how the Rosetta Stone helped map Egyptian hieroglyphs to Greek and decode ancient Egyptian texts. This virtual file allows the kernel to interface with devices by providing address translations between physical and virtual memory spaces.
Avnet & Rorke Data - Open Compute Summit '13 - DaWane Wanek
Avnet introduces a new archive storage enclosure designed for Open Compute projects. The enclosure holds 20 3.5 inch hard drives and connects to servers through SAS expanders. It is a lightweight and efficient design that allows single-person handling and includes features for airflow, power management, and monitoring. Avnet worked with several partners in developing the open specification enclosure to support the Open Compute community.
What's LUM Got To Do with It: Deployment Considerations for Linux User Manage... - Novell
LUM it or leave it! Join us in this session to explore the inner workings of Linux User Management. You'll learn about the architecture, configuration and implementation of Linux User Management on Novell Open Enterprise Server and SUSE Linux Enterprise Desktop. We'll also cover placement of LUM objects in a tree, services and tree design to enhance performance for LUM-enabled users, and LUM considerations when upgrading from NetWare to Linux.
CloudStack DC Meetup - Apache CloudStack Overview and 4.1/4.2 Preview - Chip Childers
Chip Childers is the VP of Apache CloudStack and Principal Engineer at SunGard Availability Services.
Apache CloudStack is open source software that can deploy and manage large networks of virtual machines as a scalable IaaS cloud platform. It is a top-level project at the Apache Software Foundation.
CloudStack enables cloud operators to design, install, support, upgrade and scale diverse cloud environments. It also allows application owners to easily consume infrastructure services so that infrastructure does not get in the way of delivering applications to end users.
Performance characteristics of traditional VMs vs Docker containers (dockerc... - Boden Russell
Docker containers provide significantly lower resource usage and higher density than traditional virtual machines when running multiple workloads concurrently on a server.
When booting 15 Ubuntu VMs with MySQL sequentially, Docker containers boot in 3.5 seconds on average compared to 5.8 seconds for KVM. During steady-state operation of 15 active instances, Docker uses on average 0.2% CPU and 49MB RAM per container, while KVM guests use 1.9% CPU and 292MB RAM each. Docker maintains low 1-minute load averages of around 0.15, while KVM load averages reach roughly 35.9 under load.
The document provides an overview of User Mode Linux (UML), including what it is, how it works, alternatives, and how to use it. UML allows running the Linux kernel as a userspace process, enabling uses like kernel debugging, security testing, and hosting virtual servers. It works by running each guest kernel and its processes in their own host address spaces, without requiring hardware virtualization support. Key components discussed include filesystems, networking using TUN/TAP devices, management scripts, backups using LVM snapshots and blocksync, and network monitoring tools like MRTG and iftop.
Drupal Continuous Integration with Jenkins - The Basics - John Smith
Please check out our new SlideShow of setting up and configuring a Jenkins Continuous Integration server for use within a Drupal development environment. We walk you through the steps of installing Ubuntu 10.04 LTS, Jenkins, Drush and several other PHP coding tools and Drupal Modules to help check your code against current Drupal standards. Then we walk you through creating a git post-receive script, and Jenkins job to pull it all together.
Telepresence - Fast Development Workflows for Kubernetes - Ambassador Labs
Telepresence allows developers to test Kubernetes applications locally on their laptop for a fast development workflow. It works by replacing deployments with a proxy that redirects traffic to the local machine, providing access to cluster resources and unlimited compute. This avoids the slow process of building containers and deploying to the cluster for each change. The developer's laptop joins the cluster network namespace, allowing local tools to integrate seamlessly. Telepresence is an open source CNCF project that is actively developed to support additional features like multi-user development and persistent proxy deployments.
Kubernetes Architecture with Components - Ajeet Singh
This document provides an overview of Kubernetes architecture and components. It describes how to run a simple Kubernetes setup using a Docker container. The container launches all key Kubernetes components including the API server, scheduler, etcd and controller manager. Using kubectl, the document demonstrates deploying an nginx pod and exposing it as a service. This allows curling the nginx default page via the service IP to confirm the basic setup is functioning.
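The nginx deployment-and-service step described above might look like the following manifest. This is a minimal sketch with illustrative names and image tag, not the exact configuration from the slides:

```yaml
# Hypothetical nginx Deployment plus a Service exposing it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

Applying this with `kubectl apply -f nginx.yaml` and then curling the service's cluster IP reproduces the "default page" check the document describes.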
Yes, Docker is great! We are all very aware of that but now it’s time to take the next step: wrapping it all and deploying to a production environment. For this scenario we need something more. For that “more” we have Kubernetes by Google - a container platform based on the same technology used to deploy billions of containers per month on Google’s infrastructure.
Ready to leverage your Docker skills? Come to this session to see how your current Docker skillset can be easily mapped to Kubernetes concepts and commands. And get ready to deploy your containers in production!
This document describes how to set up a thin client deployment using PXE boot in a Microsoft-dominated network environment. Key steps include:
1. Configuring the DHCP server to provide PXE boot options and boot file information.
2. Preparing the RIS server by creating a PXE directory structure and boot images using the PXES tool.
3. Addressing bugs in PXES related to USB support, Samba password changes, and keyboard mappings to allow booting into a Linux environment and connecting to Windows terminal servers.
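The DHCP configuration from step 1 might look like the following ISC dhcpd fragment. The addresses and boot filename are illustrative assumptions, not values from the original deployment:

```conf
# Hypothetical dhcpd.conf fragment for PXE-booting thin clients.
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;        # TFTP server holding the boot files
  filename "pxelinux.0";           # PXE boot loader sent to clients
}
```

The `next-server` and `filename` options are the two PXE-specific pieces: they tell the client's firmware where to fetch its boot loader after it receives a lease.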
The document summarizes key aspects of the Coda distributed file system. Coda was designed by Carnegie Mellon University to be scalable, secure, and highly available. It aims for transparency in naming, location, and failures. Coda uses a client-server model where clients interact with file servers through Venus processes. File data and metadata are cached at clients to allow for continued access when disconnected from servers. Coda employs replication and unique file identifiers to manage distributed data access transparently.
This document discusses cloud computing concepts including cloud characteristics, architectural layers, infrastructure models, and virtualization. It focuses on the cloud ecosystem including cloud consumers, management, virtual infrastructure management using tools like OpenNebula, and virtual machine managers like Xen and KVM. OpenNebula is described as providing a unified view of virtual resources across platforms and managing VM lifecycles through orchestrating image, network, and hypervisor management.
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISORVanika Kapoor
Xen virtualization allows multiple virtual machines to run simultaneously on a single physical server. This increases hardware utilization and makes provisioning new servers easier. NFS allows files to be accessed remotely over a network, enabling file sharing between systems. NFS uses RPC to perform file operations like reads, writes and attribute retrieval. It has advantages of flexibility but also security risks if not configured properly. Newer NFS versions aim to improve performance and mandate strong authentication.
A brief overview of what we do at Gruntwork. Learn what we mean by "DevOps as a Service" and how you can get your entire infrastructure, defined as code, in about a day. https://www.gruntwork.io/
High-availability web server clusters use redundancy to ensure continuous service if any single component fails. They distribute workload across multiple servers for improved efficiency and scalability. The document proposes using Linux Virtual Server to implement such a cluster, with virtual servers distributing requests to real servers via network address translation, direct routing, or IP tunneling. Hardware requirements include servers, switches, and cabling to physically set up the redundant infrastructure.
An overview of the cloud technologies that I've used and the nuances between them.
This presentation was talking primarily about:
https://github.com/riptano/ComboAMI/tree/2.2
Until recently Windows Azure has been a Platform-as-a-Service (PaaS) offering. PaaS is great in terms of scalability, availability, lower TCO and time-to-market, but there are a lot of real world scenarios that either are hard to implement on PaaS or still require on-premises infrastructure. June 7th this year Microsoft launched a preview offering of Infrastructure-as-a-Service as well. Now, we have Windows Azure Virtual Machines and Windows Azure Virtual Network at our disposal, which makes a lot of these real world scenarios feasible in Windows Azure without harming the business case for that scenario.
The Lies We Tell Our Code (#seascale 2015 04-22)Casey Bisson
This document discusses various lies and forms of virtualization that are commonly used in computing. It begins by summarizing different virtualization technologies used at Joyent like zones, SmartOS, and Triton. It then discusses lies told at different layers of the stack, from virtual memory to network virtualization. Some key lies discussed include hyperthreading, paravirtualization, hardware virtual machines, Docker containers, filesystem virtualization techniques, and network virtualization. The document argues that many of these lies are practical choices that improve performance and workload density despite not perfectly representing the underlying hardware. It concludes by acknowledging the need to be mindful of security issues but also not to stop lying at the edge of the compute node.
Virtualization software was implemented on two servers to efficiently utilize hardware resources and meet various software needs. VMware ESXi was selected as the virtualization solution. It allows running multiple virtual machines on a single server for Linux, Windows, databases, and other environments. A virtual machine was configured as a NAT router to provide internet access and port forwarding for services. Periodic backups were set up to address issues with software-based RAID. A separate virtual machine provides network attached storage. The VMware Vsphere management interface allows remote administration of virtual machines and networks.
1) The document proposes using thin client technology with private cloud computing to create a green computing environment that reduces IT costs and power consumption.
2) Thin client technology runs applications on a centralized server instead of individual workstations, transmitting only screen images over the network. This reduces bandwidth usage compared to traditional server-based computing.
3) When combined with virtualization and private cloud computing, thin client technology allows multiple operating systems to run on a single machine, creating a low-cost and low-maintenance computing environment referred to as "pure green IT" or "carbon free computing".
This document provides a guide to configure an Linux computer to share an internet connection with multiple other devices on a local network. It discusses planning the network topology, setting up DHCP and IP forwarding on the Linux box, and configuring firewall rules to masquerade traffic and allow sharing of a single public IP address among private devices.
Populating your data center with new, more powerful and energy efficient servers can deliver numerous benefits to your organization. By consolidating multiple older servers onto a new platform, you can save in the areas of data center space and port costs, management costs, and power and cooling costs.
In our tests, we found that the Lenovo ThinkServer RD630 could consolidate the workloads of three HP ProLiant DL385 G5 servers, while increasing overall performance by 82.6 percent and reducing power consumption by 58.8 percent, making the ThinkServer RD630 an excellent choice to reduce the costs associated with running your data center.
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Digital Marketing Trends in 2024 | Guide for Staying AheadWask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
20240609 QFM020 Irresponsible AI Reading List May 2024
LXF #102 - Linux Virtual Server
Hardcore Linux: Challenge yourself with advanced projects for power users

Linux Virtual Server

Building load-balancing clusters with LVS is as good at keeping out the cold as chopping logs on a winter’s day, maintains Dr Chris Brown.

Linux Format, February 2008

Our expert: Dr Chris Brown. A Unix user for over 25 years, his own Interactive Digital Learning company provides Linux training, content and consultancy. He also specialises in computer-based classroom training delivery systems.

Last month: Use ‘heartbeat’ to provide automatic failover to a backup server.

This month’s Hardcore tutorial shows you how to provide a load-balanced cluster of web servers, with a scalable performance that far outstrips the capability of an individual server. We’ll be using the Red Hat cluster software, but the core functionality described here is based on the Linux virtual server kernel module and is not specific to Red Hat.

If you want to follow along with the tutorial for real, you’ll need at least four computers, as shown in Figure 1 (opposite page). I realise this might put it out of the reach of some readers as a practical project, but hopefully it can remain as a thought experiment, like that thing Schrödinger did with his cat. If you’ve got some old machines lying about the place, why not refer to this month’s cover feature to find out how to get them up and running again – then network them?

Machine 1 is simply a client used for testing – it could be anything that has a web browser. In my case, I commandeered my wife’s machine, which runs Vista. Machine 2 is where the real action is. This machine must be running Linux, and should preferably have two network interfaces, as shown in the figure. (It’s possible to construct the cluster using just a single network, but this arrangement wouldn’t be used on a production system, and it needs a bit of IP-level trickery to make it work.) This machine performs load balancing and routing into the cluster, and is known as the “director” in some LVS documentation. In my case, machine 2 was running CentOS5 (essentially equivalent to RHEL5). Machines 3 and 4 are the backend servers; in principle you could use anything that offers a web server. In my case I used a couple of laptops, one running SUSE 10.3 and one running Ubuntu 7.04. In reality you’d probably want more than two backend servers, but two is enough to prove the point.

You might well be wondering why I’m running a different operating system on each of the four computers. It’s a fair question, and one I often ask myself! Unfortunately, it’s simply a fact of life for those of us who make our living by messing around with Linux.

Figure 2: Configuring a virtual server with Piranha.

How a load-balancing cluster works

Client machines send service requests (for example an HTTP GET request for a web page) to the public IP address of the director (192.168.0.41 on the figure; as far as the clients are concerned, this is the IP address at which the service is provided).

The director chooses one of the backend servers to forward the request to. It rewrites the destination IP address of the packet to be that of the chosen backend server and forwards the packet out onto the internal network.

The chosen backend server processes the request and sends an HTTP response, which hopefully includes the requested page. This response is addressed to the client machine (192.168.0.2) but is initially sent back to the director (10.0.0.1), because the director is configured as the default gateway for the backend server. The director rewrites the source IP address of the packet to refer to its own public IP address and sends the packet back to the client.

All of this IP-level tomfoolery is carried out by the LVS module within the Linux kernel. The director is not simply acting as a reverse web proxy, and commands such as netstat -ant will NOT show any user-level process listening on port 80.

The address re-writing is a form of NAT (Network Address Translation) and allows the director to masquerade as the real server, and to hide the presence of the internal network that’s doing the real work.
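None of this rewriting can happen unless the director’s kernel has the LVS module available and is willing to forward packets. Here is a minimal sketch of those prerequisites (my own summary, not from the tutorial), written as a dry run that only prints the commands:

```shell
#!/bin/sh
# Director prerequisites for LVS-NAT. Dry run: 'run' prints each
# command. For real use, change it to run() { "$@"; } and run as root.
run() { echo "+ $*"; }

run modprobe ip_vs                   # load the LVS kernel module
run sysctl -w net.ipv4.ip_forward=1  # let the kernel forward packets
# (equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward)
```

We’ll meet both of these again in the troubleshooting section later on.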
Balancing the load

A key feature of LVS is load balancing – that is, making sure that each backend server receives roughly the same amount of work. LVS has several algorithms for doing this; we’ll mention four.

Round-robin scheduling is the simplest algorithm. The director just works its way around the backend servers in turn, starting round with the first one again when it reaches the end of the list. This is good for testing because it’s easy to verify that all your backend servers are working, but it may not be best in a production environment.

Weighted round-robin is similar but lets you assign a weight to each backend server to indicate the relative speed of each server. A server with a weight of two is assumed to be twice as powerful as a server with a weight of one and will receive twice as many requests.

Least-connection load balancing tracks the number of currently active connections to each backend server and forwards the request to the one with the least number.

The weighted least-connection method also takes into account the relative processing power (weight) of the server. The weighted least-connection method is good for production clusters because it works well when service requests take widely varying amounts of time to process and/or when the backend servers in the cluster are not equally powerful.

The latter two algorithms are dynamic; that is, they take account of the current load on the machines in the cluster.

Figure 1: A simple load-balancing cluster.
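To make the weighted behaviour concrete, here is a toy simulation of weighted round-robin in plain shell. This is my own sketch, not the kernel’s code (the real scheduler spreads a weighted server’s extra requests more evenly rather than in bursts); the addresses are the tutorial’s backend servers, with 10.0.0.2 given a weight of two:

```shell
#!/bin/sh
# Toy weighted round-robin: list each server in the schedule once
# per unit of weight, then deal requests round the expanded list.
servers="10.0.0.2:2 10.0.0.3:1"     # address:weight pairs

schedule=""
for entry in $servers; do
  addr=${entry%:*}
  w=${entry#*:}
  while [ "$w" -gt 0 ]; do
    schedule="$schedule $addr"
    w=$((w - 1))
  done
done

# Deal six requests round the schedule
req=0
while [ $req -lt 6 ]; do
  set -- $schedule              # positional parameters = schedule slots
  shift $(( req % $# ))         # step to the (req mod length)-th slot
  echo "request $req -> $1"
  req=$((req + 1))
done
```

Over the six requests, 10.0.0.2 is chosen four times and 10.0.0.3 twice, matching the 2:1 weights.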
The tools for the job

To make all this work, the kernel maintains a virtual server table which contains, among other things, the IP addresses of the backend servers. The command-line tool ipvsadm is used to maintain and inspect this table.

Assuming that your kernel is built to include the LVS module (and most of them are these days), ipvsadm is the only program you absolutely must have to make LVS work; however, there are some other toolsets around that make life easier. (The situation is similar to the packet-filtering machinery in the kernel. In theory we only need the command-line tool iptables to manage this and create a packet-filtering firewall; in practice most of us use higher-level tools – often graphical – to build our firewall rulesets.) For this tutorial we’ll use the clustering toolset from Red Hat, which includes a browser-based configuration tool called Piranha.

Worried about bottlenecks?

Since all the traffic into and out of the cluster passes through the director, you might be wondering if these machines will create a performance bottleneck. In fact this is unlikely to be the case. Remember, the director is performing only simple packet header manipulation inside the kernel, and it turns out that even a modest machine is capable of saturating a 100Mbps network. It’s much more likely that the network bandwidth into the site will be the bottleneck. For lots more information about using old PCs for tasks like networking, see this month’s cover feature starting on page 45.
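As a taste of what the higher-level tools do on your behalf, the virtual server table for this cluster could be populated directly with ipvsadm along these lines. The weights here are illustrative, and the commands are shown as a dry run, since the real thing needs root and an LVS-enabled kernel on the director:

```shell
#!/bin/sh
# Dry run: 'run' prints each ipvsadm call. For real use, change it
# to run() { "$@"; } and execute as root on the director.
run() { echo "+ $*"; }

# Create the virtual TCP service on the director's public address,
# scheduled with weighted round-robin
run ipvsadm -A -t 192.168.0.41:80 -s wrr

# Add the two real (backend) servers; -m selects NAT (masquerading)
# forwarding and -w sets the weight
run ipvsadm -a -t 192.168.0.41:80 -r 10.0.0.2:80 -m -w 2
run ipvsadm -a -t 192.168.0.41:80 -r 10.0.0.3:80 -m -w 1

# Inspect the resulting virtual server table, with numeric addresses
run ipvsadm -L -n
```

Substituting -s rr would give the plain round-robin scheduling used for initial testing later in this tutorial.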
Now, the configuration shown in Figure 1 has one major
shortcoming – the director presents a single point of failure within
the cluster. To overcome this, it’s possible to use a primary and
backup server pair as the director, using a heartbeat message
exchange to allow the backup to detect the failure of the primary
and take over its resources.
For this tutorial I’ve ignored this issue, partly because I
discussed it in detail last month and partly because I would have
needed a fifth machine to construct the cluster, and I only have
four! It should be noted, however, that the clustering toolset
from Red Hat does indeed include functionality to failover from a
primary to a backup director.
Setting up the backend servers

If you want to actually build the cluster described in this tutorial, start with the backend servers. These can be any machines able to offer HTTP-based web service on port 80; in my case they were running Linux and Apache.

You will need to create some content in the DocumentRoot directory of each server, and for testing purposes it’s a good idea to make the content different on each machine so we can easily distinguish which one actually served the request. For example, on backend server 1, I created a file called proverb.html with the line “A stitch in time saves nine”. On server 2 the line was “Look before you leap”. Of course in reality you’d need to ensure that all the backend servers were ‘in sync’ and serving the same content – an issue we’ll return to later.

Give the machines static IP addresses appropriate for the internal network. In my case I used 10.0.0.2 and 10.0.0.3. Set the default route on these machines to be the private IP address of the director (10.0.0.1).

“You’ll need at least four computers to follow along for real; otherwise see this as a thought experiment”

Figure 3: Specifying the real (backend) servers.

Setting up the director

On the director machine, begin by downloading and installing the Red Hat clustering tools. In my case (remember that I’m running CentOS5 on this machine) I simply used the graphical install tool (pirut) to install the packages ipvsadm and Piranha from the CentOS repository. My next step was to run the command piranha-passwd to set a password for the Piranha configuration tool. Then I started the Piranha GUI service:

# /etc/init.d/piranha-gui start

This service listens on port 3636 and provides a browser-based interface for configuring the clustering toolset, so once it’s running I can point my browser at http://localhost:3636. I’ll need to log in, using the username piranha and the password I just set. From here I’m presented with four main screens: Control/Monitoring, Global Settings, Redundancy and Virtual Servers (you can see the links to these in Figure 3, above). To begin, go to the
If it doesn’t work

Chances are good that your cluster won’t work the first time you try it. You have a couple of choices at this point. First, you could hurl one of the laptops across the room and curse Microsoft. These are not, of course, productive responses, even if they do make you feel better. And Microsoft really shouldn’t be blamed, since we’re not actually using any of its software. The second response is to take some deep, calming breaths, then conduct a few diagnostic tests.

A good first step would be to verify connectivity from the LVS machine, using good ol’ ping. First, make sure you can ping your test machine (192.168.0.2). Then check that you can ping both your backend servers (10.0.0.2 and 10.0.0.3). If this doesn’t work, run ifconfig -a and verify that eth0 has IP address 192.168.0.41 and eth1 has address 10.0.0.1. Also, run route -n to check the routing table – verify that you have routes out onto the 192.168.0.0 network and the 10.0.0.0 network via the appropriate interfaces.

If you can ping your backend servers, fire up the web browser on the LVS machine (assuming there is a browser installed there – you don’t really need one). Verify that you can reach the URLs http://10.0.0.2 and http://10.0.0.3, and see the web content served by these two machines. (If the pings are OK but the web servers aren’t accessible, make sure that the web servers are actually running on the backend servers, and that the backend servers don’t have firewall rules in place that prevent them from being accessed.)

You should also carefully check the LVS routing table displayed by the Control/Monitoring page of Piranha and verify that your backend servers show up there. If you’re still having no luck, verify that IP forwarding is enabled on the LVS machine. You can do this with the command:

# cat /proc/sys/net/ipv4/ip_forward

If it reports ‘1’ that’s fine, but if it reports ‘0’ you’ll need to enable IP forwarding like this:

# echo 1 > /proc/sys/net/ipv4/ip_forward

You might also verify that your kernel is actually configured to use LVS. If your distro provides a copy of the config file used to build the kernel (it will likely be /boot/config-something-or-other), use grep to look for the string CONFIG_IP_VS in this file, and verify that you see a line like CONFIG_IP_VS=m, among others. You might also try:

# lsmod | grep vs

to verify that the ip_vs module is loaded. If you can’t find evidence for virtual server support in the kernel you may have to reconfigure and rebuild your kernel, an activity that is beyond the scope of this tutorial.

If all these tests look OK and things still don’t work, you may now hurl one of the laptops across the room. This won’t make your cluster work, but at least you’ll have a better idea of why it’s broken ....
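For convenience, the checks in this box can be collected into one script that reports every result instead of stopping at the first failure. This is my own consolidation of the commands above (the addresses are the tutorial’s; adjust them for your network and run it on the director):

```shell
#!/bin/sh
# Run each diagnostic from the box and report ok/FAIL for it.
# Addresses are the tutorial's examples; edit to match your network.
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "ok:   $desc"
  else
    echo "FAIL: $desc"
  fi
}

check "client reachable (192.168.0.2)"  ping -c 1 -W 2 192.168.0.2
check "backend 1 reachable (10.0.0.2)"  ping -c 1 -W 2 10.0.0.2
check "backend 2 reachable (10.0.0.3)"  ping -c 1 -W 2 10.0.0.3
check "IP forwarding enabled"           grep -qx 1 /proc/sys/net/ipv4/ip_forward
check "ip_vs module loaded"             sh -c 'lsmod | grep -q ip_vs'
```

Any FAIL line points you straight back at the corresponding fix described in the box.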
Virtual Servers screen and add a new service. Figure 2 (page 98) shows the form you’ll fill in. Among other things, you need to give the service a name, specify the port number and interface it will accept request packets on, and select the scheduling algorithm (for initial testing I chose ‘Round Robin’). Clicking the Real Server link at the top of this page takes you to the screen shown in Figure 3. Here you can specify the names, IP addresses and weights of your backend servers.

Behind the scenes, most of the configuration captured by Piranha is stored in the config file /etc/sysconfig/ha/lvs.cf. Other tools in the clustering toolset read this file; it’s plain text and there’s no reason why you can’t just hand-edit it directly if you prefer. With this configuration in place, you should be good to go. Launch the clustering service from the command line with:

# /etc/init.d/pulse start

(On a production system you’d have this service start automatically at boot time.)

Now go to the Piranha control/monitoring screen, shown in Figure 4 (below). Look carefully at the LVS routing table. You should see an entry there for your virtual service (that’s the line starting TCP...) and below that, a line for each of your backend servers. You can obtain the same information from the command line with the command:

# ipvsadm -L

ERRATA: Figure 1 from LXF101
In LXF101’s Clustering tutorial, we managed to print the wrong figure. Apologies to any readers who struggled to make sense of the diagram in the context of the tutorial. Here’s the Figure 1 that should have appeared last month!
The periodic health check

Also on the control/monitoring screen there’s an LVS process table. Here you’ll see a couple of ‘nanny’ processes. These are responsible for verifying that the backend servers are still up and running, and there’s one nanny for each server. They work by periodically sending a simple HTTP GET request and verifying that there’s a reply. If you look carefully at the -s and -x options specified to nanny, you’ll see the send and expect strings that are used for this test. (You can customise these if you want, using the Virtual Servers > Monitoring Scripts page of Piranha.)

If you look at the Apache access logs on the backend servers, you’ll see these probe requests arriving every six seconds. If a nanny process detects that its server has stopped responding, it will invoke ipvsadm to remove that machine’s entry from the LVS routing table, so that the director will no longer forward any requests to that machine. The nanny will continue to probe the server, however, and should it come back up, the nanny will run ipvsadm again to reinstate the routing entry.

You can observe this behaviour by unplugging the network cable from a backend server (or simply stopping the httpd daemon) and examining the LVS routing table. You should see your dead server’s entry disappear. If you reconnect the network cable, or restart the server, the entry will re-appear. Be patient, though: it can take 20 seconds or so for these changes to show up.

The Denouement

If all seems well, it’s time to go to the client machine (machine 1 on the first figure) and try to access the page in the browser. In this example, you’d browse to http://192.168.0.41/proverb.html, and you should see the proverb page served from one of your backend servers. If you force a page refresh, you’ll see the proverb page from the next server in the round-robin sequence. You should also be able to verify the round-robin behaviour by examining the Apache access logs of the individual backend servers. (If you look carefully at the log file entries, you’ll see that these accesses come from 192.168.0.2, whereas the probes from the nannies come from 10.0.0.1.) If all of this works, congratulations! You’ve just built your first load-balancing Linux cluster.
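The probe-and-repair cycle that each nanny performs can be mimicked in a few lines of shell. This is my own toy version, not the real nanny: the genuine article uses its configurable send/expect strings rather than curl, and really runs ipvsadm, whereas here the table update is a dry run that just prints:

```shell
#!/bin/sh
# Toy nanny: probe one backend server and update the LVS routing
# table accordingly. Addresses are the tutorial's examples.
VIP=192.168.0.41:80
RS=10.0.0.2:80

probe() {                       # succeeds if the backend answers HTTP
  curl -sf --max-time 2 "http://${RS%:*}/" >/dev/null
}

lvs() { echo "+ ipvsadm $*"; }  # dry run; use lvs() { ipvsadm "$@"; } for real

check_once() {
  if probe; then
    lvs -a -t "$VIP" -r "$RS" -m  # (re)add the entry (a real script
                                  # would first check it is absent)
  else
    lvs -d -t "$VIP" -r "$RS"     # remove the dead server's entry
  fi
}

check_once
```

Wrapping check_once in a `while true; do ...; sleep 6; done` loop would give the six-second probe cycle described above.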
Postscript
Let’s pause and consider what we’ve demonstrated in these two
tutorials. We’ve seen how to build failover clustering solutions that
add another nine onto the availability of your service (for example,
to go from 99.9% to 99.99% availability); and we’ve seen how to
build load-balancing clusters able to far outstrip the performance
of an individual web server. In both cases we’ve used free, open
source software running on a free, open source operating system.
I sometimes wonder what, as he gets into bed at night with his
mug of Horlicks, Steve Ballmer makes of it all. It sure impresses the hell out of me. LXF

Figure 4: The control and monitor screen of Piranha.
Next month: Distribute workload of a ‘virtual’ server across multiple backend servers.